NASA Astrophysics Data System (ADS)
Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia
2018-06-01
Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.
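To make the sink-term idea above concrete, here is a minimal Python sketch of a drag tendency applied to the lowest-level winds. The quadratic form, the scaling with the maximum subgrid mountain height, and the drag coefficient are all assumptions chosen for illustration; this is not the GRAPES TMM SSOP formulation.

```python
import numpy as np

def sso_drag_tendency(u, v, h_max, dx, c_d=3e-3):
    """Illustrative sink term for unresolved small-scale orographic drag.

    u, v   : first-model-level wind components (m/s)
    h_max  : maximum subgrid mountain height in the grid box (m)
    dx     : grid spacing (m)
    c_d    : hypothetical tunable drag coefficient (dimensionless)
    """
    speed = np.sqrt(u**2 + v**2)
    k = c_d * h_max / dx**2                 # 1/m, assumed scaling with subgrid relief
    return -k * speed * u, -k * speed * v   # momentum tendencies (m/s^2)

# Example: 10 m/s westerly over a cell with 800 m subgrid peaks on a 10 km grid
print(sso_drag_tendency(10.0, 0.0, h_max=800.0, dx=10e3))
```

In a host model such a tendency would be added to the PBL momentum tendencies before the dynamics step, which is the coupling route the abstract describes.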
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Chou, Ming-Dah
1994-01-01
A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the scheme computes the clear-sky fluxes and cooling rates from the Earth's surface to 0.01 mb very accurately. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.
Spectral cumulus parameterization based on cloud-resolving model
NASA Astrophysics Data System (ADS)
Baba, Yuya
2018-02-01
We have developed a spectral cumulus parameterization using a cloud-resolving model. It includes a new parameterization of the entrainment rate, derived from analysis of the cloud properties obtained from the cloud-resolving model simulation, that is valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation, i.e., it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also better simulated features of convectively coupled equatorial waves and the Madden-Julian oscillation. These improvements were traced to the modified entrainment-rate parameterization, which suppressed an excessive increase of entrainment and thus an excessive increase of low-level clouds.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
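As a rough illustration of how parametric choices can be constrained by observations within a Bayesian framework, the sketch below fits the prefactor and exponents of an assumed power-law process rate with a random-walk Metropolis sampler. The moments, the power-law form, the error model and all numbers are synthetic placeholders; this is not the BOSS formulation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a process rate written as a power law of two
# prognostic moments (stand-ins for drop size distribution moments).
M0 = rng.uniform(1e3, 1e5, 50)       # number-like moment
M3 = rng.uniform(1e-6, 1e-4, 50)     # mass-like moment
true = dict(a=2.0e-3, b=0.4, c=0.8)
obs = true["a"] * M0**true["b"] * M3**true["c"]
obs *= np.exp(0.1 * rng.standard_normal(obs.size))   # multiplicative observation error

def log_post(theta):
    a, b, c = theta
    if a <= 0:
        return -np.inf
    model = a * M0**b * M3**c
    resid = np.log(obs) - np.log(model)
    return -0.5 * np.sum(resid**2) / 0.1**2           # Gaussian likelihood in log space, flat prior

# Random-walk Metropolis sampler over (a, b, c)
theta = np.array([1e-3, 0.5, 0.5])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [1e-4, 0.02, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

print("posterior mean:", np.mean(samples[5000:], axis=0))
```

Structural uncertainty could be explored in the same setting by repeating the fit for process-rate forms of differing complexity and comparing their posterior predictive skill.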
2011-12-01
the designed parameterization scheme and adaptive observer. A cylindrical battery thermal model in Eq. (1) with parameters of an A123 32157 LiFePO4 ...Morcrette, M. and Delacourt, C. (2010) Thermal modeling of a cylindrical LiFePO4/graphite lithium-ion battery. Journal of Power Sources, 195, 2961.
Mihailovic, Dragutin T; Alapaty, Kiran; Podrascanin, Zorica
2009-03-01
Improving the parameterization of processes in the atmospheric boundary layer (ABL) and surface layer is important for air quality and chemical transport models. To do so, an asymmetrical, convective, non-local scheme with varying upward mixing rates is combined with a non-local, turbulent kinetic energy scheme for vertical diffusion (COM). For its design, an empirically derived function of the dimensionless height in the ABL, raised to the fourth power, is suggested. We also suggest a new method for calculating the in-canopy resistance for dry deposition over a vegetated surface. The upward mixing rate out of the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. The vertical eddy diffusivity is parameterized using the mean turbulent velocity scale obtained by vertical integration within the ABL. The in-canopy resistance is calculated by integrating the inverse turbulent transfer coefficient inside the canopy from the effective ground roughness length to the canopy source height and, further, from there to the canopy height. This combination of schemes provides a less rapid mass transport out of the surface layer into other layers, during both convective and non-convective periods, than other local and non-local schemes parameterizing mixing processes in the ABL. The suggested method for calculating the in-canopy resistance for dry deposition over a vegetated surface differs markedly from the commonly used one, particularly over forest vegetation. In this paper, we study the performance of a non-local, turbulent kinetic energy scheme for vertical diffusion combined with a non-local convective mixing scheme with varying upward mixing rates in the atmospheric boundary layer (COM) and its impact on the concentrations of pollutants calculated with chemical and air-quality models. This scheme is also compared with a commonly used local eddy-diffusivity scheme. Simulated concentrations of NO2 obtained with the COM scheme and the new parameterization of the in-canopy resistance are in general higher and closer to the observations than those obtained with the local eddy-diffusivity scheme (on the order of 15-22%). To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO2) were compared for the years 1999 and 2002. The comparison was made for the entire domain used in simulations performed with the chemical European Monitoring and Evaluation Program Unified model (version UNI-ACID, rv2.0), in which the schemes were incorporated.
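The in-canopy resistance described above lends itself to a short numerical sketch: integrate the inverse of a turbulent transfer coefficient from the effective ground roughness length up to the canopy source height. The exponential K(z) profile and all numbers below are assumptions for illustration, not the scheme evaluated in the paper.

```python
import numpy as np

def in_canopy_resistance(z0_ground, z_source, K_profile, n=200):
    """Sketch of an in-canopy aerodynamic resistance (s/m) obtained by
    integrating 1/K(z) from the effective ground roughness length to the
    canopy source height. K_profile is any callable returning K(z) in m^2/s.
    """
    z = np.linspace(z0_ground, z_source, n)
    return np.trapz(1.0 / K_profile(z), z)

# Hypothetical exponentially decaying eddy diffusivity inside a 10 m canopy
K_top, alpha, h_can = 0.3, 2.0, 10.0
K = lambda z: K_top * np.exp(-alpha * (1.0 - z / h_can))
print(in_canopy_resistance(0.05, 0.7 * h_can, K))
```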
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can then be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations which have the model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert;
2017-01-01
This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to do a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes stabilized. Furthermore, to obtain the parameterization schemes that optimally categorize STC structure, a verification using Cyclone Phase Space is performed. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes was again the best in categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
Prediction of convective activity using a system of parasitic-nested numerical models
NASA Technical Reports Server (NTRS)
Perkey, D. J.
1976-01-01
A limited area, three dimensional, moist, primitive equation (PE) model is developed to test the sensitivity of quantitative precipitation forecasts to the initial relative humidity distribution. Special emphasis is placed on the squall-line region. To accomplish the desired goal, time-dependent lateral boundaries and a general convective parameterization scheme suitable for mid-latitude systems were developed. The sequential plume convective parameterization scheme presented is designed to have the versatility necessary in mid-latitudes and to be applicable for short-range forecasts. The results indicate that the scheme is able to function in the frontally forced squall-line region, in the gently rising altostratus region ahead of the approaching low center, and in the overriding region ahead of the warm front. Three experiments are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Yuxing; Fan, Jiwen; Xiao, Heng
Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires cumulus schemes to adapt to higher resolutions than they were originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolution higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
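A minimal sketch of the temporal-averaging idea for the large-scale CAPE tendency is given below: the closure sees a running mean of recent tendencies rather than the instantaneous value, which damps the resolution dependence of the forcing. The window length and units are assumptions for illustration only.

```python
from collections import deque
import numpy as np

class CapeTendencyAverager:
    """Running temporal average of the large-scale CAPE tendency.

    Keeps a window of the most recent tendencies and returns their mean,
    so the closure responds to a smoothed forcing rather than to a single
    time step. The window length is an assumed tuning choice.
    """
    def __init__(self, window_steps=6):
        self.buf = deque(maxlen=window_steps)

    def update(self, dcape_dt):
        self.buf.append(dcape_dt)
        return float(np.mean(self.buf))

avg = CapeTendencyAverager(window_steps=6)
for tend in [50.0, 120.0, -30.0, 200.0, 10.0, 80.0, 300.0]:   # J/kg per step (synthetic)
    smoothed = avg.update(tend)
print("smoothed tendency:", smoothed)
```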
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over southeast India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. The performance of the different schemes is evaluated by examining the boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is capable of simulating the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; and the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through a proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme, which accounts for strong vertical mixing; THM, a six-class hybrid moment microphysics scheme, which considers the number concentration along with the mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme, which adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. A numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation
NASA Astrophysics Data System (ADS)
Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.
2014-12-01
Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and examine how these chemical processes shape the composition and properties of the gaseous and the condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks to design and assess simplified parameterizations to be included in 3D models. Nevertheless, the explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can however be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes of organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol assigning reaction pathways and kinetics data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach. First results will be presented.
Cloud microphysics modification with an online coupled COSMO-MUSCAT regional model
NASA Astrophysics Data System (ADS)
Sudhakar, D.; Quaas, J.; Wolke, R.; Stoll, J.; Muehlbauer, A. D.; Tegen, I.
2015-12-01
The quantification of clouds, aerosols, and aerosol-cloud interactions in models continues to be a challenge (IPCC, 2013). In this context, a two-moment bulk microphysical scheme is used to understand aerosol-cloud interactions in the regional model COSMO (Consortium for Small Scale Modeling). The two-moment scheme in COSMO has been especially designed to represent aerosol effects on the microphysics of mixed-phase clouds (Seifert et al., 2006). To improve the model predictability, the radiation scheme has been coupled with the two-moment microphysical scheme. Further, the cloud microphysics parameterization has been modified by coupling COSMO with MUSCAT (MultiScale Chemistry Aerosol Transport model, Wolke et al., 2004). In this study, we discuss initial results from the online-coupled COSMO-MUSCAT model system with the modified two-moment parameterization scheme, along with the COSP (CFMIP Observational Simulator Package) satellite simulator. This online-coupled model system aims to improve the representation of sub-grid-scale processes in the regional weather prediction context. The constant aerosol concentration used in the Seifert and Beheng (2006) parameterization in the COSMO model has been replaced by the aerosol concentration derived from the MUSCAT model. The cloud microphysical processes from the modified two-moment scheme are compared with the stand-alone COSMO model. To validate the robustness of the model simulation, the coupled model system is integrated with the COSP satellite simulator (Muhlbauer et al., 2012). Further, the simulations are compared with MODIS (Moderate Resolution Imaging Spectroradiometer) and ISCCP (International Satellite Cloud Climatology Project) satellite products.
NASA Technical Reports Server (NTRS)
Miller, Timothy L.; Robertson, Franklin R.; Cohen, Charles; Mackaro, Jessica
2009-01-01
The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models that have been developed at Goddard Space Flight Center to support NASA's earth science research in data analysis, observing system modeling and design, climate and weather prediction, and basic research. The work presented used GEOS-5 with 0.25° horizontal resolution and 72 vertical levels (up to 0.01 hPa) resolving both the troposphere and stratosphere, with closer packing of the levels near the surface. The model includes explicit (grid-scale) moist physics, as well as convective parameterization schemes. Results will be presented that demonstrate a strong dependence of the simulation of a strong hurricane on the type of convective parameterization scheme used. The previous standard (default) option in the model was the Relaxed Arakawa-Schubert (RAS) scheme, which uses a quasi-equilibrium closure. In the cases shown, this scheme does not permit the efficient development of a strong storm in comparison with observations. When this scheme is replaced by a modified version of the Kain-Fritsch scheme, which was originally developed for use on grids with intervals of order 25 km such as the present one, the storm is able to develop to a much greater extent, closer to that of reality. Details of the two cases will be shown in order to elucidate the differences in the two modeled storms.
NASA Astrophysics Data System (ADS)
Zhang, Junhua; Lohmann, Ulrike
2003-08-01
The single column model of the Canadian Centre for Climate Modeling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by rawinsonde-observation-constrained European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data. Five cloud parameterizations, including three statistical and two explicit schemes, are compared and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than was observed.
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
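As a simple illustration of a dynamically computed convective adjustment time scale of this general kind, the sketch below uses the grid-cell advective time bounded by fixed limits. The bounds and the choice of advecting wind are assumptions; the truncated abstract does not specify the actual formulation.

```python
import numpy as np

def convective_timescale(dx, mean_wind, tau_min=1800.0, tau_max=3600.0):
    """Advective convective adjustment time scale (s): the time for air to
    cross the grid cell, bounded by fixed limits. Bounds and the definition
    of the advecting wind are illustrative assumptions.
    """
    tau = dx / max(mean_wind, 1.0)     # guard against very light winds
    return float(np.clip(tau, tau_min, tau_max))

print(convective_timescale(dx=25e3, mean_wind=10.0))   # -> 2500.0 s
```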
Adaptive Aft Signature Shaping of a Low-Boom Supersonic Aircraft Using Off-Body Pressures
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu
2012-01-01
The design and optimization of a low-boom supersonic aircraft using the state-of-the-art off-body aerodynamics and sonic boom analysis has long been a challenging problem. The focus of this paper is to demonstrate an effective geometry parameterization scheme and a numerical optimization approach for the aft shaping of a low-boom supersonic aircraft using off-body pressure calculations. A gradient-based numerical optimization algorithm that models the objective and constraints as response surface equations is used to drive the aft ground signature toward a ramp shape. The design objective is the minimization of the variation between the ground signature and the target signature subject to several geometric and signature constraints. The target signature is computed by using a least-squares regression of the aft portion of the ground signature. The parameterization and the deformation of the geometry is performed with a NASA in-house shaping tool. The optimization algorithm uses the shaping tool to drive the geometric deformation of a horizontal tail with a parameterization scheme that consists of seven camber design variables and an additional design variable that describes the spanwise location of the midspan section. The demonstration cases show that numerical optimization using the state-of-the-art off-body aerodynamic calculations is not only feasible and repeatable but also allows the exploration of complex design spaces for which a knowledge-based design method becomes less effective.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
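To illustrate what one such parametric scheme might look like in practice, the sketch below applies a linear spanwise twist to a small set of wing-surface points. The data layout, the quarter-chord pivot and the linear twist law are assumptions for illustration; PROTEUS itself is not reproduced here.

```python
import numpy as np

def apply_linear_twist(points, root_twist_deg, tip_twist_deg, span):
    """Rotate each wing section about an assumed quarter-chord pivot by an
    angle varying linearly from root to tip. Points are (x, y, z) with x
    streamwise, y spanwise, z vertical; all conventions are illustrative.
    """
    out = points.copy()
    for i, (x, y, z) in enumerate(points):
        t = np.deg2rad(root_twist_deg + (tip_twist_deg - root_twist_deg) * (y / span))
        dx, dz = x - 0.25, z                     # offset from the pivot at x = 0.25
        out[i, 0] = 0.25 + dx * np.cos(t) + dz * np.sin(t)
        out[i, 2] = -dx * np.sin(t) + dz * np.cos(t)
    return out

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 5.0, 0.0], [1.0, 5.0, 0.0]])
print(apply_linear_twist(pts, root_twist_deg=0.0, tip_twist_deg=-3.0, span=5.0))
```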
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson-Sellers, A.
Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
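A two-level full factorial design of the kind described above can be enumerated in a few lines; the factor names and levels below are illustrative placeholders, not the values used in the reported BATS experiments.

```python
from itertools import product

# Two-level factorial design over a handful of BATS-like factors (hypothetical).
factors = {
    "vegetation_roughness_length": (0.05, 1.0),          # m
    "soil_porosity": (0.33, 0.55),
    "stomatal_light_sensitivity": (0.5, 2.0),
    "monthly_mean_temperature_offset": (-2.0, 2.0),      # K
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs for a full 2^{len(factors)} factorial design")

# The main effect of a factor is the mean response at its high level minus the
# mean response at its low level; two-factor interactions come from the same
# table of runs, which is what makes the factorial approach more informative
# than one-at-a-time changes.
```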
NASA Astrophysics Data System (ADS)
Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.
2018-04-01
Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.
NASA Astrophysics Data System (ADS)
Johnson, E. S.; Rupper, S.; Steenburgh, W. J.; Strong, C.; Kochanski, A.
2017-12-01
Climate model outputs are often used as inputs to glacier energy and mass balance models, which are essential glaciological tools for testing glacier sensitivity, providing mass balance estimates in regions with little glaciological data, and providing a means to model future changes. Climate model outputs, however, are sensitive to the choice of physical parameterizations, such as those for cloud microphysics, land-surface schemes, surface layer options, etc. Furthermore, glacier mass balance (MB) estimates that use these climate model outputs as inputs are likely sensitive to the specific parameterization schemes, but this sensitivity has not been carefully assessed. Here we evaluate the sensitivity of glacier MB estimates across the Indus Basin to the selection of cloud microphysics parameterizations in the Weather Research and Forecasting Model (WRF). Cloud microphysics parameterizations differ in how they specify the size distributions of hydrometeors, the rate of graupel and snow production, their fall speed assumptions, the rates at which they convert from one hydrometeor type to the other, etc. While glacier MB estimates are likely sensitive to other parameterizations in WRF, our preliminary results suggest that glacier MB is highly sensitive to the timing, frequency, and amount of snowfall, which is influenced by the cloud microphysics parameterization. To this end, the Indus Basin is an ideal study site, as it has both westerly (winter) and monsoonal (summer) precipitation influences, is a data-sparse region (so models are critical), and still has lingering questions as to glacier importance for local and regional resources. WRF is run at a 4 km grid scale using two commonly used parameterizations: the Thompson scheme and the Goddard scheme. On average, these parameterizations result in minimal differences in annual precipitation. However, localized regions exhibit differences in precipitation of up to 3 m w.e. a⁻¹. The different schemes also impact the radiative budgets over the glacierized areas. Our results show that glacier MB estimates can differ by up to 45% depending on the chosen cloud microphysics scheme. These findings highlight the need to better account for uncertainties in meteorological inputs into glacier energy and mass balance models.
Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.
2013-02-01
In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
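The sketch below illustrates the conditional Markov chain idea: the convection scheme jumps between a few pre-computed flux pairs with transition probabilities conditioned on the resolved-scale regime. The flux pairs, regimes, and transition matrices are invented for illustration; in the study they are obtained by clustering and counting transitions in the LES data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four representative (heat flux, moisture flux) pairs, here reduced to
# scalars for brevity; units and values are placeholders.
flux_states = [(-5.0, 1.0), (10.0, 3.0), (40.0, 8.0), (120.0, 15.0)]

# Transition matrices conditioned on a discretized resolved-scale regime
# (e.g., a stability class of the model column); numbers are invented.
P = {
    "stable":   np.array([[.80, .15, .04, .01],
                          [.30, .50, .15, .05],
                          [.10, .30, .50, .10],
                          [.05, .15, .40, .40]]),
    "unstable": np.array([[.40, .30, .20, .10],
                          [.20, .40, .30, .10],
                          [.05, .20, .45, .30],
                          [.02, .08, .30, .60]]),
}

def step(state_index, resolved_regime):
    """One CMC update: jump to a new flux pair with probabilities conditioned
    on the resolved-scale regime of the model column."""
    return rng.choice(len(flux_states), p=P[resolved_regime][state_index])

s = 0
for regime in ["stable", "unstable", "unstable", "unstable", "stable"]:
    s = step(s, regime)
    print(regime, "->", flux_states[s])
```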
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Lau, William K. M. (Technical Monitor)
2002-01-01
Previous studies (Chao 2000, Chao and Chen 2001, Kirtman and Schneider 2000, Sumi 1992) have shown that, by means of one of several model design changes, the structure of the ITCZ in an aqua-planet model with globally uniform SST and solar angle (U-SST-SA) can change between a single ITCZ at the equator and a double ITCZ straddling the equator. These model design changes include switching to a different cumulus parameterization scheme (e.g., from the relaxed Arakawa-Schubert scheme (RAS) to the moist convective adjustment scheme (MCA)), changes within the cumulus parameterization scheme, and changes in other aspects of the model, such as horizontal resolution. Sometimes only one component of the double ITCZ shows up, but this is still an ITCZ away from the equator, quite distinct from a single ITCZ over the equator. Since these model results were obtained by different investigators using different models which have yielded reasonable general circulations, they are considered reliable. Chao and Chen (2001; hereafter CC01) have made an initial attempt to interpret these findings based on the concept of rotational ITCZ attractors that they introduced. The purpose of this paper is to offer a more complete interpretation.
Toward computational models of magma genesis and geochemical transport in subduction zones
NASA Astrophysics Data System (ADS)
Katz, R.; Spiegelman, M.
2003-04-01
The chemistry of material erupted from subduction-related volcanoes records important information about the processes that lead to its formation at depth in the Earth. Self-consistent numerical simulations provide a useful tool for interpreting these data as they can explore the non-linear feedbacks between processes that control the generation and transport of magma. A model capable of addressing such issues should include three critical components: (1) a variable viscosity solid flow solver with smooth and accurate pressure and velocity fields, (2) a parameterization of mass transfer reactions between the solid and fluid phases, and (3) a consistent fluid flow and reactive transport code. We report on progress on each of these parts. To handle variable-viscosity solid flow in the mantle wedge, we are adapting a Patankar-based FAS multigrid scheme developed by Albers (2000, J. Comp. Phys.). The pressure field in this scheme is the solution to an elliptic equation on a staggered grid. Thus we expect computed pressure fields to have smooth gradient fields suitable for porous flow calculations, unlike those of commonly used penalty-method schemes. Use of a temperature- and strain-rate-dependent mantle rheology has been shown to have important consequences for the pattern of flow and the temperature structure in the wedge. For computing thermal structure we present a novel scheme that is a hybrid of Crank-Nicholson (CN) and Semi-Lagrangian (SL) methods. We have tested the SLCN scheme on advection across a broad range of Peclet numbers and show the results. This scheme is also useful for low-diffusivity chemical transport. We also describe our parameterization of hydrous mantle melting [Katz et al., G3, 2002, in review]. This parameterization is designed to capture the melting behavior of peridotite-water systems over parameter ranges relevant to subduction. The parameterization incorporates data and intuition gained from laboratory experiments and thermodynamic calculations, yet it remains flexible and computationally efficient. Given accurate solid-flow fields, a parameterization of hydrous melting and a method for calculating thermal structure (enforcing energy conservation), the final step is to integrate these components into a consistent framework for reactive-flow and chemical transport in deformable porous media. We present preliminary results for reactive flow in 2-D static and upwelling columns and discuss possible mechanical and chemical consequences of open system reactive melting with application to arcs.
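To make the hybrid Semi-Lagrangian/Crank-Nicolson idea concrete, here is a one-dimensional advection-diffusion step: departure-point interpolation for advection followed by an implicit Crank-Nicolson diffusion solve. Periodic boundaries, linear interpolation and all numbers are simplifying assumptions; the wedge-model scheme in the abstract is not reproduced.

```python
import numpy as np

def slcn_step(T, u, kappa, dx, dt):
    """One hybrid step: semi-Lagrangian advection of T by a constant wind u,
    then Crank-Nicolson diffusion with diffusivity kappa, on a periodic 1-D grid.
    """
    n = T.size
    x = np.arange(n) * dx
    # Semi-Lagrangian advection: interpolate T at the departure points x - u*dt
    xd = (x - u * dt) % (n * dx)
    T_adv = np.interp(xd, x, T, period=n * dx)
    # Crank-Nicolson diffusion applied to the advected field
    r = kappa * dt / (2 * dx**2)
    A = np.diag((1 + 2 * r) * np.ones(n)) \
        + np.diag(-r * np.ones(n - 1), 1) + np.diag(-r * np.ones(n - 1), -1)
    A[0, -1] = A[-1, 0] = -r                     # periodic boundary closure
    rhs = T_adv + r * (np.roll(T_adv, -1) - 2 * T_adv + np.roll(T_adv, 1))
    return np.linalg.solve(A, rhs)

T = np.exp(-((np.arange(100) - 30) ** 2) / 50.0)   # initial Gaussian anomaly
for _ in range(50):
    T = slcn_step(T, u=1.0, kappa=0.05, dx=1.0, dt=0.5)
print(round(float(T.max()), 3))
```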
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
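The distinction between the two perturbed parameter approaches can be sketched as follows: a fixed scheme draws one parameter set per ensemble member, while a stochastically varying scheme lets the (log) perturbations evolve as an AR(1) process during the forecast. The parameter names, nominal values, spreads and decorrelation time below are placeholders, not the EPPES estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical convection-scheme parameters with lognormal uncertainty.
nominal = {"entrainment_rate": 1.75e-3, "cape_timescale": 3600.0,
           "detrainment_rate": 0.75e-4, "momentum_transport_coeff": 0.3}
spread = 0.3   # assumed fractional (log-space) standard deviation

def fixed_perturbed_member():
    """Draw one parameter set per ensemble member, held fixed over the forecast."""
    return {k: v * np.exp(spread * rng.standard_normal()) for k, v in nominal.items()}

def stochastically_varying(params, decorrelation_steps=24):
    """AR(1)-style evolution of the log-perturbations during the forecast."""
    phi = np.exp(-1.0 / decorrelation_steps)
    sigma = spread * np.sqrt(1 - phi**2)
    return {k: nominal[k] * np.exp(phi * np.log(v / nominal[k]) + sigma * rng.standard_normal())
            for k, v in params.items()}

member = fixed_perturbed_member()
for _ in range(5):                      # evolve the perturbations along the forecast
    member = stochastically_varying(member)
print(member)
```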
NASA Astrophysics Data System (ADS)
Salimun, Ester; Tangang, Fredolin; Juneng, Liew
2010-06-01
A comparative study has been conducted to investigate the skill of four convection parameterization schemes, namely the Anthes-Kuo (AK), the Betts-Miller (BM), the Kain-Fritsch (KF), and the Grell (GR) schemes in the numerical simulation of an extreme precipitation episode over eastern Peninsular Malaysia using the Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Fifth-Generation Mesoscale Model (MM5). The event is a commonly occurring westward-propagating tropical depression weather system during boreal winter, resulting from an interaction between a cold surge and the quasi-stationary Borneo vortex. The model setup and other physical parameterizations are identical in all experiments and hence any difference in the simulation performance could be associated with the cumulus parameterization scheme used. From the predicted rainfall and structure of the storm, it is clear that the BM scheme has an edge over the other schemes. The rainfall intensity and spatial distribution were reasonably well simulated compared to observations. The BM scheme was also better in resolving the horizontal and vertical structures of the storm. Most of the rainfall simulated by the BM simulation was of the convective type. The failure of other schemes (AK, GR and KF) in simulating the event may be attributed to the trigger function, closure assumption, and precipitation scheme. On the other hand, the appropriateness of the BM scheme for this episode may not be generalized for other episodes or convective environments.
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow. However, subgrid-scale parameterizations are used to estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). These have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) that unifies turbulence and moist convection components produces a better result than the other PBL schemes. For that reason, the TEMF scheme is chosen as the PBL scheme we optimized for Intel Many Integrated Core (MIC), which ushers in a new era of supercomputing speed, performance, and compatibility. It allows the developers to run code at trillions of calculations per second using the familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. Those optimizations included vectorization of the code to utilize vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
NASA Astrophysics Data System (ADS)
De Meij, A.; Vinuesa, J.-F.; Maupas, V.
2018-05-01
The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed for which the microphysics, cumulus parameterization schemes and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization and land surface models is evaluated. The sensitivity simulations showed that when the microphysics is changed from the Thompson scheme (or Single-Moment 6-class scheme) to the Morrison double-moment scheme, the relative bias improves from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, size and lifetime of the cloud droplets, whereas the single-moment schemes predict only the mass of fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.
NASA Astrophysics Data System (ADS)
Anurose, T. J.; Bala Subrahamanyam, D.
2014-06-01
The performance of a surface-layer parameterization scheme in a high-resolution regional model (HRM) is evaluated by comparing the model-simulated sensible heat flux (H) with the concurrent in situ measurements recorded at Thiruvananthapuram (8.5° N, 76.9° E), a coastal station in India. With a view to examining the role of atmospheric stability in conjunction with the roughness lengths in the determination of the heat exchange coefficient (CH) and H for varying meteorological conditions, the model simulations are repeated by assigning different values to the ratio of momentum and thermal roughness lengths (i.e. z0m/z0h) in three distinct configurations of the surface-layer scheme designed for the present study. These three configurations resulted in different behaviour under the varying meteorological conditions, which is attributed to the sensitivity of CH to the bulk Richardson number (RiB) under extremely unstable, near-neutral and stable stratification of the atmosphere.
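A bulk-transfer sketch of the quantities discussed above is given below: a neutral exchange coefficient built from the log law with separate momentum and thermal roughness lengths, modified by a crude Richardson-number-dependent stability function. The Louis-type stability function and all constants are assumptions for illustration; the HRM surface-layer scheme is not reproduced.

```python
import numpy as np

def bulk_exchange_coefficient(z, z0m, ratio_z0m_z0h, Rib, kappa=0.4):
    """Bulk heat-exchange coefficient C_H: neutral log-law value corrected by a
    simple Louis-type stability function of the bulk Richardson number Ri_B.
    Constants are illustrative, not those of any particular operational scheme.
    """
    z0h = z0m / ratio_z0m_z0h
    ch_neutral = kappa**2 / (np.log(z / z0m) * np.log(z / z0h))
    if Rib >= 0:                       # stable / near-neutral: damp exchange
        f = 1.0 / (1.0 + 10.0 * Rib)**2
    else:                              # unstable: enhance exchange
        f = 1.0 - 10.0 * Rib / (1.0 + 75.0 * ch_neutral * np.sqrt(-Rib * z / z0m))
    return ch_neutral * f

# Sensitivity of C_H to the assumed z0m/z0h ratio under unstable conditions
for ratio in (1.0, 10.0, 100.0):
    print(ratio, bulk_exchange_coefficient(z=10.0, z0m=0.01, ratio_z0m_z0h=ratio, Rib=-0.5))
```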
Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Park, Michael A.
2006-01-01
An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
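A minimal sketch of the elimination in generic discrete-adjoint notation (the splitting and symbols here are assumed for illustration and are not necessarily the paper's): with design variables D, parameterized surface grid X_s(D), volume mesh X defined implicitly by a mesh movement operator G(X, X_s) = 0, flow state Q satisfying R(Q, X) = 0, and objective f(Q, X),

```latex
\begin{aligned}
\left(\frac{\partial R}{\partial Q}\right)^{T}\Lambda_f &= -\left(\frac{\partial f}{\partial Q}\right)^{T},\\
\left(\frac{\partial G}{\partial X}\right)^{T}\Lambda_g &= -\left(\frac{\partial f}{\partial X}\right)^{T}
  - \left(\frac{\partial R}{\partial X}\right)^{T}\Lambda_f,\\
\frac{d f}{d D} &= \Lambda_g^{T}\,\frac{\partial G}{\partial X_s}\,\frac{\partial X_s}{\partial D}.
\end{aligned}
```

Under these assumptions the mesh movement linearization dX/dD is never formed explicitly: one extra adjoint solve (comparable in cost to a mesh movement) is followed by a matrix-vector product per design variable involving only surface-grid sensitivities, consistent with the cost scaling stated above.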
NASA Technical Reports Server (NTRS)
Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.
2015-01-01
We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and it retains some of the PBL schemes available in the earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to lack of observations. It is of interest though to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations, a non-local closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near surface temperature profiles, boundary layer heights, and wind obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.
NASA Astrophysics Data System (ADS)
Pytharoulis, I.; Karagiannidis, A. F.; Brikas, D.; Katsafados, P.; Papadopoulos, A.; Mavromatidis, E.; Kotsopoulos, S.; Karacostas, T. S.
2010-09-01
Contemporary atmospheric numerical models contain a large number of physical parameterization schemes in order to represent the various atmospheric processes that take place at sub-grid scales. The choice of the proper combination of such schemes is a challenging task for research and particularly for operational purposes. This choice becomes a very important decision in cases of high-impact weather in which the forecast errors and the concomitant societal impacts are expected to be large. Moreover, it is well known that one of the hardest tasks for numerical models is to predict precipitation with a high degree of accuracy. The use of complex and sophisticated schemes usually requires more computational time and resources, but it does not necessarily lead to better forecasts. The aim of this study is to investigate the sensitivity of the model-predicted precipitation to the microphysical and boundary layer parameterizations during extreme events. The nonhydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW Version 3.1.1) is utilized. It is a flexible, state-of-the-art numerical weather prediction system designed to operate in both research and operational mode at global and regional scales. Nine microphysical and two boundary layer schemes are combined in the sensitivity experiments. The nine microphysical schemes are: i) Lin, ii) WRF Single Moment 5-classes, iii) Ferrier new Eta, iv) WRF Single Moment 6-classes, v) Goddard, vi) New Thompson V3.1, vii) WRF Double Moment 5-classes, viii) WRF Double Moment 6-classes, ix) Morrison. The boundary layer is parameterized using the schemes of: i) Mellor-Yamada-Janjic (MYJ) and ii) Mellor-Yamada-Nakanishi-Niino (MYNN) level 2.5. The model is integrated at very high horizontal resolution (2 km x 2 km in the area of interest) utilizing 38 vertical levels. Three cases of high-impact weather in the Eastern Mediterranean, associated with strong synoptic-scale forcing, are employed in the numerical experiments. These events are characterized by strong precipitation with daily amounts exceeding 100 mm. For example, the case of 24 to 26 October 2009 was associated with floods in the eastern mainland of Greece. In Pieria (northern Greece), which was the most afflicted area, one individual perished in the overflowing Esonas river and significant damage was caused to both infrastructure and cultivations. Precipitation amounts of 347 mm in 3 days were measured at the station of Vrontou, Pieria (which is at an elevation of only 120 m). The model results are statistically analysed and compared to the available surface observations and satellite-derived precipitation data in order to identify the parameterizations (and their combinations) that provide the best representation of the spatiotemporal variability of precipitation in extreme conditions. Preliminary results indicate that the MYNN boundary layer parameterization outperforms that of MYJ. However, the best results are produced by the combination of the Ferrier new Eta microphysics with the MYJ scheme, which are the default schemes of the well-known and reliable ETA and WRF-NMM models. Similarly, good results are produced by the combination of the New Thompson V3.1 microphysics with the MYNN boundary layer scheme. On the other hand, the worst results (with mean absolute error up to about 150 mm/day) appear when the WRF Single Moment 5-classes scheme is used with MYJ. Finally, an effort is made to identify and analyze the main factors that are responsible for the aforementioned differences.
NASA Astrophysics Data System (ADS)
Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.
2016-12-01
Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
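For illustration, a minimal sketch (in Python) of how an ice-nucleation-active-site-density parameterization of this kind is typically applied: the number of ice-nucleating particles follows from the aerosol surface area and a temperature-dependent site density n_s(T). The exponential fit and its coefficients below are placeholders resembling published desert-dust fits, not the AIDA-derived scheme described in the abstract.

    import numpy as np

    def ns_dust(temperature_k, a=-0.517, b=8.934):
        # Ice-nucleation-active-site density n_s(T) in m^-2; exponential in
        # Celsius temperature.  Coefficients are illustrative placeholders.
        return np.exp(a * (temperature_k - 273.15) + b)

    def inp_per_volume(n_dust, radius_m, temperature_k):
        # Ice-nucleating particle concentration (m^-3): each dust particle
        # nucleates ice with probability 1 - exp(-n_s * surface_area).
        area = 4.0 * np.pi * radius_m**2
        return n_dust * (1.0 - np.exp(-ns_dust(temperature_k) * area))

    # Example: 100 dust particles per cm^3, radius 0.5 micron, at -25 degC
    print(inp_per_volume(100.0e6, 0.5e-6, 248.15))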
Evaluation of Surface Flux Parameterizations with Long-Term ARM Observations
Liu, Gang; Liu, Yangang; Endo, Satoshi
2013-02-01
Surface momentum, sensible heat, and latent heat fluxes are critical for atmospheric processes such as clouds and precipitation, and are parameterized in a variety of models ranging from cloud-resolving models to large-scale weather and climate models. However, direct evaluation of the parameterization schemes for these surface fluxes is rare due to limited observations. This study takes advantage of the long-term observations of surface fluxes collected at the Southern Great Plains site by the Department of Energy Atmospheric Radiation Measurement program to evaluate the six surface flux parameterization schemes commonly used in the Weather Research and Forecasting (WRF) model and three U.S. general circulation models (GCMs). The unprecedented 7-yr-long measurements by the eddy correlation (EC) and energy balance Bowen ratio (EBBR) methods permit statistical evaluation of all six parameterizations under a variety of stability conditions, diurnal cycles, and seasonal variations. The statistical analyses show that the momentum flux parameterization agrees best with the EC observations, followed by latent heat flux, sensible heat flux, and evaporation ratio/Bowen ratio. The overall performance of the parameterizations depends on atmospheric stability, being best under neutral stratification and deteriorating toward both more stable and more unstable conditions. Further diagnostic analysis reveals that in addition to the parameterization schemes themselves, the discrepancies between observed and parameterized sensible and latent heat fluxes may stem from inadequate use of input variables such as surface temperature, moisture availability, and roughness length. The results demonstrate the need for improving the land surface models and measurements of surface properties, which would permit the evaluation of full land surface models.
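As context for the schemes being evaluated, the sketch below shows the generic bulk-aerodynamic form that surface flux parameterizations of this type share: momentum, sensible heat, and latent heat fluxes computed from mean wind, temperature, and humidity differences via exchange coefficients. The fixed coefficients cd, ch, and ce stand in for the stability-dependent values a real surface-layer scheme would compute; all numbers are illustrative.

    import numpy as np

    RHO = 1.2      # air density, kg m^-3
    CP  = 1004.0   # specific heat of air, J kg^-1 K^-1
    LV  = 2.5e6    # latent heat of vaporization, J kg^-1

    def bulk_fluxes(u, t_air, t_sfc, q_air, q_sfc, cd=1.2e-3, ch=1.0e-3, ce=1.0e-3):
        # Generic bulk-aerodynamic surface fluxes with placeholder exchange
        # coefficients (a real scheme makes these functions of stability and
        # roughness length).
        tau = RHO * cd * u**2                        # momentum flux, N m^-2
        shf = RHO * CP * ch * u * (t_sfc - t_air)    # sensible heat flux, W m^-2
        lhf = RHO * LV * ce * u * (q_sfc - q_air)    # latent heat flux, W m^-2
        return tau, shf, lhf

    print(bulk_fluxes(u=5.0, t_air=295.0, t_sfc=300.0, q_air=0.010, q_sfc=0.014))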
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
NASA Astrophysics Data System (ADS)
Lamraoui, F.; Booth, J. F.; Naud, C. M.
2017-12-01
The representation of subgrid-scale processes of low-level marine clouds located in the post-cold-frontal region poses a serious challenge for climate models. More precisely, the boundary layer parameterizations are predominantly designed for individual regimes that evolve gradually over time and do not accommodate a cold front passage that can modify the boundary layer rapidly and substantially. Also, the microphysics schemes respond differently to the rapid development of the boundary layer produced by the boundary layer schemes, especially under unstable conditions. To improve the understanding of cloud physics in the post-cold-frontal region, the present study focuses on exploring the relationship between cloud properties, the local processes and large-scale conditions. In order to address these questions, we explore the WRF sensitivity to the interaction between various combinations of the boundary layer and microphysics parameterizations, including the Community Atmospheric Model version 5 (CAM5) physical package in a perturbed physics ensemble. Then, we evaluate these simulations against ground-based ARM observations over the Azores. The WRF-based simulations demonstrate particular sensitivities of the marine cold front passage and the associated post-cold-frontal clouds to the domain size, the resolution and the physical parameterizations. First, it is found that in multiple different case studies the model cannot generate the cold front passage when the domain size is larger than 3000 km2. Instead, the modeled cold front stalls, which shows the importance of properly capturing the synoptic scale conditions. The simulations also reveal a persistent delay in capturing the cold front passage and an underestimated duration of the post-cold-frontal conditions. Analysis of the perturbed physics ensemble shows that changing the microphysics scheme leads to larger differences in the modeled clouds than changing the boundary layer scheme. The in-cloud heating tendencies are analyzed to explain this sensitivity.
NASA Technical Reports Server (NTRS)
Ferrier, Brad S.; Tao, Wei-Kuo; Simpson, Joanne
1991-01-01
The basic features of a new and improved bulk-microphysical parameterization capable of simulating the hydrometeor structure of convective systems in all types of large-scale environments (with minimal adjustment of coefficients) are studied. Reflectivities simulated from the model are compared with radar observations of an intense midlatitude convective system. Simulated reflectivities obtained using the novel four-class ice scheme, together with the parameterized rain distribution at 105 min, are illustrated. Preliminary results indicate that this new ice scheme works efficiently in simulating midlatitude continental storms.
ARM - Midlatitude Continental Convective Clouds
Jensen, Mike; Bartholomew, Mary Jane; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-19
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital for improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have been traditionally used by modelers for evaluating and improving parameterization schemes.
ARM - Midlatitude Continental Convective Clouds (comstock-hvps)
Jensen, Mike; Comstock, Jennifer; Genio, Anthony Del; Giangrande, Scott; Kollias, Pavlos
2012-01-06
Convective processes play a critical role in the Earth's energy balance through the redistribution of heat and moisture in the atmosphere and their link to the hydrological cycle. Accurate representation of convective processes in numerical models is vital for improving current and future simulations of the Earth's climate system. Despite improvements in computing power, current operational weather and global climate models are unable to resolve the natural temporal and spatial scales important to convective processes and therefore must turn to parameterization schemes to represent these processes. In turn, parameterization schemes in cloud-resolving models need to be evaluated for their generality and application to a variety of atmospheric conditions. Data from field campaigns with appropriate forcing descriptors have been traditionally used by modelers for evaluating and improving parameterization schemes.
NASA Astrophysics Data System (ADS)
Zepka, G. D.; Pinto, O.
2010-12-01
The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best adjusts to lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the times of lightning occurrence, a comparison was made between the WRF grid point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built up to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as the choice of convective scheme. The Betts-Miller-Janjic parameterization generally showed the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large. This fact can be attributed to the similar main assumptions used by these schemes, which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Differently from the traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the more reliable convective scheme was Kain-Fritsch, since it provided a more consistent distribution of the CAPE fields with respect to satellite images and lightning data.
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights based on turbulent kinetic energy (TKE) profiles, while the others (YSU, ACM, GFS, CAM, TEMF) use bulk Richardson criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, Boulac) substantially underestimate convective or residual PBL heights from noon toward evening, while the others (ACM, CAM, YSU) capture the observed diurnal cycle well, except for the GFS, which shows systematic overestimation. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. Lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while higher PBL tops by the GFS correspond to warmer surface temperatures. Effects of PBL parameterizations on CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil moisture than other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, having substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S. (Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), where the CAM is generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as obvious outliers of overall performance in representing precipitation, surface air temperature or PBL height variations. Their poor performance may result from deficiencies in physical formulations, dependences on applicable scales, or problematic numerical implementations, requiring future detailed investigation to isolate the actual cause.
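To make the distinction between the two families of height diagnoses concrete, here is a minimal sketch of the bulk-Richardson-number method mentioned above: the PBL top is taken as the lowest level where the bulk Richardson number computed from the surface exceeds a critical value. The critical value of 0.25 and the sample profile are illustrative choices, not the settings of any particular scheme.

    import numpy as np

    G = 9.81

    def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
        # Bulk Richardson number relative to the lowest level; the PBL height
        # is the first level where it exceeds ri_crit.
        du2 = np.maximum((u - u[0])**2 + (v - v[0])**2, 1e-6)
        ri = G * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * du2)
        above = np.where(ri > ri_crit)[0]
        return z[above[0]] if above.size else z[-1]

    z       = np.array([10., 100., 300., 600., 1000., 1500., 2000.])    # m
    theta_v = np.array([300., 300.1, 300.2, 300.3, 301.5, 304., 307.])  # K
    u       = np.array([2., 4., 6., 7., 8., 9., 10.])                   # m/s
    v       = np.zeros_like(u)
    print(pbl_height_bulk_ri(z, theta_v, u, v))   # -> 1000.0 m for this profile

TKE-based schemes, by contrast, typically search for the level where the turbulent kinetic energy falls below a small threshold, which is one reason the two families can diverge in residual layers.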
NASA Astrophysics Data System (ADS)
Kan, Yu; Chen, Bo; Shen, Tao; Liu, Chaoshun; Qiao, Fengxue
2017-09-01
It has been a longstanding problem for current weather/climate models to accurately predict summer heavy precipitation over the Yangtze-Huaihe Region (YHR), the key flood-prone area in China with a dense population and developed economy. Large uncertainty has been identified with model deficiencies in representing precipitation processes such as microphysics and cumulus parameterizations. This study focuses on examining the effects of microphysics parameterization on the simulation of different types of heavy precipitation over the YHR, taking into account two different cumulus schemes. All regional persistent heavy precipitation events over the YHR during 2008-2012 are classified into three types according to their weather patterns: type I associated with a stationary front, type II directly associated with a typhoon or its spiral rain band, and type III associated with strong convection along the edge of the Subtropical High. Sixteen groups of experiments are conducted for three selected cases of different types and a local short-time rainstorm in Shanghai, using the WRF model with eight microphysics and two cumulus schemes. Results show that microphysics parameterization has large but different impacts on the location and intensity of regional heavy precipitation centers. The Ferrier (microphysics)-BMJ (cumulus) and Thompson (microphysics)-KF (cumulus) combinations most realistically simulate the rain bands, including their center location and intensity, for types I and II, respectively. For type III, the Lin microphysics scheme shows advantages in regional persistent cases over the YHR, while the WSM5 microphysics scheme is better in the local short-term case, both with the BMJ cumulus scheme.
Accuracy of parameterized proton range models; A comparison
NASA Astrophysics Data System (ADS)
Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.
2018-03-01
An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterizations of proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
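For a concrete sense of the two approaches being compared, the sketch below contrasts the Bragg-Kleeman power-law form R = alpha * E^p with a cubic-spline interpolation of a range-energy table. The alpha and p values are typical textbook magnitudes for protons in water, and the coarse table is generated from the same power law purely for illustration; neither reproduces the exact models evaluated in the study.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def bragg_kleeman_range(energy_mev, alpha=0.0022, p=1.77):
        # Bragg-Kleeman rule: range in cm of water for a proton of the given
        # energy in MeV; alpha and p are illustrative fit constants.
        return alpha * energy_mev**p

    # Spline interpolation of a (coarse, illustrative) range-energy table
    energy_table = np.array([10., 50., 100., 150., 200., 250.])   # MeV
    range_table = bragg_kleeman_range(energy_table)                # stand-in data
    spline = CubicSpline(energy_table, range_table)

    for e in (70.0, 120.0, 230.0):
        print(e, bragg_kleeman_range(e), float(spline(e)))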
NASA Astrophysics Data System (ADS)
Iakshina, D. F.; Golubeva, E. N.
2017-11-01
The vertical distribution of the hydrological characteristics in the upper ocean layer is mostly formed under the influence of turbulent and convective mixing, which are not resolved in the system of equations for the large-scale ocean. Therefore it is necessary to include additional parameterizations of these processes in the numerical models. In this paper we carry out a comparative analysis of different vertical mixing parameterizations in simulations of the climatic variability of the Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed at the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Science) and the GOTM package (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used to determine the surface fluxes related to ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: 1) an integration scheme based on the Richardson criterion (RI); 2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); 3) a first-order TKE scheme with Schumann and Gerz coefficients (TKE-1); 4) the KPP scheme (KPP). In addition we investigated some important characteristics of the Arctic Ocean state, including the intensity of Atlantic water inflow, the ice cover state and the fresh water content in the Beaufort Sea.
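As an illustration of the simplest ("RI") class of schemes compared above, a Richardson-number-dependent vertical viscosity can be written in the Pacanowski-Philander-like form sketched below; the background value, the maximum value, and the shape constants are textbook-style placeholders rather than the settings of the cited model.

    import numpy as np

    def vertical_viscosity_ri(ri, nu0=50.0e-4, nu_b=1.0e-4, alpha=5.0, n=2):
        # Vertical viscosity (m^2 s^-1) decreasing with the local gradient
        # Richardson number; nu_b is a weak background mixing floor.
        ri = np.maximum(ri, 0.0)
        return nu0 / (1.0 + alpha * ri)**n + nu_b

    for ri in (0.0, 0.2, 1.0, 10.0):
        print(ri, vertical_viscosity_ri(ri))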
NASA Astrophysics Data System (ADS)
Alapaty, K.; Zhang, G. J.; Song, X.; Kain, J. S.; Herwehe, J. A.
2012-12-01
Short-lived pollutants such as aerosols play an important role in modulating not only the radiative balance but also cloud microphysical properties and precipitation rates. In the past, to understand the interactions of aerosols with clouds, several cloud-resolving modeling studies were conducted. These studies indicated that in the presence of anthropogenic aerosols, single-phase deep convection precipitation is reduced or suppressed. On the other hand, anthropogenic aerosol pollution led to enhanced precipitation for mixed-phase deep convective clouds. To date, there have not been many efforts to incorporate such aerosol indirect effects (AIE) in mesoscale models or global models that use parameterization schemes for deep convection. Thus, the objective of this work is to implement a diagnostic cloud microphysical scheme directly into a deep convection parameterization, thereby facilitating aerosol indirect effects in the WRF-CMAQ integrated modeling system. Major research issues addressed in this study are: What is the sensitivity of a deep convection scheme to cloud microphysical processes represented by a bulk double-moment scheme? How close are the simulated cloud water paths to observations? Does increased aerosol pollution lead to increased precipitation for mixed-phase clouds? These research questions are addressed by performing several WRF simulations using the Kain-Fritsch convection parameterization and a diagnostic cloud microphysical scheme. In the first set of simulations (control simulations) the WRF model is used to simulate two scenarios of deep convection over the continental U.S. during two summer periods at 36 km grid resolution. In the second set, these simulations are repeated after incorporating a diagnostic cloud microphysical scheme to study the impact of including cloud microphysical processes. Finally, in the third set, aerosol concentrations simulated by the CMAQ modeling system are supplied to the embedded cloud microphysical scheme to study the impacts of aerosol concentrations on precipitation and radiation fields. Observations available from the ARM Microbase data, the SURFRAD network, GOES imagery, and other reanalysis and measurements will be used to analyze the impacts of a cloud microphysical scheme and aerosol concentrations on parameterized convection.
A ubiquitous ice size bias in simulations of tropical deep convection
NASA Astrophysics Data System (ADS)
Stanford, McKenna W.; Varble, Adam; Zipser, Ed; Strapp, J. Walter; Leroy, Delphine; Schwarzenboeck, Alfons; Potts, Rodney; Protat, Alain
2017-08-01
The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) joint field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions (PSDs), and vertical velocity (w) in high ice water content regions of mature and decaying tropical mesoscale convective systems (MCSs). The resulting dataset is used here to explore causes of the commonly documented high bias in radar reflectivity within cloud-resolving simulations of deep convection. This bias has been linked to overly strong simulated convective updrafts lofting excessive condensate mass but is also modulated by parameterizations of hydrometeor size distributions, single particle properties, species separation, and microphysical processes. Observations are compared with three Weather Research and Forecasting model simulations of an observed MCS using different microphysics parameterizations while controlling for w, TWC, and temperature. Two popular bulk microphysics schemes (Thompson and Morrison) and one bin microphysics scheme (fast spectral bin microphysics) are compared. For temperatures between -10 and -40 °C and TWC > 1 g m-3, all microphysics schemes produce median mass diameters (MMDs) that are generally larger than observed, and the precipitating ice species that controls this size bias varies by scheme, temperature, and w. Despite a much greater number of samples, all simulations fail to reproduce observed high-TWC conditions ( > 2 g m-3) between -20 and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes greater than 1 mm in diameter. Although more mass is distributed to large particle sizes relative to those observed across all schemes when controlling for temperature, w, and TWC, differences with observations are significantly variable between the schemes tested. As a result, this bias is hypothesized to partly result from errors in parameterized hydrometeor PSD and single particle properties, but because it is present in all schemes, it may also partly result from errors in parameterized microphysical processes present in all schemes. Because of these ubiquitous ice size biases, the frequently used microphysical parameterizations evaluated in this study inherently produce a high bias in convective reflectivity for a wide range of temperatures, vertical velocities, and TWCs.
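Since the bias discussed above is expressed in terms of median mass diameters, the short sketch below shows how an MMD is typically computed from a binned particle size distribution: the diameter below which half of the condensed mass lies. Spherical solid-ice mass is assumed here purely for illustration; actual retrievals and model schemes use mass-dimension relationships.

    import numpy as np

    def median_mass_diameter(diameters, number_conc, density=917.0):
        # Mass per bin assuming spheres of solid ice (illustrative only),
        # then the diameter at which the cumulative mass reaches 50%.
        mass_per_bin = number_conc * density * np.pi / 6.0 * diameters**3
        cum = np.cumsum(mass_per_bin)
        return np.interp(0.5 * cum[-1], cum, diameters)

    d = np.array([0.1e-3, 0.3e-3, 0.6e-3, 1.0e-3, 2.0e-3])    # bin diameters, m
    n = np.array([1.0e5, 3.0e4, 8.0e3, 1.0e3, 50.0])           # concentrations, m^-3
    print(median_mass_diameter(d, n))                           # ~0.5e-3 m for this distribution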
Numerical Study of the Role of Shallow Convection in Moisture Transport and Climate
NASA Technical Reports Server (NTRS)
Seaman, Nelson L.; Stauffer, David R.; Munoz, Ricardo C.
2001-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins of the Southern Great Plains (SGP) using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. At the beginning of the study, it was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having high-quality parameterizations for the key physical processes controlling the water cycle. These included a detailed land-surface parameterization (the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) sub-model of Wetzel and Boone), an advanced boundary-layer parameterization (the 1.5-order turbulent kinetic energy (TKE) predictive scheme of Shafran et al.), and a more complete shallow convection parameterization (the hybrid-closure scheme of Deng et al.) than are available in most current models. PLACE is a product of researchers working at NASA's Goddard Space Flight Center in Greenbelt, MD. The TKE and shallow-convection schemes are the result of model development at Penn State. The long-range goal is to develop an integrated suite of physical sub-models that can be used for regional and perhaps global climate studies of the water budget. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the SGP. These schemes have been tested extensively through the course of this study and the latter two have been improved significantly as a consequence.
Inclusion of Solar Elevation Angle in Land Surface Albedo Parameterization Over Bare Soil Surface.
Zheng, Zhiyuan; Wei, Zhigang; Wen, Zhiping; Dong, Wenjie; Li, Zhenchao; Wen, Xiaohang; Zhu, Xian; Ji, Dong; Chen, Chen; Yan, Dongdong
2017-12-01
Land surface albedo is a significant parameter for maintaining the surface energy balance. Accurate parameterization of bare soil surface albedo is also important for developing land surface process models that reflect the diurnal variation characteristics and mechanisms of solar spectral radiation albedo over bare soil surfaces, and for understanding the relationships between climate factors and spectral radiation albedo. Using a data set of field observations, we conducted experiments to analyze the variation characteristics of land surface solar spectral radiation and the corresponding albedo over a typical Gobi bare soil underlying surface and to investigate the relationships between the land surface solar spectral radiation albedo, solar elevation angle, and soil moisture. Based on simultaneous measurements of solar elevation angle and soil moisture, we propose a new two-factor parameterization scheme for spectral radiation albedo over bare soil underlying surfaces. The results of numerical simulation experiments show that the new parameterization scheme can more accurately depict the diurnal variation characteristics of bare soil surface albedo than the previous schemes. Solar elevation angle is one of the most important factors for parameterizing bare soil surface albedo and must be considered in the parameterization scheme, especially in arid and semiarid areas with low soil moisture content. This study reveals the characteristics and mechanism of the diurnal variation of bare soil surface solar spectral radiation albedo and is helpful in developing land surface process models, weather models, and climate models.
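To show what a two-factor scheme of this kind can look like in practice, the sketch below combines an exponential dependence on solar elevation angle with a linear darkening for increasing soil moisture. The functional form and every coefficient are assumptions chosen for illustration; they are not the fitted values of the proposed parameterization.

    import numpy as np

    def bare_soil_albedo(solar_elev_deg, soil_moisture,
                         a_dry=0.30, b=0.10, c=0.35, k=8.0):
        # Illustrative two-factor bare-soil albedo: higher at low solar
        # elevation, lower for wetter soil.  All constants are hypothetical.
        elev = np.radians(solar_elev_deg)
        angular_term = 1.0 + b * np.exp(-k * elev / (0.5 * np.pi))
        moisture_term = a_dry - c * np.minimum(soil_moisture, 0.4)
        return np.clip(angular_term * moisture_term, 0.05, 0.6)

    for h in (10.0, 40.0, 70.0):
        print(h, bare_soil_albedo(h, soil_moisture=0.08))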
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 microns, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
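A minimal sketch of the geometric-optics starting point described above: the extinction cross section is twice the projected area, and the single-scattering albedo can be related to an absorption size parameter built from the photon path through the crystal. The exponential form and its constant below are stand-ins for the exponential/lognormal/polynomial fits of the actual scheme.

    import numpy as np

    def extinction_cross_section(projected_area):
        # Geometric-optics limit used in the scheme: sigma_e = 2 * A_projected.
        return 2.0 * projected_area

    def single_scattering_albedo(volume, projected_area, m_imag, wavelength):
        # Illustrative dependence of omega on an absorption size parameter
        # ~ imaginary refractive index * effective path / wavelength; the
        # functional form and coefficient are assumptions, not the fitted scheme.
        d_eff = volume / projected_area                   # effective photon path, m
        x_abs = 4.0 * np.pi * m_imag * d_eff / wavelength
        return 0.5 * (1.0 + np.exp(-2.0 * x_abs))         # 1 if non-absorbing, -> 0.5 when opaque

    vol, area = 1.0e-13, 5.0e-9    # m^3 and m^2, roughly a ~100-micron crystal
    print(extinction_cross_section(area))
    print(single_scattering_albedo(vol, area, m_imag=5.0e-4, wavelength=2.2e-6))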
Numerical simulation and analysis of the April 2013 Chicago floods
Campos, Edwin; Wang, Jiali
2015-09-08
The weather event associated with the record Chicago floods of April 2013 is investigated using the Weather Research and Forecasting (WRF) model. Observations at Argonne National Laboratory and multi-sensor (weather radar and rain gauge) precipitation data from the National Weather Service were employed to evaluate the model's performance. The WRF model captured the synoptic-scale atmospheric features well, but the simulated 24-h accumulated precipitation and short-period temporal evolution of precipitation over the heavy-rain region were less successful. To investigate the potential reasons for the model bias, four supplementary sensitivity experiments using various microphysics schemes and cumulus parameterizations were designed. Of the five tested parameterizations, the WRF Single-Moment 6-class (WSM6) graupel scheme and Kain-Fritsch (KF) cumulus parameterization outperformed the others, such as the Grell-Dévényi (GD) cumulus parameterization, which underestimated the precipitation by 30–50% on a regional-average scale. Morrison microphysics and KF outperformed the others for the spatial patterns of 24-h accumulated precipitation. The spatial correlation between observations and Morrison-KF was 0.45, higher than those for the other simulations. All of the simulations underestimated the precipitation over northeastern Illinois (especially at Argonne) during 0400–0800 UTC 18 April because of weak ascending motion or limited moisture. All of the simulations except WSM6-GD also underestimated the precipitation during 1200–1600 UTC 18 April because of weak southerly flow.
Qian, Yun; Yan, Huiping; Berg, Larry K.; ...
2016-10-28
Accuracy of turbulence parameterization in representing Planetary Boundary Layer (PBL) processes in climate models is critical for predicting the initiation and development of clouds, air quality issues, and underlying surface-atmosphere-cloud interactions. In this study, we 1) evaluate WRF model-simulated spatial patterns of precipitation and surface fluxes, as well as vertical profiles of potential temperature, humidity, moist static energy and moisture tendency terms as simulated by WRF at various spatial resolutions and with PBL, surface layer and shallow convection schemes against measurements, 2) identify model biases by examining the moisture tendency terms contributed by PBL and convection processes through nudging experiments, and 3) evaluate the dependence of modeled surface latent heat (LH) fluxes on PBL and surface layer schemes over the tropical ocean. The results show that PBL and surface parameterizations have surprisingly large impacts on precipitation, convection initiation and surface moisture fluxes over tropical oceans. All of the parameterizations tested tend to overpredict moisture in the PBL and free atmosphere, and consequently result in larger moist static energy and precipitation. Moisture nudging tends to suppress the initiation of convection and reduces the excess precipitation. The reduction in precipitation bias in turn reduces the surface wind and LH flux biases, which suggests that the model drifts at least partly because of a positive feedback between precipitation and surface fluxes. The updated shallow convection scheme KF-CuP tends to suppress the initiation and development of deep convection, consequently decreasing precipitation. The Eta surface layer scheme predicts more reasonable LH fluxes and LH-Wind Speed relationship than the MM5 scheme, especially when coupled with the MYJ scheme. By examining various parameterization schemes in WRF, we identify sources of biases and weaknesses of current PBL, surface layer and shallow convection schemes in reproducing PBL processes, the initiation of convection and intra-seasonal variability of precipitation.
NASA Technical Reports Server (NTRS)
Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.
2012-01-01
The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T approximately 260 K, irrespective of the ice nucleation parameterization used. Comparisons of simulated ice water path to available satellite-derived observations were also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.
NASA Astrophysics Data System (ADS)
Lv, M.; Li, C.; Lu, H.; Yang, K.; Chen, Y.
2017-12-01
The parameterization of vegetation cover fraction (VCF) is an important component of land surface models. This paper investigates the impacts of three VCF parameterization schemes on land surface temperature (LST) simulation by the Common Land Model (CoLM) over the Tibetan Plateau (TP). The first scheme (hereafter CTL) is a simple land cover (LC) based method; the second is based on remote sensing observations (hereafter RNVCF), in which multi-year climatological VCFs are derived from the Moderate-resolution Imaging Spectroradiometer (MODIS) NDVI (Normalized Difference Vegetation Index); the third derives VCF at every model time step from the LAI simulated by the land surface model and a clumping index (hereafter SMVCF). LST and soil temperature simulated by CoLM with the three VCF parameterization schemes were evaluated using satellite LST observations and in situ soil temperature observations, respectively, during the period 2010 to 2013. The comparison against MODIS Aqua LST indicates that (1) CTL produces large biases in all four seasons in the early afternoon (about 13:30, local solar time), with the mean bias in spring reaching 12.14 K; (2) RNVCF and SMVCF reduce the mean bias significantly, especially in spring, where the reduction is about 6.5 K. Surface soil temperature observed at 5 cm depth from three soil moisture and temperature monitoring networks is also employed to assess the skill of the three VCF schemes. The three networks, crossing the TP from west to east, have different climate and vegetation conditions. In the Ngari network, located in the western TP with an arid climate, there are no obvious differences among the three schemes. In the Naqu network, located in the central TP with a semi-arid climate, CTL shows a severe overestimation (12.1 K), but this overestimation is reduced by 79% by RNVCF and by 87% by SMVCF. In the third, humid network (Maqu in the eastern TP), CoLM performs similarly to Naqu. However, at both the Naqu and Maqu networks, RNVCF shows significant overestimation in summer, perhaps because RNVCF ignores the growing characteristics of the vegetation (mainly grass) in these two regions. Our results demonstrate that VCF schemes have a significant influence on LSM performance and indicate that it is important to consider vegetation growing characteristics in VCF schemes for different LCs.
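For readers unfamiliar with NDVI-based cover fractions, the sketch below shows the common linear-scaling construction that a climatological scheme like RNVCF builds on: VCF is obtained by scaling NDVI between a bare-soil value and a full-canopy value. The two thresholds are placeholders, not the values used to build the MODIS climatology in the paper.

    import numpy as np

    def vcf_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_full=0.80):
        # Linear scaling of NDVI between bare-soil and full-canopy endpoints,
        # clipped to [0, 1]; the endpoint values are illustrative assumptions.
        vcf = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
        return np.clip(vcf, 0.0, 1.0)

    print(vcf_from_ndvi(np.array([0.02, 0.25, 0.55, 0.85])))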
Midgley, S M
2004-01-21
A novel parameterization of x-ray interaction cross-sections is developed, and employed to describe the x-ray linear attenuation coefficient and mass energy absorption coefficient for both elements and mixtures. The new parameterization scheme addresses the Z-dependence of elemental cross-sections (per electron) using a simple function of atomic number, Z. This obviates the need for a complicated mathematical formalism. Energy-dependent coefficients describe the Z-direction curvature of the cross-sections. The composition-dependent quantities are the electron density and statistical moments describing the elemental distribution. We show that it is possible to describe elemental cross-sections for the entire periodic table and at energies above the K-edge (from 6 keV to 125 MeV), with an accuracy of better than 2% using a parameterization containing not more than five coefficients. For the biologically important elements 1 ≤ Z ≤ 20, and the energy range 30-150 keV, the parameterization utilizes four coefficients. At higher energies, the parameterization uses fewer coefficients, with only two coefficients needed at megavoltage energies.
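As a rough sketch of how a parameterization of this kind is used, the per-electron cross-section can be written as a low-order function of Z with energy-dependent coefficients, and a mixture's attenuation then follows from its electron density and the moments of its elemental distribution. The polynomial form, the coefficient values, and the example inputs below are all hypothetical illustrations, not the fitted scheme of the paper.

    import numpy as np

    def electronic_cross_section(Z, coeffs):
        # Per-electron cross-section as a low-order polynomial in atomic number;
        # coeffs play the role of the energy-dependent fit coefficients a_k(E).
        return sum(a * Z**k for k, a in enumerate(coeffs))

    def mixture_attenuation(electron_density, fractions, Zs, coeffs):
        # Linear attenuation coefficient (cm^-1) of a mixture from its electron
        # density (cm^-3) and electron-fraction-weighted moments of Z.
        moments = [np.dot(fractions, np.power(Zs, k)) for k in range(len(coeffs))]
        return electron_density * np.dot(coeffs, moments)

    coeffs = [5.0e-25, 1.0e-26, 2.0e-27]    # hypothetical a_k(E) at one energy, cm^2
    fractions = np.array([0.2, 0.8])        # electron fractions of H and O in water
    Zs = np.array([1.0, 8.0])
    print(mixture_attenuation(3.34e23, fractions, Zs, coeffs))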
NASA Astrophysics Data System (ADS)
Niu, Hailin; Zhang, Xiaotong; Liu, Qiang; Feng, Youbin; Li, Xiuhong; Zhang, Jialin; Cai, Erli
2015-12-01
The ocean surface albedo (OSA) is a deciding factor in the estimation of ocean net surface shortwave radiation (ONSSR). Several OSA schemes have been proposed over the years, but there is no consensus on the best OSA scheme for estimating the ONSSR. Based on an analysis of existing OSA parameterizations, including those of Briegleb et al. (B), Taylor et al. (T), Hansen et al. (H), Jin et al. (J), Preisendorfer and Mobley (PM86), and Feng (F), this study discusses how the choice of OSA scheme affects ONSSR estimation under actual downward shortwave radiation (DSR) conditions. We then discuss the necessity and applicability of integrating the more complicated OSA schemes into climate models. It is concluded that the SZA and the wind speed are the two most significant factors affecting broadband OSA; consequently, the different OSA parameterizations diverge strongly in regions of high latitude and strong winds. The OSA schemes can lead to differences in the ONSSR results of the order of 20 W m-2. Taylor's scheme gives the best estimate, with Feng's result just behind Taylor's. However, the accuracy of the estimated instantaneous OSA changes with local time. Jin's scheme generally performs best at noon and in the afternoon, and PM86's is the best in the morning, which indicates that the more complicated OSA schemes capture the temporal variation of OSA better than the simple ones.
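As one concrete example of the simpler zenith-angle-only schemes in this comparison, the direct-beam clear-sky form commonly attributed to Taylor et al. is sketched below; the coefficients are quoted from the literature and should be treated as illustrative, and wind-speed and diffuse-fraction dependences (central to the more complex schemes above) are deliberately omitted.

    import numpy as np

    def osa_taylor(cos_sza):
        # Direct-beam clear-sky ocean surface albedo as a function of the
        # cosine of the solar zenith angle (coefficients as commonly quoted).
        return 0.037 / (1.1 * cos_sza**1.4 + 0.15)

    for sza_deg in (0.0, 30.0, 60.0, 80.0):
        mu = np.cos(np.radians(sza_deg))
        print(sza_deg, round(osa_taylor(mu), 3))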
Sea breeze: Induced mesoscale systems and severe weather
NASA Technical Reports Server (NTRS)
Nicholls, M. E.; Pielke, R. A.; Cotton, W. R.
1990-01-01
Sea-breeze-deep convective interactions over the Florida peninsula were investigated using a cloud/mesoscale numerical model. The objective was to gain a better understanding of sea-breeze and deep convective interactions over the Florida peninsula using a high resolution convectively explicit model and to use these results to evaluate convective parameterization schemes. A 3-D numerical investigation of Florida convection was completed. The Kuo and Fritsch-Chappell parameterization schemes are summarized and evaluated.
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1985-01-01
Methods being used to increase the horizontal and vertical resolution and to implement more sophisticated parameterization schemes for general circulation models (GCM) run on newer, more powerful computers are described. Attention is focused on the NASA-Goddard Laboratory for Atmospheres fourth-order GCM. A new planetary boundary layer (PBL) model has been developed which features explicit resolution of two or more layers. Numerical models are presented for parameterizing the turbulent vertical heat, momentum and moisture fluxes at the earth's surface and between the layers in the PBL model. An extended Monin-Obukhov similarity scheme is applied to express the relationships between the lowest levels of the GCM and the surface fluxes. On-line weather prediction experiments are to be run to test the effects of the higher resolution thereby obtained for dynamic atmospheric processes.
Intercomparison of land-surface parameterizations launched
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Dickinson, R. E.
One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating land surface process parameterizations used in climate models. There is not, necessarily, a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the available (and proposed) schemes' performance is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single "best" scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.
Limitations of one-dimensional mesoscale PBL parameterizations in reproducing mountain-wave flows
Munoz-Esparza, Domingo; Sauer, Jeremy A.; Linn, Rodman R.; ...
2015-12-08
In this study, mesoscale models are considered to be the state of the art in modeling mountain-wave flows. Herein, we investigate the role and accuracy of planetary boundary layer (PBL) parameterizations in handling the interaction between large-scale mountain waves and the atmospheric boundary layer. To that end, we use recent large-eddy simulation (LES) results of mountain waves over a symmetric two-dimensional bell-shaped hill [Sauer et al., J. Atmos. Sci. (2015)], and compare them to four commonly used PBL schemes. We find that one-dimensional PBL parameterizations produce reasonable agreement with the LES results in terms of vertical wavelength, amplitude of velocity and turbulent kinetic energy distribution in the downhill shooting flow region. However, the assumption of horizontal homogeneity in PBL parameterizations does not hold in the context of these complex flow configurations. This inappropriate modeling assumption results in a vertical wavelength shift producing errors of ≈ 10 m s–1 at downstream locations due to the presence of a coherent trapped lee wave that does not mix with the atmospheric boundary layer. In contrast, horizontally-integrated momentum flux derived from these PBL schemes displays a realistic pattern. Therefore results from mesoscale models using ensembles of one-dimensional PBL schemes can still potentially be used to parameterize drag effects in general circulation models. Nonetheless, three-dimensional PBL schemes must be developed in order for mesoscale models to accurately represent complex-terrain and other types of flows where one-dimensional PBL assumptions are violated.
Performance of multi-physics ensembles in convective precipitation events over northeastern Spain
NASA Astrophysics Data System (ADS)
García-Ortega, E.; Lorenzana, J.; Merino, A.; Fernández-González, S.; López, L.; Sánchez, J. L.
2017-07-01
Convective precipitation with hail greatly affects southwestern Europe, causing major economic losses. The local character of this meteorological phenomenon is a serious obstacle to forecasting. Therefore, the development of reliable short-term forecasts constitutes an essential challenge to minimizing and managing risks. However, deterministic outcomes are affected by different uncertainty sources, such as physics parameterizations. This study examines the performance of different combinations of physics schemes of the Weather Research and Forecasting model to describe the spatial distribution of precipitation in convective environments with hail falls. Two 30-member multi-physics ensembles, with two and three domains of maximum resolution 9 and 3 km each, were designed using various combinations of cumulus, microphysics and radiation schemes. The experiment was evaluated for 10 convective precipitation days with hail over 2005-2010 in northeastern Spain. Different indexes were used to evaluate the ability of each ensemble member to capture the precipitation patterns, which were compared with observations of a rain-gauge network. A standardized metric was constructed to identify optimal performers. Results show interesting differences between the two ensembles. In two-domain simulations, the selection of cumulus parameterizations was crucial, with the Betts-Miller-Janjic scheme the best. In contrast, the Kain-Fritsch cumulus scheme gave the poorest results, suggesting that it should not be used in the study area. Nevertheless, in three-domain simulations, the cumulus schemes used in coarser domains were not critical and the best results depended mainly on microphysics schemes. The best performance was shown by the Morrison, New Thompson and Goddard microphysics.
The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.
2008-01-01
Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two components of simulated convection on a large domain: sensitivities of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and to determine if these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.
Parameterization of turbulence and the planetary boundary layer in the GLA Fourth Order GCM
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1985-01-01
A new scheme has been developed to model the planetary boundary layer in the GLAS Fourth Order GCM through explicit resolution of its vertical structure into two or more vertical layers. This involves packing the lowest layers of the GCM close to the ground and developing new parameterization schemes that can express the turbulent vertical fluxes of heat, momentum and moisture at the earth's surface and between the layers contained within the PBL region. Offline experiments indicate that the combination of the modified level 2.5 second-order turbulent closure scheme and the 'extended surface layer' similarity scheme should work well to simulate the behavior of the turbulent PBL even at the coarsest vertical resolution with which such schemes will conceivably be used in the GLA Fourth Order GCM.
NASA Astrophysics Data System (ADS)
Campbell, Lucy J.; Shepherd, Theodore G.
2005-12-01
Parameterization schemes for the drag due to atmospheric gravity waves are discussed and compared in the context of a simple one-dimensional model of the quasi-biennial oscillation (QBO). A number of fundamental issues are examined in detail, with the goal of providing a better understanding of the mechanism by which gravity wave drag can produce an equatorial zonal wind oscillation. The gravity wave driven QBOs are compared with those obtained from a parameterization of equatorial planetary waves. In all gravity wave cases, it is seen that the inclusion of vertical diffusion is crucial for the descent of the shear zones and the development of the QBO. An important difference between the schemes for the two types of waves is that in the case of equatorial planetary waves, vertical diffusion is needed only at the lowest levels, while for the gravity wave drag schemes it must be included at all levels. The question of whether there is downward propagation of influence in the simulated QBOs is addressed. In the gravity wave drag schemes, the evolution of the wind at a given level depends on the wind above, as well as on the wind below. This is in contrast to the parameterization for the equatorial planetary waves in which there is downward propagation of phase only. The stability of a zero-wind initial state is examined, and it is determined that a small perturbation to such a state will amplify with time to the extent that a zonal wind oscillation is permitted.
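In the one-dimensional framework referred to above, the zonal-mean zonal wind is typically driven by the divergence of the parameterized wave momentum flux together with vertical diffusion; a generic form of the governing equation (notation assumed here for illustration, not taken from the paper) is

    \frac{\partial \bar{u}}{\partial t} = -\frac{1}{\rho_0}\frac{\partial}{\partial z}\sum_i F_i(\bar{u}, z) + K\,\frac{\partial^2 \bar{u}}{\partial z^2},

where F_i is the vertical flux of zonal momentum carried by wave i (gravity waves or equatorial planetary waves, depending on the scheme) and K is the vertical diffusivity whose inclusion the study finds to be crucial for the descent of the shear zones.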
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme to represent three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
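As a concrete illustration of the entrainment/detrainment formulation and the surface mass-flux closure summarized above, the following sketch expresses the dry-updraft exchange rates as proportional to buoyancy over the square of vertical velocity, and the surface mass flux as proportional to w*. The proportionality constants and limiter values are placeholders, not the published coefficients.

def dry_updraft_exchange(theta_v_up, theta_v_env, w_up, c_eps=0.55, c_delta=0.55):
    # Entrainment/detrainment rates (1/m) proportional to buoyancy / w^2 in the dry updraft.
    g, theta_ref = 9.81, 300.0
    buoyancy = g * (theta_v_up - theta_v_env) / theta_ref      # m/s^2
    w2 = max(w_up ** 2, 1.0e-4)                                # avoid division by zero
    entrainment = c_eps * max(buoyancy, 0.0) / w2              # entrain while positively buoyant
    detrainment = c_delta * max(-buoyancy, 0.0) / w2           # detrain when negatively buoyant
    return entrainment, detrainment

def surface_mass_flux(w_star, rho=1.2, c_m=0.065):
    # Closure: updraft mass flux near the surface proportional to the convective velocity scale w*.
    return c_m * rho * w_star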
Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation
NASA Astrophysics Data System (ADS)
Sandu, Irina; Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton
2016-03-01
A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model-dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model-dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low-level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter both in 10 day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10 day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales.
NASA Technical Reports Server (NTRS)
Fritsch, J. Michael; Kain, John S.
1996-01-01
Research efforts focused on numerical simulations of two convective systems with the Penn State/NCAR mesoscale model. The first of these systems was tropical cyclone Irma, which occurred in 1987 in Australia's Gulf of Carpentaria during the AMEX field program. Comparison simulations of this system were done with two different convective parameterization schemes (CPS's), the Kain-Fritsch (KF) and the Betts-Miller (BM) schemes. The second system was the June 10-11, 1985 squall line simulation, which occurred over the Kansas-Oklahoma region during the PRE-STORM experiment. Simulations of this system using the KF scheme were examined in detail.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Coleman, D.; Palmer, T.
2015-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models to represent the variability of unresolved sub-grid processes. They have a beneficial effect on the spread and mean state of medium- and extended-range forecasts (Buizza et al. 1999, Palmer et al. 2009). There is also increasing evidence that stochastic parameterization of unresolved processes could be beneficial for the climate of an atmospheric model through noise-enhanced variability, noise-induced drift (Berner et al. 2008), and by enabling the climate simulator to explore other flow regimes (Christensen et al. 2015; Dawson and Palmer 2015). We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. The SPPT scheme accounts for uncertainty in the CAM physical parameterization schemes, including the convection scheme, by perturbing the parametrised temperature, moisture and wind tendencies with a multiplicative noise term. SPPT results in a large improvement in the variability of the CAM4 modeled climate. In particular, SPPT results in a significant improvement to the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. References: Berner, J., Doblas-Reyes, F. J., Palmer, T. N., Shutts, G. J., & Weisheimer, A., 2008. Phil. Trans. R. Soc. A, 366, 2559-2577. Buizza, R., Miller, M. and Palmer, T. N., 1999. Q.J.R. Meteorol. Soc., 125, 2887-2908. Christensen, H. M., I. M. Moroz & T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2239-9. Dawson, A. and T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2238-x. Palmer, T.N., R. Buizza, F. Doblas-Reyes, et al., 2009, ECMWF technical memorandum 598.
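A minimal sketch of the multiplicative-noise idea described above is given below: the parameterized temperature, moisture and wind tendencies are all multiplied by (1 + r), where r is a correlated random pattern evolved, for example, as an AR(1) process. The taper profile, AR(1) coefficients and array shapes are illustrative assumptions rather than the CAM4/ECMWF settings.

import numpy as np

def sppt_perturb(tendencies, r_field, taper):
    # Multiply every parameterized tendency by (1 + taper * r); all fields share one pattern.
    # 'taper' is a factor in [0, 1] (e.g. a vertical profile shaped to broadcast) reducing
    # perturbations near the surface and the model top.
    return {name: (1.0 + taper * r_field) * tend for name, tend in tendencies.items()}

def ar1_update(r_prev, phi=0.95, sigma=0.3, rng=np.random.default_rng(0)):
    # Evolve the random pattern between time steps with a simple AR(1) model.
    return phi * r_prev + sigma * np.sqrt(1.0 - phi ** 2) * rng.standard_normal(r_prev.shape)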
Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation
Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...
NASA Astrophysics Data System (ADS)
Singh, K. S.; Bonthu, Subbareddy; Purvaja, R.; Robin, R. S.; Kannan, B. A. M.; Ramesh, R.
2018-04-01
This study attempts to investigate the real-time prediction of a heavy rainfall event over the Chennai Metropolitan City, Tamil Nadu, India that occurred on 01 December 2015 using the Advanced Research Weather Research and Forecasting (WRF-ARW) model. The study evaluates the impact of six microphysical (Lin, WSM6, Goddard, Thompson, Morrison and WDM6) parameterization schemes of the model on the prediction of the heavy rainfall event. In addition, model sensitivity has also been evaluated with six Planetary Boundary Layer (PBL) and two Land Surface Model (LSM) schemes. The model forecast was carried out using a nested domain, and the impact of horizontal grid resolution was assessed at 9 km, 6 km and 3 km. Analysis of the synoptic features using National Center for Environmental Prediction Global Forecast System (NCEP-GFS) analysis data revealed that strong upper-level divergence and high moisture content at lower levels were favorable for the occurrence of the heavy rainfall event over the northeast coast of Tamil Nadu. The study signified that the forecasted rainfall was more sensitive to the microphysics and PBL schemes than to the LSM schemes. The model provided a better forecast of the heavy rainfall event using the combination of Goddard microphysics, YSU PBL and Noah LSM schemes, and this was mostly attributed to timely initiation and development of the convective system. The forecasts with different horizontal resolutions using cumulus parameterization indicated that the rainfall was not well represented at 9 km and 6 km. The forecast with 3 km horizontal resolution provided a better prediction in terms of timely initiation and development of the event. The study highlights that forecasts of heavy rainfall events using a high-resolution mesoscale model with suitable representations of physical parameterization schemes are useful for disaster management and planning to minimize the potential loss of life and property.
NASA Astrophysics Data System (ADS)
Tan, Z.; Schneider, T.; Teixeira, J.; Lam, R.; Pressel, K. G.
2014-12-01
Sub-grid scale (SGS) closures in current climate models are usually decomposed into several largely independent parameterization schemes for different cloud and convective processes, such as boundary layer turbulence, shallow convection, and deep convection. These separate parameterizations usually do not converge as the resolution is increased or as physical limits are taken. This makes it difficult to represent the interactions and smooth transitions among different cloud and convective regimes. Here we present an eddy-diffusivity mass-flux (EDMF) closure that represents all sub-grid scale turbulent, convective, and cloud processes in a unified parameterization scheme. The buoyant updrafts and precipitating downdrafts are parameterized with a prognostic multiple-plume mass-flux (MF) scheme. The prognostic term for the mass flux is kept so that the life cycles of convective plumes are better represented. The interaction between updrafts and downdrafts is parameterized with the buoyancy-sorting model. The turbulent mixing outside plumes is represented by eddy diffusion, in which the eddy diffusivity (ED) is determined from a turbulent kinetic energy (TKE) calculated from a TKE balance that couples the environment with updrafts and downdrafts. Similarly, tracer variances are decomposed consistently between updrafts, downdrafts and the environment. The closure is internally coupled with a probabilistic cloud scheme and a simple precipitation scheme. We have also developed a relatively simple two-stream radiative scheme that includes the longwave (LW) and shortwave (SW) effects of clouds, and the LW effect of water vapor. We have tested this closure in a single-column model for various regimes spanning stratocumulus, shallow cumulus, and deep convection. The model is also run towards statistical equilibrium with climatologically relevant large-scale forcings. These model tests are validated against large-eddy simulations (LES) with the same forcings. The comparison of results verifies the capacity of this closure to realistically represent different cloud and convective processes. Implementation of the closure in an idealized GCM allows us to study cloud feedbacks to climate change and the interactions between clouds, convection, and the large-scale circulation.
2013-10-07
OLEs and Terrain Effects Within the Coastal Zone in the EDMF Parameterization Scheme: An Airborne Doppler Wind Lidar Perspective. Annual Report Under... UPP-related investigations that will be carried out in Year 3. RELATED PROJECTS: ONR contract to study the utilization of Doppler wind lidar (DWL...) (MATERHORN 2012); paper presented at the Coherent Laser Radar Conference, June 2013; airborne DWL investigations of flow over complex terrain (MATERHORN
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)
2001-01-01
It has been known for more than a decade that an aqua-planet model with globally uniform sea surface temperature and solar insolation angle can generate an ITCZ (intertropical convergence zone). Previous studies have shown that the ITCZ under such model settings can be changed between a single ITCZ over the equator and a double ITCZ straddling the equator through one of several measures. These measures include switching to a different cumulus parameterization scheme, changes within the cumulus parameterization scheme, and changes in other aspects of the model design such as horizontal resolution. In this paper an interpretation for these findings is offered. The latitudinal location of the ITCZ is the latitude where two types of attraction on the ITCZ, both due to the earth's rotation, balance. The first type is equator-ward and is directly related to the earth's rotation and thus not sensitive to model design changes. The second type is poleward and is related to the convective circulation and thus is sensitive to model design changes. Due to the shape of the attractors, the balance of the two types of attractions is reached either at the equator or more than 10 degrees away from the equator. The former case results in a single ITCZ over the equator and the latter case in a double ITCZ straddling the equator.
Implementing a warm cloud microphysics parameterization for convective clouds in NCAR CESM
NASA Astrophysics Data System (ADS)
Shiu, C.; Chen, Y.; Chen, W.; Li, J. F.; Tsai, I.; Chen, J.; Hsu, H.
2013-12-01
Most cumulus convection schemes use simple empirical approaches to convert cloud liquid mass to rain water or cloud ice to snow, e.g., using a constant autoconversion rate and dividing cloud liquid mass into cloud water and ice as a function of air temperature (e.g. the Zhang and McFarlane scheme in the NCAR CAM model). There are few studies trying to use cloud microphysical schemes to better simulate such precipitation processes in the convective schemes of global models (e.g. Lohmann [2008] and Song, Zhang, and Li [2012]). A two-moment warm cloud parameterization (i.e. Chen and Liu [2004]) is implemented into the deep convection scheme of CAM5.2 of the CESM model for the treatment of the conversion of cloud liquid water to rain water. Short-term AMIP-type global simulations are conducted to evaluate the possible impacts of this modification of the physical parameterization. Simulated results are further compared to observational results from the AMWG diagnostic package and CloudSAT data sets. Several sensitivity tests regarding changes in cloud-top droplet concentration (as a rough test of aerosol indirect effects) and changes in the detrained size of convective cloud ice are also carried out to understand their possible impacts on the cloud and precipitation simulations.
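To make the contrast concrete, the sketch below shows the kind of simple empirical treatment the abstract says is being replaced: a constant autoconversion rate and a temperature-based liquid/ice partition of condensate. Rate constants and temperature thresholds are illustrative assumptions, not the CAM settings.

import numpy as np

def simple_autoconversion(q_liq, c0=2.0e-3):
    # Constant-rate conversion of cloud liquid (kg/kg) to rain; c0 (1/s) is illustrative.
    return c0 * q_liq

def liquid_ice_partition(q_cond, T, T_all_ice=238.15, T_all_liq=268.15):
    # Linear partition of condensate into liquid and ice between two temperature thresholds.
    f_liq = np.clip((T - T_all_ice) / (T_all_liq - T_all_ice), 0.0, 1.0)
    return f_liq * q_cond, (1.0 - f_liq) * q_cond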
NASA Astrophysics Data System (ADS)
Ma, Leiming
2015-04-01
The planetary boundary layer (PBL) plays an important role in transferring energy and moisture from the ocean to a tropical cyclone (TC). Thus, the accuracy of the PBL parameterization largely determines the performance of a numerical model in TC prediction. Among the various components of PBL parameterization, the definition of the PBL height is the first concern, since it determines the vertical scale of the PBL and the associated turbulent processes at different scales. However, there is as yet no consensus in the TC research community on how to define the PBL height. The PBL heights represented by current numerical models usually exhibit significant differences from TC observations (e.g., Zhang et al., 2011; Storm et al., 2008), leading to rapid error growth in TC prediction. In an effort to narrow the gap between PBL parameterization and reality, this study presents a new parameterization scheme for the definition of the PBL height. Instead of the traditional Richardson-number definition of the PBL height, which recent observational studies have shown to be inappropriate for the strongly sheared structure of the TC PBL, the new scheme employs a dynamical definition based on the concept of helicity. In this sense the spiral structures associated with the inflow layer and rolls are expected to be represented in the PBL parameterization. By defining the PBL height at each grid point, the new scheme also avoids assuming the symmetric inflow layer that is usually implemented in observational studies. The new scheme is applied to the Yonsei University (YSU) scheme in the Weather Research and Forecasting (WRF) model of the US National Center for Atmospheric Research (NCAR) and verified with numerical experiments on TC Morakot (2009), which brought torrential rainfall and disaster to Taiwan and mainland China during landfall. The Morakot case is selected in this study to examine the performance of the new scheme in representing various PBL structures over land and ocean. The results of the simulations show that, in addition to enhancing the PBL height in situations of intense convection, the new scheme also significantly reduces the PBL height and 2-m temperature over land during nighttime, a well-known problem for the YSU scheme according to previous studies. The activity of PBL processes is modulated by the improved PBL height, which ultimately improves the prediction of TC Morakot. Key Words: PBL; Parameterization; Numerical Prediction; Tropical Cyclone Acknowledgements. This study was jointly supported by the Chinese National 973 Project (No. 2013CB430300, and No. 2009CB421500) and grant from the National Natural Science Foundation (No. 41475059). References Zhang, J. A., R. F. Rogers, D. S. Nolan, and F. D. Marks Jr., 2011: On the characteristic height scales of the hurricane boundary layer, Mon. Weather Rev., 139, 2523-2535. Storm B., J. Dudhia, S. Basu, et al., 2008: Evaluation of the Weather Research and Forecasting Model on forecasting Low-level Jets: Implications for Wind Energy. Wind Energ., DOI: 10.1002/we.
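For context, the sketch below shows the conventional bulk-Richardson-number diagnosis of PBL height that the abstract argues is ill-suited to the strongly sheared TC boundary layer; a helicity-based definition would replace this criterion. The critical value and the simple first-crossing search are assumptions for illustration.

import numpy as np

def pbl_height_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    # Height of the first level where the bulk Richardson number exceeds a critical value.
    g = 9.81
    shear2 = (u - u[0]) ** 2 + (v - v[0]) ** 2
    ri_b = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * np.maximum(shear2, 1.0e-6))
    above = np.where(ri_b > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]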
Regional Climate Model sensitivity to different parameterization schemes with WRF over Spain
NASA Astrophysics Data System (ADS)
García-Valdecasas Ojeda, Matilde; Raquel Gámiz-Fortis, Sonia; Hidalgo-Muñoz, Jose Manuel; Argüeso, Daniel; Castro-Díez, Yolanda; Jesús Esteban-Parra, María
2015-04-01
The ability of the Weather Research and Forecasting (WRF) model to simulate the regional climate depends on the selection of an adequate combination of parameterization schemes. This study assesses WRF sensitivity to different parameterizations using six different runs that combined three cumulus, two microphysics and three surface/planetary boundary layer schemes in a topographically complex region such as Spain, for the period 1995-1996. Each of the simulations spanned a period of two years, and were carried out at a spatial resolution of 0.088° over a domain encompassing the Iberian Peninsula and nested in the coarser EURO-CORDEX domain (0.44° resolution). The experiments were driven by Interim ECMWF Re-Analysis (ERA-Interim) data. In addition, two different spectral nudging configurations were also analysed. The simulated precipitation and maximum and minimum temperatures from WRF were compared with the Spain02 version 4 observational gridded datasets. The comparison was performed at different time scales with the purpose of evaluating the model capability to capture mean values and high-order statistics. ERA-Interim data were also compared with observations to determine the improvement obtained using dynamical downscaling with respect to the driving data. For this purpose, several parameters were analysed by directly comparing grid points. On the other hand, the observational gridded data were grouped using a multistep regionalization to facilitate the comparison in terms of the monthly annual cycle and the percentiles of daily values analysed. The results confirm that no single configuration performs best overall, but some combinations that produce better results could be chosen. Concerning temperatures, WRF provides an improvement over ERA-Interim. Overall, model outputs reduce the biases and the RMSE for monthly-mean maximum and minimum temperatures and are more highly correlated with observations than ERA-Interim. The analysis shows that the Yonsei University planetary boundary layer scheme is the most appropriate parameterization in terms of temperature because it better describes monthly minimum temperatures and seems to perform well for maximum temperatures. Regarding precipitation, ERA-Interim time series are slightly more highly correlated with observations than WRF, but the bias and the RMSE are largely worse. These results also suggest that the CAM V.5.1 2-moment 5-class microphysics scheme should not be used, given its computational cost with no apparent gain with respect to simpler schemes such as the WRF single-moment 3-class scheme. For the convection scheme, this study suggests that the Betts-Miller-Janjic scheme is an appropriate choice due to its robustness, while the Kain-Fritsch cumulus scheme should not be used over this region. KEY WORDS: Regional climate modelling, physics schemes, parameterizations, WRF. ACKNOWLEDGEMENTS This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).
NASA Astrophysics Data System (ADS)
Elsayed Yousef, Ahmed; Ehsan, M. Azhar; Almazroui, Mansour; Assiri, Mazen E.; Al-Khalaf, Abdulrahman K.
2017-02-01
A new closure and a modified detrainment for the simplified Arakawa-Schubert (SAS) cumulus parameterization scheme are proposed. In the modified convective scheme, named the King Abdulaziz University (KAU) scheme, the closure depends on both the buoyancy force and the environment mean relative humidity. A lateral entrainment rate varying with environment relative humidity is proposed, which tends to suppress convection in a dry atmosphere. The detrainment rate also varies with environment relative humidity. The KAU scheme has been tested in a single column model (SCM) and implemented in a coupled global climate model (CGCM). Increased coupling between environment and clouds in the KAU scheme results in improved sensitivity of the depth and strength of convection to environmental humidity compared to the original SAS scheme. The new scheme improves precipitation simulation with better representations of moisture and temperature, especially during suppressed convection periods. The KAU scheme implemented in the Seoul National University (SNU) CGCM shows improved precipitation over the tropics. The simulated precipitation pattern over the Arabian Peninsula and Northeast African region is also improved.
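A minimal sketch of the humidity dependence described above is given below: the lateral entrainment rate grows as the environment dries, which suppresses convection in dry air. The functional form and constants are assumptions for illustration, not the KAU formulation.

def rh_dependent_entrainment(rh_env, eps0=1.0e-4, alpha=1.5):
    # Lateral entrainment rate (1/m) increasing as environmental relative humidity decreases.
    rh = min(max(rh_env, 0.0), 1.0)
    return eps0 * (1.0 + alpha * (1.0 - rh))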
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shi, J.; Chen, S. S.
2007-01-01
Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly more sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models. Over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into the state-of-the-art, next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) category with a very fast fall speed (over 10 m/s). For the Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).
NASA Astrophysics Data System (ADS)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterizations schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent up and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of up and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
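The prognostic area-fraction idea summarized above can be written compactly as a continuity equation for the updraft area fraction. The sketch below uses a simple centered difference and assumes constant density; the symbols E and D stand for entrainment and detrainment mass-exchange rates and are placeholders for whatever closures are adopted.

import numpy as np

def updraft_area_tendency(a, w_up, E, D, rho, dz):
    # da/dt = -d(a * w_up)/dz + (E - D) / rho, with E, D in kg m^-3 s^-1.
    return -np.gradient(a * w_up, dz) + (E - D) / rho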
Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5
Wang, Yong; Zhang, Guang J.
2016-09-29
In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to a significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m2 in the standard CAM5 to -48.86 W/m2, close to -47.16 W/m2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of the simulated climatology to uncertain parameters in the stochastic deep convection scheme.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign together with satellite and reanalysis data are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
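The variance attribution described above can be illustrated with a simple first-order, ANOVA-style measure: the fraction of total ensemble variance explained by the group means of one factor. This is a generic sketch, not the specific multiple-way analysis or sampling algorithm used in the study.

import numpy as np

def variance_fraction_by_factor(values, labels):
    # Between-group variance of one factor divided by total ensemble variance.
    values, labels = np.asarray(values, dtype=float), np.asarray(labels)
    total_var = values.var()
    groups = np.unique(labels)
    means = np.array([values[labels == g].mean() for g in groups])
    sizes = np.array([np.sum(labels == g) for g in groups])
    between = np.average((means - values.mean()) ** 2, weights=sizes)
    return between / total_var if total_var > 0 else 0.0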
NASA Technical Reports Server (NTRS)
Fritsch, J. Michael (Principal Investigator); Kain, John S.
1995-01-01
Research efforts during the first year focused on numerical simulations of two convective systems with the Penn State/NCAR mesoscale model. The first of these systems was tropical cyclone Irma, which occurred in 1987 in Australia's Gulf of Carpentaria during the AMEX field program. Comparison simulations of this system were done with two different convective parameterization schemes (CPS's), the Kain-Fritsch (1993 - KF) and the Betts-Miller (Betts 1986 - BM) schemes. The second system was the June 10-11, 1985 squall line simulation, which occurred over the Kansas-Oklahoma region during the PRE-STORM experiment. Simulations of this system using the KF scheme were examined in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, X.; Klein, S. A.; Ma, H. -Y.
The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between the CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL and (2) rain evaporation that happens at too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations prove that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will introduce a novel modeling methodology, piggybacking, that allows the impact of a physical parameterization on cloud dynamics to be studied with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
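The piggybacking idea described above amounts to carrying two sets of thermodynamic/microphysical fields through the same simulated flow, with only one of them allowed to feed back on the dynamics. The sketch below shows the control flow of one such step; all function arguments are placeholders rather than an actual model interface.

def piggyback_step(state, micro_driving, micro_piggyback, dynamics_step, dt):
    # Both microphysics schemes see the same flow; only the driving scheme affects it.
    tend_d = micro_driving(state["driving_fields"], state["flow"])
    tend_p = micro_piggyback(state["piggyback_fields"], state["flow"])   # diagnosed only
    state["flow"] = dynamics_step(state["flow"], tend_d, dt)             # feedback from driver only
    state["driving_fields"] = state["driving_fields"] + dt * tend_d
    state["piggyback_fields"] = state["piggyback_fields"] + dt * tend_p
    return state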
Robust Stabilization of T-S Fuzzy Stochastic Descriptor Systems via Integral Sliding Modes.
Li, Jinghao; Zhang, Qingling; Yan, Xing-Gang; Spurgeon, Sarah K
2017-09-19
This paper addresses the robust stabilization problem for T-S fuzzy stochastic descriptor systems using an integral sliding mode control paradigm. A classical integral sliding mode control scheme and a nonparallel distributed compensation (Non-PDC) integral sliding mode control scheme are presented. It is shown that two restrictive assumptions previously adopted in developing sliding mode controllers for Takagi-Sugeno (T-S) fuzzy stochastic systems are not required with the proposed framework. A unified framework for sliding mode control of T-S fuzzy systems is formulated. The proposed Non-PDC integral sliding mode control scheme encompasses existing schemes when the previously imposed assumptions hold. Stability of the sliding motion is analyzed, and the sliding mode controller is parameterized in terms of the solutions of a set of linear matrix inequalities, which facilitates design. The methodology is applied to an inverted pendulum model to validate the effectiveness of the results presented.
NASA Technical Reports Server (NTRS)
Sud, Y.; Molod, A.
1988-01-01
The Goddard Laboratory for Atmospheres GCM is used to study the sensitivity of the simulated July circulation to modifications in the parameterization of dry and moist convection, evaporation from falling raindrops, and cloud-radiation interaction. It is shown that the Arakawa-Schubert (1974) cumulus parameterization and a more realistic dry convective mixing calculation yielded a better intertropical convergence zone over North Africa than the previous convection scheme. It is found that the physical mechanism for the improvement was the upward mixing of PBL moisture by vigorous dry convective mixing. A modified rain-evaporation parameterization which accounts for raindrop size distribution, the atmospheric relative humidity, and a typical spatial rainfall intensity distribution for convective rain was developed and implemented. This scheme led to major improvements in the monthly mean vertical profiles of relative humidity and temperature, convective and large-scale cloudiness, rainfall distributions, and mean relative humidity in the PBL.
NASA Astrophysics Data System (ADS)
Tariku, Tebikachew Betru; Gan, Thian Yew
2018-06-01
Regional climate models (RCMs) have been used to simulate rainfall at relatively high spatial and temporal resolutions useful for sustainable water resources planning, design and management. In this study, the sensitivity of the RCM, weather research and forecasting (WRF), in modeling the regional climate of the Nile River Basin (NRB) was investigated using 31 combinations of different physical parameterization schemes which include cumulus (Cu), microphysics (MP), planetary boundary layer (PBL), land-surface model (LSM) and radiation (Ra) schemes. Using the European Centre for Medium-Range Weather Forecast (ECMWF) ERA-Interim reanalysis data as initial and lateral boundary conditions, WRF was configured to model the climate of the NRB at a resolution of 36 km with 30 vertical levels. The 1999-2001 simulations using WRF were compared with satellite data combined with ground observations and the NCEP reanalysis data for 2 m surface air temperature (T2), rainfall, and short- and longwave downward radiation at the surface (SWRAD, LWRAD). Overall, WRF simulated more accurate T2 and LWRAD (with correlation coefficients >0.8 and low root-mean-square error) than SWRAD and rainfall for the NRB. Further, the simulation of rainfall is more sensitive to the PBL, Cu and MP schemes than to the other schemes of WRF. For example, WRF simulated less biased rainfall with Kain-Fritsch combined with MYJ than with YSU as the PBL scheme. The simulation of T2 is more sensitive to the LSM and Ra than to the Cu, PBL and MP schemes selected, SWRAD is more sensitive to MP and Ra than to Cu, LSM and PBL schemes, and LWRAD is more sensitive to LSM, Ra and PBL than to Cu and MP schemes. In summary, the following combination of schemes simulated the most representative regional climate of the NRB: WSM3 microphysics, KF cumulus, MYJ PBL, RRTM longwave radiation and Dudhia shortwave radiation schemes, and the Noah LSM. The above configuration of WRF coupled to the Noah LSM has also been shown to simulate a representative regional climate of the NRB over 1980-2001, which includes a combination of wet and dry years of the NRB.
NASA Astrophysics Data System (ADS)
Zhai, Guoqing; Li, Xiaofan
2015-04-01
The Bergeron-Findeisen process has been simulated in past decades using parameterization schemes for the depositional growth of ice crystals with temperature-dependent, theoretically predicted parameters. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. There are three schemes that parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995) and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using the three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case in this study. The analysis of the root-mean-squared difference and correlation coefficient between the simulation and observation of surface rain rate shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observational rain rate. The calculations of 5-day and model domain mean rain rates reveal that the three schemes with the Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the closest mean rain rate to the mean observational rain rate. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with the reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to the suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.
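The depositional-growth parameterizations referred to above are commonly written in the Koenig-type form dm/dt = a(T) * m**b(T), and the debate in the abstract is about which temperature-dependent values of a and b to use (theoretically predicted versus laboratory-derived). A minimal sketch, with the coefficient lookup left as an assumption:

import numpy as np

def ice_deposition_growth_rate(m_ice, a_T, b_T):
    # Koenig-type depositional growth of ice crystal mass: dm/dt = a(T) * m**b(T).
    # a_T and b_T are the temperature-dependent parameters discussed in the abstract,
    # looked up per temperature bin; the values themselves are not supplied here.
    return a_T * np.power(m_ice, b_T)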
Investigating the scale-adaptivity of a shallow cumulus parameterization scheme with LES
NASA Astrophysics Data System (ADS)
Brast, Maren; Schemann, Vera; Neggers, Roel
2017-04-01
In this study we investigate the scale-adaptivity of a new parameterization scheme for shallow cumulus clouds in the gray zone. The Eddy-Diffusivity Multiple Mass-Flux (or ED(MF)n ) scheme is a bin-macrophysics scheme, in which subgrid transport is formulated in terms of discretized size densities. While scale-adaptivity in the ED-component is achieved using a pragmatic blending approach, the MF-component is filtered such that only the transport by plumes smaller than the grid size is maintained. For testing, ED(MF)n is implemented in a large-eddy simulation (LES) model, replacing the original subgrid-scheme for turbulent transport. LES thus plays the role of a non-hydrostatic testing ground, which can be run at different resolutions to study the behavior of the parameterization scheme in the boundary-layer gray zone. In this range convective cumulus clouds are partially resolved. We find that at high resolutions the clouds and the turbulent transport are predominantly resolved by the LES, and the transport represented by ED(MF)n is small. This partitioning changes towards coarser resolutions, with the representation of shallow cumulus clouds becoming exclusively carried by the ED(MF)n. The way the partitioning changes with grid-spacing matches the results of previous LES studies, suggesting some scale-adaptivity is captured. Sensitivity studies show that a scale-inadaptive ED component stays too active at high resolutions, and that the results are fairly insensitive to the number of transporting updrafts in the ED(MF)n scheme. Other assumptions in the scheme, such as the distribution of updrafts across sizes and the value of the area fraction covered by updrafts, are found to affect the location of the gray zone.
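The scale-adaptive filtering of the mass-flux component described above can be sketched very simply: transport is retained only for plumes whose size is smaller than the grid spacing, so the parameterized part shrinks as resolution increases. The interface below is illustrative, not the ED(MF)n implementation.

import numpy as np

def filtered_mass_flux(plume_sizes, plume_mass_flux, dx):
    # Keep only the mass flux carried by plumes smaller than the grid spacing dx.
    sizes = np.asarray(plume_sizes)
    fluxes = np.asarray(plume_mass_flux)
    return fluxes[sizes < dx].sum()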
This study considers the performance of 7 of the Weather Research and Forecast model boundary-layer (BL) parameterization schemes in a complex... schemes performed best. The surface parameters, planetary BL structure, and vertical profiles are important for US Army Research Laboratory
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; Löwe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
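The structure of such a parameterization can be sketched as follows: shading reduces the direct beam, the sky view factor scales the diffuse sky radiation, and a single isotropic terrain-reflection term stands in for multiple anisotropic reflections (the simplification the abstract finds adequate for domain averages). The functional forms below are simplified assumptions, not the published fits.

import numpy as np

def subgrid_terrain_shortwave(S_direct, S_diffuse, sky_view, mean_slope, albedo, shaded_fraction):
    # Domain-averaged shortwave components over subgrid topography (illustrative forms).
    direct = S_direct * (1.0 - shaded_fraction) * np.cos(mean_slope)
    diffuse = S_diffuse * sky_view
    terrain_reflected = albedo * (direct + diffuse) * (1.0 - sky_view)
    return direct, diffuse, terrain_reflected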
NASA Astrophysics Data System (ADS)
Anurose, J. T.; Subrahamanyam, Bala D.
2012-07-01
As part of the ocean/land-atmosphere interaction, more than half of the total kinetic energy is lost within the lowest part of the atmosphere, often referred to as the planetary boundary layer (PBL). A comprehensive understanding of the energetics of this layer and the turbulent processes responsible for the dissipation of kinetic energy within the PBL requires accurate estimation of the sensible heat, latent heat and momentum fluxes. In numerical weather prediction (NWP) models, these quantities are estimated through different surface-layer and PBL parameterization schemes. This research article investigates different factors influencing the accuracy of a surface-layer parameterization scheme used in a hydrostatic high-resolution regional model (HRM) in the estimation of surface-layer turbulent fluxes of heat, moisture and momentum over the coastal regions of the Indian sub-continent. Results obtained from this sensitivity study of a parameterization scheme in HRM revealed the role of the surface roughness length (z_{0}) in conjunction with the temperature difference between the underlying ground surface and the atmosphere above (ΔT = T_{G} - T_{A}) in the estimated values of the fluxes. For grid points over the land surface, where z_{0} is treated as a constant throughout the model integration time, ΔT showed relative dominance in the estimation of sensible heat flux. In contrast to this, the estimation of sensible and latent heat fluxes over the ocean was found to be equally sensitive to the method adopted for assigning the values of z_{0} and to the magnitudes of ΔT.
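The joint role of z_{0} and ΔT can be seen in the standard bulk-aerodynamic form of the sensible heat flux, sketched below for neutral conditions only; the stability corrections actually used in the HRM surface-layer scheme are omitted, and the numerical constants are generic.

import numpy as np

def bulk_sensible_heat_flux(U, T_ground, T_air, z_ref, z0, rho=1.2, cp=1004.0, karman=0.4):
    # H = rho * cp * C_H * U * (T_G - T_A); neutral exchange coefficient from roughness length z0.
    C_H = (karman / np.log(z_ref / z0)) ** 2
    return rho * cp * C_H * U * (T_ground - T_air)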
Implementation of a gust front head collapse scheme in the WRF numerical model
NASA Astrophysics Data System (ADS)
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2018-05-01
Gust fronts are thunderstorm-related phenomena usually associated with severe winds, which are of great importance in theoretical meteorology, weather forecasting, cloud dynamics and precipitation, and wind engineering. An important feature of gust fronts demonstrated through both theoretical and observational studies is the periodic collapse and rebuild of the gust front head. This cyclic behavior of gust fronts results in periodic forcing of vertical velocity ahead of the parent thunderstorm, which consequently influences the storm dynamics and microphysics. This paper introduces the first gust front pulsation parameterization scheme in the WRF-ARW model (Weather Research and Forecasting-Advanced Research WRF). The influence of this new scheme on model performance is tested by investigating the characteristics of an idealized supercell cumulonimbus cloud, as well as by studying a real case of thunderstorms above the United Arab Emirates. In the ideal case, WRF with the gust front scheme produced more precipitation and showed a different time evolution of the mixing ratios of cloud water and rain, whereas the mixing ratios of ice and graupel are almost unchanged when compared to the default WRF run without the parameterization of gust front pulsation. The included parameterization did not disturb the general characteristics of the thunderstorm cloud, such as the locations of updrafts and downdrafts and the overall shape of the cloud. New cloud cells in front of the parent thunderstorm are also evident in both the ideal and real cases due to the included forcing of vertical velocity caused by the periodic collapse of the gust front head. Despite some differences between the two WRF simulations and satellite observations, the inclusion of the gust front parameterization scheme produced more cumuliform clouds and a better match with the real observations. Both WRF simulations gave poor results when it comes to matching the maximum composite radar reflectivity from radar measurements. Similar to the ideal case, the WRF model with the gust front scheme gave more precipitation than the default WRF run. In particular, the gust front scheme increased the area characterized by light precipitation and diminished the development of very localized and intense precipitation.
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.-L.
2015-10-01
Cumulus parameterization schemes are responsible for the sub-grid-scale effects of convective and/or shallow clouds and are intended to represent the vertical fluxes due to unresolved updrafts and downdrafts and the compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. The Betts-Miller-Janjic (BMJ) scheme fulfills these purposes in the Weather Research and Forecasting (WRF) model, and the National Centers for Environmental Prediction (NCEP) has worked to optimize the BMJ scheme for operational application. Because there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization capabilities, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Taraphdar, Sourav; Wang, Taiping
This paper presents a modeling study conducted to evaluate the uncertainty of a regional model in simulating hurricane wind and pressure fields, and the feasibility of driving coastal storm surge simulation using an ensemble of regional model outputs produced by 18 combinations of three convection schemes and six microphysics parameterizations, using Hurricane Katrina as a test case. Simulated wind and pressure fields were compared to observed H*Wind data for Hurricane Katrina, and simulated storm surge was compared to observed high-water marks on the northern coast of the Gulf of Mexico. The ensemble modeling analysis demonstrated that the regional model was able to reproduce the characteristics of Hurricane Katrina with reasonable accuracy and can be used to drive the coastal ocean model for simulating coastal storm surge. Results indicated that the regional model is sensitive to both convection and microphysics parameterizations, which simulate moist processes closely linked to the tropical cyclone dynamics that influence hurricane development and intensification. The Zhang and McFarlane (ZM) convection scheme and the Lim and Hong (WDM6) microphysics parameterization are the most skillful in simulating Hurricane Katrina's maximum wind speed and central pressure among the three convection and six microphysics parameterizations. Error statistics of simulated maximum water levels were calculated for a baseline simulation with H*Wind forcing and for the 18 ensemble simulations driven by the regional model outputs. The storm surge model produced the overall best results in simulating the maximum water levels using wind and pressure fields generated with the ZM convection scheme and the WDM6 microphysics parameterization.
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which is used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations to model the decadal variability of the Arctic and Atlantic climate with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with the split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm has high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate structure of temperature and salinity at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step of the turbulence model leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational efficiency remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model leads to a realistic simulation of density and circulation but violates the T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model. A high sensitivity of the eddy-permitting circulation model to the definition of mixing is revealed, associated with significant changes of the density fields in the upper baroclinic ocean layer over the whole considered area. For instance, using the turbulence parameterization instead of the PP algorithm increases the circulation velocity in the Gulf Stream and North Atlantic Current, and the subpolar cyclonic gyre in the North Atlantic and the Beaufort Gyre in the Arctic basin are reproduced more realistically. Treating the Prandtl number as a function of the Richardson number significantly improves the modelling quality. The research was supported by the Russian Foundation for Basic Research (grant № 16-05-00534) and the Council on the Russian Federation President Grants (grant № MK-3241.2015.5).
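The generation-dissipation stage described above can be illustrated with a minimal sketch. Assuming that, during this stage, the TKE equation reduces to dk/dt = P + B - ω·k with production and dissipation frequency frozen over the step, the stage has an exact relaxation solution and an obvious asymptotic (equilibrium) limit; the sketch below shows both, but it is not the INMOM implementation.

```python
import numpy as np

def gen_diss_stage_analytic(k0, shear_prod, buoy_prod, omega, dt):
    """Analytic update of TKE over the generation-dissipation stage.

    With dk/dt = P + B - omega * k and P, B, omega held fixed over the
    step, the TKE relaxes exactly toward the equilibrium value (P+B)/omega.
    """
    k_eq = (shear_prod + buoy_prod) / omega          # equilibrium TKE
    return k_eq + (k0 - k_eq) * np.exp(-omega * dt)  # exact relaxation

def gen_diss_stage_asymptotic(k0, shear_prod, buoy_prod, omega, dt):
    """Asymptotic (long-step) limit: TKE jumps straight to equilibrium."""
    return (shear_prod + buoy_prod) / omega

print(gen_diss_stage_analytic(1e-4, 1e-6, -2e-7, omega=5e-3, dt=600.0))
print(gen_diss_stage_asymptotic(1e-4, 1e-6, -2e-7, omega=5e-3, dt=600.0))
```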
NASA Astrophysics Data System (ADS)
Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.
2016-05-01
The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme with a simple stochastic scheme only, from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
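For readers unfamiliar with the Plant-Craig approach, the sketch below illustrates its stochastic element under the equilibrium statistics of Plant and Craig (2008): a Poisson-distributed number of plumes whose individual mass fluxes follow an exponential distribution, so that smaller grid boxes yield noisier, more intermittent convection. The numerical values and the reduction to a single bulk draw are assumptions for illustration; the operational scheme adds a CAPE closure and full plume profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant_craig_mass_flux(mean_flux_per_area, grid_area, mean_plume_flux):
    """Draw a stochastic grid-box convective mass flux, Plant-Craig style."""
    # Expected plume count implied by the large-scale (closure) mass flux.
    expected_n = mean_flux_per_area * grid_area / mean_plume_flux
    n_plumes = rng.poisson(expected_n)
    # Each plume carries an exponentially distributed mass flux.
    total = rng.exponential(mean_plume_flux, size=n_plumes).sum()
    return total / grid_area          # back to a flux per unit area

# Same forcing, (100 km)^2 vs (25 km)^2 grid boxes: relative spread grows
# as the box shrinks.
print([round(plant_craig_mass_flux(0.05, 1.0e10, 2.0e7), 4) for _ in range(3)])
print([round(plant_craig_mass_flux(0.05, 6.25e8, 2.0e7), 4) for _ in range(3)])
```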
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures in all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.
NASA Astrophysics Data System (ADS)
Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.
2014-12-01
Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), the volatility basis set (VBS) and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008), and He et al. (2013)), are implemented in the global chemical transport model (MOZART-4). For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg·y-1), followed by the three cloud schemes (26-30 Tg·y-1) and the TPM (23 Tg·y-1). Though it shares a similar partitioning theory with the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOC oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East and eastern America. The formation of SOA in the VBS, which represents the net partitioning of semi-volatile organic compounds from the vapor to the condensed phase, is highly sensitive to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds. Therefore, all cloud schemes produce a fairly similar spatial pattern over the tropical and mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly over remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
NASA Astrophysics Data System (ADS)
Pradhan, P. K.; Liberato, Margarida L. R.; Ferreira, Juan A.; Dasamsetti, S.; Vijaya Bhaskara Rao, S.
2018-01-01
The role of convective parameterization schemes (CPSs) in the ARW-WRF (WRF) mesoscale model is examined for extratropical cyclones (ETCs) over the North Atlantic Ocean. Simulations of the very severe winter storms Xynthia (2010) and Gong (2013) are considered in this study. The most popular CPSs within the WRF model, along with the Yonsei University (YSU) planetary boundary layer (PBL) and WSM6 microphysical parameterization schemes, are used in the model experiments. For each storm, four numerical experiments were carried out using the New Kain-Fritsch (NKF), Betts-Miller-Janjic (BMJ), Grell 3D Ensemble (Gr3D) and no convection scheme (NCS) configurations, respectively. The prime objective of these experiments was to identify the best CPS for forecasting the intensity, track, and landfall over the Iberian Peninsula two days in advance. The WRF model results, such as central sea level pressure (CSLP), wind field, moisture flux convergence, geopotential height, jet stream, track and precipitation, show sensitivity to the CPSs. The 48-hour lead simulations with the BMJ scheme produce the best results for both ETC intensity and track, outperforming the Gr3D and NKF schemes. The average MAE and RMSE of intensity are lowest for the BMJ scheme (6.5 hPa in CSLP and 3.4 m s-1 in the 10-m wind). The MAE and RMSE for intensity and track reveal that NCS produces larger errors than the other CPS experiments. For the track simulation of these ETCs, the mean track errors at 72-, 48- and 24-hour lead times were 440, 390 and 158 km, respectively. In brief, the BMJ and Gr3D schemes can be used for short- and medium-range predictions of ETCs over the North Atlantic. The precipitation distributions simulated with the Gr3D scheme are in better agreement with TRMM satellite estimates than those from the other CPSs.
Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.
2008-10-01
A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.
Chang, Yeong-Chan
2009-02-01
This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller is developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables and not on the tuning approximation parameters. Furthermore, we extend the design to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.
A scheme for parameterizing ice cloud water content in general circulation models
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Donner, Leo J.
1989-01-01
A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)
2000-01-01
Chao's numerical and theoretical work on multiple quasi-equilibria of the intertropical convergence zone (ITCZ) and the origin of monsoon onset is extended to solve two additional puzzles. One is the highly nonlinear dependence on latitude of the "force" acting on the ITCZ due to earth's rotation, which makes the multiple quasi-equilibria of the ITCZ and monsoon onset possible. The other is the dramatic difference in such dependence when different cumulus parameterization schemes are used in a model. Such a difference can lead to a switch between a single ITCZ at the equator and a double ITCZ when a different cumulus parameterization scheme is used. Sometimes one of the double ITCZs diminishes and only the other remains, but this can still mean different latitudinal locations for the single ITCZ. A single idea, based on two off-equator attractors for the ITCZ that are due to earth's rotation and symmetric with respect to the equator, and on the dependence of the strength and size of these attractors on the cumulus parameterization scheme, solves both puzzles. The origin of these rotational attractors, explained in Part I, is further discussed. The "force" acting on the ITCZ due to earth's rotation is the sum of the "forces" of the two attractors. Each attractor exerts on the ITCZ a "force" of simple shape in latitude, but the sum gives a shape that varies strongly with latitude. Also, the strength and the domain of influence of each attractor vary when a change is made in the cumulus parameterization. This gives rise to the high sensitivity of the "force" shape to the cumulus parameterization. Numerical results from experiments using Goddard's GEOS general circulation model, supporting this idea, are presented. It is also found that the model results are sensitive to changes outside of the cumulus parameterization. The significance of this study for El Nino forecasts and for tropical forecasting in general is discussed.
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol's global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in the model physics.
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of the track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals, starting from cyclogenesis to near the landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), a deep and shallow convection scheme with a mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts better track and intensity when compared with the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of the physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, a maximum wind error of 12 m s-1 and a track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with delayed initial conditions, suggesting that the model forecast is more dependable when the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.
NASA Astrophysics Data System (ADS)
Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.
2008-08-01
During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and direction for one urban and two rural measurement stations. Furthermore, the simulated heat fluxes show good agreement with the observations, although the simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2 m/s. The new parameterization also has implications aloft, as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
Observational and Modeling Studies of Clouds and the Hydrological Cycle
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1997-01-01
Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We have used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
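A minimal dense-solve sketch of the underlying RBF mesh-movement idea is given below: fit radial weights to the displacements of a set of surface control points, then evaluate the interpolant at the volume nodes. The Wendland C2 basis and support radius are common choices but are assumptions here; the greedy point reduction and the secondary corrective movement described in the abstract are omitted.

```python
import numpy as np

def rbf_mesh_deform(surf_pts, surf_disp, vol_pts, radius):
    """Propagate surface displacements to volume nodes via RBF interpolation."""
    def wendland_c2(r):
        # Compactly supported, positive-definite Wendland C2 basis
        xi = np.clip(r / radius, 0.0, 1.0)
        return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

    # Pairwise distances among control points, and volume-to-control points
    d_ss = np.linalg.norm(surf_pts[:, None, :] - surf_pts[None, :, :], axis=-1)
    d_vs = np.linalg.norm(vol_pts[:, None, :] - surf_pts[None, :, :], axis=-1)

    weights = np.linalg.solve(wendland_c2(d_ss), surf_disp)  # one RHS per axis
    return wendland_c2(d_vs) @ weights                       # volume displacements

surf = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
disp = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
vol = np.array([[0.2, 0.2, 0.5], [0.8, 0.1, 0.3]])
print(rbf_mesh_deform(surf, disp, vol, radius=2.0))
```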
NASA Technical Reports Server (NTRS)
Braun, Scott A.; Tao, Wei-Kuo
1999-01-01
The MM5 mesoscale model is used to simulate Hurricane Bob (1991) using grids nested to high resolution (4 km). Tests are conducted to determine the sensitivity of the simulation to the available planetary boundary layer parameterizations, including the bulk-aerodynamic, Blackadar, Medium-Range Forecast (MRF) model, and Burk-Thompson boundary-layer schemes. Significant sensitivity is seen, with minimum central pressures varying by up to 17 mb. The Burk-Thompson and bulk-aerodynamic boundary-layer schemes produced the strongest storms while the MRF scheme produced the weakest storm. The precipitation structure of the simulated hurricanes also varied substantially with the boundary layer parameterizations. Diagnostics of boundary-layer variables indicated that the intensity of the simulated hurricanes generally increased as the ratio of the surface exchange coefficients for heat and momentum, C_h/C_M, increased, although the manner in which the vertical mixing takes place was also important. Findings specific to the boundary-layer schemes include: 1) the MRF scheme produces mixing that is too deep and causes drying of the lower boundary layer in the inner-core region of the hurricane; 2) the bulk-aerodynamic scheme produces mixing that is probably too shallow, but results in a strong hurricane because of a large value of C_h/C_M (approximately 1.3); 3) the MRF and Blackadar schemes are weak partly because of smaller surface moisture fluxes that result in a reduced value of C_h/C_M (approximately 0.7); 4) the Burk-Thompson scheme produces a strong storm with C_h/C_M approximately 1; and 5) the formulation of the wind-speed dependence of the surface roughness parameter, z_0, is important for getting appropriate values of the surface exchange coefficients in hurricanes based upon current estimates of these parameters.
Improving the Representation of Snow Crystal Properties Within a Single-Moment Microphysics Scheme
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, S. R.
2010-01-01
As computational resources continue to expand, weather forecast models are transitioning to the use of parameterizations that predict the evolution of hydrometeors and their microphysical processes, rather than estimating the bulk effects of clouds and precipitation that occur on a sub-grid scale. These parameterizations are referred to as single-moment, bulk water microphysics schemes, as they predict the total water mass among hydrometeors in a limited number of classes. Although the development of single-moment microphysics schemes has often been driven by the need to predict the structure of convective storms, they may also provide value in predicting accumulations of snowfall. Predicting the accumulation of snowfall presents unique challenges to forecasters and microphysics schemes. In cases where surface temperatures are near freezing, accumulated depth often depends upon the snowfall rate and the ability to overcome an initial warm layer. Precipitation efficiency relates to the dominant ice crystal habit, as dendrites and plates have relatively large surface areas for the accretion of cloud water and ice, but are only favored within a narrow range of ice supersaturation and temperature. Forecast models and their parameterizations must accurately represent the characteristics of snow crystal populations, such as their size distribution, bulk density and fall speed. These properties relate to the vertical distribution of ice within simulated clouds, the temperature profile through latent heat release, and the eventual precipitation rate measured at the surface. The NASA Goddard single-moment microphysics scheme is available to the operational forecast community as an option within the Weather Research and Forecasting (WRF) model. The NASA Goddard scheme predicts the occurrence of up to six classes of water mass: vapor, cloud ice, cloud water, rain, snow, and either graupel or hail.
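The link between the assumed size distribution and the bulk quantities a single-moment scheme carries can be sketched with an inverse-exponential distribution and power-law mass and fall-speed relations. The coefficients below are typical literature values for aggregates, not those of the NASA Goddard scheme, and the closed forms follow from standard gamma-function integrals.

```python
from math import gamma

def snow_moments(n0, lam, a=0.069, b=2.0, c=11.72, d=0.41):
    """Bulk snow mass content and mass-weighted fall speed for
    N(D) = N0 * exp(-lambda * D), m(D) = a*D**b, V(D) = c*D**d (SI units)."""
    mass_content = a * n0 * gamma(b + 1.0) / lam ** (b + 1.0)            # kg m-3
    v_mass_weighted = c * gamma(b + d + 1.0) / (gamma(b + 1.0) * lam ** d)  # m s-1
    return mass_content, v_mass_weighted

# Larger lambda means smaller mean particle size: less mass for the same
# intercept N0 and slower mass-weighted fall speed.
print(snow_moments(n0=2.0e7, lam=2000.0))
print(snow_moments(n0=2.0e7, lam=5000.0))
```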
Using Ground Measurements to Examine the Surface Layer Parameterization Scheme in NCEP GFS
NASA Astrophysics Data System (ADS)
Zheng, W.; Ek, M. B.; Mitchell, K.
2017-12-01
Understanding the behavior and the limitations of the surface layer parameterization scheme is important for parameterizing surface-atmosphere exchange processes in atmospheric models, accurately predicting near-surface temperature, and identifying the role of different physical processes in contributing to errors. In this study, we examine the surface layer parameterization scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) using ground flux measurements, including FLUXNET data. The model-simulated surface fluxes, surface temperature and vertical profiles of temperature and wind speed are compared against the observations. The limits of applicability of Monin-Obukhov similarity theory (MOST), which describes the vertical behavior of nondimensionalized mean flow and turbulence properties within the surface layer, are quantified for daytime and nighttime using the data. Results from unstable and stable regimes are discussed.
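As a reference for the kind of profile behavior being tested against MOST, the sketch below evaluates the stability-corrected logarithmic wind profile with Businger-Dyer corrections. This is a standard MOST diagnostic, not the specific GFS surface-layer implementation.

```python
import numpy as np

KARMAN = 0.4

def psi_m(zeta):
    """Businger-Dyer integrated stability correction for momentum."""
    if zeta >= 0.0:                      # stable
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25      # unstable
    return (2.0 * np.log((1.0 + x) / 2.0) + np.log((1.0 + x**2) / 2.0)
            - 2.0 * np.arctan(x) + np.pi / 2.0)

def most_wind(u_star, z, z0, obukhov_l):
    """Mean wind at height z: U = (u*/k)[ln(z/z0) - psi_m(z/L) + psi_m(z0/L)]."""
    return (u_star / KARMAN) * (np.log(z / z0)
                                - psi_m(z / obukhov_l)
                                + psi_m(z0 / obukhov_l))

# Same friction velocity, unstable vs. stable surface layer at 10 m
print(most_wind(0.3, 10.0, 0.05, obukhov_l=-50.0))  # unstable: weaker shear
print(most_wind(0.3, 10.0, 0.05, obukhov_l=100.0))  # stable: stronger shear
```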
Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones
NASA Technical Reports Server (NTRS)
Colle, Brian A.; Yu, Ruyi; Molthan, Andrew L.; Nesbitt, Steven
2014-01-01
It is hypothesized that microphysical predictions have greater uncertainties/errors when there are complex interactions that result from mixed-phase processes like riming. Global Precipitation Measurement (GPM) Mission ground validation studies in Ontario, Canada are used to verify and improve the parameterizations. The WRF model realistically simulated the warm frontal snowband at relatively short lead times (10-14 h). The snowband structure is sensitive to the microphysical parameterization used in WRF. The Goddard and SBU-YLin schemes most realistically predicted the band structure, but overpredicted snow content. The double-moment Morrison scheme best reproduced the slope of the snow size distribution, but it underpredicted the intercept. All schemes, as well as the radar-derived estimate (which used a dry-snow Z-R relationship), underpredicted the surface precipitation amount, likely because there was more cloud water than expected. The Morrison scheme had the most cloud water and the best precipitation prediction of all schemes.
Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system
NASA Astrophysics Data System (ADS)
Keane, R. J.; Plant, R. S.; Tennant, W. J.
2015-12-01
The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme with a simple stochastic element only, from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.
1987-01-01
A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
NASA Astrophysics Data System (ADS)
Qiao, F.; Liang, X.
2011-12-01
Accurate prediction of U.S. summer precipitation, including its geographic distribution, the occurrence frequency and intensity, and diurnal cycle, has been a long-standing problem for most climate and weather models. This study employs the Climate-Weather Research and Forecasting model (CWRF) to investigate the effects of cumulus parameterization on the prediction of these key precipitation features during the summers of 1993 and 2008, when severe floods occurred over the U.S. Midwest. Among the 12 widely used cumulus schemes incorporated in the CWRF, the Ensemble Cumulus Parameterization modified from G3 (ECP) scheme and the Zhang-McFarlane cumulus scheme modified by Liang (ZML) well reproduce the geographic distributions of the observed 1993 and 2008 floods, albeit both slightly underestimating the maximum amount. However, the ZML scheme greatly overestimates the rainfall amount over the North American Monsoon region and the Southeast U.S., while the ECP scheme has a better performance over the entire U.S. Compared to global general circulation models that tend to produce too frequent rainy events at reduced intensity, the CWRF better captures both the frequency and intensity of extreme events (heavy rainfall and dry spells). However, most existing cumulus schemes in the CWRF are likely to convert atmospheric moisture into rainfall too fast, leading to fewer rainy days and stronger heavy rainfall events. A few cumulus schemes can depict the diurnal characteristics in certain, but not all, regions over the U.S. For example, the Grell scheme shows its superiority in reproducing the eastward diurnal phase transition and the nocturnal peaks over the Great Plains, whereas the other schemes all fail in capturing this feature. Investigating the critical trigger function(s) that enable these cumulus schemes to capture the observed features provides an opportunity to better understand the underlying mechanisms that drive the diurnal variation, and thus to significantly improve the prediction of the U.S. summer rainfall diurnal cycle. These will be discussed.
An improved snow scheme for the ECMWF land surface model: Description and offline validation
Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder
2010-01-01
A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...
An inter-model comparison of urban canopy effects on climate
NASA Astrophysics Data System (ADS)
Halenka, Tomas; Karlicky, Jan; Huszar, Peter; Belda, Michal; Bardachova, Tatsiana
2017-04-01
The role of cities is increasing and will continue to increase in the future, as the population within urban areas grows faster; for Europe, an estimated 84% of the population will live in urban areas by about the middle of the 21st century. A modeling approach is well suited to assessing the impact of cities and, in general, of urban surfaces on climate. Moreover, with higher resolution, urban areas become better resolved in regional models and their relatively significant impacts should not be neglected. Model descriptions of urban-canopy-related meteorological effects can, however, differ considerably depending on the driving models, the underlying surface models and the urban canopy parameterizations, representing a certain uncertainty. In this study we contribute to the estimation of this uncertainty by performing numerous experiments to assess the urban canopy meteorological forcing over central Europe on climate for the decade 2001-2010, using two driving models (RegCM4 and WRF) at 10 km resolution driven by ERA-Interim reanalyses, three surface schemes (BATS and CLM4.5 for RegCM4 and Noah for WRF) and five available urban canopy parameterizations: one bulk urban scheme, three single-layer schemes and a multilayer urban scheme. In RegCM4 we used our implementation of the Single Layer Urban Canopy Model (SLUCM) in the BATS scheme and the CLM4.5 option with an urban parameterization based on the SLUCM concept as well; in WRF we used all three options, i.e. bulk, SLUCM, and the more complex and sophisticated Building Environment Parameterization (BEP) coupled with the Building Energy Model (BEM). As reference simulations, runs with no urban areas and with no urban parameterizations were performed. Effects of cities on urban and rural areas were evaluated. The reduction of the diurnal temperature range in cities (around 2 °C in summer) is noticeable in all simulations, independent of the urban parameterization type and model. The well-known warmer summer city nights also appear in all simulations. Further, a winter boundary layer increase of 100-200 m, together with a wind reduction, is visible in all simulations. The spatial distribution of the night-time temperature response of the models to urban canopy forcing is rather similar in each set-up, showing temperature increases of up to 3 °C in summer. In general, much smaller increases are modeled for daytime conditions, which can even be slightly negative due to the dominance of shadowing in urban canyons, especially in the morning hours. The winter temperature response, driven mainly by anthropogenic heat (AH), is strong in urban schemes where the building-street energy exchange is resolved in more detail, and smaller where AH is simply prescribed as an additive flux to the sensible heat. Somewhat larger differences between the models are encountered for the response of wind and the height of the planetary boundary layer (ZPBL), with dominant increases from a few tens of meters up to 250 m depending on the model. The comparison of observed diurnal temperature amplitudes from ECAD data with model results, and of hourly data from Prague with model hourly values, shows improvement when urban effects are considered. The larger spread encountered for wind and turbulence (such as ZPBL) should be considered when choosing urban canopy schemes, especially in connection with modeling the transport of pollutants within and from cities. Another conclusion is that choosing more complex urban schemes does not necessarily improve model performance, and using simpler and computationally less demanding (e.g. single-layer) urban schemes is often sufficient.
Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav
2009-05-01
The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A_i and B_i and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology which was recently successfully applied to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set, were also developed. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, which had not been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges.
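Once the parameters A_i, B_i and kappa are available, EEM charges follow from a small linear system: atomic electronegativities are equalized subject to charge conservation. The sketch below shows that solve; the parameter values and geometry in the example are fabricated, and unit conventions (e.g. distances in Angstrom) are an assumption.

```python
import numpy as np

def eem_charges(a, b, kappa, coords, total_charge=0.0):
    """Solve the EEM system: A_i + B_i*q_i + kappa*sum_{j!=i} q_j/R_ij = chi_bar,
    with sum_i q_i = total_charge. Returns charges and chi_bar."""
    n = len(a)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    m = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        # Off-diagonal Coulomb-like coupling; diagonal replaced by B_i below
        m[i, :n] = kappa / np.where(dist[i] > 0.0, dist[i], np.inf)
        m[i, i] = b[i]
        m[i, n] = -1.0            # coefficient of the unknown chi_bar
        rhs[i] = -a[i]
    m[n, :n] = 1.0                # charge conservation row
    rhs[n] = total_charge
    sol = np.linalg.solve(m, rhs)
    return sol[:n], sol[n]

# Toy diatomic with fabricated parameters (distances assumed in Angstrom)
charges, chi = eem_charges(a=np.array([2.5, 3.5]), b=np.array([5.0, 6.0]),
                           kappa=0.5,
                           coords=np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]]))
print(charges, chi)
```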
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.
2017-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty by representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in the mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum as well as both the inter- and intra-annual variability of tropical Pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO variability.
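The core of SPPT can be sketched compactly: the net parameterized tendency is multiplied by (1 + r), where r is a bounded red-noise pattern. The scalar AR(1) version below is a simplification; the operational scheme uses a spectral pattern correlated in space and tapers the perturbation near the surface and model top.

```python
import numpy as np

rng = np.random.default_rng(1)

def sppt_step(tendencies, pattern, dt, tau=6 * 3600.0, sigma=0.5, clip=2.0):
    """Perturb parameterized tendencies as (1 + r) * T, with r an AR(1) process."""
    phi = np.exp(-dt / tau)                                   # red-noise memory
    pattern = phi * pattern + np.sqrt(1.0 - phi**2) * rng.normal(0.0, sigma)
    r = np.clip(pattern, -clip, clip)                         # keep 1 + r bounded
    return {name: (1.0 + r) * tend for name, tend in tendencies.items()}, pattern

tend = {"temperature": 1.5e-5, "humidity": -2.0e-9}   # per-timestep tendencies
pat = 0.0
for _ in range(3):
    perturbed, pat = sppt_step(tend, pat, dt=1800.0)
print(perturbed)
```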
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
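The Monte Carlo coupling of the assumed PDF to microphysics can be illustrated with a one-variable stand-in: draw subgrid samples of total water from an assumed distribution, diagnose cloud water per sample, and average a nonlinear process rate over the samples. The normal PDF and the Kessler-type autoconversion used below are placeholders chosen for brevity, not the scheme's actual multivariate PDF or microphysics.

```python
import numpy as np

rng = np.random.default_rng(2)

def subgrid_autoconversion(qt_mean, qt_std, q_sat, n_samples=1000,
                           k=1.0e-3, qc_crit=5.0e-4):
    """Average a nonlinear rate over subgrid samples drawn from an assumed PDF."""
    qt = rng.normal(qt_mean, qt_std, n_samples)          # subcolumn total water
    qc = np.maximum(qt - q_sat, 0.0)                     # diagnosed cloud water
    rate_samples = k * np.maximum(qc - qc_crit, 0.0)     # Kessler-type rate
    # Compare with the rate evaluated at the grid mean (no subgrid variability)
    rate_at_mean = k * max(max(qt_mean - q_sat, 0.0) - qc_crit, 0.0)
    return rate_samples.mean(), rate_at_mean

# A grid box whose mean is just below saturation still rains in places.
print(subgrid_autoconversion(qt_mean=9.8e-3, qt_std=1.0e-3, q_sat=1.0e-2))
```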
Parameterizing deep convection using the assumed probability density function method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storer, R. L.; Griffin, B. M.; Höft, J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Parameterizing deep convection using the assumed probability density function method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storer, R. L.; Griffin, B. M.; Hoft, Jan
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
NASA Astrophysics Data System (ADS)
Pandey, Gavendra; Sharan, Maithili
2018-01-01
Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters σ's are described for both stable and unstable conditions. These schemes differ in their use of (i) on-site measurements, to the extent available, (ii) formulations developed for other sites, and (iii) empirical relations. The performance of these schemes is evaluated in an earlier developed IIT (Indian Institute of Technology) dispersion model with data sets from single and multiple releases conducted at the Fusion Field Trials, Dugway Proving Ground, Utah, 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out in both stable and unstable conditions in the light of (i) peak/maximum concentrations, and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The various analyses based on selected statistical measures indicate consistency in the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
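A representative member of the velocity-variance/Lagrangian-time-scale family of σ schemes mentioned above is Taylor's classical result for homogeneous turbulence, sketched below. The scheme evaluated in the study may use a different interpolation formula; this is only meant to show how σ_v and T_L enter.

```python
import numpy as np

def sigma_y_taylor(sigma_v, travel_time, t_lagrangian):
    """Lateral plume spread from Taylor (1921) homogeneous-turbulence theory.

    sigma_y^2 = 2 * sigma_v^2 * T_L^2 * (t/T_L - 1 + exp(-t/T_L)),
    which reduces to sigma_v * t for short travel times and grows like
    sqrt(t) far downwind.
    """
    tau = travel_time / t_lagrangian
    return np.sqrt(2.0 * sigma_v**2 * t_lagrangian**2
                   * (tau - 1.0 + np.exp(-tau)))

# Spread after 100 s and 3000 s of travel for sigma_v = 0.5 m/s, T_L = 200 s
for t in (100.0, 3000.0):
    print(t, sigma_y_taylor(0.5, t, 200.0))
```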
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970's. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A Stochastic Multicloud Model for Tropical Convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138 (2012), 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
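A toy version of the cellular-automaton ingredient is sketched below: a fine-grid 0/1 field updated from neighbour counts (lateral communication and memory) plus random seeding (the stochastic element). The birth/survival rules and the seeding probability are arbitrary illustrations, not the ALARO implementation, where seeding and feedback are tied to the host model's convective activity.

```python
import numpy as np

rng = np.random.default_rng(3)

def ca_step(cells, seed_prob):
    """One update of a toy convective cellular automaton with periodic edges."""
    # Count the eight neighbours of every cell via periodic shifts
    neighbours = sum(np.roll(np.roll(cells, i, axis=0), j, axis=1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1)
                     if (i, j) != (0, 0))
    born = (cells == 0) & (neighbours >= 3)      # lateral communication
    survive = (cells == 1) & (neighbours >= 2)   # memory
    seeded = rng.random(cells.shape) < seed_prob  # stochastic seeding
    return (born | survive | seeded).astype(int)

field = (rng.random((20, 20)) < 0.05).astype(int)
for _ in range(10):
    field = ca_step(field, seed_prob=0.01)
print(field.sum(), "active fine-grid cells after 10 steps")
```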
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists of reducing the amount of energy advected within the propagation scheme, and it is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.
WRF model sensitivity to choice of parameterization: a study of the `York Flood 1999'
NASA Astrophysics Data System (ADS)
Remesan, Renji; Bellerby, Tim; Holman, Ian; Frostick, Lynne
2015-10-01
Numerical weather modelling has gained considerable attention in the field of hydrology, especially for un-gauged catchments and in conjunction with distributed models. As a consequence, the accuracy with which these models represent precipitation, sub-grid-scale processes, and exceptional events has become of considerable concern to the hydrological community. This paper presents sensitivity analyses for the Weather Research and Forecasting (WRF) model with respect to the choice of physical parameterization schemes (both cumulus parameterization schemes (CPSs) and microphysics parameterization schemes (MPSs)) used to represent the `1999 York Flood' event, which occurred over North Yorkshire, UK, 1st-14th March 1999. The study assessed four CPSs (Kain-Fritsch (KF2), Betts-Miller-Janjic (BMJ), Grell-Devenyi ensemble (GD), and the old Kain-Fritsch (KF1)) and four MPSs (Kessler, Lin et al., WRF single-moment 3-class (WSM3), and WRF single-moment 5-class (WSM5)) with respect to their influence on modelled rainfall. The study suggests that the BMJ scheme may be a better cumulus parameterization choice for the study region, giving a consistently better performance than the other three CPSs, though there are suggestions of underestimation. The WSM3 scheme was identified as the best MPS, and a combined WSM3/BMJ model setup produced realistic estimates of precipitation quantities for this exceptional flood event. This study analysed spatial variability in WRF performance through categorical indices, including POD, FBI, FAR and CSI, during the York Flood 1999 under various model settings. Moreover, the WRF model was good at predicting high-intensity rare events over the Yorkshire region, suggesting it has potential for operational use.
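The categorical indices named above (POD, FBI, FAR, CSI) are all derived from a 2x2 contingency table of forecast versus observed rain exceedance. A minimal sketch, with the threshold and names chosen for illustration:

```python
# Hedged sketch: categorical verification scores from a forecast/observed pair.
import numpy as np

def categorical_scores(forecast, observed, threshold=1.0):
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    # Degenerate cases (no events or no forecasts) are not handled in this sketch.
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    fbi = (hits + false_alarms) / (hits + misses)  # frequency bias index
    csi = hits / (hits + misses + false_alarms)    # critical success index
    return {"POD": pod, "FAR": far, "FBI": fbi, "CSI": csi}
```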
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray-zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boroun, G. R., E-mail: grboroun@gmail.com, E-mail: boroun@razi.ac.ir; Zarrin, S.
We derive a general scheme for the evolution of the nonsinglet structure function at leading order (LO) and next-to-leading order (NLO) by using the Laplace-transform technique. Results for the nonsinglet structure function are compared with the MSTW2008, GRV, and CKMT parameterizations and also with EMC experimental data in the LO and NLO analyses. The results are in good agreement with the experimental data and the other parameterizations in the low- and large-x regions.
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at the "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. These include i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
NASA Astrophysics Data System (ADS)
Chen, Y. H.; Kuo, C. P.; Huang, X.; Yang, P.
2017-12-01
Clouds play an important role in the Earth's radiation budget, and thus realistic and comprehensive treatments of cloud optical properties and cloudy-sky radiative transfer are crucial for simulating weather and climate. However, most GCMs neglect LW scattering effects by clouds and tend to use inconsistent cloud SW and LW optical parameterizations. Recently, co-authors of this study have developed a new LW optical properties parameterization for ice clouds, which is based on ice cloud particle statistics from MODIS measurements and state-of-the-art scattering calculations. A two-stream multiple-scattering scheme has also been implemented into the RRTMG_LW, a longwave radiation scheme widely used by climate modeling centers. This study integrates both the new LW cloud-radiation scheme for ice clouds and the modified RRTMG_LW with scattering capability into the NCAR CESM to improve the treatment of cloud longwave radiation. A number of single-column model (SCM) simulations using observations from the ARM SGP site from July 18 to August 4, 1995, are carried out to assess the impact of the new LW optical properties of clouds and the scattering-enabled radiation scheme on the simulated radiation budget and cloud radiative effect (CRE). The SCM simulation allows interaction between the cloud and radiation schemes and other parameterizations, but the large-scale forcing is prescribed or nudged. Compared to the results from the SCM of the standard CESM, the new ice cloud optical properties alone lead to an average increase of the LW CRE by 26.85 W m-2, as well as an increase of the downward LW flux at the surface by 6.48 W m-2. Enabling LW cloud scattering further increases the LW CRE by another 3.57 W m-2 and the downward LW flux at the surface by 0.2 W m-2. The change in LW CRE is mainly due to an increase in cloud-top height. A long-term simulation of CESM will be carried out to further understand the impact of such changes on the simulated climate.
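For reference, the longwave cloud radiative effect quoted above is conventionally diagnosed as the clear-sky minus all-sky outgoing longwave flux at the top of the atmosphere; a trivial sketch (the function name is illustrative):

```python
def lw_cloud_radiative_effect(olr_clear_sky, olr_all_sky):
    """Both fluxes in W m-2 at the top of the atmosphere; a positive value
    means clouds reduce the outgoing longwave radiation (a warming effect)."""
    return olr_clear_sky - olr_all_sky
```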
Atmospheric parameterization schemes for satellite cloud property retrieval during FIRE IFO 2
NASA Technical Reports Server (NTRS)
Titlow, James; Baum, Bryan A.
1993-01-01
Satellite cloud retrieval algorithms generally require atmospheric temperature and humidity profiles to determine such cloud properties as pressure and height. For instance, the CO2 slicing technique called the ratio method requires the calculation of theoretical upwelling radiances both at the surface and at a prescribed number (40) of atmospheric levels. This technique has been applied to data from, for example, the High Resolution Infrared Radiometer Sounder (HIRS/2, henceforth HIRS) flown aboard the NOAA series of polar orbiting satellites and the High Resolution Interferometer Sounder (HIS). In this particular study, four NOAA-11 HIRS channels in the 15-micron region are used. The ratio method may be applied to various channel combinations to estimate cloud-top heights using channels in the 15-micron region. Presently, the multispectral, multiresolution (MSMR) scheme uses four HIRS channel combination estimates for mid- to high-level cloud pressure retrieval and Advanced Very High Resolution Radiometer (AVHRR) data for low-level (>700 mb) cloud retrieval. In order to determine theoretical upwelling radiances, atmospheric temperature and water vapor profiles must be provided, as well as profiles of other radiatively important absorbing gases such as CO2, O3, and CH4. The assumed temperature and humidity profiles have a large effect on the transmittance and radiance profiles, which in turn are used with HIRS data to calculate cloud pressure, and thus cloud height and temperature. For large-spatial-scale satellite data analysis, atmospheric parameterization schemes for cloud retrieval algorithms are usually based on a gridded product such as that provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) or the National Meteorological Center (NMC). These global, gridded products prescribe temperature and humidity profiles for a limited number of pressure levels (up to 14) in a vertical atmospheric column. The FIRE IFO 2 experiment provides an opportunity to investigate current atmospheric profile parameterization schemes, compare satellite cloud height results using both gridded products (ECMWF) and high vertical resolution sonde data from the National Weather Service (NWS) and Cross Chain Loran Atmospheric Sounding System (CLASS), and suggest modifications in atmospheric parameterization schemes based on these results.
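The core of the CO2-slicing ratio method can be sketched as follows: the observed ratio of clear-minus-cloudy radiance differences in two CO2 channels is matched against the theoretical ratio computed for candidate cloud-top pressures. This is a schematic Python illustration that assumes the channel transmittance profiles and Planck-radiance derivatives are already available; the names and the simple level search are illustrative, not the operational MSMR implementation.

```python
# Hedged sketch of the CO2-slicing "ratio method" for cloud-top pressure.
import numpy as np

def co2_slicing_cloud_pressure(p, tau1, tau2, dB1_dp, dB2_dp,
                               I_obs1, I_clr1, I_obs2, I_clr2):
    """p: pressure levels in hPa (top first, surface last); tau*, dB*_dp:
    transmittance and Planck-derivative profiles for the two CO2 channels;
    I_obs*/I_clr*: observed and clear-sky radiances in each channel."""
    obs_ratio = (I_clr1 - I_obs1) / (I_clr2 - I_obs2)
    best_pc, best_err = None, np.inf
    for k in range(len(p) - 1):                     # candidate cloud-top levels
        num = np.trapz((tau1 * dB1_dp)[k:], p[k:])  # cloud top to surface, channel 1
        den = np.trapz((tau2 * dB2_dp)[k:], p[k:])  # cloud top to surface, channel 2
        if den == 0:
            continue
        err = abs(num / den - obs_ratio)
        if err < best_err:
            best_pc, best_err = p[k], err
    return best_pc
```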
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, R.; Hong, Seungkyu K.; Kwon, Hyoung-Ahn
We used a 3-D regional atmospheric chemistry transport model (WRF-Chem) to examine processes that determine O3 in East Asia; in particular, we focused on O3 dry deposition, which remains uncertain due to insufficient observational and numerical studies in East Asia. Here, we compare two widely used dry deposition parameterization schemes, Wesely and M3DRY, which are used in the WRF-Chem and CMAQ models, respectively. The O3 dry deposition velocities simulated using the two schemes under identical meteorological conditions show considerable differences (a factor of 2) due to discrepancies in the surface resistance parameterization. The monthly mean O3 concentration differed by up to 10 ppbv. The simulated and observed dry deposition velocities were compared, which showed that the Wesely scheme is consistent with the observations and successfully reproduces the observed diurnal variation. We conducted several sensitivity simulations by changing the land use data, the surface resistance of water, and the model's spatial resolution to examine the factors that affect O3 concentrations in East Asia. The model was considerably sensitive to these input parameters, which indicates a high uncertainty for such O3 dry deposition simulations. Observations are necessary to constrain the dry deposition parameterization and input data to improve East Asia air quality models.
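Wesely-type dry deposition schemes such as the ones compared above share a resistance-in-series structure: the deposition velocity is the inverse of the sum of aerodynamic, quasi-laminar, and surface (canopy) resistances. A minimal sketch; the default surface resistance here is a placeholder constant, not the full Wesely surface-resistance formulation:

```python
def o3_deposition_velocity(ra, rb, rc=200.0):
    """ra, rb, rc: aerodynamic, quasi-laminar and surface resistances in s m-1;
    returns the dry deposition velocity in m s-1."""
    return 1.0 / (ra + rb + rc)
```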
Sims, Aaron P; Alapaty, Kiran; Raman, Sethu
2017-01-01
Two mesoscale circulations, the Sandhills circulation and the sea breeze, influence the initiation of deep convection over the Sandhills and the coast in the Carolinas during the summer months. The interaction of these two circulations causes additional convection in this coastal region. Accurate representation of mesoscale convection is difficult as numerical models have problems with the prediction of the timing, amount, and location of precipitation. To address this issue, the authors have incorporated modifications to the Kain-Fritsch (KF) convective parameterization scheme and evaluated these mesoscale interactions using a high-resolution numerical model. The modifications include changes to the subgrid-scale cloud formulation, the convective turnover time scale, and the formulation of the updraft entrainment rates. The use of a grid-scaling adjustment parameter modulates the impact of the KF scheme as a function of the horizontal grid spacing used in a simulation. Results indicate that the impact of this modified cumulus parameterization scheme is more effective on domains with coarser grid sizes. Other results include a decrease in surface and near-surface temperatures in areas of deep convection (due to the inclusion of the effects of subgrid-scale clouds on the radiation), improvement in the timing of convection, and an increase in the strength of deep convection.
NASA Technical Reports Server (NTRS)
Freitas, Saulo R.; Grell, Georg; Molod, Andrea; Thompson, Matthew A.
2017-01-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, mid, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over the land. Here, we briefly introduce the recent developments, implementation, and preliminary results of this parameterization in the NASA GEOS modeling system.
Cloud Radiation Forcings and Feedbacks: General Circulation Model Tests and Observational Validation
NASA Technical Reports Server (NTRS)
Lee, Wan-Ho; Iacobellis, Sam F.; Somerville, Richard C. J.
1997-01-01
Using an atmospheric general circulation model (the National Center for Atmospheric Research Community Climate Model: CCM2), the effects on climate sensitivity of several different cloud radiation parameterizations have been investigated. In addition to the original cloud radiation scheme of CCM2, four parameterizations incorporating prognostic cloud water were tested: one version with prescribed cloud radiative properties and three other versions with interactive cloud radiative properties. The authors' numerical experiments employ perpetual July integrations driven by globally constant sea surface temperature forcings of two degrees, both positive and negative. A diagnostic radiation calculation has been applied to investigate the partial contributions of high, middle, and low cloud to the total cloud radiative forcing, as well as the contributions of water vapor, temperature, and cloud to the net climate feedback. The high cloud net radiative forcing is positive, and the middle and low cloud net radiative forcings are negative. The total net cloud forcing is negative in all of the model versions. The effect of interactive cloud radiative properties on global climate sensitivity is significant. The net cloud radiative feedbacks consist of quite different shortwave and longwave components between the schemes with interactive cloud radiative properties and the schemes with specified properties. The increase in cloud water content in the warmer climate leads to optically thicker middle- and low-level clouds and in turn to negative shortwave feedbacks for the interactive radiative schemes, while the decrease in cloud amount simply produces a positive shortwave feedback for the schemes with a specified cloud water path. For the longwave feedbacks, the decrease in high effective cloudiness for the schemes without interactive radiative properties leads to a negative feedback, while for the other cases, the longwave feedback is positive. These cloud radiation parameterizations are empirically validated by using a single-column diagnostic model, together with measurements from the Atmospheric Radiation Measurement program and from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment. The inclusion of prognostic cloud water produces a notable improvement in the realism of the parameterizations, as judged by these observations. Furthermore, the observational evidence suggests that deriving cloud radiative properties from cloud water content and microphysical characteristics is a promising route to further improvement.
Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2014-12-01
A double-moment cloud microphysical scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011) has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates the processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
Double-moment Cloud Microphysics Scheme for the Deep Convection Parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2013-12-01
A double-moment cloud microphysical scheme originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011) is being implemented into the deep convection parameterization of the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates the processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. The detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and the roles of these effects in climate change. The scheme is implemented into the single-column version of the GFDL AM3 and evaluated using large-scale forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains and Tropical Western Pacific sites. The sensitivity of the scheme to the formulations for autoconversion of cloud water and its accretion by rain, self-collection of rain, self-collection of snow, and heterogeneous ice nucleation is investigated. In the future, tests with the full atmospheric GCM will be conducted.
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
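The BMA combination of the parameterization-specific estimates described above can be written compactly: the posterior-weighted mean, plus a total variance that is the sum of the within-parameterization and between-parameterization terms. A minimal sketch, with array shapes and names chosen for illustration:

```python
# Hedged sketch of the BMA mean and variance decomposition.
import numpy as np

def bma_combine(means, variances, weights):
    """means, variances: (n_models, n_cells) conditional mean and variance of the
    estimated field from each parameterization; weights: posterior model probabilities."""
    means, variances = np.asarray(means), np.asarray(variances)
    w = np.asarray(weights, dtype=float)[:, None]
    w /= w.sum()
    mean = np.sum(w * means, axis=0)
    within = np.sum(w * variances, axis=0)                # within-parameterization variance
    between = np.sum(w * (means - mean) ** 2, axis=0)     # between-parameterization variance
    return mean, within + between
```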
NASA Astrophysics Data System (ADS)
Park, Jun; Hwang, Seung-On
2017-11-01
The impact of a spectral nudging technique for the dynamical downscaling of the summer surface air temperature in a high-resolution regional atmospheric model is assessed. The performance of this technique is measured by comparing 16 analysis-driven simulation sets combining two shortwave radiation and four land surface model schemes of the model, which are known to be crucial for the simulation of the surface air temperature. It is found that the application of spectral nudging to the outermost domain has a greater impact on the regional climate than any combination of shortwave radiation and land surface model physics schemes. The optimal choice of the two model physics parameterizations is helpful for obtaining more realistic spatiotemporal distributions of land surface variables such as the surface air temperature, precipitation, and surface fluxes. However, employing spectral nudging adds more value to the results; the improvement is greater than that obtained by using sophisticated shortwave radiation and land surface model physical parameterizations. This result indicates that spectral nudging applied to the outermost domain provides a more accurate lateral boundary condition to the innermost domain when forced by analysis data, by securing consistency with the large-scale forcing over the regional domain. This in turn helps the two physical parameterizations produce small-scale features closer to the observed values, leading to a better representation of the surface air temperature in the high-resolution downscaled climate.
NASA Astrophysics Data System (ADS)
De Ridder, K.; Bertrand, C.; Casanova, G.; Lefebvre, W.
2012-09-01
Increasingly, mesoscale meteorological and climate models are used to predict urban weather and climate. Yet, large uncertainties remain regarding values of some urban surface properties. In particular, information concerning urban values for thermal roughness length and thermal admittance is scarce. In this paper, we present a method to estimate values for thermal admittance in combination with an optimal scheme for thermal roughness length, based on METEOSAT-8/SEVIRI thermal infrared imagery in conjunction with a deterministic atmospheric model containing a simple urbanized land surface scheme. Given the spatial resolution of the SEVIRI sensor, the resulting parameter values are applicable at scales of the order of 5 km. As a study case we focused on the city of Paris, for the day of 29 June 2006. Land surface temperature was calculated from SEVIRI thermal radiances using a new split-window algorithm specifically designed to handle urban conditions, as described in Appendix A, including a correction for anisotropy effects. Land surface temperature was also calculated in an ensemble of simulations carried out with the ARPS mesoscale atmospheric model, combining different thermal roughness length parameterizations with a range of thermal admittance values. Particular care was taken to spatially match the simulated land surface temperature with the SEVIRI field of view, using the so-called point spread function of the latter. Using Bayesian inference, the best agreement between simulated and observed land surface temperature was obtained for the Zilitinkevich (1970) and Brutsaert (1975) thermal roughness length parameterizations, the latter with the coefficients obtained by Kanda et al. (2007). The retrieved thermal admittance values associated with either thermal roughness parameterization were, respectively, 1843 ± 108 J m-2 s-1/2 K-1 and 1926 ± 115 J m-2 s-1/2 K-1.
Thayer-Calder, K.; Gettelman, A.; Craig, C.; ...
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.
Thayer-Calder, Katherine; Gettelman, A.; Craig, Cheryl; ...
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. In conclusion, the new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to the number of subcolumns.
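The Monte Carlo interface described in these two entries can be sketched as drawing subcolumn samples of the subgrid state and averaging the microphysical tendencies over them. The joint-normal PDF and the placeholder microphysics call below are illustrative assumptions, not the CAM 5.3 implementation.

```python
# Hedged sketch of a Monte Carlo subcolumn interface to microphysics.
import numpy as np

def sample_and_average(mean_state, cov, microphysics, n_subcolumns=10, rng=None):
    """mean_state: length-4 vector (T, qv, ql, qi) for one grid box and level;
    cov: 4x4 subgrid covariance; microphysics: f(sample) -> tendency vector."""
    rng = rng or np.random.default_rng()
    samples = rng.multivariate_normal(mean_state, cov, size=n_subcolumns)
    samples[:, 1:] = np.clip(samples[:, 1:], 0.0, None)   # no negative water contents
    tendencies = np.array([microphysics(s) for s in samples])
    return tendencies.mean(axis=0)                        # grid-mean tendency returned to the model
```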
On the Relationship between Observed NLDN Lightning ...
Lightning-produced nitrogen oxides (NOX = NO + NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist in the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes the lightning NO emissions using local scaling factors adjusted by the convective precipitation rate that is predicted by the upstream meteorological model; the adjustment is based on the observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates is needed. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
NASA Astrophysics Data System (ADS)
Song, Xiaoliang; Zhang, Guang J.
2018-03-01
Several improvements are implemented in the Zhang-McFarlane (ZM) convection scheme to investigate the roles of convection parameterization in the formation of the double intertropical convergence zone (ITCZ) bias in the NCAR CESM1.2.1. It is shown that the prominent double ITCZ biases of precipitation, sea surface temperature (SST), and wind stress in the standard CESM1.2.1 are largely eliminated in all seasons with the use of these improvements in the convection scheme. This study for the first time demonstrates that modifications of the convection scheme can eliminate the double ITCZ biases in all seasons, including boreal winter and spring. Further analysis shows that the elimination of the double ITCZ bias is achieved not by improving other possible contributors, such as the stratus cloud bias off the west coast of South America and cloud/radiation biases over the Southern Ocean, but by modifying the convection scheme itself. This study demonstrates that the convection scheme is the primary contributor to the double ITCZ bias in the CESM1.2.1, and provides a possible solution to the long-standing double ITCZ problem. The atmospheric model simulations forced by observed SST show that the original ZM convection scheme tends to produce the double ITCZ bias in high-SST scenarios, while the modified convection scheme does not. The impact of changes in each core component of the convection scheme on the double ITCZ bias in the atmospheric model is identified and further investigated.
Chen, T.H.; Henderson-Sellers, A.; Milly, P.C.D.; Pitman, A.J.; Beljaars, A.C.M.; Polcher, J.; Abramopoulos, F.; Boone, A.; Chang, S.; Chen, F.; Dai, Y.; Desborough, C.E.; Dickinson, R.E.; Dumenil, L.; Ek, M.; Garratt, J.R.; Gedney, N.; Gusev, Y.M.; Kim, J.; Koster, R.; Kowalczyk, E.A.; Laval, K.; Lean, J.; Lettenmaier, D.; Liang, X.; Mahfouf, Jean-Francois; Mengelkamp, H.-T.; Mitchell, Ken; Nasonova, O.N.; Noilhan, J.; Robock, A.; Rosenzweig, C.; Schaake, J.; Schlosser, C.A.; Schulz, J.-P.; Shao, Y.; Shmakin, A.B.; Verseghy, D.L.; Wetzel, P.; Wood, E.F.; Xue, Y.; Yang, Z.-L.; Zeng, Q.
1997-01-01
In the Project for Intercomparison of Land-Surface Parameterization Schemes phase 2a experiment, meteorological data for the year 1987 from Cabauw, the Netherlands, were used as inputs to 23 land-surface flux schemes designed for use in climate and weather models. Schemes were evaluated by comparing their outputs with long-term measurements of surface sensible heat fluxes into the atmosphere and the ground, and of upward longwave radiation and total net radiative fluxes, and also comparing them with latent heat fluxes derived from a surface energy balance. Tuning of schemes by use of the observed flux data was not permitted. On an annual basis, the predicted surface radiative temperature exhibits a range of 2 K across schemes, consistent with the range of about 10 W m-2 in predicted surface net radiation. Most modeled values of monthly net radiation differ from the observations by less than the estimated maximum monthly observational error (±10 W m-2). However, modeled radiative surface temperature appears to have a systematic positive bias in most schemes; this might be explained by an error in assumed emissivity and by models' neglect of canopy thermal heterogeneity. Annual means of sensible and latent heat fluxes, into which net radiation is partitioned, have ranges across schemes of 30 W m-2 and 25 W m-2, respectively. Annual totals of evapotranspiration and runoff, into which the precipitation is partitioned, both have ranges of 315 mm. These ranges in annual heat and water fluxes were approximately halved upon exclusion of the three schemes that have no stomatal resistance under non-water-stressed conditions. Many schemes tend to underestimate latent heat flux and overestimate sensible heat flux in summer, with a reverse tendency in winter. For six schemes, root-mean-square deviations of predictions from monthly observations are less than the estimated upper bounds on observation errors (5 W m-2 for sensible heat flux and 10 W m-2 for latent heat flux). Actual runoff at the site is believed to be dominated by vertical drainage to groundwater, but several schemes produced significant amounts of runoff as overland flow or interflow. There is a range across schemes of 184 mm (40% of total pore volume) in the simulated annual mean root-zone soil moisture. Unfortunately, no measurements of soil moisture were available for model evaluation. A theoretical analysis suggested that differences in boundary conditions used in various schemes are not sufficient to explain the large variance in soil moisture. However, many of the extreme values of soil moisture could be explained in terms of the particulars of experimental setup or excessive evapotranspiration.
NASA Astrophysics Data System (ADS)
Chen, T. H.; Henderson-Sellers, A.; Milly, P. C. D.; Pitman, A. J.; Beljaars, A. C. M.; Polcher, J.; Abramopoulos, F.; Boone, A.; Chang, S.; Chen, F.; Dai, Y.; Desborough, C. E.; Dickinson, R. E.; Dümenil, L.; Ek, M.; Garratt, J. R.; Gedney, N.; Gusev, Y. M.; Kim, J.; Koster, R.; Kowalczyk, E. A.; Laval, K.; Lean, J.; Lettenmaier, D.; Liang, X.; Mahfouf, J.-F.; Mengelkamp, H.-T.; Mitchell, K.; Nasonova, O. N.; Noilhan, J.; Robock, A.; Rosenzweig, C.; Schaake, J.; Schlosser, C. A.; Schulz, J.-P.; Shao, Y.; Shmakin, A. B.; Verseghy, D. L.; Wetzel, P.; Wood, E. F.; Xue, Y.; Yang, Z.-L.; Zeng, Q.
1997-06-01
In the Project for Intercomparison of Land-Surface Parameterization Schemes phase 2a experiment, meteorological data for the year 1987 from Cabauw, the Netherlands, were used as inputs to 23 land-surface flux schemes designed for use in climate and weather models. Schemes were evaluated by comparing their outputs with long-term measurements of surface sensible heat fluxes into the atmosphere and the ground, and of upward longwave radiation and total net radiative fluxes, and also comparing them with latent heat fluxes derived from a surface energy balance. Tuning of schemes by use of the observed flux data was not permitted. On an annual basis, the predicted surface radiative temperature exhibits a range of 2 K across schemes, consistent with the range of about 10 W m-2 in predicted surface net radiation. Most modeled values of monthly net radiation differ from the observations by less than the estimated maximum monthly observational error (±10 W m-2). However, modeled radiative surface temperature appears to have a systematic positive bias in most schemes; this might be explained by an error in assumed emissivity and by models' neglect of canopy thermal heterogeneity. Annual means of sensible and latent heat fluxes, into which net radiation is partitioned, have ranges across schemes of 30 W m-2 and 25 W m-2, respectively. Annual totals of evapotranspiration and runoff, into which the precipitation is partitioned, both have ranges of 315 mm. These ranges in annual heat and water fluxes were approximately halved upon exclusion of the three schemes that have no stomatal resistance under non-water-stressed conditions. Many schemes tend to underestimate latent heat flux and overestimate sensible heat flux in summer, with a reverse tendency in winter. For six schemes, root-mean-square deviations of predictions from monthly observations are less than the estimated upper bounds on observation errors (5 W m-2 for sensible heat flux and 10 W m-2 for latent heat flux). Actual runoff at the site is believed to be dominated by vertical drainage to groundwater, but several schemes produced significant amounts of runoff as overland flow or interflow. There is a range across schemes of 184 mm (40% of total pore volume) in the simulated annual mean root-zone soil moisture. Unfortunately, no measurements of soil moisture were available for model evaluation. A theoretical analysis suggested that differences in boundary conditions used in various schemes are not sufficient to explain the large variance in soil moisture. However, many of the extreme values of soil moisture could be explained in terms of the particulars of experimental setup or excessive evapotranspiration.
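The latent heat flux "derived from a surface energy balance" in these two entries is the usual residual of the measured energy budget terms. A minimal sketch, with an optional conversion to an equivalent water flux:

```python
def latent_heat_residual(net_radiation, sensible_heat, ground_heat):
    """All fluxes in W m-2; returns the latent heat flux in W m-2."""
    return net_radiation - sensible_heat - ground_heat

def evapotranspiration_mm_per_day(latent_heat_flux, lv=2.5e6):
    """Convert a latent heat flux (W m-2) to an equivalent water flux (mm day-1),
    using a nominal latent heat of vaporization lv in J kg-1."""
    return latent_heat_flux / lv * 86400.0
```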
Overview of the Meso-NH model version 5.4 and its applications
NASA Astrophysics Data System (ADS)
Lac, Christine; Chaboureau, Jean-Pierre; Masson, Valéry; Pinty, Jean-Pierre; Tulet, Pierre; Escobar, Juan; Leriche, Maud; Barthe, Christelle; Aouizerats, Benjamin; Augros, Clotilde; Aumond, Pierre; Auguste, Franck; Bechtold, Peter; Berthet, Sarah; Bielli, Soline; Bosseur, Frédéric; Caumont, Olivier; Cohard, Jean-Martial; Colin, Jeanne; Couvreux, Fleur; Cuxart, Joan; Delautier, Gaëlle; Dauhut, Thibaut; Ducrocq, Véronique; Filippi, Jean-Baptiste; Gazen, Didier; Geoffroy, Olivier; Gheusi, François; Honnert, Rachel; Lafore, Jean-Philippe; Lebeaupin Brossier, Cindy; Libois, Quentin; Lunet, Thibaut; Mari, Céline; Maric, Tomislav; Mascart, Patrick; Mogé, Maxime; Molinié, Gilles; Nuissier, Olivier; Pantillon, Florian; Peyrillé, Philippe; Pergaud, Julien; Perraud, Emilie; Pianezze, Joris; Redelsperger, Jean-Luc; Ricard, Didier; Richard, Evelyne; Riette, Sébastien; Rodier, Quentin; Schoetter, Robert; Seyfried, Léo; Stein, Joël; Suhre, Karsten; Taufour, Marie; Thouron, Odile; Turner, Sandra; Verrelle, Antoine; Vié, Benoît; Visentin, Florian; Vionnet, Vincent; Wautelet, Philippe
2018-05-01
This paper presents the Meso-NH model version 5.4. Meso-NH is a non-hydrostatic atmospheric research model that is applied to a broad range of resolutions, from synoptic to turbulent scales, and is designed for studies of physics and chemistry. It is a limited-area model employing advanced numerical techniques, including monotonic advection schemes for scalar transport and fourth-order centered or odd-order WENO advection schemes for momentum. The model includes state-of-the-art physics parameterization schemes that are important to represent convective-scale phenomena and turbulent eddies, as well as flows at larger scales. In addition, Meso-NH has been expanded to provide capabilities for a range of Earth system prediction applications such as chemistry and aerosols, electricity and lightning, hydrology, wildland fires, volcanic eruptions, and cyclones with ocean coupling. Here, we present the main innovations to the dynamics and physics of the code since the pioneering paper of Lafore et al. (1998) and provide an overview of recent applications and couplings.
NASA Technical Reports Server (NTRS)
Molthan, A. L.; Haynes, J. A.; Jedlovec, G. L.; Lapenta, W. M.
2009-01-01
As operational numerical weather prediction is performed at increasingly finer spatial resolution, precipitation traditionally represented by sub-grid scale parameterization schemes is now being calculated explicitly through the use of single- or multi-moment, bulk water microphysics schemes. As computational resources grow, the real-time application of these schemes is becoming available to a broader audience, ranging from national meteorological centers to their component forecast offices. A need for improved quantitative precipitation forecasts has been highlighted by the United States Weather Research Program, which advised that gains in forecasting skill will draw upon improved simulations of clouds and cloud microphysical processes. Investments in space-borne remote sensing have produced the NASA A-Train of polar orbiting satellites, specially equipped to observe and catalog cloud properties. The NASA CloudSat instrument, a recent addition to the A-Train and the first 94 GHz radar system operated in space, provides a unique opportunity to compare observed cloud profiles to their modeled counterparts. Comparisons are available through the use of a radiative transfer model (QuickBeam), which simulates 94 GHz radar returns based on the microphysics of cloudy model profiles and the prescribed characteristics of their constituent hydrometeor classes. CloudSat observations of snowfall are presented for a case in the central United States, with comparisons made to precipitating clouds as simulated by the Weather Research and Forecasting Model and the Goddard single-moment microphysics scheme. An additional forecast cycle is performed with a temperature-based parameterization of the snow distribution slope parameter, with comparisons to CloudSat observations provided through the QuickBeam simulator.
Neely, III, Ryan Reynolds; Conley, Andrew J.; Vitt, Francis; ...
2016-07-25
Here we describe an updated parameterization for prescribing stratospheric aerosol in the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM1). The need for a new parameterization is motivated by the poor response of the CESM1 (formerly referred to as the Community Climate System Model, version 4, CCSM4) simulations contributed to the Coupled Model Intercomparison Project 5 (CMIP5) to colossal volcanic perturbations to the stratospheric aerosol layer (such as the 1991 Pinatubo eruption or the 1883 Krakatau eruption) in comparison to observations. In particular, the scheme used in the CMIP5 simulations by CESM1 simulated a global mean surface temperature decrease that was inconsistent with the GISS Surface Temperature Analysis (GISTEMP), NOAA's National Climatic Data Center, and the Hadley Centre of the UK Met Office (HADCRUT4). The new parameterization takes advantage of recent improvements in historical stratospheric aerosol databases to allow for variations in both the mass loading and size of the prescribed aerosol. An ensemble of simulations utilizing the old and new schemes shows CESM1's improved response to the 1991 Pinatubo eruption. Most significantly, the new scheme more accurately simulates the temperature response of the stratosphere due to local aerosol heating. Here, results also indicate that the new scheme decreases the global mean temperature response to the 1991 Pinatubo eruption by half of the observed temperature change, and modelled climate variability precludes statements as to the significance of this change.
NASA Astrophysics Data System (ADS)
Felfelani, F.; Pokhrel, Y. N.
2017-12-01
In this study, we use in-situ observations and satellite data of soil moisture and groundwater to improve the irrigation and groundwater parameterizations in version 4.5 of the Community Land Model (CLM). The irrigation application trigger, which is based on the soil moisture deficit mechanism, is enhanced by integrating soil moisture observations and data from the Soil Moisture Active Passive (SMAP) mission, which has been available since 2015. Further, we incorporate different irrigation application mechanisms based on schemes used in various other land surface models (LSMs) and carry out a sensitivity analysis using point simulations at two irrigated sites in Mead, Nebraska, where data from the AmeriFlux observational network are available. We then conduct regional simulations over the entire High Plains region and evaluate the model results against the available irrigation water use data at the county scale. Finally, we present results of groundwater simulations by implementing a simple pumping scheme based on our previous studies. Results from the implementation of the current irrigation parameterizations used in various LSMs show relatively large differences in the vertical soil moisture profile (e.g., 0.2 mm3/mm3) at the point scale, which are mostly reduced when averaged over relatively large regions (e.g., 0.04 mm3/mm3 in the High Plains region). It is found that the original irrigation module in CLM 4.5 tends to overestimate the soil moisture content compared to both point observations and SMAP, and the results from the improved scheme linked with the groundwater pumping scheme show better agreement with the observations.
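A soil-moisture-deficit irrigation trigger of the kind described above can be sketched in a few lines: irrigation is applied whenever root-zone soil moisture falls below a threshold between a wilting value and a target value, and the demand refills the root zone to the target. The names, threshold fraction, and root-zone depth are illustrative assumptions, not the CLM 4.5 settings.

```python
def irrigation_demand(theta, theta_target, theta_wilt,
                      root_depth_mm=1000.0, trigger_frac=0.5):
    """theta, theta_target, theta_wilt: volumetric soil moisture (mm3/mm3);
    returns the irrigation depth to apply, in mm."""
    threshold = theta_wilt + trigger_frac * (theta_target - theta_wilt)
    if theta >= threshold:
        return 0.0
    return (theta_target - theta) * root_depth_mm   # refill the root zone to the target
```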
Cheng, Meng-Dawn; Kabela, Erik D.
2016-04-30
The Potential Source Contribution Function (PSCF) model has been successfully used for identifying regions of emission sources at long distances. In this study, the PSCF model relies on backward trajectories calculated by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model. We investigated the impacts of grid resolution and Planetary Boundary Layer (PBL) parameterization (e.g., turbulent transport of pollutants) on the PSCF analysis. The Mellor-Yamada-Janjic (MYJ) and Yonsei University (YSU) parameterization schemes were selected to model the turbulent transport in the PBL within the Weather Research and Forecasting (WRF version 3.6) model. Two separate domain grid sizes (83 and 27 km) were chosen in the WRF downscaling to generate the wind data driving the HYSPLIT calculation. The effects of grid size and PBL parameterization are important in incorporating the influence of regional and local meteorological processes such as jet streaks, blocking patterns, Rossby waves, and terrain-induced convection on the transport of pollutants by a wind trajectory. We found that the high-resolution PSCF discovered and located source areas more precisely than the analysis with lower-resolution meteorological inputs. The lack of anticipated improvement could also be because the PBL scheme chosen to produce the WRF data was only a local parameterization and unable to faithfully duplicate the real atmosphere on a global scale. The MYJ scheme was able to replicate the PSCF source identification obtained using the Reanalysis data and to discover additional source areas that were not identified by the Reanalysis data. In conclusion, a potential benefit of using high-resolution wind data in PSCF modeling is that it could discover new source locations in addition to those identified using the Reanalysis data input.
Summary of Cumulus Parameterization Workshop
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh
2002-01-01
A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Chao, Winston C.; Walker, G. K.
1992-01-01
The influence of a cumulus convection scheme on the simulated atmospheric circulation and hydrologic cycle is investigated by means of a coarse version of the GCM. Two sets of integrations, each containing an ensemble of three summer simulations, were produced. The ensemble sets of control and experiment simulations are compared and differentially analyzed to determine the influence of the cumulus convection scheme on the simulated circulation and hydrologic cycle. The results show that cumulus parameterization has a very significant influence on the simulated circulation and precipitation. The upper-level condensation heating over the ITCZ is much smaller for the experiment simulations as compared to the control simulations; correspondingly, the Hadley and Walker cells for the experiment simulations are also weaker and are accompanied by a weaker Ferrel cell in the Southern Hemisphere. Overall, the difference fields show that the experiment simulations (without cumulus convection) produce a cooler and less energetic atmosphere.
NASA Astrophysics Data System (ADS)
Freitas, S.; Grell, G. A.; Molod, A.
2017-12-01
We implemented and began to evaluate an alternative convection parameterization for the NASA Goddard Earth Observing System (GEOS) global model. The parameterization (Grell and Freitas, 2014) is based on the mass flux approach with several closures, for equilibrium and non-equilibrium convection, and includes scale and aerosol awareness functionalities. Scale dependence for deep convection is implemented either through the method described by Arakawa et al. (2011) or through lateral spreading of the subsidence terms. Aerosol effects are included through the dependence of autoconversion and evaporation on the CCN number concentration. Recently, the scheme has been extended to a tri-modal spectral size approach to simulate the transition from shallow, congestus, and deep convection regimes. In addition, the inclusion of a new closure for non-equilibrium convection resulted in a substantial gain of realism in model simulation of the diurnal cycle of convection over land. Also, a beta PDF is now employed to represent the normalized mass flux profile. This opens up an additional avenue to apply stochasticity in the scheme.
An Empirical Cumulus Parameterization Scheme for a Global Spectral Model
NASA Technical Reports Server (NTRS)
Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.
2004-01-01
Realistic vertical heating and drying profiles in a cumulus scheme are important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme based on a procedure to improve the vertical distribution of heating and moistening over the tropics is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique through rotated principal component analysis (RPCA) is performed on the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics, to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as linear combinations of the first three dominant principal components, which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score, which quantifies the contribution of each PC to the corresponding loading profile, is estimated through a multiple screening regression method that yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained are found to match well with the observed profiles. The impact of the ECPS is investigated in a series of short-range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short-range ECPS forecasts and those with the modified Kuo scheme show a marked improvement in skill in the ECPS forecasts. This improvement in forecast skill with the ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. It also suggests that, in the absence of explicit models for convection, the proposed statistical scheme improves the modeling of the vertical distribution of heating and moistening in areas of deep convection.
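A schematic of the reconstruction step described above (not the authors' code; the loading profiles and regression coefficients here are hypothetical placeholders) rebuilds a heating or drying profile as a linear combination of the three retained modes, with each PC score obtained from a regression on large-scale predictors:

import numpy as np

def reconstruct_q_profile(loadings, score_coeffs, predictors):
    """Rebuild a vertical Q1 (or Q2) profile from the three retained RPCA modes.

    loadings     : (nlev, 3) vertical loading profiles of the retained components
    score_coeffs : (3, npred) hypothetical regression coefficients mapping
                   large-scale predictors to PC scores (screening regression)
    predictors   : (npred,) large-scale variables at the grid point and time
    """
    scores = score_coeffs @ predictors   # one score per retained mode
    return loadings @ scores             # (nlev,) heating or drying profile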
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.
1990-01-01
The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. To what accuracy and over which geographic areas large-scale energy budgets can be estimated are investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin-wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and by the needs defined by each parameterization scheme. The titles of the subjects are as follows: (1) Heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) Energy flux estimation using passive microwave data; (3) Fetch and stability sensitivity estimates of turbulent heat flux; and (4) Surface temperature algorithm.
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented. A battery health management system is used as a target application. A new resampling scheme for distributed implementation of particle filters has been discussed in this paper. Analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead have also been discussed. Our proposed resampling scheme performs significantly better than the other schemes by attempting to reduce both the communication message length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
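The abstract does not detail the parameterized resampling algorithm itself; for orientation, the step being redesigned is the standard resampling pass of a particle filter, sketched below (systematic resampling shown; illustrative only, not the paper's scheme):

import numpy as np

def systematic_resample(weights, rng=None):
    """Standard systematic resampling for a particle filter: the step that
    distributed schemes (such as the paper's parameterized resampling)
    redesign to cut inter-node communication."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified set of draws
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions)   # indices of kept particles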
Sensitivity of boundary layer variables to PBL schemes over the central Tibetan Plateau
NASA Astrophysics Data System (ADS)
Xu, L.; Liu, H.; Wang, L.; Du, Q.; Liu, Y.
2017-12-01
Planetary Boundary Layer (PBL) parameterization schemes play a critical role in numerical weather prediction and research. They describe the physical processes associated with the exchange of momentum, heat and humidity between the land surface and the atmosphere. In this study, two non-local (YSU and ACM2) and two local (MYJ and BouLac) planetary boundary layer parameterization schemes in the Weather Research and Forecasting (WRF) model have been tested over the central Tibetan Plateau regarding their capability to model boundary layer parameters relevant for surface energy exchange. The model performance has been evaluated against measurements from the Third Tibetan Plateau atmospheric scientific experiment (TIPEX-III). Simulated meteorological parameters and turbulence fluxes have been compared with observations through standard statistical measures. Model results show acceptable behavior, but no particular scheme produces the best performance for all locations and parameters. All PBL schemes underestimate near-surface air temperatures over the Tibetan Plateau. By investigating the surface energy budget components, the results suggest that downward longwave radiation and sensible heat flux are the main factors causing the lower near-surface temperature. Because the downward longwave radiation and sensible heat flux are respectively affected by atmospheric moisture and land-atmosphere coupling, improvements in the water vapor distribution and land-atmosphere energy exchange are important for a better representation of PBL physical processes over the central Tibetan Plateau.
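The "standard statistical measures" used in this kind of scheme-versus-observation comparison are typically mean bias, root-mean-square error and correlation; a minimal sketch of such a comparison (illustrative, not the study's code):

import numpy as np

def verification_stats(sim, obs):
    """Mean bias, RMSE and linear correlation of a simulated series against observations."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    corr = np.corrcoef(sim, obs)[0, 1]
    return bias, rmse, corr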
Booth, James F; Naud, Catherine M; Willison, Jeff
2018-03-01
The representation of extratropical cyclone (ETC) precipitation in general circulation models (GCMs) and the Weather Research and Forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength and tests whether parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in the ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF with two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs in terms of surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but the GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For the reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, and this holds whether or not the contribution from the parameterized convection scheme is large. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and these differences do not cause unexpected behavior in the sensitivity of ETC precipitation to cyclone moisture or surface wind speed.
NASA Technical Reports Server (NTRS)
Chao, Winston C.
2015-01-01
The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of the MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one with dimensions of 55 by 60 grid points with a spacing of 30 km, and a nested domain of 48 by 72 grid points with a spacing of 10 km. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely in the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; hence the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were carried out applying the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region and for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia, project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.
Explicit Global Simulation of Gravity Waves up to the Lower Thermosphere
NASA Astrophysics Data System (ADS)
Becker, E.
2016-12-01
At least for short-term simulations, middle atmosphere general circulation models (GCMs) can be run with sufficiently high resolution to describe a good part of the gravity wave spectrum explicitly. Nevertheless, the parameterization of unresolved dynamical scales remains an issue, especially when the scales of parameterized gravity waves (GWs) and resolved GWs become comparable. In addition, turbulent diffusion must always be parameterized along with other subgrid-scale dynamics. A practical solution to the combined closure problem for GWs and turbulent diffusion is to dispense with a parameterization of GWs, apply a high spatial resolution, and represent the unresolved scales by a macro-turbulent diffusion scheme that gives rise to wave damping in a self-consistent fashion. This is the approach of a few GCMs that extend from the surface to the lower thermosphere and simulate a realistic GW drag and summer-to-winter-pole residual circulation in the upper mesosphere. In this study we describe a new version of the Kuehlungsborn Mechanistic general Circulation Model (KMCM), which includes explicit (though idealized) computations of radiative transfer and the tropospheric moisture cycle. Particular emphasis is placed on 1) the turbulent diffusion scheme, 2) the attenuation of resolved GWs at critical levels, 3) the generation of GWs in the middle atmosphere from body forces, and 4) GW-tidal interactions (including the energy deposition of GWs and tides).
Frozen soil parameterization in a distributed biosphere hydrological model
NASA Astrophysics Data System (ADS)
Wang, L.; Koike, T.; Yang, K.; Jin, R.; Li, H.
2009-11-01
In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against the in-situ observations from the WATER (Watershed Allied Telemetry Experimental Research). In the summer of 2008, land surface parameters were optimized using the observed surface radiation fluxes and the soil temperature profile at the Dadongshu-Yakou (DY) station in July; soil hydraulic parameters were then obtained by calibration against the July soil moisture profile at the DY station and against the discharges at the basin outlet in July and August, a period that covers the largest annual flood peak of 2008. The calibrated WEB-DHM with the frozen scheme was then used for a yearlong simulation from 21 November 2007 to 20 November 2008, to check its performance in cold seasons. Results showed that the WEB-DHM with the frozen scheme gives much better performance than the WEB-DHM without the frozen scheme in the simulations of the soil moisture profile at the DY station and the discharges at the basin outlet over the yearlong simulation.
Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons
NASA Technical Reports Server (NTRS)
Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang
2008-01-01
Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.
NASA Astrophysics Data System (ADS)
Salzmann, M.; Ming, Y.; Golaz, J.-C.; Ginoux, P. A.; Morrison, H.; Gettelman, A.; Krämer, M.; Donner, L. J.
2010-08-01
A new stratiform cloud scheme including a two-moment bulk microphysics module, a cloud cover parameterization allowing ice supersaturation, and an ice nucleation parameterization has been implemented into the recently developed GFDL AM3 general circulation model (GCM) as part of an effort to treat aerosol-cloud-radiation interactions more realistically. Unlike the original scheme, the new scheme facilitates the study of cloud-ice-aerosol interactions via influences of dust and sulfate on ice nucleation. While liquid and cloud ice water path associated with stratiform clouds are similar for the new and the original scheme, column integrated droplet numbers and global frequency distributions (PDFs) of droplet effective radii differ significantly. This difference is in part due to a difference in the implementation of the Wegener-Bergeron-Findeisen (WBF) mechanism, which leads to a larger contribution from super-cooled droplets in the original scheme. Clouds are more likely to be either completely glaciated or liquid due to the WBF mechanism in the new scheme. Super-saturations over ice simulated with the new scheme are in qualitative agreement with observations, and PDFs of ice numbers and effective radii appear reasonable in the light of observations. In particular, the temperature dependence of ice numbers qualitatively agrees with in-situ observations. The global average long-wave cloud forcing decreases in comparison to the original scheme as expected when super-saturation over ice is allowed. Anthropogenic aerosols lead to a larger decrease in short-wave absorption (SWABS) in the new model setup, but outgoing long-wave radiation (OLR) decreases as well, so that the net effect of including anthropogenic aerosols on the net radiation at the top of the atmosphere (netradTOA = SWABS-OLR) is of similar magnitude for the new and the original scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Kyo-Sun; Hong, Song You; Yoon, Jin-Ho
2014-10-01
The most recent version of the Simplified Arakawa-Schubert (SAS) cumulus scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) (GFS SAS) has been implemented into the Weather Research and Forecasting (WRF) model, with the triggering condition and convective mass flux modified to depend on the model's horizontal grid spacing. The East Asian summer monsoon of 2006, from June to August, is selected to evaluate the performance of the modified GFS SAS scheme. Simulated monsoon rainfall with the modified GFS SAS scheme shows better agreement with observations than the original GFS SAS scheme. The original GFS SAS scheme simulates a similar ratio of subgrid-scale precipitation, which is calculated from the cumulus scheme, to total precipitation regardless of the model's horizontal grid spacing. This is counter-intuitive because the portion of resolved clouds in a grid box should increase as the model grid spacing decreases. This counter-intuitive behavior of the original GFS SAS scheme is alleviated by the modified GFS SAS scheme. Further, three different cumulus schemes (Grell and Freitas, Kain and Fritsch, and Betts-Miller-Janjic) are chosen to investigate the role of horizontal resolution in the simulated monsoon rainfall. The performance of high-resolution modeling is not always enhanced as the spatial resolution becomes higher. Even though the improvement in the probability density function of rain rate and in longwave fluxes with the higher-resolution simulations is robust regardless of the choice of cumulus parameterization scheme, the overall skill score for surface rainfall does not increase monotonically with spatial resolution.
NASA Astrophysics Data System (ADS)
Kalina, E. A.; Biswas, M.; Newman, K.; Grell, E. D.; Bernardet, L.; Frimel, J.; Carson, L.
2017-12-01
The parameterization of moist physics in numerical weather prediction models plays an important role in modulating tropical cyclone structure, intensity, and evolution. The Hurricane Weather Research and Forecast system (HWRF), the National Oceanic and Atmospheric Administration's operational model for tropical cyclone prediction, uses the Scale-Aware Simplified Arakawa-Schubert (SASAS) cumulus scheme and a modified version of the Ferrier-Aligo (FA) microphysics scheme to parameterize moist physics. The FA scheme contains a number of simplifications that allow it to run efficiently in an operational setting, which includes prescribing values for hydrometeor number concentrations (i.e., single-moment microphysics) and advecting the total condensate rather than the individual hydrometeor species. To investigate the impact of these simplifying assumptions on the HWRF forecast, the FA scheme was replaced with the more complex double-moment Thompson microphysics scheme, which individually advects cloud ice, cloud water, rain, snow, and graupel. Retrospective HWRF forecasts of tropical cyclones that occurred in the Atlantic and eastern Pacific ocean basins from 2015-2017 were then simulated and compared to those produced by the operational HWRF configuration. Both traditional model verification metrics (i.e., tropical cyclone track and intensity) and process-oriented metrics (e.g., storm size, precipitation structure, and heating rates from the microphysics scheme) will be presented and compared. The sensitivity of these results to the cumulus scheme used (i.e., the operational SASAS versus the Grell-Freitas scheme) also will be examined. Finally, the merits of replacing the moist physics schemes that are used operationally with the alternatives tested here will be discussed from a standpoint of forecast accuracy versus computational resources.
Ibeas, Asier; de la Sen, Manuel
2006-10-01
The problem of controlling a tandem of robotic manipulators composing a teleoperation system with force reflection is addressed in this paper. The final objective of this paper is twofold: 1) to design a robust control law capable of ensuring closed-loop stability for robots with uncertainties and 2) to use the so-obtained control law to improve the tracking of each robot to its corresponding reference model in comparison with previously existing controllers when the slave is interacting with the obstacle. In this way, a multiestimation-based adaptive controller is proposed. Thus, the master robot is able to follow more accurately the constrained motion defined by the slave when interacting with an obstacle than when a single-estimation-based controller is used, improving the transparency property of the teleoperation scheme. The closed-loop stability is guaranteed if a minimum residence time, which might be updated online when unknown, between different controller parameterizations is respected. Furthermore, the analysis of the teleoperation and stability capabilities of the overall scheme is carried out. Finally, some simulation examples showing the working of the multiestimation scheme complete this paper.
A simplified scheme for computing radiation transfer in the troposphere
NASA Technical Reports Server (NTRS)
Katayama, A.
1973-01-01
A scheme for computing the heating of clear and cloudy air by solar and infrared radiative transfer is presented, designed for use in tropospheric general circulation models with coarse vertical resolution. A bulk transmission function is defined for the infrared transfer. The interpolation factors required for computing the bulk transmission function are parameterized as functions of such physical parameters as the thickness of the layer, the pressure, and the mixing ratio at a reference level. The computation procedure for solar radiation is significantly simplified by the introduction of two basic concepts. The first is that the solar radiation spectrum can be divided into a scattered part, for which Rayleigh scattering is significant but absorption by water vapor is negligible, and an absorbed part, for which absorption by water vapor is significant but Rayleigh scattering is negligible. The second concept is that of an equivalent cloud water vapor amount, which absorbs the same amount of radiation as the cloud.
Frozen soil parameterization in a distributed biosphere hydrological model
NASA Astrophysics Data System (ADS)
Wang, L.; Koike, T.; Yang, K.; Jin, R.; Li, H.
2010-03-01
In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against the in-situ observations from the WATER (Watershed Allied Telemetry Experimental Research). First, by using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, by using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration against the discharges at the basin outlet in July and August, a period that covers the largest annual flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gives much better performance than the WEB-DHM without the frozen scheme in the simulations of the soil moisture profile in this cold-region catchment and of the discharges at the basin outlet over the yearlong simulation.
Flow Charts: Visualization of Vector Fields on Arbitrary Surfaces
Li, Guo-Shi; Tricoche, Xavier; Weiskopf, Daniel; Hansen, Charles
2009-01-01
We introduce a novel flow visualization method called Flow Charts, which uses a texture atlas approach for the visualization of flows defined over curved surfaces. In this scheme, the surface and its associated flow are segmented into overlapping patches, which are then parameterized and packed in the texture domain. This scheme allows accurate particle advection across multiple charts in the texture domain, providing a flexible framework that supports various flow visualization techniques. The use of surface parameterization enables flow visualization techniques requiring the global view of the surface over long time spans, such as Unsteady Flow LIC (UFLIC), particle-based Unsteady Flow Advection Convolution (UFAC), or dye advection. It also prevents visual artifacts normally associated with view-dependent methods. Represented as textures, Flow Charts can be naturally integrated into hardware accelerated flow visualization techniques for interactive performance.
Final Technical Report for "Reducing tropical precipitation biases in CESM"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types --- including marine stratocumulus, shallow cumulus, and deep cumulus --- are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-01-01
This article describes the validation of a linear parameterization of ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the resolution of a continuity equation for the time evolution of the ozone mixing ratio; the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the in-situ vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the simulation by allowing additional ozone destruction inside air masses exported from high to mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the Southern Hemisphere over longer periods in springtime. It is concluded that, for the study of climatic scenarios or the assimilation of ozone data, the present parameterization offers an attractive alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
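The general shape of such a linearized ozone scheme (a first-order expansion about a climatological state, plus a heterogeneous destruction term driven by the cold tracer) can be sketched as follows; the coefficient names and the simple cold-tracer coupling are illustrative assumptions, not the scheme's actual coefficient tables:

def linear_ozone_tendency(r, T, col, c, cold_tracer=0.0, k_het=0.0):
    """Linearized ozone photochemistry tendency of the general Cariolle type.

    c is a dict of precomputed coefficients for the current month/latitude/level:
      P0                    : net production/loss at the reference state
      dP_dr, dP_dT, dP_dcol : sensitivities to ozone mixing ratio, temperature
                              and overhead ozone column
      r0, T0, col0          : the reference climatological state
    """
    tendency = (c["P0"]
                + c["dP_dr"] * (r - c["r0"])
                + c["dP_dT"] * (T - c["T0"])
                + c["dP_dcol"] * (col - c["col0"]))
    # extra heterogeneous destruction switched on by the cold tracer
    return tendency - k_het * cold_tracer * r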
NASA Astrophysics Data System (ADS)
Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.
2017-12-01
At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could therefore result in significant errors. For high-resolution mesoscale simulations of flows in complex terrain, we have developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation uses a purely algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases when physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also includes a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 x 3000 x 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients. The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.
Noble, Erik; Druyan, Leonard M; Fulakeza, Matthew
2016-01-01
This paper evaluates the performance of the Weather Research and Forecasting (WRF) model as a regional atmospheric model over West Africa. It tests WRF sensitivity to 64 configurations of alternative parameterizations in a series of 104 twelve-day September simulations during eleven consecutive years, 2000-2010. The 64 configurations combine WRF parameterizations of cumulus convection, radiation, surface hydrology, and the PBL. Simulated daily and total precipitation results are validated against Global Precipitation Climatology Project (GPCP) and Tropical Rainfall Measuring Mission (TRMM) data. Particular attention is given to westward-propagating precipitation maxima associated with African Easterly Waves (AEWs). A wide range of daily precipitation validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve time-longitude correlations (against GPCP) of 0.35-0.42 and spatiotemporal variability amplitudes only slightly higher than observed estimates. A parallel simulation by the benchmark Regional Model-v.3 achieves a higher correlation (0.52) and realistic spatiotemporal variability amplitudes. The largest favorable impact on WRF precipitation validation is achieved by selecting the Grell-Devenyi convection scheme, which results in higher correlations against observations than the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact. Validation statistics for optimized WRF configurations simulating the parallel period during 2000-2010 are more favorable for 2005, 2006, and 2008 than for other years. The selection of some of the same WRF configurations as high scorers in both circulation and precipitation validations supports the notion that simulations of West African daily precipitation benefit from skillful simulations of associated AEW vorticity centers and that simulations of AEWs would benefit from skillful simulations of convective precipitation.
NASA Astrophysics Data System (ADS)
Astitha, M.; Abdel Kader, M.; Pozzer, A.; Lelieveld, J.
2012-04-01
Atmospheric particulate matter, and more specifically desert dust, has been the topic of numerous research studies in the past due to its wide range of impacts on the environment and climate and the uncertainty in characterizing and quantifying these impacts on a global scale. In this work we present two physical parameterizations of desert dust production that have been incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). The scope of this work is to assess the impact of the two physical parameterizations on the global distribution of desert dust and to highlight the advantages and disadvantages of using either technique. The dust concentration and deposition have been evaluated using the AEROCOM dust dataset for the year 2000, and data from the MODIS and MISR satellites as well as sun-photometer data from the AERONET network were used to compare the modelled aerosol optical depth with observations. The implementation of the two parameterizations and the simulations using relatively high spatial resolution (T106, ~1.1 deg) have highlighted the large spatial heterogeneity of the dust emission sources as well as the importance of the input parameters (soil size and texture, vegetation, surface wind speed). Also, sensitivity simulations with and without the nudging option using reanalysis data from ECMWF have shown remarkable differences for some areas. Both parameterizations have revealed the difficulty of simulating all arid regions with the same assumptions and mechanisms. Depending on the arid region, each emission scheme performs more or less satisfactorily, which leads to the necessity of treating each desert differently. Even though this is a difficult task to accomplish in a global model, some recommendations are given, along with ideas for future improvements.
NASA Technical Reports Server (NTRS)
Noble, Erik; Druyan, Leonard M.; Fulakeza, Matthew
2014-01-01
The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of simulations in September. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. These 64 combinations comprise WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40-0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell-Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain-Fritsch convection scheme. Other parameterizations have less obvious impact, although WRF configurations incorporating one particular surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.
Woo Kim, Hyun; Rhee, Young Min
2012-07-30
Recently, many polarizable force fields have been devised to describe induction effects between molecules. In popular polarizable models based on induced dipole moments, atomic polarizabilities are the essential parameters and should be derived carefully. Here, we present a parameterization scheme for atomic polarizabilities using a minimization target function containing both molecular and atomic information. The main idea is to adopt reference data only from quantum chemical calculations, so that atomic polarizability parameterizations can be performed even when relevant experimental data are scarce, as in the case of electronically excited molecules. Specifically, our scheme assigns the atomic polarizabilities of any given molecule in such a way that its molecular polarizability tensor is well reproduced. We show that our scheme successfully works for various molecules in mimicking dipole responses not only in ground states but also in valence excited states. The electrostatic potential around a molecule with an externally perturbing nearby charge also exhibits near-quantitative agreement with the reference data from quantum chemical calculations. The limitation of the model with isotropic atoms is also discussed to examine the scope of its applicability.
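A minimal sketch of the kind of fit described is given below, assuming, purely for illustration, an Applequist-style point-dipole interaction model with isotropic atomic polarizabilities and a generic optimizer; the paper's actual target function and interaction model may differ:

import numpy as np
from scipy.optimize import minimize

def molecular_polarizability(alphas, coords):
    """Molecular polarizability tensor from isotropic atomic polarizabilities
    via a plain (undamped) point-dipole interaction model."""
    n = len(alphas)
    A = np.zeros((3 * n, 3 * n))
    for i in range(n):
        A[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / alphas[i]
        for j in range(n):
            if i != j:
                r = coords[i] - coords[j]
                d = np.linalg.norm(r)
                T = (3.0 * np.outer(r, r) - d**2 * np.eye(3)) / d**5
                A[3*i:3*i+3, 3*j:3*j+3] = -T
    relay = np.linalg.inv(A)
    return relay.reshape(n, 3, n, 3).sum(axis=(0, 2))   # sum of 3x3 blocks

def fit_atomic_polarizabilities(coords, alpha_ref, guess):
    """Fit atomic polarizabilities so the model tensor matches a QM reference."""
    def cost(a):
        if np.any(a <= 0.0):
            return 1.0e6                 # keep polarizabilities physical
        diff = molecular_polarizability(a, coords) - alpha_ref
        return np.sum(diff**2)           # Frobenius-norm misfit
    return minimize(cost, guess, method="Nelder-Mead").x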
NASA Astrophysics Data System (ADS)
Tang, W.; Yang, K.; Sun, Z.; Qin, J.; Niu, X.
2016-12-01
A fast parameterization scheme named SUNFLUX is used in this study to estimate instantaneous surface solar radiation (SSR) based on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard both the Terra and Aqua platforms. The scheme mainly takes into account the absorption and scattering processes due to clouds, aerosols and gases in the atmosphere. The estimated instantaneous SSR is evaluated against surface observations obtained from seven stations of the Surface Radiation Budget Network (SURFRAD), four stations in the North China Plain (NCP) and 40 stations of the Baseline Surface Radiation Network (BSRN). The statistical results for evaluation against these three datasets show that the relative root-mean-square error (RMSE) values of SUNFLUX are less than 15%, 16% and 17%, respectively. Daily SSR is derived through temporal upscaling of the MODIS-based instantaneous SSR estimates and is validated against surface observations. The relative RMSE values for daily SSR estimates are about 16% at the seven SURFRAD stations, four NCP stations, 40 BSRN stations and 90 China Meteorological Administration (CMA) radiation stations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gettelman, A.; Liu, Xiaohong; Ghan, Steven J.
2010-09-28
A process-based treatment of ice supersaturation and ice nucleation is implemented in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). The new scheme is designed to allow (1) supersaturation with respect to ice, (2) ice nucleation by aerosol particles, and (3) ice cloud cover consistent with ice microphysics. The scheme is implemented with a four-class, two-moment microphysics code and is used to evaluate ice cloud nucleation mechanisms and supersaturation in CAM. The new model is able to reproduce field observations of ice mass and mixed-phase cloud occurrence better than previous versions of the model. Simulations indicate that heterogeneous freezing and contact nucleation on dust are both potentially important over remote areas of the Arctic. Cloud forcing, and hence climate, is sensitive to different formulations of the ice microphysics. Arctic radiative fluxes are sensitive to the parameterization of ice clouds. These results indicate that ice clouds are potentially an important part of understanding cloud forcing and potential cloud feedbacks, particularly in the Arctic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Changhyun; Park, Sungsu; Kim, Daehyun
2015-10-01
The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, influences weather and climate in the extratropics through atmospheric teleconnection. In this study, two simulations using the Community Atmosphere Model version 5 (CAM5) - one with the default shallow and deep convection schemes and the other with the Unified Convection scheme (UNICON) - are employed to examine the impacts of cumulus parameterizations on the simulation of the boreal wintertime MJO teleconnection in the Northern Hemisphere. We demonstrate that the UNICON substantially improves the MJO teleconnection. When the UNICON is employed, the simulated circulation anomalies associated with the MJO better resemble the observed counterpart, compared to the simulation with the default convection schemes. Quantitatively, the pattern correlation for the 300-hPa geopotential height anomalies between the simulations and observation increases from 0.07 for the default schemes to 0.54 for the UNICON. These circulation anomalies associated with the MJO further help to enhance the surface air temperature and precipitation anomalies over North America, although room for improvement is still evident. Initial value calculations suggest that the realistic MJO teleconnection with the UNICON is not attributed to the changes in the background wind, but primarily to the improved tropical convective heating associated with the MJO.
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
Assessment of State-of-the-Art Dust Emission Scheme in GEOS
NASA Technical Reports Server (NTRS)
Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine
2017-01-01
The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. (2014). The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD) and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis we concluded that the new parameterization improves the GEOS capability to model dust aerosols originating from African sources; however, it led to an overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of key parameters controlling the threshold friction velocity may be required in order to achieve a more definitive and uniform improvement in the dust modeling skill.
On the sensitivity of mesoscale models to surface-layer parameterization constants
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Pielke, R. A.
1989-09-01
The Colorado State University standard mesoscale model is used to evaluate the sensitivity of one-dimensional (1D) and two-dimensional (2D) fields to differences in surface-layer parameterization “constants”. Such differences reflect the range in the published values of the von Karman constant, the Monin-Obukhov stability functions and the temperature roughness length at the surface. The sensitivity of 1D boundary-layer structure and 2D sea-breeze intensity is generally less than that found in published comparisons of turbulence closure schemes.
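To make the sensitivity concrete, a bulk surface-layer calculation in which the perturbed "constants" (von Karman constant, temperature roughness length, stability-function coefficient) appear explicitly might look like the following sketch; it uses the log-linear stable form with a fixed Obukhov length and is illustrative only, not the model's actual surface-layer code:

import numpy as np

def stable_surface_scales(U, dtheta, z, z0m, z0h=1.0e-4, kappa=0.40, beta=5.0, L=200.0):
    """Bulk friction velocity and temperature scale for a stable surface layer,
    with the perturbable 'constants' (kappa, z0h, log-linear coefficient beta)
    exposed as arguments so their published ranges can be explored."""
    u_star = kappa * U / (np.log(z / z0m) + beta * (z - z0m) / L)
    theta_star = kappa * dtheta / (np.log(z / z0h) + beta * (z - z0h) / L)
    return u_star, theta_star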
NASA Astrophysics Data System (ADS)
Peishu, Zong; Jianping, Tang; Shuyu, Wang; Lingyun, Xie; Jianwei, Yu; Yunqian, Zhu; Xiaorui, Niu; Chao, Li
2017-08-01
The parameterization of physical processes is one of the critical elements in properly simulating the regional climate over eastern China. It is essential to conduct detailed analyses of the effect of physical parameterization schemes on regional climate simulation, to provide more reliable regional climate change information. In this paper, we evaluate the 25-year (1983-2007) summer monsoon climate characteristics of precipitation and surface air temperature by using the regional spectral model (RSM) with different physical schemes. The ensemble results obtained with the reliability ensemble averaging (REA) method are also assessed. The results show that the RSM has the capacity to reproduce the spatial patterns, the variations, and the temporal tendency of surface air temperature and precipitation over eastern China, and it tends to produce better climatological characteristics over the Yangtze River basin and South China. The impact of different physical schemes on the RSM simulations is also investigated. Generally, the CLD3 cloud water prediction scheme tends to produce more precipitation because of its overestimation of the low-level moisture. The systematic biases derived from the KF2 cumulus scheme are larger than those from the RAS scheme. The scale-selective bias correction (SSBC) method improves the simulation of the temporal and spatial characteristics of surface air temperature and precipitation and improves the simulated circulation. The REA ensemble results show significant improvement in the simulated temperature and precipitation distributions, with much higher correlation coefficients and lower root-mean-square errors. The REA result of the selected experiments is better than that of the non-selected experiments, indicating the necessity of choosing better ensemble samples for the ensemble.
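The REA weighting is commonly attributed to Giorgi and Mearns (2002): each member is weighted by reliability factors measuring its bias against observations and its distance from the (weighted) ensemble mean. A rough sketch of the idea (the exact factors and exponents used in this study are not given in the abstract):

import numpy as np

def rea_average(sim, obs, epsilon, m=1.0, n=1.0, n_iter=20):
    """Reliability Ensemble Averaging of one quantity across ensemble members.

    sim     : (nmember,) simulated values
    obs     : observed value used for the bias factor
    epsilon : natural-variability scale normalising both reliability factors
    """
    r_bias = np.minimum(1.0, epsilon / np.maximum(np.abs(sim - obs), 1e-12))
    weights = r_bias.copy()
    for _ in range(n_iter):  # convergence factor depends on the weighted mean itself
        mean = np.sum(weights * sim) / np.sum(weights)
        r_dist = np.minimum(1.0, epsilon / np.maximum(np.abs(sim - mean), 1e-12))
        weights = (r_bias**m * r_dist**n) ** (1.0 / (m * n))
    return np.sum(weights * sim) / np.sum(weights)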
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
NASA Astrophysics Data System (ADS)
Tang, Yaoguo; Han, Yongxiang; Liu, Zhaohuan
2018-06-01
Dust aerosols are the main aerosol component of the atmosphere affecting climate change, but the contribution of dust devils to the atmospheric dust aerosol budget is uncertain. In this study, a new parameterization scheme for dust devils was established and coupled with WRF-Chem, and the diurnal and monthly variations and the contribution of dust devils to the atmospheric dust aerosol budget in East Asia were simulated. The results show that 1) both the diurnal and monthly variations in dust devil emissions in East Asia had unimodal distributions, with peaks in the afternoon and in summer, similar to observations; 2) the simulated dust devils occurred frequently in deserts, including the Gobi, and the area of occurrence and the intensity center of dust devils moved from east to west during the day; 3) the ratio of available convective buoyancy to frictional dissipation was the main factor limiting the presence of dust devils, while the location of dust devil formation, the surface temperature, and the boundary layer height determined the dust devil intensity; and 4) the contribution of dust devils to atmospheric dust aerosols in East Asia was estimated at 30.4 ± 13%, suggesting that dust devils contribute significantly to the total amount of atmospheric dust aerosols. Although the new parameterization scheme for dust devils is rough, it is helpful for understanding the distribution of dust devils and their contribution to the dust aerosol budget.
NASA Astrophysics Data System (ADS)
Kim, J. B.; Um, M. J.; Kim, Y.
2016-12-01
Drought is one of the most powerful and extensive disasters and has the highest annual average damage among all disasters. Focusing on East Asia, where over one fifth of the world's population lives, drought has impacted, and is projected to continue to impact, the region significantly. It is therefore critical to reasonably simulate the drought phenomenon in the region, and this study focuses on the reproducibility of drought with the NCAR CLM. In this study, we examine the propagation of drought processes with different runoff parameterizations of CLM in East Asia. Two different schemes are used: a TOPMODEL-based and a VIC-based scheme, which differ in their surface and subsurface runoff parameterizations. CLM with the different runoff schemes is driven with two atmospheric forcings, from CRU/NCEP and NCEP reanalysis data. Specifically, the propagation from meteorological to agricultural to hydrologic drought is investigated with different drought indices, estimated not only from model-simulated results but also from observational data. The indices include the standardized precipitation evapotranspiration index (SPEI), standardized runoff index (SRI), and standardized soil moisture index (SSMI). Based on these indices, drought characteristics such as intensity, frequency, and spatial extent are investigated. Finally, such drought assessments reveal possible model deficiencies in East Asia. Acknowledgements: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2015R1C1A2A01054800) and the Korea Meteorological Administration R&D Program under Grant KMIPA 2015-6180.
NASA Astrophysics Data System (ADS)
Zhu, Wenbin; Jia, Shaofeng; Lv, Aifeng
2017-10-01
The triangle method based on the spatial relationship between remotely sensed land surface temperature (Ts) and vegetation index (VI) has been widely used for estimating the evaporative fraction (EF). In the present study, a universal triangle method is proposed by transforming the Ts-VI feature space from the regional scale to the pixel scale. The retrieval of EF is only related to the boundary conditions at the pixel scale, regardless of the Ts-VI configuration over the spatial domain. The boundary conditions of each pixel are composed of the theoretical dry edge, determined by the surface energy balance principle, and the wet edge, determined by the average air temperature of open water. The universal triangle method was validated using EF observations collected by the Energy Balance Bowen Ratio systems in the Southern Great Plains of the United States of America (USA). Two parameterization schemes of EF were used to demonstrate their applicability with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) products over the whole of 2004. The results of this study show that the accuracy produced by both parameterization schemes is comparable to that produced by the traditional triangle method, although the universal triangle method seems specifically suited to the parameterization scheme proposed in our previous research. The independence of the universal triangle method from the Ts-VI feature space makes it possible to conduct continuous monitoring of evapotranspiration and soil moisture, a capability that the traditional triangle method does not possess.
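For orientation, the pixel-scale boundary-condition idea described above can be illustrated with a simple linear interpolation between the dry and wet edges. This is only a hedged sketch: the linear form, the function name, and ef_max are assumptions made for illustration, not the two EF parameterization schemes evaluated in the paper.

    def evaporative_fraction(ts, t_dry, t_wet, ef_max=1.0):
        """Illustrative pixel-wise interpolation: EF scales with where the
        observed surface temperature ts falls between the theoretical dry
        edge (t_dry, EF = 0) and the wet edge (t_wet, EF = ef_max)."""
        frac = (t_dry - ts) / (t_dry - t_wet)
        return ef_max * min(max(frac, 0.0), 1.0)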
Investigations into the F-106 lightning strike environment as functions of altitude and storm phase
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.
1987-01-01
Work accomplished during this period centered on the completion of the first order parameterization scheme for the intracloud lightning discharge and its incorporation within the framework of the Storm Electrification Model (SEM).
A CPT for Improving Turbulence and Cloud Processes in the NCEP Global Models
NASA Astrophysics Data System (ADS)
Krueger, S. K.; Moorthi, S.; Randall, D. A.; Pincus, R.; Bogenschutz, P.; Belochitski, A.; Chikira, M.; Dazlich, D. A.; Swales, D. J.; Thakur, P. K.; Yang, F.; Cheng, A.
2016-12-01
Our Climate Process Team (CPT) is based on the premise that the NCEP (National Centers for Environmental Prediction) global models can be improved by installing an integrated, self-consistent description of turbulence, clouds, deep convection, and the interactions between clouds and radiative and microphysical processes. The goal of our CPT is to unify the representation of turbulence and subgrid-scale (SGS) cloud processes and to unify the representation of SGS deep convective precipitation and grid-scale precipitation as the horizontal resolution decreases. We aim to improve the representation of small-scale phenomena by implementing a PDF-based SGS turbulence and cloudiness scheme that replaces the boundary layer turbulence scheme, the shallow convection scheme, and the cloud fraction schemes in the GFS (Global Forecast System) and CFS (Climate Forecast System) global models. We intend to improve the treatment of deep convection by introducing a unified parameterization that scales continuously between the simulation of individual clouds when and where the grid spacing is sufficiently fine and the behavior of a conventional parameterization of deep convection when and where the grid spacing is coarse. We will endeavor to improve the representation of the interactions of clouds, radiation, and microphysics in the GFS/CFS by using the additional information provided by the PDF-based SGS cloud scheme. The team is evaluating the impacts of the model upgrades with metrics used by the NCEP short-range and seasonal forecast operations.
NASA Astrophysics Data System (ADS)
Martínez-Castro, Daniel; Vichot-Llano, Alejandro; Bezanilla-Morlot, Arnoldo; Centella-Artola, Abel; Campbell, Jayaka; Giorgi, Filippo; Viloria-Holguin, Cecilia C.
2018-06-01
A sensitivity study of the performance of the RegCM4 regional climate model driven by the ERA-Interim reanalysis is conducted for the Central America and Caribbean region. A set of numerical experiments is completed using four configurations of the model, with a horizontal grid spacing of 25 km, for a period of 6 years (1998-2003), using three of the convective parameterization schemes implemented in the model: the Emanuel scheme, the Grell-over-land/Emanuel-over-ocean scheme, and two configurations of the Tiedtke scheme. The objective of the study is to investigate the ability of each configuration to reproduce different characteristics of the temperature, circulation, and precipitation fields for the dry and rainy seasons. All schemes simulate the general temperature and precipitation patterns over land reasonably well, with relatively high correlations compared to observational datasets, though in specific regions there are positive or negative biases, greater in the rainy season. We also focus on some circulation features relevant for the region, such as the Caribbean low-level jet and sea-breeze circulations over islands, which are simulated by the model with varied performance across the different configurations. We find that no single model configuration performs best for all the analysis criteria selected, but the Tiedtke configurations, which include the capability of tuning in particular the exchanges between cloud and environment air, provide the most balanced range of biases across variables, with no outstanding systematic bias emerging.
Performance of ICTP's RegCM4 in Simulating the Rainfall Characteristics over the CORDEX-SEA Domain
NASA Astrophysics Data System (ADS)
Neng Liew, Ju; Tangang, Fredolin; Tieh Ngai, Sheau; Chung, Jing Xiang; Narisma, Gemma; Cruz, Faye Abigail; Phan Tan, Van; Thanh, Ngo-Duc; Santisirisomboon, Jerasron; Milindalekha, Jaruthat; Singhruck, Patama; Gunawan, Dodo; Satyaningsih, Ratna; Aldrian, Edvin
2015-04-01
The performance of the RegCM4 in simulating rainfall variations over the Southeast Asia region was examined. Different combinations of six deep convective parameterization schemes, namely i) the Grell scheme with the Arakawa-Schubert closure assumption, ii) the Grell scheme with the Fritsch-Chappell closure assumption, iii) the Emanuel MIT scheme, iv) a mixed scheme with the Emanuel MIT scheme over the ocean and the Grell scheme over the land, v) a mixed scheme with the Grell scheme over the land and the Emanuel MIT scheme over the ocean, and vi) the Kuo scheme, and three ocean flux treatments were tested. In order to account for uncertainties among the observational products, four different gridded rainfall products were used for comparison. The simulated climate is generally drier over the equatorial regions and slightly wetter over mainland Indo-China compared to the observations. However, simulations with the MIT cumulus scheme over the land area consistently produce large positive rainfall biases, although they simulate more realistic annual rainfall variations. The simulations are found to be less sensitive to the treatment of ocean fluxes. Although the simulations reproduce the rainfall climatology well, all of them simulate much stronger interannual variability than observed. Nevertheless, the time evolution of the interannual variations is well reproduced, particularly over the eastern part of the maritime continent. Over mainland Southeast Asia (SEA), unrealistic rainfall anomaly processes are simulated: the lack of summer-season air-sea interaction results in strong oceanic forcing over the region, leading to positive rainfall anomalies during years with warm ocean temperature anomalies, which induces much stronger atmospheric forcing on the land surface processes than observed. A score ranking system was designed to rank the simulations according to their performance in reproducing different aspects of the rainfall characteristics. The result suggests that the simulation with the Emanuel MIT convective scheme and the BATS land surface scheme produces the best collective performance compared to the rest of the simulations.
Active Subspaces of Airfoil Shape Parameterizations
NASA Astrophysics Data System (ADS)
Grey, Zachary J.; Constantine, Paul G.
2018-05-01
Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
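As a reference point for how an active subspace is typically estimated in practice, the following sketch computes dominant directions from sampled gradients of a quantity of interest (e.g., lift or drag with respect to the shape parameters). It is a generic, illustrative implementation; the function name and the choice of a two-dimensional subspace are assumptions, not code from the paper.

    import numpy as np

    def estimate_active_subspace(grad_samples, k=2):
        """Estimate a k-dimensional active subspace from sampled gradients.

        grad_samples: (M, m) array; each row is the gradient of the quantity
        of interest with respect to the m shape parameters at one design.
        """
        # C approximates E[grad f grad f^T], the uncentered gradient covariance
        C = grad_samples.T @ grad_samples / grad_samples.shape[0]
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]      # sort eigenpairs descending
        W1 = eigvecs[:, order][:, :k]          # columns span the active subspace
        return eigvals[order], W1

    # Usage: project a design x onto active coordinates y = W1.T @ x and fit a
    # low-dimensional surrogate for lift or drag in y.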
NASA Astrophysics Data System (ADS)
Stanford, McKenna W.
The High Altitude Ice Crystals - High Ice Water Content (HAIC-HIWC) field campaign produced aircraft retrievals of total condensed water content (TWC), hydrometeor particle size distributions, and vertical velocity (w) in high ice water content regions of tropical mesoscale convective systems (MCSs). These observations are used to evaluate deep convective updraft properties in high-resolution nested Weather Research and Forecasting (WRF) simulations of observed MCSs. Because simulated hydrometeor properties are highly sensitive to the parameterization of microphysics, three commonly used microphysical parameterizations are tested, including two bulk schemes (Thompson and Morrison) and one bin scheme (Fast Spectral Bin Microphysics). A commonly documented bias in cloud-resolving simulations is the exaggeration of simulated radar reflectivities aloft in tropical MCSs. This may result from overly strong convective updrafts that loft excessive condensate mass and from simplified approximations of hydrometeor size distributions, properties, species separation, and microphysical processes. The degree to which the reflectivity bias is a separate function of convective dynamics, condensate mass, and hydrometeor size has yet to be addressed. This research untangles these components by comparing simulated and observed relationships between w, TWC, and hydrometeor size as a function of temperature. All microphysics schemes produce median mass diameters that are generally larger than observed for temperatures between -10 °C and -40 °C and TWC > 1 g m-3. Observations produce a prominent mode in the composite mass size distribution around 300 μm, but under most conditions, all schemes shift the distribution mode to larger sizes. Despite a much greater number of samples, all simulations fail to reproduce the observed high-TWC or high-w conditions between -20 °C and -40 °C in which only a small fraction of condensate mass is found in relatively large particle sizes. Increasing model resolution and employing explicit cloud droplet nucleation decrease the size bias, but not nearly enough to reproduce the observations. Because simulated particle sizes are too large across all schemes when controlling for temperature, w, and TWC, this bias is hypothesized to partly result from errors in parameterized microphysical processes in addition to overly simplified hydrometeor properties such as mass-size relationships and particle size distribution parameters.
Pattanayak, Sujata; Mohanty, U C; Osuri, Krishna K
2012-01-01
The present study investigates the performance of different cumulus convection, planetary boundary layer, land surface process, and microphysics parameterization schemes in the simulation of the very severe cyclonic storm (VSCS) Nargis (2008), which developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of the weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface (10 m) wind, and precipitation are verified against observations provided by the India Meteorological Department (IMD) and the Tropical Rainfall Measuring Mission (TRMM). The estimated optimum combination is re-examined with six different initial conditions of the same case to reach a firmer conclusion on the performance of WRF-NMM. A few more diagnostic fields, such as vertical velocity, vorticity, and heat fluxes, are also evaluated. The results indicate that cumulus convection plays an important role in the movement of the cyclone, and the PBL has a crucial role in the intensification of the storm. The combination of the Simplified Arakawa-Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM gives the better track and intensity forecast, with the minimum vector displacement error.
NASA Astrophysics Data System (ADS)
Silva Junior, R. S.; Rocha, R. P.; Andrade, M. F.
2007-05-01
The planetary boundary layer (PBL) is the region of the atmosphere directly influenced by surface processes, and the evolution of its characteristics during the day is of great importance for pollutant dispersion. The aim of the present work is to find the most efficient combination of PBL, cumulus convection, and cloud microphysics parameterizations for forecasting the vertical profile of wind speed over the Metropolitan Region of São Paulo (MRSP), which has serious atmospheric pollution problems. The model used was WRF/Chem, integrated for 48-h forecasts during one week of an observational experiment that took place in the MRSP during October-November 2006. The model domain has 72 x 48 grid points, with 18 km resolution, centered on the MRSP. Following a mixed-physics ensemble approach, the forecasts used combinations of the following parameterizations: (a) the Mellor-Yamada-Janjic (MYJ) and Yonsei University (YSU) PBL schemes; (b) the Grell-Devenyi ensemble (GDE) and Betts-Miller-Janjic (BMJ) cumulus convection schemes; (c) the Purdue Lin (MPL) and NCEP 5-class (MPN) cloud microphysics schemes. The combinations tested were MYJ-BMJ-MPL, MYJ-BMJ-MPN, MYJ-GDE-MPL, MYJ-GDE-MPN, YSU-BMJ-MPL, YSU-BMJ-MPN, YSU-GDE-MPL, and YSU-GDE-MPN, i.e., a set of 8 forecasts per day. The model initial and boundary conditions were obtained from the AVN-NCEP model. Besides this data set, soundings observed in the MRSP were used to verify the WRF results. The statistical analysis considered the correlation coefficient, root mean square error, and mean error between forecast and observed wind profiles. The results show that the most suitable combination is YSU-GDE-MPL. This can be associated with the GDE cumulus convection scheme, which takes into consideration the entrainment process in the clouds, and with the MPL scheme, which considers a larger number of water-phase classes, including the ice and mixed phases. For the PBL, the YSU scheme best represents the wind speed where the atmospheric gradients are stronger and the atmosphere is less mixed.
A new approach to the convective parameterization of the regional atmospheric model BRAMS
NASA Astrophysics Data System (ADS)
Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.
2013-05-01
A simulation of the summer characteristics of January 2010 was performed using the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) atmospheric model. The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. In this scheme, the precipitation forecasts from the different closures can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights that yields the best combination of the closure hypotheses of the convective scheme. This is an inverse problem of parameter estimation, solved here as an optimization problem. To minimize the difference between observed data and forecast precipitation, the objective function was the quadratic difference between the five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as the observed data. The weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicated better skill of the model with the new methodology compared with the old ensemble-mean calculation.
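The weight-estimation step described above can be made concrete with a much simpler stand-in than the firefly algorithm actually used by the authors: a least-squares fit of closure weights to an observed precipitation field. The sketch below is illustrative only; the array shapes and the function name are assumptions.

    import numpy as np

    def fit_closure_weights(precip_members, precip_obs):
        """Fit weights w so that sum_i w_i * P_i best matches the observed
        precipitation in a quadratic-difference sense.

        precip_members: (n_closures, ny, nx) precipitation from each closure.
        precip_obs:     (ny, nx) observed (e.g., TRMM-derived) precipitation.
        Returns non-negative weights normalized to sum to one.
        """
        A = precip_members.reshape(precip_members.shape[0], -1).T
        b = precip_obs.ravel()
        w, *_ = np.linalg.lstsq(A, b, rcond=None)   # quadratic-difference objective
        w = np.clip(w, 0.0, None)
        return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / w.size)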
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ji-Young; Hong, Song-You; Sunny Lim, Kyo-Sun
The sensitivity of a cumulus parameterization scheme (CPS) to the representation of precipitation production is examined. To do this, the parameter that determines the fraction of cloud condensate converted to precipitation in the simplified Arakawa-Schubert (SAS) convection scheme is modified following the results from a cloud-resolving simulation. While the original conversion parameter is assumed to be constant, the revised parameter includes a temperature dependency above the freezing level, which leads to less production of frozen precipitating condensate with height. The revised CPS has been evaluated for a heavy rainfall event over Korea as well as for medium-range forecasts using the Global/Regional Integrated Model system (GRIMs). The inefficient conversion of cloud condensate to convective precipitation at colder temperatures generally leads to a decrease in precipitation, especially in the category of heavy rainfall. The resultant increase of detrained moisture induces moistening and cooling at the top of clouds. A statistical evaluation of the medium-range forecasts with the revised precipitation conversion parameter shows an overall improvement of the forecast skill in precipitation and large-scale fields, indicating the importance of a more realistic representation of microphysical processes in CPSs.
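To illustrate what a temperature-dependent conversion parameter of the kind described above could look like, here is a hedged sketch. The exponential form, the constants, and the function name are assumptions for illustration only and are not the values or formulation used in GRIMs.

    import numpy as np

    def conversion_parameter(temp_k, c0=0.002, t_freeze=273.16, decay=0.07):
        """Illustrative conversion parameter: constant below the freezing
        level, decaying exponentially with decreasing temperature above it,
        so that less frozen precipitating condensate is produced aloft."""
        temp_k = np.asarray(temp_k, dtype=float)
        return np.where(temp_k >= t_freeze,
                        c0,
                        c0 * np.exp(decay * (temp_k - t_freeze)))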
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-05-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of ozone data, the present parameterization gives a valuable alternative to the introduction of detailed and computationally costly chemical schemes into general circulation models.
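For readers unfamiliar with linear ozone schemes of this type, the tendency of the ozone mixing ratio is expressed as a first-order expansion of the photochemical production-minus-loss rate around a 2-D climatological state, optionally augmented by a cold-tracer term for heterogeneous polar loss. The sketch below shows this generic structure; the coefficient names, the dictionary layout, and the form of the cold-tracer term are assumptions for illustration, not the exact formulation of the paper.

    import numpy as np

    def linear_ozone_tendency(r, T, col, coeffs, cold_tracer=None, k_het=0.0):
        """Generic linearized ozone photochemistry tendency.

        r      : local ozone mixing ratio
        T      : temperature
        col    : overhead ozone column
        coeffs : climatological coefficients from the 2-D model:
                 'PL' production-minus-loss at the reference state,
                 'dPL_dr', 'dPL_dT', 'dPL_dcol' partial derivatives,
                 'r0', 'T0', 'col0' reference values.
        cold_tracer : optional tracer marking air processed at PSC
                      temperatures; k_het is an assumed loss rate applied
                      to it for extra polar ozone destruction.
        """
        c = coeffs
        dr_dt = (c['PL']
                 + c['dPL_dr'] * (r - c['r0'])
                 + c['dPL_dT'] * (T - c['T0'])
                 + c['dPL_dcol'] * (col - c['col0']))
        if cold_tracer is not None:
            dr_dt -= k_het * cold_tracer * r   # assumed form of the heterogeneous term
        return dr_dt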
Investigating the Sensitivity of Nucleation Parameterization on Ice Growth
NASA Astrophysics Data System (ADS)
Gaudet, L.; Sulia, K. J.
2017-12-01
The accurate prediction of precipitation from lake-effect snow events in the Great Lakes region depends on the parameterization of thermodynamic and microphysical processes, including the formation and subsequent growth of frozen hydrometeors. More specifically, the formation of ice hydrometeors has been represented through varying forms of ice nucleation parameterizations considering the different nucleation modes (e.g., deposition, condensation-freezing, homogeneous). These parameterizations have been developed from in-situ measurements and laboratory observations. A suite of nucleation parameterizations, consisting of those published in Meyers et al. (1992) and DeMott et al. (2010) as well as varying ice nuclei data sources, is coupled with the Adaptive Habit Model (AHM, Harrington et al. 2013), a microphysics module in which ice crystal aspect ratio and density are predicted and evolve in time. Simulations are run with the AHM, which is implemented in the Weather Research and Forecasting (WRF) model, to investigate the effect of the ice nucleation parameterization on the non-spherical growth and evolution of ice crystals and the subsequent effects on liquid-ice cloud-phase partitioning. Specific lake-effect storms observed during the Ontario Winter Lake-Effect Systems (OWLeS) field campaign (Kristovich et al. 2017) are examined to elucidate this potential microphysical effect. Analysis of these modeled events is aided by dual-polarization radar data from the WSR-88D in Montague, New York (KTYX). This enables a comparison of the modeled and observed polarimetric and microphysical profiles of the lake-effect clouds, involving signatures of reflectivity, specific differential phase, correlation coefficient, and differential reflectivity. Microphysical features of lake-effect bands, such as ice, snow, and liquid mixing ratios, ice crystal aspect ratio, and ice density, are analyzed to understand signatures in the aforementioned modeled dual-polarization radar variables. Hence, this research helps to determine which ice nucleation scheme best models observations of lake-effect clouds producing snow off Lake Ontario and Lake Erie, and the analyses highlight the sensitivity of the simulated cases to a given nucleation scheme.
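For context, the two nucleation parameterizations named above are usually written in the following functional forms. The constants below are quoted from memory of the original papers and should be verified against Meyers et al. (1992) and DeMott et al. (2010) before any use; they are included only to make the shape of each formulation concrete.

    import numpy as np

    def meyers_1992_deposition(si_percent, a=-0.639, b=0.1296):
        """Deposition/condensation-freezing form of Meyers et al. (1992):
        N_i [per liter] = exp(a + b * S_i), with S_i the ice supersaturation
        in percent. Constants quoted from memory; verify before use."""
        return np.exp(a + b * si_percent)

    def demott_2010(temp_k, n_aer, a=5.94e-5, b=3.33, c=0.0264, d=0.0033):
        """Aerosol-aware form of DeMott et al. (2010):
        n_IN [per std liter] = a * dT**b * n_aer**(c * dT + d), with
        dT = 273.16 - T and n_aer the concentration of aerosols larger than
        0.5 micrometers. Constants quoted from memory; verify before use."""
        dT = 273.16 - temp_k
        return a * dT**b * n_aer**(c * dT + d)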
NASA Astrophysics Data System (ADS)
Faridatussafura, Nurzaka; Wandala, Agie
2018-05-01
The meteorological model WRF-ARW version 3.8.1 is used to simulate the heavy rainfall in Semarang that occurred on February 12th, 2015. Two different convective schemes and two different microphysics schemes in a nested configuration were chosen, and the sensitivity of those schemes in capturing the extreme weather event was tested. GFS data were used for the initial and boundary conditions. Verification of the twenty-four-hour accumulated rainfall against GSMaP satellite data shows that the Kain-Fritsch convective scheme combined with the Lin microphysics scheme is the best of the tested combinations. This combination also gives the highest success ratio in placing the high-intensity rainfall area. Based on the ROC diagram, KF-Lin shows the best performance in detecting high-intensity rainfall. However, the combination still has a high bias value.
On Improving 4-km Mesoscale Model Simulations
NASA Astrophysics Data System (ADS)
Deng, Aijun; Stauffer, David R.
2006-03-01
A previous study showed that the use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and the associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme showed marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6) Although use of current CPSs may violate their underlying assumptions related to the size of the convective element relative to the grid size, the gridpoint storm problem was greatly reduced by applying a CPS to the 4-km grid.
Implementation of a Parameterization Framework for Cybersecurity Laboratories
2017-03-01
... is to provide the designer of laboratory exercises with tools to parameterize labs for each student, and to automate some aspects of the grading of laboratory exercises. ... support might assist the designer of laboratory exercises to achieve the following? 1. Verify that students performed lab exercises, with some ...
Harvesting model uncertainty for the simulation of interannual variability
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2009-08-01
An innovative modeling strategy is introduced to account for uncertainty in the convective parameterization (CP) scheme of a coupled ocean-atmosphere model. The methodology involves calling the CP scheme several times at every given time step of the model integration to pick the most probable convective state. Each call of the CP scheme is unique in that one of its critical parameter values (which is unobserved but required by the scheme) is chosen randomly over a given range. This methodology is tested with the relaxed Arakawa-Schubert CP scheme in the Center for Ocean-Land-Atmosphere Studies (COLA) coupled general circulation model (CGCM). Relative to the control COLA CGCM, this methodology shows improvement in the El Niño-Southern Oscillation simulation and the Indian summer monsoon precipitation variability.
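A minimal sketch of the multi-call strategy described above is given below. The function names, the treatment of "most probable" as the candidate closest to the ensemble-median column heating, and the parameter range are all assumptions made for illustration; the actual COLA CGCM implementation is not reproduced here.

    import random
    import statistics

    def stochastic_convection_call(cp_scheme, state, n_calls=10,
                                   param_range=(0.1, 1.0)):
        """Call the convective parameterization several times in one model
        time step, each time drawing the unobserved critical parameter at
        random, and return the most probable convective state (here taken
        as the candidate closest to the median column heating)."""
        candidates = []
        for _ in range(n_calls):
            p = random.uniform(*param_range)   # random draw of the critical parameter
            candidates.append(cp_scheme(state, critical_param=p))
        target = statistics.median(c["column_heating"] for c in candidates)
        return min(candidates, key=lambda c: abs(c["column_heating"] - target))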
New Concepts for Refinement of Cumulus Parameterization in GCM's the Arakawa-Schubert Framework
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.; Lau, William (Technical Monitor)
2002-01-01
Several state-of-the-art models, including the one employed in this study, use the Arakawa-Schubert framework for moist convection and the Sundqvist formulation of stratiform clouds for moist physics, in-cloud condensation, and precipitation. Despite a variety of cloud parameterization methodologies developed by several modelers, including the authors, most of the parameterized cloud models have similar deficiencies. These consist of: (a) not enough shallow clouds; (b) too many deep clouds; (c) several layers of clouds in a vertically discretized model as opposed to only a few levels of observed clouds; and (d) a higher-than-normal incidence of a double ITCZ (Inter-tropical Convergence Zone). Even after several upgrades, consisting of sophisticated cloud microphysics and sub-grid scale orographic precipitation, to the Data Assimilation Office (DAO) atmospheric model (GEOS-2 GCM) at two different resolutions, we found that the above deficiencies remained. The two empirical solutions often used to counter the aforestated deficiencies consist of (a) diffusion of moisture and heat within the lower troposphere to artificially force shallow clouds, and (b) arbitrarily invoked evaporation of in-cloud water for low-level clouds. Even though helpful, these implementations lack a strong physical rationale. Our research shows that two missing physical conditions can ameliorate the aforestated cloud-parameterization deficiencies. First, requiring an ascending cloud airmass to be saturated at its starting point will not only make the cloud instantly buoyant throughout its ascent, but also provide the essential work function (buoyancy energy) that promotes more shallow clouds. Second, we argue that training clouds that are unstable to a finite vertical displacement, even if neutrally buoyant in their ambient environment, must continue to rise and entrain, causing evaporation of in-cloud water. These concepts have not been invoked in any cloud parameterization scheme so far. We introduced them into the DAO GEOS-2 GCM with McRAS (Microphysics of Clouds with the Relaxed Arakawa-Schubert Scheme).
a Cumulus Parameterization Study with Special Attention to the Arakawa-Schubert Scheme
NASA Astrophysics Data System (ADS)
Kao, Chih-Yue Jim
Arakawa and Schubert (1974) developed a cumulus parameterization scheme in a framework that conceptually divides the mutual interaction of cumulus convection and the large-scale disturbance into the categories of large-scale budget requirements and the quasi-equilibrium assumption of the cloud work function. We have applied the A-S scheme through a semi-prognostic approach to two different data sets: one for an intense tropical cloud band event, the other for tropical composite easterly wave disturbances. Both were observed in GATE. The cloud heating and drying effects predicted by the Arakawa-Schubert scheme are found to agree rather well with the observations. However, it is also found that the Arakawa-Schubert scheme underestimates both condensation and evaporation rates substantially when compared with the cumulus ensemble model results (Soong and Tao, 1980; Tao, 1983). An inclusion of downdraft effects, as formulated by Johnson (1976), appears to alleviate this deficiency. In order to examine how the Arakawa-Schubert scheme works in a fully prognostic problem, a simulation of the evolution and structure of the tropical cloud band mentioned above, under the influence of an imposed large-scale low-level forcing, has been made using a two-dimensional hydrostatic model with the Arakawa-Schubert scheme included. Basically, the model result indicates that the mesoscale convective system is driven by the excess of the convective heating derived from the Arakawa-Schubert scheme over the adiabatic cooling due to the imposed large-scale lifting and the induced mesoscale upward motion. However, as the convective system develops, the adiabatic warming due to the subsidence outside the cloud cluster gradually accumulates into a secondary temperature anomaly which subsequently reduces the original temperature contrast and inhibits further development of the convective system. A 24-hour integration shows that the model is capable of simulating many important features such as the life cycle, intensity of circulation, and rainfall rates.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved.
Roesler, Erika L.; Posselt, Derek J.; Rood, Richard B.
2017-04-06
Three-dimensional large eddy simulations (LES) are used to analyze a springtime Arctic mixed-phase stratocumulus observed on 26 April 2008 during the Indirect and Semi-Direct Aerosol Campaign. Two subgrid-scale turbulence parameterizations are compared. The first scheme is a 1.5-order turbulent kinetic energy (1.5-TKE) parameterization that has been previously applied to boundary layer cloud simulations. The second scheme, Cloud Layers Unified By Binormals (CLUBB), provides higher-order turbulent closure with scale awareness. The simulations, in comparison with observations, show that both schemes produce liquid profiles within measurement variability but underpredict ice water mass and overpredict ice number concentration. The simulation using CLUBB underpredicted liquid water path more than the simulation using the 1.5-TKE scheme, so the turbulent length scale and horizontal grid box size were increased to increase liquid water path and reduce dissipative energy. The LES simulations show this stratocumulus cloud to maintain a closed cellular structure, similar to observations. The updraft and downdraft cores self-organize into a larger meso-γ-scale convective pattern with the 1.5-TKE scheme, but the cores remain more isotropic with the CLUBB scheme. Additionally, the cores are often composed of liquid and ice instead of exclusively containing one or the other. Furthermore, these results provide insight into traditionally unresolved and unmeasurable aspects of an Arctic mixed-phase cloud. From this analysis, the cloud's updraft and downdraft cores appear smaller than those of other closed-cell stratocumulus, such as midlatitude stratocumulus and Arctic autumnal mixed-phase stratocumulus, due to the weaker downdrafts and lower precipitation rates.
NASA Technical Reports Server (NTRS)
Li, Xiaowen; Tao, Wei-Kuo; Khain, Alexander P.; Simpson, Joanne; Johnson, Daniel E.
2009-01-01
Part I of this paper compares two simulations, one using a bulk and the other a detailed bin microphysical scheme, of a long-lasting, continental mesoscale convective system with leading convection and trailing stratiform region. Diagnostic studies and sensitivity tests are carried out in Part II to explain the simulated contrasts in the spatial and temporal variations by the two microphysical schemes and to understand the interactions between cloud microphysics and storm dynamics. It is found that the fixed raindrop size distribution in the bulk scheme artificially enhances rain evaporation rate and produces a stronger near surface cool pool compared with the bin simulation. In the bulk simulation, cool pool circulation dominates the near-surface environmental wind shear in contrast to the near-balance between cool pool and wind shear in the bin simulation. This is the main reason for the contrasting quasi-steady states simulated in Part I. Sensitivity tests also show that large amounts of fast-falling hail produced in the original bulk scheme not only result in a narrow trailing stratiform region but also act to further exacerbate the strong cool pool simulated in the bulk parameterization. An empirical formula for a correction factor, r(q_r) = 0.11 q_r^(-1.27) + 0.98, is developed to correct the overestimation of rain evaporation in the bulk model, where r is the ratio of the rain evaporation rate between the bulk and bin simulations and q_r (g kg^-1) is the rain mixing ratio. This formula offers a practical fix for the simple bulk scheme in rain evaporation parameterization.
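The correction factor quoted above is simple enough to state as code. In this sketch the function name is invented, and the interpretation that the bulk-scheme evaporation rate is divided by r (since r is the bulk-to-bin ratio) is inferred from the abstract rather than stated explicitly in it.

    def rain_evaporation_correction(q_r):
        """Correction factor r(q_r) = 0.11 * q_r**(-1.27) + 0.98, with q_r the
        rain mixing ratio in g/kg; r is the bulk-to-bin ratio of the rain
        evaporation rate, so dividing the bulk rate by r scales it toward
        the bin-scheme value."""
        return 0.11 * q_r ** (-1.27) + 0.98

    # Example: for q_r = 1 g/kg, r is about 1.09, i.e. the bulk scheme
    # overestimates rain evaporation by roughly 9 percent at that mixing ratio.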
2012-01-01
Experiments have been conducted to validate the designed parameterization scheme. A 2.3 Ah A123 26650 LiFePO4/graphite battery is cycled with a Bitrode ... management strategy. The type of battery used in the experiment (LiFePO4 26650) is different from the one in Fig. 3 (schematics of the flow chamber).
Parametrization of turbulence models using 3DVAR data assimilation in laboratory conditions
NASA Astrophysics Data System (ADS)
Olbert, A. I.; Nash, S.; Ragnoli, E.; Hartnett, M.
2013-12-01
In this research the 3DVAR data assimilation scheme is implemented in the numerical model DIVAST in order to optimize the performance of the numerical model by selecting an appropriate turbulence scheme and tuning its parameters. Two turbulence closure schemes, the Prandtl mixing length model and the two-equation k-ε model, were incorporated into DIVAST and examined with respect to their universality of application, complexity of solutions, computational efficiency, and numerical stability. A square harbour with one symmetrical entrance, subject to tide-induced flows, was selected to investigate the structure of turbulent flows. The experimental part of the research was conducted in a tidal basin. A significant advantage of such a laboratory experiment is a fully controlled environment where domain setup and forcing are user-defined. The research shows that the Prandtl mixing length model and the two-equation k-ε model, with default parameterization predefined according to literature recommendations, overestimate eddy viscosity, which in turn results in a significant underestimation of velocity magnitudes in the harbour. The assimilation of the model-predicted velocity and the laboratory observations significantly improves model predictions for both turbulence models by adjusting the modelled flows in the harbour to match the de-errored observations. Such analysis gives an optimal solution from which the numerical model parameters can be estimated. The process of turbulence model optimization by reparameterization and tuning towards the optimal state led to new constants that may potentially be applied to complex turbulent flows, such as rapidly developing flows or recirculating flows. This research further demonstrates how 3DVAR can be utilized to identify and quantify shortcomings of the numerical model and consequently to improve forecasting through correct parameterization of the turbulence models. Such improvements may greatly benefit physical oceanography, in terms of understanding and monitoring of coastal systems, and the engineering sector, through applications in coastal structure design, marine renewable energy, and pollutant transport.
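For reference, the 3DVAR analysis used above minimizes a cost function that balances departure from a background (model) state against departure from observations. The sketch below is a generic, textbook implementation for a linear observation operator; the matrix names B, R, and H are the standard symbols, and none of this is code from DIVAST.

    import numpy as np

    def threedvar_analysis(xb, y, H, B, R):
        """Generic 3DVAR analysis: minimize
            J(x) = (x - xb)^T B^-1 (x - xb) + (y - H x)^T R^-1 (y - H x).
        For a linear observation operator H the minimizer has the closed form
            xa = xb + K (y - H xb),  with  K = B H^T (H B H^T + R)^-1.
        """
        innovation = y - H @ xb
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-like gain matrix
        return xb + K @ innovation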
Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei
2018-04-01
This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlash-like hysteresis and unknown control directions. A new linear state transformation is applied to the original system, after which control design for the new system becomes feasible. By combining neural network (NN) parameterization, a variable separation technique, and the Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and applicability of the proposed control design are verified by two simulation examples.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For the dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained which satisfies both the robust performance and the decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
NASA Astrophysics Data System (ADS)
McFarquhar, G. M.; Finlon, J.; Um, J.; Nesbitt, S. W.; Borque, P.; Chase, R.; Wu, W.; Morrison, H.; Poellot, M.
2017-12-01
Parameterizations of fall speed-dimension (V-D), mass (m)-D, and projected area (A)-D relationships are needed for the development of model parameterization and remote sensing retrieval schemes. An approach for deriving such relations is discussed here that improves upon previously developed schemes in the following aspects: 1) surfaces are used to characterize uncertainties in derived coefficients; 2) all derived relations are internally consistent; and 3) multiple bulk measures are used to derive parameter coefficients. In this study, data collected by two-dimensional optical array probes (OAPs) installed on the University of North Dakota Citation aircraft during the Mid-Latitude Continental Convective Clouds Experiment (MC3E) and during the Olympic Mountains Experiment (OLYMPEX) are used in conjunction with data from a Nevzorov total water content (TWC) probe and ground-based radar data at S-band to test a novel approach that determines m-D relationships for a variety of environments. A surface of equally realizable a and b coefficients, where m = a D^b, in (a, b) phase space is determined using a technique that minimizes the chi-squared difference between both the TWC and radar reflectivity Z derived from the size distributions measured by the OAPs and those directly measured by a TWC probe and radar, accepting as valid all coefficients within a specified tolerance of the minimum chi-squared difference. Because both A and perimeter P can be directly measured by OAPs, coefficients characterizing these relationships are derived using only one bulk parameter constraint derived from the appropriate images. Because terminal velocity parameterizations depend on both A and m, V-D relations can be derived from these self-consistent relations. Using this approach, changes in parameters associated with varying environmental conditions and varying aerosol amounts and compositions can be isolated from changes associated with statistical noise or measurement errors. The applicability of the derived coefficients for a stochastic framework that employs an observationally-constrained dataset to account for coefficient variability within microphysics parameterization schemes is discussed.
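A schematic of the equally-realizable (a, b) surface idea described above is sketched below. The grid, the tolerance, the treatment of the size distribution, and the forward reflectivity operator are all assumptions made for illustration; the actual analysis would use a forward operator consistent with the S-band radar and the measured particle images.

    import numpy as np

    def mass_dimension_surface(D, N, twc_obs, z_obs, forward_z,
                               a_grid, b_grid, tol=1.5):
        """Return all (a, b) pairs, with m = a * D**b, whose chi-squared misfit
        against observed TWC and reflectivity lies within `tol` times the
        minimum misfit over the grid.

        D, N      : size-bin midpoints and number concentrations per bin
                    (N already multiplied by the bin width)
        twc_obs   : bulk TWC from the Nevzorov probe
        z_obs     : reflectivity from the ground-based radar
        forward_z : function (a, b, D, N) -> reflectivity implied by m = a D**b
        """
        chi2 = np.empty((a_grid.size, b_grid.size))
        for i, a in enumerate(a_grid):
            for j, b in enumerate(b_grid):
                twc_sd = np.sum(a * D**b * N)        # TWC implied by the measured SD
                z_sd = forward_z(a, b, D, N)
                chi2[i, j] = ((twc_sd - twc_obs) / twc_obs) ** 2 \
                             + ((z_sd - z_obs) / z_obs) ** 2
        mask = chi2 <= tol * chi2.min()
        ai, bj = np.nonzero(mask)
        return list(zip(a_grid[ai], b_grid[bj]))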
Urban Canopy Effects in Regional Climate Simulations - An Inter-Model Comparison
NASA Astrophysics Data System (ADS)
Halenka, T.; Huszar, P.; Belda, M.; Karlicky, J.
2017-12-01
To assess the impact of cities and urban surfaces on climate, a modeling approach is often used with an urban parameterization included in the land-surface interactions. This is especially important at higher resolution, which is the common trend in both operational weather prediction and regional climate modelling. Model descriptions of urban-canopy-related meteorological effects can, however, differ considerably, depending on the underlying surface models and the urban canopy parameterizations, which represents a source of uncertainty. Assessing this uncertainty is important for the adaptation and mitigation measures often applied in big cities, especially in the context of climate change, and is one of the main tasks of the new project OP-PPR Proof of Concept UK. In this study we contribute to the estimation of this uncertainty by performing numerous experiments to assess the urban canopy meteorological forcing over central Europe for the decade 2001-2010, using two regional climate models (RegCM4 and WRF) at 10 km resolution driven by ERA-Interim reanalyses, three surface schemes (BATS and CLM4.5 for RegCM4 and Noah for WRF), and five urban canopy parameterizations: one bulk urban scheme, three single-layer schemes, and a multilayer urban scheme. The effects of cities on urban and remote areas were evaluated. There are some differences in the sensitivity of the individual canopy model implementations to the UHI effects, depending on the season and the size of the city. The effect of reducing the diurnal temperature range in cities (around 2 °C in the summer mean) is noticeable in all simulations, independent of the urban parameterization type and the model, due to the well-known warmer summer city nights. For adaptation and mitigation purposes, the distribution of the urban heat island intensity is more relevant than its average, as it provides information on extreme UHI effects, e.g., during heat waves. We demonstrate that for big central European cities this effect can approach 10 °C, and even for smaller cities these extreme effects can exceed 5 °C.
Bias Reduction as Guidance for Developing Convection and Cloud Parameterization in GFDL AM4/CM4
NASA Astrophysics Data System (ADS)
Zhao, M.; Held, I.; Golaz, C.
2016-12-01
The representation of moist convection and clouds is challenging in global climate models and is known to be important for climate simulations at all spatial and temporal scales. Many climate simulation biases can be traced to deficiencies in convection and cloud parameterizations. I will present some key biases that we are concerned about and the efforts that we have made to reduce them during the development of NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) new-generation global climate model AM4/CM4. In particular, I will present a modified version of the moist convection scheme that is based on the University of Washington Shallow Cumulus scheme (UWShCu, Bretherton et al. 2004). The new scheme produces a marked improvement in the simulation of the Madden-Julian Oscillation (MJO) and the El Niño-Southern Oscillation (ENSO) compared to that used in AM3 and HIRAM. AM4/CM4 also produces a high-quality simulation of the global distribution of cloud radiative effects and precipitation, with a realistic mean climate state; this differs from models that improve the MJO at the cost of a much deteriorated mean state. The modifications to the UWShCu include an additional bulk plume for representing deep convection. The entrainment rate in the deep plume is parameterized as a function of column-integrated relative humidity. The deep convective closure is based on relaxation of the convective available potential energy (CAPE) or cloud work function. The plumes' precipitation efficiency is optimized for better simulation of the cloud radiative effects. Precipitation re-evaporation is included in both shallow and deep plumes. In addition, a parameterization of convective gustiness is included, with an energy source driven by cold pools derived from precipitation re-evaporation within the boundary layer and an energy sink due to dissipation. I will present the motivations for these changes, which are driven by the need to reduce some aspects of the AM4/CM4 biases. Finally, I will also present the biases remaining in the current AM4/CM4 and the challenges in further reducing them.
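To make two of the ingredients above concrete (entrainment tied to column relative humidity and a CAPE-relaxation closure), a schematic sketch follows. The functional forms, constants, and function names are assumptions for illustration only and are not the GFDL AM4 formulation.

    import numpy as np

    def deep_entrainment_rate(column_rh, eps_min=1e-4, eps_max=2e-3):
        """Illustrative entrainment rate [1/m] that decreases as the column
        moistens: dry columns entrain strongly (suppressing deep convection),
        moist columns entrain weakly. A linear form is assumed."""
        column_rh = np.clip(column_rh, 0.0, 1.0)
        return eps_max - (eps_max - eps_min) * column_rh

    def cape_relaxation_mass_flux(cape, cape_ref=100.0, tau=3600.0, alpha=1e-3):
        """Illustrative closure: the cloud-base mass flux relaxes CAPE toward
        a reference value over the time scale tau (s); alpha is an assumed
        factor converting the CAPE excess (J/kg) into a mass flux."""
        return max(0.0, alpha * (cape - cape_ref) / tau)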
Tropical Cumulus Convection and Upward Propagating Waves in Middle Atmospheric GCMs
NASA Technical Reports Server (NTRS)
Horinouchi, T.; Pawson, S.; Shibata, K.; Langematz, U.; Manzini, E.; Giorgetta, M. A.; Sassi, F.; Wilson, R. J.; Hamilton, K. P.; deGranpre, J.;
2002-01-01
It is recognized that the resolved tropical wave spectrum can vary considerably between general circulation models (GCMs) and that these differences can have an important impact on the simulated climate. A comprehensive comparison of the waves is presented for the December-January-February period using high-frequency (three-hourly) data archives from eight GCMs and one simple model participating in the GCM Reality Intercomparison Project for SPARC (GRIPS). Quantitative measures of the structure and causes of the wavenumber-frequency structure of resolved waves and their impacts on the climate are given. Space-time spectral analysis reveals that the wave spectrum throughout the middle atmosphere is linked to variability of convective precipitation, which is determined by the parameterized convection. The variability of the precipitation spectrum differs by more than an order of magnitude between the models, with additional changes in the spectral distribution (especially the frequency). These differences can be explained primarily by the choice of different cumulus parameterizations: quasi-equilibrium mass-flux schemes tend to produce small variability, while the moist-convective adjustment scheme is most active. Comparison with observational estimates of precipitation variability suggests that the model values are scattered around the truth. This result indicates that a significant portion of the forcing of the equatorial quasi-biennial oscillation (QBO) is provided by waves with scales that are not resolved in present-day GCMs, since only the moist convective adjustment scheme (which has the largest transient variability) can force a QBO in models that have no parameterization of non-stationary gravity waves. Parameterized cumulus convection also impacts the nonmigrating tides in the equatorial region. In most of the models, momentum transport by diurnal nonmigrating tides in the mesosphere is larger than that by Kelvin waves, being more significant than has been thought. It is shown that the equatorial semi-annual oscillation in the models examined is driven mainly by gravity waves with periods shorter than three days, with at least some contribution from parameterized gravity waves; the contribution from the ultra-fast zonal wavenumber-1 Kelvin waves is negligible.
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Wang, Yuqing
2018-01-01
The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted moisture space. By transporting moist static energy from dry to moist region, the low-level circulation is important to convective self-aggregation which is believed to be related to genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and horizontal distribution of precipitation are trivial, indicating that the CMT modulates the TCLVs/TCs activities in the model by mechanisms other than the horizontal transport of moist static energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Jiwen; Han, Bin; Varble, Adam
A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research & Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes, to understand specific processes that lead to the large spread of simulated cloud and precipitation at cloud-resolving scales, with a focus of this paper on convective cores. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area than observed, but a much narrower stratiform area, with most bulk schemes overpredicting radar reflectivity. The magnitudes of the virtual potential temperature drop, pressure rise, and the peak wind speed associated with the passage of the gust front are significantly smaller compared with the observations, suggesting simulated cool pools are weaker. Simulations also overestimate the vertical velocity and Ze in convective cores as compared with observational retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes even in this strongly dynamically-driven system. The spread of updraft velocity is attributed to the combined effects of the low-level perturbation pressure gradient determined by cold pool intensity and buoyancy that is not necessarily well correlated to differences in latent heating among the simulations. Variability of updraft velocity between schemes is also related to differences in ice-related parameterizations, whereas precipitation variability increases in no-ice simulations because of scheme differences in collision-coalescence parameterizations.
Factors affecting the simulated trajectory and intensification of Tropical Cyclone Yasi (2011)
NASA Astrophysics Data System (ADS)
Parker, Chelsea L.; Lynch, Amanda H.; Mooney, Priscilla A.
2017-09-01
This study investigates the sensitivity of the simulated trajectory, intensification, and forward speed of Tropical Cyclone Yasi to initial conditions, physical parameterizations, and sea surface temperatures. Yasi was a category 5 storm that made landfall in Queensland, Australia in February 2011. A series of simulations were performed using WRF-ARW v3.4.1 driven by ERA-Interim data at the lateral boundaries. To assess these simulations, a new simple skill score is devised to summarize the deviation from observed conditions at landfall. The results demonstrate the sensitivity to initial condition resolution and the need for a new initialization dataset. Ensemble testing of physics parameterizations revealed strong sensitivity to cumulus schemes, with a trade-off between trajectory and intensity accuracy. The Tiedtke scheme produces an accurate trajectory evolution and landfall location. The Kain-Fritsch scheme is associated with larger errors in trajectory due to less active shallow convection over the ocean, leading to warmer temperatures at the 700 mb level and a stronger, more poleward steering flow. However, the Kain-Fritsch scheme produces more accurate intensities and translation speeds. Tiedtke-derived intensities were weaker due to suppression of deep convection by active shallow convection. Accurate representation of the sea surface temperature, through correcting a newly discovered SST lag in reanalysis data or increasing the resolution of the SST data, can improve the simulation. Higher resolution increases relative vorticity and intensity. However, the sea surface boundary had a more pronounced effect on the simulation with the Tiedtke scheme due to its moisture convergence trigger and active shallow convection over the tropical ocean.
Triple collocation based merging of satellite soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.
1991-01-01
Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
Challenges in Understanding and Forecasting Winds in Complex Terrain.
NASA Astrophysics Data System (ADS)
Mann, J.; Fernando, J.; Wilczak, J. M.
2017-12-01
An overview will be given of some of the challenges in understanding and forecasting winds in complex terrain. These challenges can occur for several different reasons including 1) gaps in our understanding of fundamental physical boundary layer processes occurring in complex terrain; 2) a lack of adequate parameterizations and/or numerical schemes in NWP models; and 3) inadequate observations for initialization of NWP model forecasts. Specific phenomena that will be covered include topographic wakes/vortices, cold pools, gap flows, and mountain-valley winds, with examples taken from several air quality and wind energy related field programs in California as well as from the recent Second Wind Forecast Improvement Program (WFIP2) field campaign in the Columbia River Gorge/Basin area of Washington and Oregon States. Recent parameterization improvements discussed will include those for boundary layer turbulence, including 3D turbulence schemes, and gravity wave drag. Observational requirements for improving wind forecasting in complex terrain will be discussed, especially in the context of forecasting pressure gradient driven gap flow events.
Loupa, G; Rapsomanikis, S; Trepekli, A; Kourtidis, K
2016-01-15
Energy flux parameterization was carried out for the city of Athens, Greece, using two approaches: the Local-Scale Urban Meteorological Parameterization Scheme (LUMPS) and the Bulk Approach (BA). In situ data are used to validate the algorithms of these schemes and derive coefficients applicable to the study area. Model results from these corrected algorithms are compared with literature results for coefficients applicable to other cities and their varying construction materials. Asphalt and concrete surfaces, canyons, and anthropogenic heat releases were found to be the key characteristics of the city center that sustain the elevated surface and air temperatures under hot, sunny, and dry weather during the Mediterranean summer. A relationship between storage heat flux plus anthropogenic energy flux and temperatures (surface and lower atmosphere) is presented, which clarifies the interplay between temperatures, anthropogenic energy releases, and the city characteristics under Urban Heat Island conditions.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Dong, Xiquan; Kennedy, Aaron; Xi, Baike; Li, Zhanqing
2017-03-01
The planetary boundary layer turbulence and moist convection parameterizations have been modified recently in the NASA Goddard Institute for Space Studies (GISS) Model E2 atmospheric general circulation model (GCM; post-CMIP5, hereafter P5). In this study, single column model (SCM P5) simulated cloud fractions (CFs), cloud liquid water paths (LWPs), and precipitation were compared with Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) ground-based observations made during the period 2002-08. CMIP5 SCM simulations and GCM outputs over the ARM SGP region were also used in the comparison to identify whether the cloud and precipitation biases resulted from the physical parameterizations or from the dynamics. The comparison showed that the CMIP5 SCM has difficulties in simulating the vertical structure and seasonal variation of low-level clouds. The new scheme implemented in the turbulence parameterization led to significantly improved cloud simulations in P5. It was found that the SCM is sensitive to the relaxation time scale. When the relaxation time increased from 3 to 24 h, SCM P5-simulated CFs and LWPs showed a moderate increase (10%-20%) but precipitation increased significantly (56%), which agreed better with observations despite the less accurate atmospheric state. Annual averages among the GCM and SCM simulations were almost the same, but their respective seasonal variations were out of phase. This suggests that the same physical cloud parameterization can generate similar statistical results over a long time period, but different dynamics drive the differences in seasonal variations. This study can potentially provide guidance for the further development of the GISS model.
NASA Astrophysics Data System (ADS)
Fisher, A. W.; Sanford, L. P.; Scully, M. E.; Suttles, S. E.
2016-02-01
Enhancement of wind-driven mixing by Langmuir turbulence (LT) may have important implications for exchanges of mass and momentum in estuarine and coastal waters, but the transient nature of LT and observational constraints make quantifying its impact on vertical exchange difficult. Recent studies have shown that wind events can be of first order importance to circulation and mixing in estuaries, prompting this investigation into the ability of second-moment turbulence closure schemes to model wind-wave enhanced mixing in an estuarine environment. An instrumented turbulence tower was deployed in the middle reaches of Chesapeake Bay in 2013 and collected observations of coherent structures consistent with LT that occurred under regions of breaking waves. Wave and turbulence measurements collected from a vertical array of Acoustic Doppler Velocimeters (ADVs) provided direct estimates of TKE, dissipation, turbulent length scale, and the surface wave field. Direct measurements of air-sea momentum and sensible heat fluxes were collected by a co-located ultrasonic anemometer deployed 3m above the water surface. Analyses of the data indicate that the combined presence of breaking waves and LT significantly influences air-sea momentum transfer, enhancing vertical mixing and acting to align stress in the surface mixed layer in the direction of Lagrangian shear. Here these observations are compared to the predictions of commonly used second-moment turbulence closure schemes, modified to account for the influence of wave breaking and LT. LT parameterizations are evaluated under neutrally stratified conditions and buoyancy damping parameterizations are evaluated under stably stratified conditions. We compare predicted turbulent quantities to observations for a variety of wind, wave, and stratification conditions. The effects of fetch-limited wave growth, surface buoyancy flux, and tidal distortion on wave mixing parameterizations will also be discussed.
NASA Astrophysics Data System (ADS)
Lemieux, Jean-François; Dupont, Frédéric; Blain, Philippe; Roy, François; Smith, Gregory C.; Flato, Gregory M.
2016-10-01
In some coastal regions of the Arctic Ocean, grounded ice ridges contribute to stabilizing and maintaining a landfast ice cover. Recently, a grounding scheme representing this effect on sea ice dynamics was introduced and tested in a viscous-plastic sea ice model. This grounding scheme, based on a basal stress parameterization, improves the simulation of landfast ice in many regions such as in the East Siberian Sea, the Laptev Sea, and along the coast of Alaska. Nevertheless, in some regions like the Kara Sea, the area of landfast ice is systematically underestimated. This indicates that another mechanism such as ice arching is at play for maintaining the ice cover fast. To address this problem, the combination of the basal stress parameterization and tensile strength is investigated using a 0.25° Pan-Arctic CICE-NEMO configuration. Both uniaxial and isotropic tensile strengths notably improve the simulation of landfast ice in the Kara Sea but also in the Laptev Sea. However, the simulated landfast ice season for the Kara Sea is too short compared to observations. This is especially obvious for the onset of the landfast ice season, which systematically occurs later in the model and builds up more slowly. This suggests that improvements to the sea ice thermodynamics could reduce these discrepancies with the data.
Pattanayak, Sujata; Mohanty, U. C.; Osuri, Krishna K.
2012-01-01
The present study is carried out to investigate the performance of different cumulus convection, planetary boundary layer, land surface process, and microphysics parameterization schemes in the simulation of the very severe cyclonic storm (VSCS) Nargis (2008), which developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of the Weather Research and Forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation are verified with observations as provided by the India Meteorological Department (IMD) and the Tropical Rainfall Measurement Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to draw a firmer conclusion on the performance of WRF-NMM. A few more diagnostic fields like vertical velocity, vorticity, and heat fluxes are also evaluated. The results indicate that cumulus convection plays an important role in the movement of the cyclone, and the PBL has a crucial role in the intensification of the storm. The combination of Simplified Arakawa Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM gives a better track and intensity forecast with the minimum vector displacement error. PMID:22701366
NASA Astrophysics Data System (ADS)
Xie, Zhipeng; Hu, Zeyong
2016-04-01
Snow cover is an important component of local- and regional-scale energy and water budgets, especially in mountainous areas. This paper evaluates snow simulations using two snow cover fraction schemes in CLM4.5 (NY07 is the original snow-covered area parameterization used in CLM4, and SL12 is the default scheme in CLM4.5). Off-line simulations forced by the China Meteorological forcing dataset are carried out from January 1, 2001 to December 31, 2010 over the Tibetan Plateau. Simulated snow cover fraction (SCF), snow depth, and snow water equivalent (SWE) were compared against a set of observations including the Interactive Multisensor Snow and Ice Mapping System (IMS) snow cover product, the daily snow depth dataset of China, and China Meteorological Administration (CMA) in-situ snow depth and SWE observations. The comparison indicates significant differences between the simulations with the two SCF parameterizations. Overall, the SL12 formulation shows a certain improvement over the NY07 scheme used in CLM4, with the percentage of correctly modeled snow/no snow being 75.8% (NY07) and 81.8% (SL12) when compared with the IMS snow product. This improvement, however, varies both temporally and spatially. Both snow cover schemes overestimate snow depth: in comparison with the daily snow depth dataset of China, the average biases of simulated snow depth are 7.38 cm (8.77 cm), 6.97 cm (8.2 cm), and 5.49 cm (5.76 cm) for NY07 (SL12) in the snow accumulation period (September through the following February), the snowmelt period (March through May), and the snow-free period (June through August), respectively. When compared with the CMA in-situ snow depth observations, the average biases are 3.18 cm (4.38 cm), 2.85 cm (4.34 cm), and 0.34 cm (0.34 cm) for NY07 (SL12), respectively. Although SL12 simulates snow depth worse than NY07, the SWE simulated by SL12 is better, with average biases of 2.64 mm, 6.22 mm, and 1.33 mm for NY07 and 1.47 mm, 2.63 mm, and 0.31 mm for SL12, respectively. This study demonstrates that further improvements to snow simulation over the Tibetan Plateau are urgently needed to better represent the variability of snow in CLM. Furthermore, these findings lay a foundation for follow-up studies on the modification of the snow cover parameterization in the land surface model. Keywords: snow cover, CLM, Tibetan Plateau, simulation.
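Snow cover fraction schemes of this kind typically diagnose fractional coverage from snow depth and density. The sketch below shows the general tanh form used in depth/density-based SCF parameterizations; the function name and the constants (ground roughness length, fresh-snow density, melt exponent) are illustrative assumptions, not the exact CLM4/CLM4.5 values.

```python
import numpy as np

def scf_tanh(snow_depth_m, rho_snow, z0_ground=0.01,
             rho_new=100.0, melt_exponent=1.0):
    """Diagnose snow cover fraction from snow depth and density.

    Follows the general tanh form of depth/density-based SCF schemes;
    the default constants here are illustrative only.
    """
    scale = 2.5 * z0_ground * (rho_snow / rho_new) ** melt_exponent
    return np.tanh(snow_depth_m / scale)

print(scf_tanh(0.05, rho_snow=250.0))   # thin, aged snow -> partial cover
print(scf_tanh(0.30, rho_snow=150.0))   # deeper, fresher snow -> near 1
```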
NASA Astrophysics Data System (ADS)
Johnson, M. T.
2010-10-01
The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed, and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which will allow the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity-dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment which is available in the supplementary online material accompanying this paper; along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
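The wind-speed-based approach described above can be sketched with a minimal example. This is not the full R scheme of the paper: it uses a single, widely used quadratic wind-speed relation with the standard Schmidt-number scaling, and the user supplies the Schmidt number of the gas of interest.

```python
def transfer_velocity_cm_per_hr(u10, schmidt_number):
    """Quadratic wind-speed transfer velocity scaled by Schmidt number.

    u10            : 10-m wind speed (m s-1)
    schmidt_number : Schmidt number of the gas at the water temperature
    Returns k in cm h-1, normalized to Sc = 660 (CO2 in seawater at 20 C).
    """
    k660 = 0.31 * u10 ** 2                        # quadratic wind dependence
    return k660 * (schmidt_number / 660.0) ** -0.5

# Example: a gas with Sc = 1000 at a 7 m/s wind.
print(transfer_velocity_cm_per_hr(7.0, 1000.0))
```

The ocean-atmosphere flux would then follow from multiplying this transfer velocity (converted to consistent units) by the air-sea concentration difference of the gas.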
NASA Technical Reports Server (NTRS)
Fritsch, J. Michael; Kain, John S.
1997-01-01
Research efforts during the second year have centered on improving the manner in which convective stabilization is achieved in the Penn State/NCAR mesoscale model MM5. Ways of improving this stabilization have been investigated by (1) refining the partitioning between the Kain-Fritsch convective parameterization scheme and the grid scale by introducing a form of moist convective adjustment; (2) using radar data to define locations of subgrid-scale convection during a dynamic initialization period; and (3) parameterizing deep-convective feedbacks as subgrid-scale sources and sinks of mass. These investigations were conducted by simulating a long-lived convectively-generated mesoscale vortex that occurred during 14-18 Jul. 1982 and the 10-11 Jun. 1985 squall line that occurred over the Kansas-Oklahoma region during the PRE-STORM experiment. The long-lived vortex tracked across the central Plains states and was responsible for multiple convective outbreaks during its lifetime.
Application of a planetary wave breaking parameterization to stratospheric circulation statistics
NASA Technical Reports Server (NTRS)
Randel, William J.; Garcia, Rolando R.
1994-01-01
The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
NASA Technical Reports Server (NTRS)
Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; Fridlind, Ann; Endo, Satoshi; Song, Hua; Feng, Sha; Toto, Tami; Li, Zhijin; Zhang, Minghua
2015-01-01
Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) Project has constructed case studies from the Atmospheric Radiation Measurement Climate Research Facility's Southern Great Plain site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using the single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.
Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; ...
2015-06-19
Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) project has constructed case studies from the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plain site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using the single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.
Sensitivity of CAM5-simulated Arctic clouds and radiation to ice nucleation parameterization
Xie, Shaocheng; Liu, Xiaohong; Zhao, Chuanfeng; ...
2013-08-06
Sensitivity of Arctic clouds and radiation in the Community Atmospheric Model, version 5, to the ice nucleation process is examined by testing a new physically based ice nucleation scheme that links the variation of ice nuclei (IN) number concentration to aerosol properties. The default scheme parameterizes the IN concentration simply as a function of ice supersaturation. The new scheme leads to a significant reduction in simulated IN concentration at all latitudes while changes in cloud amounts and properties are mainly seen at high- and midlatitude storm tracks. In the Arctic, there is a considerable increase in midlevel clouds and a decrease in low-level clouds, which result from the complex interaction among the cloud macrophysics, microphysics, and large-scale environment. The smaller IN concentrations result in an increase in liquid water path and a decrease in ice water path caused by the slowdown of the Bergeron–Findeisen process in mixed-phase clouds. Overall, there is an increase in the optical depth of Arctic clouds, which leads to a stronger cloud radiative forcing (net cooling) at the top of the atmosphere. The comparison with satellite data shows that the new scheme slightly improves low-level cloud simulations over most of the Arctic but produces too many midlevel clouds. Considerable improvements are seen in the simulated low-level clouds and their properties when compared with Arctic ground-based measurements. As a result, issues with the observations and the model–observation comparison in the Arctic region are discussed.
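The "default scheme" described above, in which IN concentration depends only on ice supersaturation, is typically an exponential fit of the Meyers type. The sketch below uses the commonly quoted fit constants; whether these are exactly the values used as the CAM5 default is an assumption of this illustration.

```python
import numpy as np

def ice_nuclei_per_liter(si_percent, a=-0.639, b=0.1296):
    """Exponential ice-nuclei concentration vs. ice supersaturation.

    si_percent : ice supersaturation in percent
    a, b       : fit constants (the classic Meyers-type values are shown
                 here for illustration, not necessarily the CAM5 defaults)
    Returns IN number concentration per liter of air.
    """
    return np.exp(a + b * np.asarray(si_percent))

print(ice_nuclei_per_liter([5.0, 15.0, 25.0]))
```

An aerosol-aware scheme replaces this single-variable fit with a dependence on the number and properties of dust and other ice-nucleating particles, which is what drives the reduction in simulated IN concentrations reported above.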
NASA Astrophysics Data System (ADS)
Choi, Hyun-Joo; Choi, Suk-Jin; Koo, Myung-Seo; Kim, Jung-Eun; Kwon, Young Cheol; Hong, Song-You
2017-10-01
The impact of subgrid orographic drag on weather forecasting and simulated climatology over East Asia in boreal summer is examined using two parameterization schemes in a global forecast model. The schemes consider gravity wave drag (GWD) with and without lower-level wave breaking drag (LLWD) and flow-blocking drag (FBD). Simulation results from sensitivity experiments verify that the scheme with LLWD and FBD improves the intensity of a summertime continental high over the northern part of the Korean Peninsula, which is exaggerated with GWD only. This is because the enhanced lower tropospheric drag due to the effects of lower-level wave breaking and flow blocking slows down the wind flowing out of the high-pressure system in the lower troposphere. It is found that the decreased lower-level divergence induces a compensating weakening of middle- to upper-level convergence aloft. Extended experiments for medium-range forecasts for July 2013 and seasonal simulations for June to August of 2013-2015 are also conducted. Statistical skill scores for medium-range forecasting are improved not only in low-level winds but also in surface pressure when both LLWD and FBD are considered. A simulated climatology of summertime monsoon circulation in East Asia is also realistically reproduced.
NASA Technical Reports Server (NTRS)
Mcdougal, David S. (Editor)
1990-01-01
FIRE (First ISCCP Regional Experiment) is a U.S. cloud-radiation research program formed in 1984 to increase the basic understanding of cirrus and marine stratocumulus cloud systems, to develop realistic parameterizations for these systems, and to validate and improve ISCCP cloud product retrievals. Presentations of results culminating the first 5 years of FIRE research activities were highlighted. The 1986 Cirrus Intensive Field Observations (IFO), the 1987 Marine Stratocumulus IFO, the Extended Time Observations (ETO), and modeling activities are described. Collaborative efforts involving the comparison of multiple data sets, incorporation of data measurements into modeling activities, validation of ISCCP cloud parameters, and development of parameterization schemes for General Circulation Models (GCMs) are described.
Performance Assessment of New Land-Surface and Planetary Boundary Layer Physics in the WRF-ARW
The Pleim-Xiu land surface model, Pleim surface layer scheme, and Asymmetric Convective Model (version 2) are now options in version 3.0 of the Weather Research and Forecasting model (WRF) Advanced Research WRF (ARW) core. These physics parameterizations were developed for the f...
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem with this scheme was its computational expense, which was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-Chem simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is organic material), representing its formation is highly complex and expensive within a numerical model. Furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, neglecting explicit SA formation within the model was considered the best course of action. The parameterization itself uses a prescribed source map to add aerosol to the model at the two vertical levels that bracket an arbitrary height chosen by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model of Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output, along with the hygroscopicity parameter (κ), was saved into a data file that could be interpolated to an arbitrary grid used in RAMS. The comparison of this parameterization to observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than the WRF Chemistry model did, at a fraction of the computational expense.
This emission scheme produced reasonable aerosol concentrations and can therefore be used to estimate the seasonal impact of increased CCN on water resources in western Colorado at relatively low computational expense.
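The injection of the prescribed source map at the two model levels bracketing a user-chosen height can be sketched as below. The linear height weighting between the two levels is an assumption of this illustration, and the function and argument names are hypothetical rather than taken from the RAMS code.

```python
import numpy as np

def add_emission_between_levels(aerosol, level_heights, emission_map,
                                source_height):
    """Add a 2-D emission map to the two model levels bracketing a height.

    aerosol       : 3-D array (nz, ny, nx) of aerosol number or mass
    level_heights : 1-D array (nz) of level heights (m), increasing upward
    emission_map  : 2-D array (ny, nx) of emitted amount for this time step
    source_height : injection height chosen by the user (m)

    The split between the two bracketing levels is a simple linear
    interpolation in height (an assumption; the actual scheme may weight
    the levels differently).
    """
    k = np.searchsorted(level_heights, source_height)
    k = int(np.clip(k, 1, len(level_heights) - 1))
    z_lo, z_hi = level_heights[k - 1], level_heights[k]
    w_hi = np.clip((source_height - z_lo) / (z_hi - z_lo), 0.0, 1.0)
    aerosol[k - 1] += (1.0 - w_hi) * emission_map
    aerosol[k] += w_hi * emission_map
    return aerosol
```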
Hamed, Kaveh Akbari; Gregg, Robert D
2016-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Hamed, Kaveh Akbari; Gregg, Robert D
2017-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:28959117
NASA Astrophysics Data System (ADS)
Hamdi, R.; Schayes, G.
2005-07-01
Martilli's urban parameterization scheme is improved and implemented in a mesoscale model in order to take into account the typical effects of a real city on the air temperature near the ground and on the surface exchange fluxes. The mesoscale model is run on a single column using atmospheric data and radiation recorded above roof level as forcing. Here, the authors validate Martilli's urban boundary layer scheme using measurements from two mid-latitude European cities: Basel, Switzerland and Marseilles, France. For Basel, the model performance is evaluated with observations of canyon temperature, surface radiation, and energy balance fluxes obtained during the Basel urban boundary layer experiment (BUBBLE). The results show that the urban parameterization scheme is able to reproduce the generation of the Urban Heat Island (UHI) effect over the urban area and represents correctly most of the behavior of the fluxes typical of the city center of Basel, including the large heat uptake by the urban fabric and the positive sensible heat flux at night. For Marseilles, the model performance is evaluated with observations of surface temperature, canyon temperature, surface radiation, and energy balance fluxes collected during the field experiments to constrain models of atmospheric pollution and transport of emissions (ESCOMPTE) and its urban boundary layer (UBL) campaign. At both urban sites, vegetation cover is less than 20%; therefore, particular attention was directed to the ability of Martilli's urban boundary layer scheme to reproduce the observations for the Marseilles city center, where the urban parameters and the synoptic forcing are totally different from Basel. Evaluation of the model with wall, road, and roof surface temperatures gave good results. The model correctly simulates the net radiation, canyon temperature, and the partitioning between the turbulent and storage heat fluxes.
NASA Astrophysics Data System (ADS)
Hamdi, R.; Schayes, G.
2007-08-01
Martilli's urban parameterization scheme is improved and implemented in a mesoscale model in order to take into account the typical effects of a real city on the air temperature near the ground and on the surface exchange fluxes. The mesoscale model is run on a single column using atmospheric data and radiation recorded above roof level as forcing. Here, the authors validate Martilli's urban boundary layer scheme using measurements from two mid-latitude European cities: Basel, Switzerland and Marseilles, France. For Basel, the model performance is evaluated with observations of canyon temperature, surface radiation, and energy balance fluxes obtained during the Basel urban boundary layer experiment (BUBBLE). The results show that the urban parameterization scheme represents correctly most of the behavior of the fluxes typical of the city center of Basel, including the large heat uptake by the urban fabric and the positive sensible heat flux at night. For Marseilles, the model performance is evaluated with observations of surface temperature, canyon temperature, surface radiation, and energy balance fluxes collected during the field experiments to constrain models of atmospheric pollution and transport of emissions (ESCOMPTE) and its urban boundary layer (UBL) campaign. At both urban sites, vegetation cover is less than 20%, therefore, particular attention was directed to the ability of Martilli's urban boundary layer scheme to reproduce the observations for the Marseilles city center, where the urban parameters and the synoptic forcing are totally different from Basel. Evaluation of the model with wall, road, and roof surface temperatures gave good results. The model correctly simulates the net radiation, canyon temperature, and the partitioning between the turbulent and storage heat fluxes.
A Comprehensive Two-moment Warm Microphysical Bulk Scheme
NASA Astrophysics Data System (ADS)
Caro, D.; Wobrock, W.; Flossmann, A.; Chaumerliac, N.
The microphysical properties of gases, aerosol particles, and hydrometeors have implications at the local scale (precipitation, pollution peaks, ...), at the regional scale (inundations, acid rain, ...), and also at the global scale (radiative forcing, ...). A multi-scale study is therefore necessary to understand and forecast meteorological phenomena involving clouds. However, such a study cannot be carried out with a detailed microphysical model because of computational limitations, so microphysical bulk schemes have to estimate the "large-scale" properties of clouds resulting from smaller-scale processes and characteristics. The development of such a bulk scheme is thus important for advancing knowledge of the Earth's climate and the forecasting of intense meteorological phenomena. Here, a quasi-spectral warm microphysical scheme has been developed to predict the concentrations and mixing ratios of aerosols, cloud droplets, and raindrops. It considers, explicitly and analytically, the nucleation of droplets (Abdul-Razzak et al., 2000), condensation/evaporation (Chaumerliac et al., 1987), the breakup and collision-coalescence processes with the Long (1974) kernels and the Berry and Reinhardt (1974) autoconversion parameterization, as well as aerosol and gas scavenging. First, the parameterization was evaluated in the simple dynamic framework of an air parcel model against the results of the detailed scavenging model DESCAM (Flossmann et al., 1985). It was then tested in the dynamic framework of a kinematic model (Szumowski et al., 1998) dedicated to the HaRP campaign (Hawaiian Rainband Project, 1990), against the observations and against the results of the two-dimensional detailed microphysical scheme DESCAM 2-D (Flossmann et al., 1988), implemented in the Clark model (Clark and Farley, 1984).
Chen, Sheng; Yao, Liping; Chen, Bao
2016-11-01
The enhancement of lung nodules in chest radiographs (CXRs) plays an important role in both manual and computer-aided detection (CADe) of lung cancer. In this paper, we proposed a parameterized logarithmic image processing (PLIP) method combined with a Laplacian of Gaussian (LoG) filter to enhance lung nodules in CXRs. We first applied several LoG filters with varying parameters to an original CXR to enhance nodule-like structures as well as the edges in the image. We then applied the PLIP model, which enhances lung nodule images with high contrast and is beneficial for extracting effective features for nodule detection in a CADe scheme. Our method combines the advantages of both the PLIP and LoG algorithms, enhancing lung nodules in chest radiographs with high contrast. To evaluate the enhancement, we tested a CADe scheme with relatively high performance in nodule detection on a publicly available database containing 140 nodules in 140 CXRs enhanced with our method. The CADe scheme attained sensitivities of 81% and 70% with averages of 5.0 and 2.0 false positives (FPs), respectively, in a leave-one-out cross-validation test. By contrast, the CADe scheme based on the original images recorded sensitivities of 77% and 63% at 5.0 and 2.0 FPs, respectively. We also introduced a measure of enhancement based on entropy to objectively assess our method. Experimental results show that the proposed method achieves effective enhancement of lung nodules in CXRs for both radiologists and CADe schemes.
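The multi-scale LoG filtering step can be sketched as follows. The function is hypothetical, and the simple logarithmic contrast stretch at the end stands in for the paper's PLIP operators, which are not reproduced here; only the use of Laplacian-of-Gaussian filtering to emphasize blob-like structures follows the description above.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def enhance_nodules(image, sigmas=(2.0, 4.0, 8.0), gain=1.0):
    """Multi-scale LoG enhancement with a simple logarithmic remapping.

    image  : 2-D float array (chest radiograph), values in [0, 1]
    sigmas : Gaussian scales (pixels) for the LoG filters
    gain   : weight of the LoG response added back to the image
    """
    # The negative, scale-normalized LoG responds positively to bright blobs.
    log_response = sum(-sigma ** 2 * gaussian_laplace(image, sigma)
                       for sigma in sigmas) / len(sigmas)
    boosted = np.clip(image + gain * log_response, 0.0, None)
    # Plain log stretch back to [0, 1]; a stand-in for the PLIP mapping.
    return np.log1p(boosted) / np.log1p(boosted.max())

rng = np.random.default_rng(1)
cxr = rng.random((256, 256))     # placeholder image; use a real CXR in practice
enhanced = enhance_nodules(cxr)
```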
NASA Technical Reports Server (NTRS)
Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.;
2014-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). Based on the measurements taken of the 29-30 May 2012 Oklahoma thunderstorm, an analysis against a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the same event at 3-km horizontal resolution was performed. One of the main objectives was to include various flash rate parameterization schemes (FRPSs) in the model and identify which scheme(s) best captured the flash rates observed by the National Lightning Detection Network (NLDN) and Oklahoma Lightning Mapping Array (LMA). The comparison indicates how well the schemes predicted the timing, location, and number of lightning flashes. The FRPSs implemented in the model were based on the simulated thunderstorm's physical features, such as maximum vertical velocity, cloud top height, and updraft volume. Adjustment factors were added to each FRPS to best capture the observed flash trend, and a sensitivity study was performed to compare the range in model-simulated lightning-generated nitrogen oxides (LNOx) generated by each FRPS over the storm's lifetime. Based on the best FRPS, model-simulated LNOx was compared against aircraft-measured NOx. The trace gas analysis, along with the increased detail in the model specification of the vertical distribution of lightning flashes as suggested by the LMA data, provides guidance in determining the scenario of NO production per intracloud and cloud-to-ground flash that best matches the NOx mixing ratios observed by the aircraft.
NASA Technical Reports Server (NTRS)
Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.;
2014-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). Based on the measurements taken of the 29-30 May 2012 Oklahoma thunderstorm, an analysis against a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the same event at 3-km horizontal resolution was performed. One of the main objectives was to include various flash rate parameterization schemes (FRPSs) in the model and identify which scheme(s) best captured the flash rates observed by the National Lightning Detection Network (NLDN) and Oklahoma Lightning Mapping Array (LMA). The comparison indicates how well the schemes predicted the timing, location, and number of lightning flashes. The FRPSs implemented in the model were based on the simulated thunderstorm's physical features, such as maximum vertical velocity, cloud top height, and updraft volume. Adjustment factors were applied to each FRPS to best capture the observed flash trend, and a sensitivity study was performed to compare the range in model-simulated lightning-generated nitrogen oxides (LNOx) generated by each FRPS over the storm's lifetime. Based on the best FRPS, model-simulated LNOx was compared against aircraft-measured NOx. The trace gas analysis, along with the increased detail in the model specification of the vertical distribution of lightning flashes as suggested by the LMA data, provides guidance in determining the scenario of NO production per intracloud and cloud-to-ground flash that best matches the NOx mixing ratios observed by the aircraft.
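Flash rate parameterizations of the kind tested above are often simple power laws of a storm feature such as cloud top height. The sketch below shows a cloud-top-height relation of the Price-and-Rind type; the constants are the commonly quoted values and are illustrative here, and the study's tuned adjustment factors are represented by a single multiplier.

```python
def flash_rate_per_minute(cloud_top_height_km, continental=True,
                          adjustment=1.0):
    """Cloud-top-height flash-rate parameterization (Price-and-Rind type).

    cloud_top_height_km : simulated cloud top height (km)
    continental         : choose the continental or marine power law
    adjustment          : tuning factor of the kind applied in the study
    Returns flashes per minute for the storm.
    """
    if continental:
        rate = 3.44e-5 * cloud_top_height_km ** 4.9
    else:
        rate = 6.4e-4 * cloud_top_height_km ** 1.73
    return adjustment * rate

print(flash_rate_per_minute(13.0))   # deep continental storm
```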
NASA Astrophysics Data System (ADS)
Cai, Fu; Ming, Huiqing; Mi, Na; Xie, Yanbing; Zhang, Yushu; Li, Rongping
2017-04-01
As root water uptake (RWU) is an important link in the water and heat exchange between plants and ambient air, improving its parameterization is key to enhancing the performance of land surface model simulations. Although different types of RWU functions have been adopted in land surface models, there is no evidence as to which scheme is most applicable to maize farmland ecosystems. Based on the 2007-09 data collected at the farmland ecosystem field station in Jinzhou, the RWU function in the Common Land Model (CoLM) was optimized with scheme options in light of factors determining whether roots absorb water from a certain soil layer (Wx) and whether the baseline cumulative root efficiency required for maximum plant transpiration (Wc) is reached. The sensitivity of the parameters of the optimization scheme was investigated, and then the effects of the optimized RWU function on water and heat flux simulation were evaluated. The results indicate that the model simulation was not sensitive to Wx but was significantly impacted by Wc. With the original model, soil humidity was somewhat underestimated for precipitation-free days; soil temperature was simulated with obvious interannual and seasonal differences and notable underestimation during the late maize growth stage; and sensible and latent heat fluxes were overestimated and underestimated, respectively, for years with relatively less precipitation, and both were simulated with high accuracy for years with relatively more precipitation. The optimized RWU process resulted in a significant improvement of CoLM's performance in simulating soil humidity, temperature, sensible heat, and latent heat, for dry years. In conclusion, the optimized RWU scheme available for the CoLM model is applicable to the simulation of water and heat flux for maize farmland ecosystems in arid areas.
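The two switches described above (whether a layer contributes, and whether cumulative root efficiency meets the transpiration demand) can be illustrated loosely as follows. The functional form, thresholds, and names are hypothetical and do not reproduce CoLM's actual formulation; the sketch only shows how such switches might gate a layer-by-layer distribution of a transpiration demand.

```python
import numpy as np

def root_water_uptake(root_fraction, soil_wetness, demand,
                      layer_threshold=0.05, cumulative_threshold=0.5):
    """Distribute a transpiration demand over soil layers with two switches.

    root_fraction        : 1-D array, fraction of roots per layer (sums to 1)
    soil_wetness         : 1-D array, relative wetness of each layer (0-1)
    demand               : potential transpiration (kg m-2 s-1)
    layer_threshold      : wetness below which a layer is excluded
                           (the "Wx" idea)
    cumulative_threshold : cumulative root efficiency needed to meet the full
                           demand (the "Wc" idea); both thresholds are
                           illustrative values, not the CoLM settings.
    """
    efficiency = np.where(soil_wetness > layer_threshold,
                          root_fraction * soil_wetness, 0.0)
    total = efficiency.sum()
    if total <= 0.0:
        return np.zeros_like(efficiency)
    # Scale demand down when cumulative efficiency is below the threshold.
    supply = demand * min(1.0, total / cumulative_threshold)
    return supply * efficiency / total

uptake = root_water_uptake(np.array([0.5, 0.3, 0.2]),
                           np.array([0.4, 0.2, 0.03]),
                           demand=3e-5)
```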
Analysis of soil hydraulic and thermal properties for land surface modeling over the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zhao, Hong; Zeng, Yijian; Lv, Shaoning; Su, Zhongbo
2018-06-01
Soil information (e.g., soil texture and porosity) from existing soil datasets over the Tibetan Plateau (TP) is claimed to be inadequate and even inaccurate for determining soil hydraulic properties (SHP) and soil thermal properties (STP), hampering the understanding of the land surface process over the TP. As the soil varies across three dominant climate zones (i.e., arid, semi-arid and subhumid) over the TP, the associated SHP and STP are expected to vary correspondingly. To obtain an explicit insight into the soil hydrothermal properties over the TP, in situ and laboratory measurements of over 30 soil property profiles were obtained across the climate zones. Results show that porosity, SHP, and STP differ across the climate zones and strongly depend on soil texture. In particular, it is proposed that the impact of gravel on porosity, SHP, and STP be considered both in the arid zone and in the deep layers of the semi-arid zone. Parameterization schemes for porosity, SHP, and STP are investigated and compared with the measurements. To determine the SHP, including soil water retention curves (SWRCs) and hydraulic conductivities, the pedotransfer functions (PTFs) developed by Cosby et al. (1984) (for the Clapp-Hornberger model) and the continuous PTFs given by Wösten et al. (1999) (for the Van Genuchten-Mualem model) are recommended. The STP parameterization scheme proposed by Farouki (1981) based on the model of De Vries (1963) performed better across the TP than other schemes. Using the parameterization schemes mentioned above, the uncertainties of five existing regional and global soil datasets and their derived SHP and STP over the TP are quantified through comparison with in situ and laboratory measurements. The measured soil physical properties dataset is available at https://data.4tu.nl/repository/uuid:c712717c-6ac0-47ff-9d58-97f88082ddc0.
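For context on the Clapp-Hornberger SHP model mentioned above, a minimal sketch is given below. The retention and conductivity relations are the standard Clapp-Hornberger forms; the parameter values in the example call are placeholders, not values from the TP dataset, and in practice they would come from pedotransfer functions such as those of Cosby et al. (1984).

```python
import numpy as np

def clapp_hornberger(theta, theta_sat, psi_sat, k_sat, b):
    """Soil water retention and hydraulic conductivity (Clapp-Hornberger form).

    theta     : volumetric soil moisture (m3 m-3)
    theta_sat : porosity (m3 m-3)
    psi_sat   : saturated matric potential (m, negative)
    k_sat     : saturated hydraulic conductivity (m s-1)
    b         : Clapp-Hornberger exponent
    Returns matric potential (m) and unsaturated conductivity (m s-1).
    """
    s = np.clip(theta / theta_sat, 1e-3, 1.0)   # relative saturation
    psi = psi_sat * s ** (-b)                    # water retention curve
    k = k_sat * s ** (2.0 * b + 3.0)             # unsaturated conductivity
    return psi, k

# Placeholder parameters for a loamy soil.
psi, k = clapp_hornberger(theta=0.2, theta_sat=0.45,
                          psi_sat=-0.3, k_sat=3e-6, b=5.0)
print(psi, k)
```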
Assessing and Upgrading Ocean Mixing for the Study of Climate Change
NASA Astrophysics Data System (ADS)
Howard, A. M.; Fells, J.; Lindo, F.; Tulsee, V.; Canuto, V.; Cheng, Y.; Dubovikov, M. S.; Leboissetier, A.
2016-12-01
Climate is critical. Climate variability affects us all; Climate Change is a burning issue. Droughts, floods, and other extreme events, together with Global Warming's effects on these and on problems such as sea-level rise and ecosystem disruption, threaten lives. Citizens must be informed to make decisions concerning climate, such as "business as usual" vs. mitigating emissions to keep warming within bounds. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. To make useful predictions we must realistically model each component of the climate system, including the ocean, whose critical role includes transporting and storing heat and dissolved CO2. We need physically based parameterizations of key ocean processes that cannot be represented explicitly in a global climate model, e.g. vertical and lateral mixing. The NASA-GISS turbulence group uses theory to model mixing, including: 1) a comprehensive scheme for small-scale vertical mixing, covering convection and shear, internal waves and double diffusion, and bottom tides; and 2) a new parameterization for the lateral and vertical mixing by mesoscale eddies. For better understanding we write our own programs. To assess the modelling, MATLAB programs visualize and calculate statistics, including means, standard deviations, and correlations, on NASA-GISS OGCM output with different mixing schemes and help us study drift from observations. We also try to upgrade the schemes, e.g. the bottom tidal mixing parameterization's roughness, calculated from high-resolution topographic data using Gaussian weighting functions with cut-offs. We study the effects of their parameters to improve them. A FORTRAN program extracts topography data subsets of manageable size for a MATLAB program, tested on idealized cases, that visualizes and calculates roughness from them. Students are introduced to modeling a complex system, gain a deeper appreciation of climate science, programming skills, and familiarity with MATLAB, while furthering climate science by improving our mixing schemes. We are incorporating climate research into our college curriculum. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority-serving institution in central Brooklyn. Supported by NSF Award AGS-1359293.
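The Gaussian-weighted roughness calculation described above can be sketched as a weighted standard deviation of elevation within a cut-off radius. The scales and function name are placeholders (the group's actual MATLAB/FORTRAN implementation is not reproduced), and for brevity only the roughness at the grid center is computed.

```python
import numpy as np

def gaussian_roughness(topo, dx_km, sigma_km=50.0, cutoff_km=150.0):
    """Gaussian-weighted standard deviation of topography around a point.

    topo      : 2-D array of elevations (m) on a regular grid
    dx_km     : grid spacing (km)
    sigma_km  : e-folding scale of the Gaussian weights (placeholder value)
    cutoff_km : radius beyond which weights are set to zero (placeholder)
    """
    ny, nx = topo.shape
    y, x = np.indices((ny, nx))
    cy, cx = ny // 2, nx // 2
    r = np.hypot((y - cy) * dx_km, (x - cx) * dx_km)
    w = np.exp(-0.5 * (r / sigma_km) ** 2)
    w[r > cutoff_km] = 0.0
    w /= w.sum()
    mean = np.sum(w * topo)
    return np.sqrt(np.sum(w * (topo - mean) ** 2))

rng = np.random.default_rng(2)
print(gaussian_roughness(500.0 + 200.0 * rng.random((101, 101)), dx_km=4.0))
```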
NASA Astrophysics Data System (ADS)
Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling
2016-10-01
In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of quantum chromodynamics factorization. Using the experimental data available, we perform χ² analyses of end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, φ_A^i) ...
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
NASA Astrophysics Data System (ADS)
Wang, M.; Peng, Y.; Xie, X.; Liu, Y.
2017-12-01
Aerosol-cloud interaction continues to constitute one of the most significant uncertainties for anthropogenic climate perturbations. The parameterization of the cloud droplet size distribution and of the autoconversion process from large-scale cloud to rain can influence the estimation of the first and second aerosol indirect effects in global climate models. We design a series of experiments focusing on the microphysical cloud scheme of NCAR CAM5 (Community Atmosphere Model version 5) in a transient historical run with realistic sea surface temperature and sea ice. We investigate the effect of three empirical, two semi-empirical, and one analytical expression for the droplet size distribution on cloud properties and explore the statistical relationships between aerosol optical thickness (AOT) and simulated cloud variables, including cloud top droplet effective radius (CDER), cloud optical depth (COD), and cloud water path (CWP). We also introduce the droplet spectral shape parameter into the autoconversion process to incorporate the effect of the droplet size distribution on the second aerosol indirect effect. Three satellite datasets (MODIS Terra, MODIS Aqua, and AVHRR) are used to evaluate the simulated aerosol indirect effect. A clear decrease of CDER with increasing AOT is found from the east coast of China to the North Pacific Ocean and from the east coast of the USA to the North Atlantic Ocean. Analytical and semi-empirical expressions for the spectral shape parameterization show a stronger first aerosol indirect effect but a weaker second aerosol indirect effect than the empirical expressions because of the narrower droplet size distributions they produce.
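Bulk autoconversion schemes of the kind modified above typically depend on cloud water content and droplet number. The sketch below uses the widely cited Khairoutdinov-Kogan-type power law; the dependence on the droplet spectral shape parameter is represented only by a hypothetical multiplicative enhancement factor, since how that dependence enters the scheme described above is not specified in the abstract.

```python
def autoconversion_rate(qc, nc_cm3, spectral_enhancement=1.0):
    """Cloud-to-rain autoconversion of the Khairoutdinov-Kogan type.

    qc                   : cloud liquid water mixing ratio (kg kg-1)
    nc_cm3               : cloud droplet number concentration (cm-3)
    spectral_enhancement : stand-in factor for the droplet spectral shape
                           dependence (an assumption of this sketch)
    Returns dqr/dt in kg kg-1 s-1.
    """
    return spectral_enhancement * 1350.0 * qc ** 2.47 * nc_cm3 ** -1.79

# Broader spectra (larger enhancement) convert cloud water to rain faster,
# which weakens cloud lifetime (second indirect) effects.
print(autoconversion_rate(5e-4, 100.0, spectral_enhancement=1.0))
print(autoconversion_rate(5e-4, 100.0, spectral_enhancement=1.5))
```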
NASA Astrophysics Data System (ADS)
Paukert, M.; Hoose, C.; Simmel, M.
2017-03-01
In model studies of aerosol-dependent immersion freezing in clouds, a common assumption is that each ice nucleating aerosol particle corresponds to exactly one cloud droplet. In contrast, the immersion freezing of larger drops—"rain"—is usually represented by a liquid volume-dependent approach, making the parameterizations of rain freezing independent of specific aerosol types and concentrations. This may lead to inconsistencies when aerosol effects on clouds and precipitation shall be investigated, since raindrops consist of the cloud droplets—and corresponding aerosol particles—that have been involved in drop-drop-collisions. Here we introduce an extension to a two-moment microphysical scheme in order to account explicitly for particle accumulation in raindrops by tracking the rates of selfcollection, autoconversion, and accretion. This provides a direct link between ice nuclei and the primary formation of large precipitating ice particles. A new parameterization scheme of drop freezing is presented to consider multiple ice nuclei within one drop and effective drop cooling rates. In our test cases of deep convective clouds, we find that at altitudes which are most relevant for immersion freezing, the majority of potential ice nuclei have been converted from cloud droplets into raindrops. Compared to the standard treatment of freezing in our model, the less efficient mineral dust-based freezing results in higher rainwater contents in the convective core, affecting both rain and hail precipitation. The aerosol-dependent treatment of rain freezing can reverse the signs of simulated precipitation sensitivities to ice nuclei perturbations.
NASA Astrophysics Data System (ADS)
Mölg, Thomas; Cullen, Nicolas J.; Kaser, Georg
Broadband radiation schemes (parameterizations) are commonly used tools in glacier mass-balance modelling, but their performance at high altitude in the tropics has not been evaluated in detail. Here we take advantage of a high-quality 2 year record of global radiation (G) and incoming longwave radiation (L↓) measured on Kersten Glacier, Kilimanjaro, East Africa, at 5873 m a.s.l., to optimize parameterizations of G and L↓. We show that the two radiation terms can be related by an effective cloud-cover fraction neff, so G or L↓ can be modelled based on neff derived from measured L↓ or G, respectively. At neff = 1, G is reduced to 35% of clear-sky G, and L↓ increases by 45-65% (depending on altitude) relative to clear-sky L↓. Validation for a 1 year dataset of G and L↓ obtained at 4850 m on Glaciar Artesonraju, Peruvian Andes, yields a satisfactory performance of the radiation scheme. Whether this performance is acceptable for mass-balance studies of tropical glaciers is explored by applying the data from Glaciar Artesonraju to a physically based mass-balance model, which requires, among others, G and L↓ as forcing variables. Uncertainties in modelled mass balance introduced by the radiation parameterizations do not exceed those that can be caused by errors in the radiation measurements. Hence, this paper provides a tool for inclusion in spatially distributed mass-balance modelling of tropical glaciers and/or extension of radiation data when only G or L↓ is measured.
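To make the coupling between the two radiation terms concrete, the sketch below assumes a simple linear dependence on the effective cloud-cover fraction, anchored only by the figures quoted above (G falling to 35% of clear-sky G at neff = 1, and L↓ rising by roughly 45-65% at neff = 1); the functional forms actually fitted on Kersten Glacier may differ.

```python
def neff_from_global_radiation(G, G_clear):
    """Effective cloud fraction from measured and clear-sky global radiation.

    Assumes G = G_clear * (1 - 0.65 * neff), i.e. G drops to 35% of the
    clear-sky value at neff = 1 (a linear form assumed for illustration).
    """
    neff = (1.0 - G / G_clear) / 0.65
    return min(max(neff, 0.0), 1.0)

def longwave_from_neff(L_clear, neff, k=0.55):
    """Incoming longwave from clear-sky L and effective cloud fraction.

    k is the fractional enhancement at neff = 1; the quoted range is
    45-65% depending on altitude, so k = 0.45-0.65 (0.55 used here).
    """
    return L_clear * (1.0 + k * neff)

# Example: overcast conditions (G has dropped to 35% of clear-sky G)
neff = neff_from_global_radiation(G=280.0, G_clear=800.0)   # W m^-2
print(neff, longwave_from_neff(L_clear=220.0, neff=neff))
```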
A comparison study of two snow models using data from different Alpine sites
NASA Astrophysics Data System (ADS)
Piazzi, Gaia; Riboust, Philippe; Campo, Lorenzo; Cremonese, Edoardo; Gabellani, Simone; Le Moine, Nicolas; Morra di Cella, Umberto; Ribstein, Pierre; Thirel, Guillaume
2017-04-01
The hydrological balance of an Alpine catchment is strongly affected by snowpack dynamics. Meltwater supplies a significant component of the annual water budget, both in terms of soil moisture and runoff, which play a critical role in flood generation and impact water resource management in snow-dominated basins. Several snow models have been developed with variable degrees of complexity, mainly depending on their target application and the availability of computational resources and data. According to the level of detail, snow models range from statistical snowmelt-runoff and degree-day methods using composite snow-soil or explicit snow layer(s), to physically based energy balance snow models consisting of detailed internal snow-process schemes. Intermediate-complexity approaches have been widely developed, resulting in simplified versions of the physical parameterization schemes with reduced snowpack layering. Nevertheless, increasing model complexity does not necessarily entail improved model simulations. This study presents a comparison analysis between two snow models designed for hydrological purposes. The snow module developed at UPMC and IRSTEA is a mono-layer energy balance model analytically resolving heat and phase change equations within the snowpack. Vertical mass exchange within the snowpack is also analytically resolved. The model is intended to be used for hydrological studies but also to give a realistic estimation of the snowpack state at the watershed scale (SWE and snow depth). The structure of the model allows it to be easily calibrated using snow observations. This model is further presented in EGU2017-7492. The snow module of SMASH (Snow Multidata Assimilation System for Hydrology) consists of a multi-layer snow dynamics scheme. It is physically based on mass and energy balances and reproduces the main physical processes occurring within the snowpack: accumulation, density dynamics, melting, sublimation, radiative balance, and heat and mass exchanges. The model is driven by observed meteorological forcing data (air temperature, wind velocity, relative air humidity, precipitation and incident solar radiation) to provide an estimation of the snowpack state. In this study, no data assimilation (DA) is used; for more details on the DA scheme, please see EGU2017-7777. Observed data supplied by meteorological stations located at three experimental Alpine sites are used: Col de Porte (1325 m, France); Torgnon (2160 m, Italy); and Weissfluhjoch (2540 m, Switzerland). The performances of the two models are compared through evaluations of snow mass, snow depth, albedo and surface temperature simulations, in order to better understand and pinpoint the limitations and potential of the analyzed schemes and the impact of different parameterizations on the model simulations.
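As a reference point for the simpler end of the complexity range mentioned above, the sketch below implements a classic degree-day melt update; the degree-day factor and threshold temperature are illustrative values, not parameters of either model compared in the study.

```python
def degree_day_melt(swe, t_air, ddf=3.0, t_thresh=0.0, dt_days=1.0):
    """One-step degree-day snowmelt update.

    swe      : snow water equivalent [mm]
    t_air    : daily mean air temperature [deg C]
    ddf      : degree-day factor [mm degC^-1 day^-1] (illustrative)
    t_thresh : melt threshold temperature [deg C]
    """
    melt = ddf * max(t_air - t_thresh, 0.0) * dt_days
    melt = min(melt, swe)          # cannot melt more snow than is present
    return swe - melt, melt

swe, melt = degree_day_melt(swe=120.0, t_air=4.5)
print(swe, melt)   # remaining SWE and meltwater released [mm]
```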
A Vertically Resolved Planetary Boundary Layer
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1984-01-01
Increase of the vertical resolution of the GLAS Fourth Order General Circulation Model (GCM) near the Earth's surface and installation of a new package of parameterization schemes for subgrid-scale physical processes were sought so that the GLAS Model GCM will predict the resolved vertical structure of the planetary boundary layer (PBL) for all grid points.
An explicit microphysics thunderstorm model.
R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor
2005-01-01
The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...
NASA Technical Reports Server (NTRS)
McFarquhar, Greg M.; Zhang, Henian; Dudhia, Jimy; Halverson, Jeffrey B.; Heymsfield, Gerald; Hood, Robbie; Marks, Frank, Jr.
2003-01-01
Fine-resolution simulations of Hurricane Erin 2001 are conducted using the Penn State University/National Center for Atmospheric Research mesoscale model version 3.5 to investigate the role of thermodynamic, boundary layer and microphysical processes in Erin's growth and maintenance, and their effects on the horizontal and vertical distributions of hydrometeors. Through comparison against radar, radiometer, and dropsonde data collected during the Convection and Moisture Experiment 4, it is seen that realistic simulations of Erin are obtained provided that fine resolution simulations with detailed representations of physical processes are conducted. The principal findings of the study are as follows: 1) a new iterative condensation scheme, which limits the unphysical increase of equivalent potential temperature associated with most condensation schemes, increases the horizontal size of the hurricane, decreases its maximum rainfall rate, reduces its intensity, and makes its eye more moist; 2) in general, microphysical parameterization schemes with more categories of hydrometeors produce more intense hurricanes, larger hydrometeor mixing ratios, and more intense updrafts and downdrafts; 3) the choice of coefficients describing hydrometeor fall velocities has as large an impact on the hurricane simulations as does the choice of microphysical parameterization scheme, with no clear relationship between fall velocity and hurricane intensity; and 4) in order for a tropical cyclone to adequately intensify, an advanced boundary layer scheme (e.g., the Burk-Thompson scheme) must be used to represent boundary layer processes. The impacts of varying simulations on the horizontal and vertical distributions of different categories of hydrometeor species, on equivalent potential temperature, and on storm updrafts and downdrafts are examined to determine how the release of latent heat feeds back upon the structure of Erin. In general, all simulations tend to overpredict precipitation rate and hydrometeor mixing ratios. The ramifications of these findings for quantitative precipitation forecasts (QPFs) of tropical cyclones are discussed.
NASA Astrophysics Data System (ADS)
Kao, C.-Y. J.; Smith, W. S.
1999-05-01
A physically based cloud parameterization package, which includes the Arakawa-Schubert (AS) scheme for subgrid-scale convective clouds and the Sundqvist (SUN) scheme for nonconvective grid-scale layered clouds (hereafter referred to as the SUNAS cloud package), is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, Version 2 (CCM2). The AS scheme is used for a more reasonable heating distribution due to convective clouds and their associated precipitation. The SUN scheme allows for the prognostic computation of cloud water so that the cloud optical properties are more physically determined for shortwave and longwave radiation calculations. In addition, the formation of anvil-like clouds from deep convective systems is able to be simulated with the SUNAS package. A 10-year simulation spanning the period from 1980 to 1989 is conducted, and the effect of the cloud package on the January climate is assessed by comparing it with various available data sets and the National Centers for Environmental Prediction/NCAR reanalysis. Strengths and deficiencies of both the SUN and AS methods are identified and discussed. The AS scheme improves some aspects of the model dynamics and precipitation, especially with respect to the Pacific North America (PNA) pattern. CCM2's tendency to produce a westward bias of the 500 mbar stationary wave (time-averaged zonal anomalies) in the PNA sector is remedied apparently because of a less "locked-in" heating pattern in the tropics. The additional degree of freedom added by the prognostic calculation of cloud water in the SUN scheme produces interesting results in the modeled cloud and radiation fields compared with data. In general, too little cloud water forms in the tropics, while excessive cloud cover and cloud liquid water are simulated in midlatitudes. This results in a somewhat degraded simulation of the radiation budget. The overall simulated precipitation by the SUNAS package is, however, substantially improved over the original CCM2.
Atmospheric Electrical Modeling in Support of the NASA F-106 Storm Hazards Project
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.
1988-01-01
A recently developed storm electrification model (SEM) is used to investigate the operating environment of the F-106 airplane during the NASA Storm Hazards Project. The model is 2-D, time dependent and uses a bulkwater microphysical parameterization scheme. Electric charges and fields are included, and the model is fully coupled dynamically, microphysically and electrically. One flight showed that a high electric field was developed at the aircraft's operating altitude (28 kft) and that a strong electric field would also be found below 20 kft; however, this low-altitude, high-field region was associated with the presence of small hail, posing a hazard to the aircraft. An operational procedure to increase the frequency of low-altitude lightning strikes was suggested. To further the understanding of lightning within the cloud environment, a parameterization of the lightning process was included in the SEM. It accounted for the initiation, propagation, termination, and charge redistribution associated with an intracloud discharge. Finally, a randomized lightning propagation scheme was developed, and the effects of cloud particles on the initiation of lightning investigated.
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor
Celio, Christopher; Patterson, David; Asanović, Krste
2015-06-13
BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor ...
NASA Astrophysics Data System (ADS)
Hasan, Md Alfi; Islam, A. K. M. Saiful
2018-05-01
Accurate forecasting of heavy rainfall is crucial for the improvement of flood warning to prevent loss of life and property damage due to flash-flood-related landslides in the hilly region of Bangladesh. Forecasting heavy rainfall events is challenging, and the microphysics and cumulus parameterization schemes of the Weather Research and Forecasting (WRF) model play an important role. In this study, a comparison was made between observed and simulated rainfall using 19 different combinations of microphysics and cumulus schemes available in WRF over Bangladesh. Two severe rainfall events, on 11 June 2007 and during 24-27 June 2012 over the eastern hilly region of Bangladesh, were selected for performance evaluation using a number of indicators. A combination of the Stony Brook University microphysics scheme with the Tiedtke cumulus scheme is found to be the most suitable for reproducing those events. Another combination, of the single-moment 6-class microphysics scheme with the New Grell 3D cumulus scheme, also showed reasonable performance in forecasting heavy rainfall over this region. The sensitivity analysis confirms that cumulus schemes play a greater role than microphysics schemes in reproducing the heavy rainfall events using WRF.
NASA Astrophysics Data System (ADS)
Sinitsyn, Alexey
2017-04-01
Shortwave radiation is one of the key air-sea flux components, playing an important role in the ocean heat balance. The most accurate method of obtaining estimates of shortwave fluxes is field measurement at various locations around the globe; however, these data are very sparse. Different satellite missions and re-analyses provide alternative sources of shortwave radiation data, but they are themselves a source of uncertainty and need to be validated. An alternative way to produce long-term time series of shortwave radiation is to apply bulk parameterizations of shortwave radiation to Voluntary Observing Ship (VOS) cloud observations or to cloud measurements from CM-SAF. In our work, we compare three sources of shortwave flux estimates. In-situ measurements were obtained during 12 research cruises (320 days of measurements) in different regions of the Atlantic Ocean from 2004 to 2014. Shortwave radiation was measured by the Kipp&Zonen net radiometer CNR-1, and standard meteorological observations were also carried out during the cruises. Satellite data were the hourly and daily time series of incoming shortwave radiation with a spatial resolution of 0.05x0.05 degrees (METEOSAT MSG coverage: Europe, Africa, Atlantic Ocean), obtained by the MVIRI/SEVIRI instrument on METEOSAT. SEVIRI cloud properties were taken from the CLAAS-2 data record from CM-SAF. The bulk parameterizations of shortwave fluxes consisted of three different schemes based on total cloud cover only, or on total and low cloud cover together. The incoming shortwave radiation retrieved by satellite had a positive bias of 3 Wm-2 and an RMS of 69 Wm-2 compared to in-situ measurements. For different okta categories the bias was from 1 to 5 Wm-2 and the RMS from 41 to 71 Wm-2. The incoming shortwave radiation computed by bulk parameterization had a bias of -10 Wm-2 to 60 Wm-2, depending on the scheme and the region of the Atlantic Ocean. The results of the comparison suggest that satellite data are an excellent basis for testing bulk parameterizations of incoming shortwave radiation. Among the bulk parameterizations, the IORAS/SAIL scheme is the least biased algorithm for computing shortwave radiation from cloud observations.
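A minimal sketch of the kind of bulk scheme being tested is given below; it uses a Reed (1977)-style total-cloud-cover correction to a crude clear-sky flux rather than the IORAS/SAIL scheme itself, whose coefficients are not given here, and the constant atmospheric transmittance is an assumption for illustration only.

```python
import math

def clear_sky_sw(solar_elev_deg, s0=1366.0, transmittance=0.7):
    """Very simple clear-sky downward shortwave flux [W m^-2].

    A constant bulk transmittance is assumed for illustration; it is not
    part of any of the compared schemes.
    """
    mu = math.sin(math.radians(max(solar_elev_deg, 0.0)))
    return s0 * transmittance * mu

def bulk_sw(solar_elev_deg, total_cloud_frac):
    """Cloud-corrected flux following a Reed (1977)-type formula:
    F = F_clear * (1 - 0.62*C + 0.0019*h), with C the total cloud
    fraction and h the solar elevation in degrees."""
    f_clear = clear_sky_sw(solar_elev_deg)
    factor = 1.0 - 0.62 * total_cloud_frac + 0.0019 * solar_elev_deg
    return f_clear * min(factor, 1.0)

# Example: 6 okta of total cloud (C = 0.75) at 40 degrees solar elevation
print(bulk_sw(solar_elev_deg=40.0, total_cloud_frac=0.75))
```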
Stochastic Parameterization: Toward a New View of Weather and Climate Models
Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...
2017-03-31
The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.
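A commonly cited example of such a scheme is stochastically perturbed parameterization tendencies (SPPT), in which the net physics tendency is multiplied by a spatially and temporally correlated random field. The sketch below shows a single-column, AR(1)-in-time version with illustrative parameters; operational implementations use spectral space-time patterns and tapering near the surface and stratosphere.

```python
import numpy as np

def sppt_step(r_prev, tau=6 * 3600.0, dt=900.0, sigma=0.5, rng=np.random):
    """Advance an AR(1) perturbation field r with decorrelation time tau [s].

    sigma and tau are illustrative; operational SPPT combines several
    scales and uses a horizontally correlated pattern.
    """
    phi = np.exp(-dt / tau)
    noise = rng.standard_normal(np.shape(r_prev))
    return phi * r_prev + sigma * np.sqrt(1.0 - phi**2) * noise

def perturb_tendency(physics_tendency, r, clip=0.8):
    """Multiply the total parameterized tendency by (1 + r), with r clipped."""
    return (1.0 + np.clip(r, -clip, clip)) * physics_tendency

# Example: perturb a column of temperature tendencies [K/s] over one time step
r = sppt_step(np.zeros(40))            # one value per model level (simplified)
dT = np.full(40, 2.0e-5)
print(perturb_tendency(dT, r)[:3])
```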
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, Richard
2013-08-22
The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been a collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Liwei; Qian, Yun; Zhou, Tianjun
2014-10-01
In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Since the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criteria" (RH), which had not been considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, which are contributed by the positive responses of explicit rainfall. Following an increase of RH, increases in low-level convergence and the associated increases in cloud water favor an increase of the explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.
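The calibration loop behind such an approach can be sketched as below. This is a generic simulated-annealing parameter search against a skill-score cost, not the actual MVFSA implementation; `run_model` and `skill_cost` are hypothetical placeholders for a RegCM3 run and its evaluation against observations, and the cooling schedule and step sizes are assumptions.

```python
import numpy as np

def simulated_annealing(run_model, skill_cost, bounds, n_iter=200, seed=0):
    """Generic annealing search over a parameter vector within given bounds.

    run_model(params) -> simulated fields (placeholder for a model run)
    skill_cost(sim)   -> scalar cost, e.g. rainfall RMSE vs. observations
    bounds            -> list of (low, high) tuples, one per parameter
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    params = lo + rng.random(len(bounds)) * (hi - lo)
    cost = skill_cost(run_model(params))
    best_params, best_cost = params.copy(), cost
    for k in range(n_iter):
        temp = 1.0 / (1.0 + k)                        # cooling schedule (assumed)
        step = rng.normal(scale=0.1 * (hi - lo))
        trial = np.clip(params + temp * step, lo, hi)
        trial_cost = skill_cost(run_model(trial))
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if trial_cost < cost or rng.random() < np.exp(-(trial_cost - cost) / max(temp, 1e-6)):
            params, cost = trial, trial_cost
            if cost < best_cost:
                best_params, best_cost = params.copy(), cost
    return best_params, best_cost

# Toy usage: recover a two-parameter "model" whose best skill is at (0.5, 2.0)
toy_run = lambda p: p
toy_cost = lambda sim: (sim[0] - 0.5) ** 2 + (sim[1] - 2.0) ** 2
print(simulated_annealing(toy_run, toy_cost, bounds=[(0.0, 1.0), (0.0, 5.0)]))
```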
NASA Astrophysics Data System (ADS)
Fröhlich, K.; Schmidt, T.; Ern, M.; Preusse, P.; de La Torre, A.; Wickert, J.; Jacobi, Ch.
2007-12-01
Five years of global temperatures retrieved from radio occultations measured by Champ (Challenging Minisatellite Payload) and SAC-C (Satelite de Aplicaciones Cientificas-C) are analyzed for gravity waves (GWs). In order to separate GWs from other atmospheric variations, a high-pass filter was applied on the vertical profile. Resulting temperature fluctuations correspond to vertical wavelengths between 400 m (instrumental resolution) and 10 km (limit of the high-pass filter). The temperature fluctuations can be converted into GW potential energy, but for comparison with parameterization schemes GW momentum flux is required. We therefore used representative values for the vertical and horizontal wavelength to infer GW momentum flux from the GPS measurements. The vertical wavelength value is determined by high-pass filtering, the horizontal wavelength is adopted from a latitude-dependent climatology. The obtained momentum flux distributions agree well, both in global distribution and in absolute values, with simulations using the Warner and McIntyre parameterization (WM) scheme. However, discrepancies are found in the annual cycle. Online simulations, implementing the WM scheme in the mechanistic COMMA-LIM (Cologne Model of the Middle Atmosphere—Leipzig Institute for Meteorology) general circulation model (GCM), do not converge, demonstrating that a good representation of GWs in a GCM requires both a realistic launch distribution and an adequate representation of GW breaking and momentum transfer.
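The conversion from observed temperature amplitude to momentum flux follows, up to details of the filtering, the standard relation used for limb-sounding and GPS retrievals (e.g., Ern et al., 2004); the sketch below evaluates it for representative mid-stratospheric values, which are assumed here purely for illustration.

```python
def gw_momentum_flux(rho, lam_z, lam_h, t_amp, t_mean, N, g=9.81):
    """Absolute gravity-wave momentum flux [Pa] from temperature amplitude.

    |F| = (rho/2) * (lam_z/lam_h) * (g/N)^2 * (T_hat/T_bar)^2
    rho   : air density [kg m^-3]
    lam_z : vertical wavelength [m], lam_h : horizontal wavelength [m]
    t_amp : temperature fluctuation amplitude [K], t_mean : background T [K]
    N     : buoyancy frequency [s^-1]
    """
    return 0.5 * rho * (lam_z / lam_h) * (g / N) ** 2 * (t_amp / t_mean) ** 2

# Representative (assumed) mid-stratospheric values: ~7 mPa
print(gw_momentum_flux(rho=0.08, lam_z=6e3, lam_h=400e3,
                       t_amp=1.5, t_mean=220.0, N=0.02))
```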
NASA Astrophysics Data System (ADS)
Fei, Jianfang; Ding, Juli; Huang, Xiaogang; Cheng, Xiaoping; Hu, Xiaohua
2013-06-01
The Weather Research and Forecasting model version 3.2 (WRF v3.2) was used with the bogus data assimilation (BDA) scheme and sea spray parameterization (SSP), and experiments were conducted to assess the impacts of the BDA and SSP on prediction of the typhoon ducting process induced by Typhoon Mindulle (2004). The global positioning system (GPS) dropsonde observations were used for comparison. The results show that typhoon ducts are likely to form in every direction around the typhoon center, with the main type being elevated ducts. With the BDA scheme included in the model initialization, the model has a better performance in predicting the existence, distribution, and strength of typhoon ducts. This improvement is attributed to the positive effect of the BDA scheme on the typhoon's ambient boundary layer structure. Sea spray affects typhoon ducts mainly by changing the latent heat (LH) flux at the air-sea interface beyond 270 km from the typhoon center. The strength of the typhoon duct is enhanced when the boundary layer under this duct is cooled and moistened by the sea spray; otherwise, the typhoon duct is weakened. The sea spray-induced changes in the air-sea sensible heat (SH) flux and LH flux are concentrated in the maximum wind speed area near the typhoon center, and the changes are significantly weakened with increasing radial range.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
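The sketch below illustrates the kind of low-dimensional modulation parameterization described above: the tube current as a function of gantry angle is written as a linear combination of a few Gaussian basis functions and then scaled to a fixed exposure budget. The basis centers, widths, and the normalization are assumptions for illustration, not the authors' settings; the weight vector is what an optimizer such as CMA-ES would adjust.

```python
import numpy as np

def tube_current_profile(weights, n_views=360, sigma_deg=30.0, total_mAs=200.0):
    """Tube current vs. gantry angle as a sum of Gaussian basis functions.

    weights   : non-negative coefficients, one per basis function
    sigma_deg : common basis width in degrees (assumed)
    total_mAs : exposure budget the profile is normalized to (assumed)
    """
    weights = np.asarray(weights, dtype=float)
    angles = np.linspace(0.0, 360.0, n_views, endpoint=False)
    centers = np.linspace(0.0, 360.0, len(weights), endpoint=False)
    # Periodic angular distance between each view and each basis center
    d = np.abs(angles[:, None] - centers[None, :])
    d = np.minimum(d, 360.0 - d)
    profile = (weights[None, :] * np.exp(-0.5 * (d / sigma_deg) ** 2)).sum(axis=1)
    profile = np.maximum(profile, 1e-6)
    return profile * total_mAs / profile.sum()    # enforce the exposure budget

# Example: an 8-coefficient design vector
mA = tube_current_profile(weights=[1, 2, 1, 0.5, 0.5, 1, 2, 1])
print(mA.min(), mA.max(), mA.sum())
```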
NASA Technical Reports Server (NTRS)
Miller, Timothy L.; Cohen, Charles; Paxton, Jessica; Robertson, F. R. (Pete)
2009-01-01
Global forecasts were made with the 0.25-degree latitude version of GEOS-5, with the RAS scheme and with the Kain-Fritsch (K-F) scheme. Examination was made of the Katrina (2005) hurricane simulation. Replacement of the RAS convective scheme with the K-F scheme results in a much more vigorous Katrina, closer to reality; still, the result is not as vigorous as reality. In terms of the wind maximum, the gap was closed by 50%. The result seems to be due to the RAS scheme drying out the boundary layer, thus hampering the grid-scale secondary circulation and attendant cyclone development. The RAS case never developed a full warm core, whereas the K-F case did. Not shown here: the K-F scheme also resulted in a more vigorous storm than when GEOS-5 is run with no convective parameterization. Also not shown: an experiment in which the RAS firing level was moved up by 3 model levels resulted in a stronger, warm-core storm, though not as strong as the K-F case. Effects on storm track were noticed, but not studied.
Hierarchical atom type definitions and extensible all-atom force fields.
Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai
2016-03-15
The extensibility of a force field is key to solving the missing-parameter problem commonly found in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases above a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make extensible atom type definitions, which ensures that force fields developed based on the definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules.
NASA Technical Reports Server (NTRS)
Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
Landsurface hydrological parameterizations are implemented in the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: (1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface, and (2) a realistic soil moisture diffusion scheme for the movement of water in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new landsurface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show comparable improvements when compared to observations. The validation of model results is carried out from the large global (ocean and landsurface) scale to the zonal, continental, and finally the finer river basin scales.
Lehmann, Benjamin V.; Mao, Yao -Yuan; Becker, Matthew R.; ...
2016-12-28
Empirical methods for connecting galaxies to their dark matter halos have become essential for interpreting measurements of the spatial statistics of galaxies. In this work, we present a novel approach for parameterizing the degree of concentration dependence in the abundance matching method. Furthermore, this new parameterization provides a smooth interpolation between two commonly used matching proxies: the peak halo mass and the peak halo maximal circular velocity. This parameterization controls the amount of dependence of galaxy luminosity on halo concentration at a fixed halo mass. Effectively this interpolation scheme enables abundance matching models to have adjustable assembly bias in the resulting galaxy catalogs. With the new $400\,\mathrm{Mpc}\,h^{-1}$ DarkSky Simulation, whose larger volume provides lower sample variance, we further show that low-redshift two-point clustering and satellite fraction measurements from SDSS can already provide a joint constraint on this concentration dependence and the scatter within the abundance matching framework.
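A minimal sketch of such an interpolating proxy and of rank-order abundance matching is given below. The power-law form combining the virial and maximum circular velocities is one natural way to realize the interpolation described above, but the exponent convention and the absence of scatter here are simplifying assumptions rather than the paper's exact prescription.

```python
import numpy as np

def interpolated_proxy(v_max, v_vir, alpha):
    """Matching proxy that interpolates between mass-like and v_max regimes.

    alpha = 0 reduces to the virial velocity (a mass-like proxy), alpha = 1
    to the peak maximum circular velocity; intermediate values tune how
    strongly the proxy, and hence galaxy luminosity, depends on halo
    concentration at fixed mass.
    """
    return v_vir * (v_max / v_vir) ** alpha

def abundance_match(proxy, luminosities):
    """Rank-order matching: brightest galaxy to largest proxy, no scatter."""
    gal_sorted = np.sort(luminosities)[::-1]
    out = np.empty_like(gal_sorted)
    out[np.argsort(proxy)[::-1]] = gal_sorted
    return out

# Toy example with three halos
v_max = np.array([220.0, 180.0, 250.0])   # km/s
v_vir = np.array([200.0, 190.0, 210.0])   # km/s
lum = np.array([1.0, 2.5, 0.7])           # arbitrary luminosity units
print(abundance_match(interpolated_proxy(v_max, v_vir, alpha=0.6), lum))
```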
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, both by improving its reliability and by reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to the stochastically perturbed physical tendencies approach that is used operationally at ECMWF.
NASA Astrophysics Data System (ADS)
Zhang, K.; O'Donnell, D.; Kazil, J.; Stier, P.; Kinne, S.; Lohmann, U.; Ferrachat, S.; Croft, B.; Quaas, J.; Wan, H.; Rast, S.; Feichter, J.
2012-03-01
This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been brought into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Sensitivity experiments are carried out to analyse the effects of these improvements in the process representation on the simulated aerosol properties and global distribution. The new parameterizations that have largest impact on the global mean aerosol optical depth and radiative effects turn out to be the water uptake scheme and cloud microphysics. The former leads to a significant decrease of aerosol water contents in the lower troposphere, and consequently smaller optical depth; the latter results in higher aerosol loading and longer lifetime due to weaker in-cloud scavenging. The combined effects of the new/updated parameterizations are demonstrated by comparing the new model results with those from the earlier version, and against observations. Model simulations are evaluated in terms of aerosol number concentrations against measurements collected from twenty field campaigns as well as from fixed measurement sites, and in terms of optical properties against the AERONET measurements. Results indicate a general improvement with respect to the earlier version. The aerosol size distribution and spatial-temporal variance simulated by HAM2 are in better agreement with the observations. Biases in the earlier model version in aerosol optical depth and in the Ångström parameter have been reduced. The paper also points out the remaining model deficiencies that need to be addressed in the future.
ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities
NASA Astrophysics Data System (ADS)
Neggers, R.
2014-12-01
Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, is creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
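To make the idea of resolving transport in size space concrete, the sketch below discretizes a prescribed plume size density and sums the area-fraction-weighted vertical velocities into a bulk mass flux. The power-law size density and the size scaling of vertical velocity are assumptions for illustration only, not the ED(MF)n closure itself.

```python
import numpy as np

def bulk_mass_flux(sizes, rho=1.0, a_tot=0.05, p=-1.7, w0=1.0, w_exp=1.0 / 3.0):
    """Bulk convective mass flux from a discretized plume size density.

    sizes : plume size bins [m]
    a_tot : total updraft area fraction distributed over the bins (assumed)
    p     : power-law exponent of the size density (assumed)
    w0, w_exp : vertical velocity scaling w_i = w0 * (l_i / l_max)**w_exp (assumed)
    """
    sizes = np.asarray(sizes, dtype=float)
    density = sizes ** p                               # relative number per size bin
    area = a_tot * density * sizes**2 / np.sum(density * sizes**2)
    w = w0 * (sizes / sizes.max()) ** w_exp
    return rho * np.sum(area * w)                      # kg m^-2 s^-1

# Example: plumes from 100 m to 1 km, resolved in ten size bins
print(bulk_mass_flux(np.linspace(100.0, 1000.0, 10)))
```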
NASA Astrophysics Data System (ADS)
Zhang, Y.; Wen, X.
2017-12-01
The Yellow River source region is situated in the northeast Tibetan Plateau, which is considered a global climate change hot-spot and one of the most sensitive areas in terms of response to global warming in view of its fragile ecosystem. This region plays an irreplaceable role in the downstream water supply of the Yellow River because of its unique topography and variable climate. The water and energy cycle processes of the Yellow River source region from July to September 2015 were simulated using the WRF mesoscale numerical model. Two simulation groups used the Noah and CLM4 land surface parameterization schemes, respectively. Based on the GLDAS data set, ground automatic weather stations and the Zoige plateau wetland ecosystem research station, the simulated near-surface meteorological elements and surface energy parameters of the two schemes were compared with observations. The results showed that the daily variations of meteorological factors at the Zoige station in September were simulated quite well by the model. The correlation coefficients between the simulated and observed temperature and humidity for the CLM scheme were 0.88 and 0.83, the RMSEs were 1.94 °C and 9.97%, and the biases were 0.04 °C and 3.30%, which was closer to the observation data than the Noah scheme. The correlation coefficients of net radiation, surface heat flux, upward shortwave and upward longwave radiation were 0.86, 0.81, 0.84 and 0.88, respectively, in good correspondence with the observation data. The sensible heat flux and latent heat flux distributions of the Noah scheme corresponded quite well to GLDAS. The distribution and magnitude of 2-m relative humidity and soil moisture from the CLM scheme were closer to the surface observation data because the CLM scheme describes the photosynthesis and evapotranspiration of land surface vegetation more rationally. The simulation of precipitation and downward longwave radiation still needs to be improved. This study provides a theoretical basis for the numerical simulation of the water and energy cycle in the source region of the Yellow River basin.
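The evaluation above rests on three standard point-verification statistics; the short sketch below computes them for paired simulated and observed series, which is the sense in which they are quoted above (the series values are illustrative).

```python
import numpy as np

def verify(sim, obs):
    """Correlation coefficient, RMSE, and bias of simulated vs. observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    corr = np.corrcoef(sim, obs)[0, 1]
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    bias = np.mean(sim - obs)
    return corr, rmse, bias

# Illustrative 2-m temperature series [deg C]
obs = np.array([8.0, 9.5, 12.0, 14.5, 13.0, 10.0])
sim = np.array([8.4, 9.9, 11.5, 15.2, 13.6, 10.1])
print(verify(sim, obs))
```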
NASA Astrophysics Data System (ADS)
Alapaty, Kiran; Bullock, O. Russell; Herwehe, Jerold; Spero, Tanya; Nolte, Christopher; Mallard, Megan
2014-05-01
The Regional Climate Modeling Team at the U.S. Environmental Protection Agency has been improving the quality of regional climate fields generated by the Weather Research and Forecasting (WRF) model. Active areas of research include improving core physics within the WRF model and adapting the physics for regional climate applications, improving the representation of inland lakes that are unresolved by the driving fields, evaluating nudging strategies, and devising techniques to demonstrate value added by dynamical downscaling. These research efforts have been conducted using reanalysis data as driving fields, and then their results have been applied to downscale data from global climate models. The goals of this work are to equip environmental managers and policy/decision makers in the U.S. with science, tools, and data to inform decisions related to adapting to and mitigating the potential impacts of climate change on air quality, ecosystems, and human health. Our presentation will focus mainly on one area of the Team's research: Development and testing of a seamless convection parameterization scheme. For the continental U.S., one of the impediments to high-resolution (~3 to 15 km) climate modeling is related to the lack of a seamless convection parameterization that works across many scales. Since many convection schemes are not developed to work at those "gray scales", they often lead to excessive precipitation during warm periods (e.g., summer). The Kain-Fritsch (KF) convection parameterization in the WRF model has been updated such that it can be used seamlessly across spatial scales down to ~1 km grid spacing. First, we introduced subgrid-scale cloud and radiation interactions that had not been previously considered in the KF scheme. Then, a scaling parameter was developed to introduce scale-dependency in the KF scheme for use with various processes. In addition, we developed new formulations for: (1) convective adjustment timescale; (2) entrainment of environmental air; (3) impacts of convective updraft on grid-scale vertical velocity; (4) convective cloud microphysics; (5) stabilizing capacity; (6) elimination of double counting of precipitation; and (7) estimation of updraft mass flux at the lifting condensation level. Some of these scale-dependent formulations make the KF scheme operable at all scales up to about sub-kilometer grid resolution. In this presentation, regional climate simulations using the WRF model will be presented to demonstrate the effects of these changes to the KF scheme. Additionally, we briefly present results obtained from the improved representation of inland lakes, various nudging strategies, and added value of dynamical downscaling of regional climate. Requesting for a plenary talk for the session: "Regional climate modeling, including CORDEX" (session number CL6.4) at the EGU 2014 General Assembly, to be held 27 April - 2 May 2014 in Vienna, Austria.
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, May Wai San; Ovchinnikov, Mikhail; Wang, Minghuai
Potential ways of parameterizing vertical turbulent fluxes of hydrometeors are examined using a high-resolution cloud-resolving model. The cloud-resolving model uses the Morrison microphysics scheme, which contains prognostic variables for rain, graupel, ice, and snow. A benchmark simulation of a deep convection case with a horizontal grid spacing of 250 m is carried out to evaluate three different ways of parameterizing the turbulent vertical fluxes of hydrometeors: an eddy-diffusion approximation, a quadrant-based decomposition, and a scaling method that accounts for within-quadrant (subplume) correlations. Results show that the down-gradient nature of the eddy-diffusion approximation tends to transport mass away from concentrated regions, whereas the benchmark simulation indicates that the vertical transport tends to move mass from below the level of maximum to aloft. Unlike the eddy-diffusion approach, the quadrant-based decomposition is able to capture the signs of the flux gradient but underestimates the magnitudes. The scaling approach is shown to perform the best by accounting for within-quadrant correlations, and improves the results for all hydrometeors except snow. A sensitivity study is performed to examine how vertical transport may affect the microphysics of the hydrometeors. The vertical transport of each hydrometeor type is artificially suppressed in each test. Results from the sensitivity tests show that cloud-droplet-related processes are most sensitive to suppressed rain or graupel transport. In particular, suppressing rain or graupel transport has a strong impact on the production of snow and ice aloft. Lastly, a viable subgrid-scale hydrometeor transport scheme in an assumed probability density function parameterization is discussed.
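The quadrant-based idea can be sketched as below: the resolved turbulent flux of a hydrometeor mixing ratio is split into contributions from the four quadrants defined by the signs of the vertical-velocity and mixing-ratio perturbations. The within-quadrant scaling factor of the third approach is not reproduced here, and the sample values are a toy illustration.

```python
import numpy as np

def quadrant_flux(w, q):
    """Decompose the resolved vertical flux <w'q'> into four quadrants.

    w, q : 1-D arrays of vertical velocity and hydrometeor mixing ratio
           sampled over a horizontal model level.
    Returns the total flux and a dict of per-quadrant contributions.
    """
    wp = w - w.mean()
    qp = q - q.mean()
    total = np.mean(wp * qp)
    quadrants = {
        "w+q+": np.mean(np.where((wp > 0) & (qp > 0), wp * qp, 0.0)),
        "w+q-": np.mean(np.where((wp > 0) & (qp <= 0), wp * qp, 0.0)),
        "w-q+": np.mean(np.where((wp <= 0) & (qp > 0), wp * qp, 0.0)),
        "w-q-": np.mean(np.where((wp <= 0) & (qp <= 0), wp * qp, 0.0)),
    }
    return total, quadrants

# Toy level: updrafts carrying high mixing ratios upward dominate the flux
w = np.array([2.0, -1.0, 0.5, -0.5, 3.0, -2.0])          # m/s
q = np.array([1e-3, 2e-4, 8e-4, 1e-4, 1.5e-3, 3e-4])     # kg/kg
print(quadrant_flux(w, q))
```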
Paukert, M.; Hoose, C.; Simmel, M.
2017-02-21
In model studies of aerosol-dependent immersion freezing in clouds, a common assumption is that each ice nucleating aerosol particle corresponds to exactly one cloud droplet. Conversely, the immersion freezing of larger drops—“rain”—is usually represented by a liquid volume-dependent approach, making the parameterizations of rain freezing independent of specific aerosol types and concentrations. This may lead to inconsistencies when aerosol effects on clouds and precipitation shall be investigated, since raindrops consist of the cloud droplets—and corresponding aerosol particles—that have been involved in drop-drop-collisions. We introduce an extension to a two-moment microphysical scheme in order to account explicitly for particle accumulation in raindrops by tracking the rates of selfcollection, autoconversion, and accretion. This also provides a direct link between ice nuclei and the primary formation of large precipitating ice particles. A new parameterization scheme of drop freezing is presented to consider multiple ice nuclei within one drop and effective drop cooling rates. In our test cases of deep convective clouds, we find that at altitudes which are most relevant for immersion freezing, the majority of potential ice nuclei have been converted from cloud droplets into raindrops. Compared to the standard treatment of freezing in our model, the less efficient mineral dust-based freezing results in higher rainwater contents in the convective core, affecting both rain and hail precipitation. The aerosol-dependent treatment of rain freezing can reverse the signs of simulated precipitation sensitivities to ice nuclei perturbations.
NASA Technical Reports Server (NTRS)
Liston, G. E.; Sud, Y. C.; Wood, E. F.
1994-01-01
To relate general circulation model (GCM) hydrologic output to readily available river hydrographic data, a runoff routing scheme that routes gridded runoffs through regional- or continental-scale river drainage basins is developed. By following the basin overland flow paths, the routing model generates river discharge hydrographs that can be compared to observed river discharges, thus allowing an analysis of the GCM representation of monthly, seasonal, and annual water balances over large regions. The runoff routing model consists of two linear reservoirs, a surface reservoir and a groundwater reservoir, which store and transport water. The water transport mechanisms operating within these two reservoirs are differentiated by their time scales; the groundwater reservoir transports water much more slowly than the surface reservoir. The groundwater reservoir feeds the corresponding surface store, and the surface stores are connected via the river network. The routing model is implemented over the Global Energy and Water Cycle Experiment (GEWEX) Continental-Scale International Project Mississippi River basin on a rectangular grid of 2 deg X 2.5 deg. Two land surface hydrology parameterizations provide the gridded runoff data required to run the runoff routing scheme: the variable infiltration capacity model, and the soil moisture component of the simple biosphere model. These parameterizations are driven with 4 deg X 5 deg gridded climatological potential evapotranspiration and 1979 First Global Atmospheric Research Program (GARP) Global Experiment precipitation. These investigations have quantified the importance of physically realistic soil moisture holding capacities, evaporation parameters, and runoff mechanisms in land surface hydrology formulations.
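A minimal sketch of the two-linear-reservoir idea for a single grid cell is given below; the residence times and time step are illustrative values, and the downstream routing along the river network is reduced to a single outflow term that would, in the full scheme, be passed to the next cell.

```python
def route_cell(runoff_surface, runoff_base, s_surf, s_gw,
               k_surf=2.0, k_gw=30.0, dt=1.0):
    """One time step of two-linear-reservoir routing for a single grid cell.

    runoff_surface, runoff_base : runoff inputs from the land scheme [mm/day]
    s_surf, s_gw                : surface and groundwater storages [mm]
    k_surf, k_gw                : residence times [days] (illustrative values)
    Returns updated storages and the discharge leaving the surface reservoir.
    """
    # Groundwater reservoir: slow store that drains into the surface store
    baseflow = s_gw / k_gw
    s_gw = s_gw + (runoff_base - baseflow) * dt
    # Surface reservoir: fast store fed by surface runoff and baseflow
    outflow = s_surf / k_surf
    s_surf = s_surf + (runoff_surface + baseflow - outflow) * dt
    return s_surf, s_gw, outflow

s_surf, s_gw = 10.0, 100.0
for day in range(3):
    s_surf, s_gw, q = route_cell(5.0, 1.0, s_surf, s_gw)
    print(round(q, 2))     # discharge [mm/day] leaving the cell each day
```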
NASA Astrophysics Data System (ADS)
Dietlicher, Remo; Neubauer, David; Lohmann, Ulrike
2018-04-01
A new scheme for stratiform cloud microphysics has been implemented in the ECHAM6-HAM2 general circulation model. It features a widely used description of cloud water with two categories for cloud droplets and raindrops. The unique aspect of the new scheme is the break with the traditional approach to describe cloud ice analogously. Here we parameterize cloud ice by a single category that predicts bulk particle properties (P3). This method has already been applied in a regional model and most recently also in the Community Atmosphere Model 5 (CAM5). A single cloud ice category does not rely on heuristic conversion rates from one category to another. Therefore, it is conceptually easier and closer to first principles. This work shows that a single category is a viable approach to describe cloud ice in climate models. Prognostic representation of sedimentation is achieved by a nested approach for sub-stepping the cloud microphysics scheme. This yields good results in terms of accuracy and performance as compared to simulations with high temporal resolution. Furthermore, the new scheme allows for a competition between various cloud processes and is thus able to unbiasedly represent the ice formation pathway from nucleation to growth by vapor deposition and collisions to sedimentation. Specific aspects of the P3 method are evaluated. We could not produce a purely stratiform cloud where rime growth dominates growth by vapor deposition and conclude that the lack of appropriate conditions renders the prognostic parameters associated with the rime properties unnecessary. Limitations inherent in a single category are examined.
NASA Astrophysics Data System (ADS)
Astitha, M.; Lelieveld, J.; Abdel Kader, M.; Pozzer, A.; de Meij, A.
2012-11-01
Airborne desert dust influences radiative transfer, atmospheric chemistry and dynamics, as well as nutrient transport and deposition. It directly and indirectly affects climate on regional and global scales. Two versions of a parameterization scheme to compute desert dust emissions are incorporated into the atmospheric chemistry general circulation model EMAC (ECHAM5/MESSy2.41 Atmospheric Chemistry). One uses a globally uniform soil particle size distribution, whereas the other explicitly accounts for different soil textures worldwide. We have tested these two versions and investigated the sensitivity to input parameters, using remote sensing data from the Aerosol Robotic Network (AERONET) and dust concentrations and deposition measurements from the AeroCom dust benchmark database (and others). The two versions are shown to produce similar atmospheric dust loads in the N-African region, while they deviate in the Asian, Middle Eastern and S-American regions. The dust outflow from Africa over the Atlantic Ocean is accurately simulated by both schemes, in magnitude, location and seasonality. Approximately 70% of the modelled annual deposition data and 70-75% of the modelled monthly aerosol optical depth (AOD) in the Atlantic Ocean stations lay in the range 0.5 to 2 times the observations for all simulations. The two versions have similar performance, even though the total annual source differs by ~50%, which underscores the importance of transport and deposition processes (being the same for both versions). Even though the explicit soil particle size distribution is considered more realistic, the simpler scheme appears to perform better in several locations. This paper discusses the differences between the two versions of the dust emission scheme, focusing on their limitations and strengths in describing the global dust cycle and suggests possible future improvements.
Global land-atmosphere coupling associated with cold climate processes
NASA Astrophysics Data System (ADS)
Dutra, Emanuel
This dissertation constitutes an assessment of the role of cold processes, associated with snow cover, in controlling land-atmosphere coupling. The work was based on model simulations, including offline simulations with the land surface model HTESSEL and coupled atmosphere simulations with the EC-EARTH climate model. A revised snow scheme was developed and tested in HTESSEL and EC-EARTH. The snow scheme is currently operational in the European Centre for Medium-Range Weather Forecasts integrated forecast system and in the default configuration of EC-EARTH. The improved representation of snowpack dynamics in HTESSEL resulted in improvements in the near-surface temperature simulations of EC-EARTH. The new snow scheme development was complemented with an optional multi-layer version that showed its potential for modeling thick snowpacks. A key process was snow thermal insulation, which led to significant improvements of the surface water and energy balance components. Similar findings were observed when coupling the snow scheme to lake ice, where the simulated lake ice duration was significantly improved. An assessment of the snow cover sensitivity to horizontal resolution, parameterizations and atmospheric forcing within HTESSEL highlighted the importance of atmospheric forcing accuracy and snowpack parameterizations over that of horizontal resolution in flat regions. A set of experiments with and without free snow evolution was carried out with EC-EARTH to assess the impact of the interannual variability of snow cover on near-surface and soil temperatures. It was found that snow cover interannual variability explained up to 60% of the total interannual variability of near-surface temperature over snow-covered regions. Although these findings are model dependent, the results showed consistency with previously published work. Furthermore, the detailed validation of the snow dynamics simulations in HTESSEL and EC-EARTH guarantees consistency of the results.
NASA Technical Reports Server (NTRS)
Steffen, K.; Abdalati, W.; Stroeve, J.; Key, J.
1994-01-01
The proposed research involves the application of multispectral satellite data in combination with ground truth measurements to monitor surface properties of the Greenland Ice Sheet which are essential for describing the energy and mass balance of the ice sheet. Several key components of the energy balance are parameterized using satellite data and in situ measurements. The analysis will be done for a ten-year time period in order to obtain statistics on the seasonal and interannual variations of the surface processes and the climatology. Our goal is to investigate to what accuracy and over what geographic areas large-scale snow properties and radiative fluxes can be derived from a combination of available remote sensing and meteorological data sets. Operational satellite sensors are calibrated based on ground measurements and atmospheric modeling prior to large-scale analysis to ensure the quality of the satellite data. Further, several satellite sensors of different spatial and spectral resolution are intercompared to assess the parameter accuracy. Proposed parameterization schemes to derive key components of the energy balance from satellite data are validated. For the understanding of the surface processes, a field program was designed to collect information on spectral albedo, specular reflectance, soot content, grain size and the physical properties of different snow types. Further, the radiative and turbulent fluxes at the ice/snow surface are monitored for the parameterization and interpretation of the satellite data. The expected results include several baseline data sets of albedo, surface temperature, radiative fluxes, and different snow types for the entire Greenland Ice Sheet. These climatological data sets will be of potential use for climate sensitivity studies in the context of future climate change.
NASA Astrophysics Data System (ADS)
Costa, Andrea; Doglioli, Andrea M.; Marsaleix, Patrick; Petrenko, Anne A.
2017-12-01
In situ measurements of the kinetic energy dissipation rate ε and estimates of eddy viscosity KZ from the Gulf of Lion (NW Mediterranean Sea) are used to assess the ability of k - ɛ and k - ℓ closure schemes to predict microscale turbulence in a 3-D numerical ocean circulation model. Two different surface boundary conditions are considered in order to investigate their influence on each closure scheme's performance. The effect of two types of stability functions and optical schemes on the k - ɛ scheme is also explored. Overall, the 3-D model predictions are much closer to the in situ data in the surface mixed layer than below it. Above the mixed layer depth, we identify one model configuration that outperforms all the others. This configuration employs a k - ɛ scheme with Canuto A stability functions, surface boundary conditions parameterizing wave breaking and an appropriate photosynthetically available radiation attenuation length. Below the mixed layer depth, reliability is limited by the model's resolution and the specification of a hard threshold on the minimum turbulent kinetic energy.
Evaluating Cloud Initialization in a Convection-permitting NWP Model
NASA Astrophysics Data System (ADS)
Li, Jia; Chen, Baode
2015-04-01
In general, to avoid the "double counting of precipitation" problem, it is common practice to turn off convective parameterization in convection-permitting NWP models. However, if there is no cloud information in the initial conditions, the occurrence of precipitation can be delayed by the spin-up of the cloud fields or microphysical variables. In this study, we utilized the complex cloud analysis package from the Advanced Regional Prediction System (ARPS) to adjust the initial model states of water substances such as cloud water, cloud ice, and rain water, that is, to initialize the microphysical variables (i.e., hydrometeors), mainly based on radar reflectivity observations. Using the Advanced Research WRF (ARW) model, numerical experiments with/without cloud initialization and convective parameterization were carried out at grey-zone resolutions (i.e., 1, 3, and 9 km). The results from the experiments without convective parameterization indicate that model initialization with radar reflectivity can significantly reduce the spin-up time and accurately simulate precipitation at the initial time. In addition, it helps to improve the location and intensity of predicted precipitation. At grey-zone resolutions (i.e., 1, 3, and 9 km), using the cumulus convective parameterization scheme (without radar data) cannot produce realistic precipitation at early times. The issues related to microphysical parameterization associated with cloud initialization are also discussed.
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
Appropriate choice of physics options among many physics parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary with geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble that comprises 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we evaluate not only temperature, but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No single ensemble member explicitly showed the best performance for all events, all variables, and all evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had the largest influence, the radiation scheme had a moderate influence, and the microphysics scheme had the least influence on temperature simulations. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of the Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although our results are inevitably region-specific, they will be useful to WRF users investigating heatwave dynamics elsewhere.
NASA Astrophysics Data System (ADS)
Tsai, T. C.; Chen, J. P.; Dearden, C.
2014-12-01
The wide variety of ice crystal shapes and growth habits makes their representation in cloud models a complicated issue. This study developed a bulk ice adaptive-habit parameterization based on the theoretical approach of Chen and Lamb (1994) and introduced a six-class, double-moment (mass and number) bulk microphysics scheme with gamma-type size distribution functions. Both proposed schemes have been implemented into the Weather Research and Forecasting (WRF) model, forming a new multi-moment bulk microphysics scheme. Two new moments, ice crystal shape and volume, are included for tracking the adaptive habit and apparent density of pristine ice. A closure technique is developed to solve the time evolution of the bulk moments. To verify the bulk ice habit parameterization, parcel-type (zero-dimensional) calculations were conducted and compared with binned numerical calculations. The results showed that a flexible size spectrum is important for numerical accuracy, that ice shape can significantly enhance diffusional growth, and that it is important to consider the memory of growth habit (adaptive growth) under varying environmental conditions. In addition, results derived with the 3-moment method were much closer to the binned calculations. The DIAMET field campaign was selected for real-case simulations with the WRF model. The simulations were performed with the traditional spherical-ice and the new adaptive-shape schemes to evaluate the effect of crystal habits. The main features of the narrow rain band, as well as the embedded precipitation cells, in the cold-front case were well captured by the model. Furthermore, the simulations showed good agreement with the aircraft observations of ice particle number concentration, ice crystal aspect ratio, and deposition heating rate, especially within the temperature region of secondary ice multiplication.
NASA Astrophysics Data System (ADS)
Salvador, Nadir; Reis, Neyval Costa; Santos, Jane Meri; Albuquerque, Taciana Toledo de Almeida; Loriato, Ayres Geraldo; Delbarre, Hervé; Augustin, Patrick; Sokolov, Anton; Moreira, Davidson Martins
2016-12-01
Three atmospheric boundary layer (ABL) schemes and two land surface models used in the Weather Research and Forecasting (WRF) model, version 3.4.1, were evaluated with numerical simulations using data from the north coast of France (Dunkerque). The ABL schemes YSU (Yonsei University), ACM2 (Asymmetric Convective Model version 2), and MYJ (Mellor-Yamada-Janjic) were combined with two land surface models, Noah and RUC (Rapid Update Cycle), in order to determine their performance under sea-breeze conditions. Particular attention is given to the determination of the thermal internal boundary layer (TIBL), which is very important in air pollution scenarios. The other physics parameterizations used in the model were kept the same for all simulations. The sea-breeze dynamics predicted by the WRF model were compared with observations taken from sodar (sonic detection and ranging) and lidar (light detection and ranging) systems and a meteorological surface station to verify that the model had reasonable accuracy in predicting the behavior of local circulations. The temporal comparisons of the vertical and horizontal wind speeds and wind directions predicted by the WRF model showed that all runs detected the passage of the sea-breeze front. However, except for the combination of MYJ and Noah, all runs had a time delay compared with the frontal passage measured by the instruments. The study shows that the synoptic wind attenuated the intensity and penetration of the sea breeze. This produced short-term changes in vertical mixing and in soil temperature that could not be captured by the WRF model simulations with the computational grid used. Additionally, among the tested schemes, the combination of the local-closure MYJ scheme with the Noah land surface scheme produced the most accurate ABL height compared with observations, and it was also able to capture the TIBL.
2012-09-30
oscillation (SAO) and quasi-biennial oscillation (QBO) of stratospheric equatorial winds in long-term (10-year) nature runs. The ability of these new schemes ... to generate and maintain tropical SAO and QBO circulations in Navy models for the first time is an important breakthrough, since these circulations
Electronic Polarizability and the Effective Pair Potentials of Water
Leontyev, I. V.; Stuchebrukhov, A. A.
2014-01-01
Employing the continuum dielectric model for electronic polarizability, we have developed a new consistent procedure for parameterization of the effective nonpolarizable potential of liquid water. The model explains the striking difference between the value of the water dipole moment μ ≈ 3 D reported in recent ab initio and experimental studies and the value μeff ≈ 2.3 D typically used in empirical potentials, such as TIP3P or SPC/E. It is shown that the consistency of the parameterization scheme can be achieved if the magnitude of the effective dipole of water is understood as a scaled value μeff = μ/√εel, where εel = 1.78 is the electronic (high-frequency) dielectric constant of water, and a new electronic polarization energy term, missing in the previous theories, is included. The new term is evaluated by using Kirkwood-Onsager theory. The new scheme is fully consistent with experimental data on the enthalpy of vaporization, density, diffusion coefficient, and static dielectric constant. The new theoretical framework provides important insights into the nature of the effective parameters, which is crucial when computational models of liquid water are used for simulations in different environments, such as proteins, or for interaction with solutes. PMID:25383062
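The dipole scaling quoted above can be checked with one line of arithmetic. A minimal sketch, assuming the square-root scaling μeff = μ/√εel that is consistent with the quoted numbers (μ ≈ 3 D, εel = 1.78, μeff ≈ 2.3 D):

```python
import math

mu_ab_initio = 3.0      # D, condensed-phase dipole from ab initio/experimental estimates
eps_el = 1.78           # electronic (high-frequency) dielectric constant of water

mu_eff = mu_ab_initio / math.sqrt(eps_el)
print(f"mu_eff = {mu_eff:.2f} D")   # ~2.25 D, consistent with the ~2.3 D used in SPC/E-like models
```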
NASA Astrophysics Data System (ADS)
Xu, Xin; Wang, Yuan; Xue, Ming; Zhu, Kefeng
2017-11-01
The impact of horizontal propagation of mountain waves on the orographic gravity wave drag (OGWD) in the stratosphere and lower mesosphere of the Northern Hemisphere is evaluated for the first time. Using fine-resolution (1 arc min) terrain data and 2.5°×2.5° European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data during 2011-2016, two sets of OGWD are calculated offline according to a traditional parameterization scheme (without horizontal propagation) and a newly proposed scheme (with horizontal propagation). In both cases, the zonal mean OGWDs show similar spatial patterns and undergo a notable seasonal variation. In winter, the OGWD is mainly distributed in the upper stratosphere and lower mesosphere of middle to high latitudes, whereas the summertime OGWD is confined to the lower stratosphere. Comparison between the two sets of OGWD reveals that the horizontal propagation of mountain waves tends to decrease (increase) the OGWD in the lower stratosphere (middle to upper stratosphere and lower mesosphere). Consequently, including the horizontal propagation of mountain waves in the parameterization of OGWD can reduce the excessive OGWD in the lower stratosphere and strengthen the insufficient gravity wave forcing in the mesosphere, which are known problems of traditional OGWD schemes. The impact of horizontal propagation is more prominent in winter than in summer, with the OGWD over the western Tibetan Plateau, the Rocky Mountains, and Greenland most notably affected.
Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration
NASA Astrophysics Data System (ADS)
Bai, P.
2017-12-01
Hydrological model parameters are commonly calibrated against observed streamflow data. This calibration strategy is questionable when the modeled hydrological variables of interest are not limited to streamflow. Well-performing streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models. The multi-objective calibration scheme was compared with the traditional single-objective calibration scheme, which is based only on streamflow observations. Two monthly hydrological models were employed on 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration provided more reliable TWSC and ET simulations than the single-objective calibration, without significant deterioration in the accuracy of the streamflow simulations. In addition, the improvements of TWSC and ET simulations were more significant in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in parameter estimation to improve the performance of hydrological models.
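A minimal sketch of the kind of multi-objective function such a calibration optimizes, assuming (as an illustration, not the authors' exact formulation) that streamflow and TWSC skill are each measured by a Nash-Sutcliffe efficiency and combined with a weight w:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(q_sim, q_obs, twsc_sim, twsc_obs, w=0.5):
    """Weighted combination of streamflow and TWSC skill, to be maximized in calibration."""
    return w * nse(q_sim, q_obs) + (1.0 - w) * nse(twsc_sim, twsc_obs)

# Hypothetical monthly series (streamflow in mm, TWSC in cm equivalent water height)
q_obs, q_sim = [10, 14, 9, 22], [11, 13, 10, 20]
t_obs, t_sim = [5, -3, 2, 7], [4, -2, 1, 8]
print(multi_objective(q_sim, q_obs, t_sim, t_obs))
```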
NASA Technical Reports Server (NTRS)
Mocko, David M.; Sud, Y. C.
2000-01-01
Refinements to the snow-physics scheme of SSiB (Simplified Simple Biosphere Model) are described and evaluated. The upgrades include a partial redesign of the conceptual architecture to better simulate the diurnal temperature of the snow surface. For a deep snowpack, there are two separate prognostic snow temperature layers: the top layer responds to diurnal fluctuations in the surface forcing, while the deep layer exhibits a slowly varying response. In addition, the use of a very deep soil temperature and a treatment of snow aging, with its influence on snow density, are parameterized and evaluated. The upgraded snow scheme produces better timing of snow melt in GSWP-style simulations using ISLSCP Initiative I data for 1987-1988 in the Russian Wheat Belt region. To simulate more realistic runoff in regions with high orographic variability, additional improvements are made to SSiB's soil hydrology. These improvements include an orography-based surface runoff scheme as well as interaction with a water table below SSiB's three soil layers. The addition of these parameterizations further helps to simulate more realistic runoff and accompanying prognostic soil moisture fields in the GSWP-style simulations. In intercomparisons of the performance of the new snow-physics SSiB with its earlier versions using an 18-year single-site dataset from Valdai, Russia, the version of SSiB described in this paper again produces the earliest onset of snow melt. Soil moisture and deep soil temperatures also compare favorably with observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the IC/BC option. Simulation generally benefits from finer resolutions up to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands for extreme storm event forecasting and analyses for design, operations and risk assessment of large water infrastructures.
NASA Astrophysics Data System (ADS)
Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.
2014-12-01
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained by identical experimental conditions, is important to accurately simulate the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled the ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate their sensitivity to the new parameterization in comparison to existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, than suggested by previous parameterizations.
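For reference, the ice nucleation active surface-site density is commonly diagnosed from the frozen fraction of the aerosol population and the surface area per particle. A minimal sketch, assuming the standard definition ns = -ln(1 - fice)/Ap; the fit coefficients of the hematite parameterization itself are not reproduced here:

```python
import numpy as np

def ns_from_frozen_fraction(f_ice, surface_area_per_particle_m2):
    """Ice nucleation active surface-site density n_s [m^-2] from the frozen fraction."""
    f_ice = np.asarray(f_ice, dtype=float)
    return -np.log(1.0 - f_ice) / surface_area_per_particle_m2

# Hypothetical example: 5% of particles frozen, 1e-13 m^2 of surface area per particle
print(ns_from_frozen_fraction(0.05, 1e-13))   # ~5.1e11 m^-2
```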
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Ma, Po-Lun; Xiao, Heng
2013-08-29
The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale-aware or, more specifically, resolution-aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution-dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed, in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution dependence of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or as "resolution dependence of the second type," RA2, which requires that the parameterization reproduces the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to the limitations of parameterizations in accurately estimating reality with coarse grid spacing. The type of resolution awareness developers should target depends upon the particular modeler's application.
Leaf chlorophyll constraint on model simulated gross primary productivity in agricultural systems
NASA Astrophysics Data System (ADS)
Houborg, Rasmus; McCabe, Matthew F.; Cescatti, Alessandro; Gitelson, Anatoly A.
2015-12-01
Leaf chlorophyll content (Chll) may serve as an observational proxy for the maximum rate of carboxylation (Vmax), which describes leaf photosynthetic capacity and represents the single most important control on modeled leaf photosynthesis within most Terrestrial Biosphere Models (TBMs). The parameterization of Vmax is associated with great uncertainty as it can vary significantly between plants and in response to changes in leaf nitrogen (N) availability, plant phenology and environmental conditions. Houborg et al. (2013) outlined a semi-mechanistic relationship between Vmax25 (Vmax normalized to 25 °C) and Chll based on inter-linkages between Vmax25, Rubisco enzyme kinetics, N and Chll. Here, these relationships are parameterized for a wider range of important agricultural crops and embedded within the leaf photosynthesis-conductance scheme of the Community Land Model (CLM), bypassing the questionable use of temporally invariant and broadly defined plant functional type (PFT) specific Vmax25 values. In this study, the new Chll-constrained version of CLM is refined with an updated parameterization scheme for specific application to soybean and maize. The benefit of using in-situ measured and satellite-retrieved Chll for constraining model simulations of Gross Primary Productivity (GPP) is evaluated over fields in central Nebraska, U.S.A. between 2001 and 2005. Landsat-based Chll time-series records derived from the Regularized Canopy Reflectance model (REGFLEC) are used as forcing for the CLM. Validation of simulated GPP against 15 site-years of flux tower observations demonstrates the utility of Chll as a model constraint, with the coefficient of efficiency increasing from 0.91 to 0.94 and from 0.87 to 0.91 for maize and soybean, respectively. Model performance particularly improves during the late reproductive and senescence stages, where the largest temporal variations in Chll (averaging 35-55 μg cm-2 for maize and 20-35 μg cm-2 for soybean) are observed. While prolonged periods of vegetation stress did not occur over the studied fields, given the usefulness of Chll as an indicator of plant health, enhanced GPP predictability should be expected in fields exposed to longer periods of moisture and nutrient stress. While the results support the use of Chll as an observational proxy for Vmax25, future work needs to be directed towards improving the Chll retrieval accuracy from space observations and developing consistent and physically realistic modeling schemes that can be parameterized with acceptable accuracy over spatial and temporal domains.
Evaluating and Improving Cloud Processes in the Multi-Scale Modeling Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackerman, Thomas P.
2015-03-01
The research performed under this grant was intended to improve the embedded cloud model in the Multi-scale Modeling Framework (MMF) for convective clouds by using a two-moment microphysics scheme rather than the single-moment scheme used in all MMF runs to date. The technical report and associated documents describe the results of testing the cloud-resolving model with fixed boundary conditions and evaluating the model results against data. The overarching conclusion is that such model evaluations are problematic because errors in the forcing fields control the results so strongly that variations in parameterization values cannot be usefully constrained.
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient, lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass. A mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer; thus the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis) and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two design parameter schemes. This tool easily allows the user to manipulate the driving temperature regions, thus permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
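The workflow described above (parameterize the boundary, evaluate mass and heat rejection, evolve with a genetic algorithm) can be sketched with a toy fitness function. Everything below is a placeholder illustration: the segment heights, temperatures and lumped radiation constant stand in for the paper's finite-element thermal analysis and spline parameterization, which are not reproduced here.

```python
import random

N_SEG = 8            # radiator width discretized into segments with variable height
SIGMA_EPS = 4.5e-8   # lumped emissivity * Stefan-Boltzmann factor (placeholder)
RHO_AREAL = 3.0      # areal density, kg/m^2 (placeholder)
T_SEG = [400 - 10 * i for i in range(N_SEG)]   # assumed temperature falloff along the panel

def fitness(heights):
    """Toy trade-off: radiated power per unit panel mass for a candidate height profile."""
    dx = 1.0 / N_SEG
    power = sum(SIGMA_EPS * h * dx * T ** 4 for h, T in zip(heights, T_SEG))
    mass = RHO_AREAL * sum(heights) * dx
    return power / mass if mass > 0 else 0.0

def evolve(pop_size=30, generations=50):
    """Bare-bones GA: rank selection, one-point crossover, Gaussian mutation."""
    pop = [[random.uniform(0.1, 1.0) for _ in range(N_SEG)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SEG)
            child = a[:cut] + b[cut:]
            i = random.randrange(N_SEG)
            child[i] = min(1.0, max(0.1, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```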
Qiao, Gang; Gan, Shuwei; Liu, Songzuo; Ma, Lu; Sun, Zongxin
2018-05-24
To improve the throughput of underwater acoustic (UWA) networking, in-band full-duplex (IBFD) communication is one of the most important research directions. The major drawback of IBFD-UWA communication is self-interference (SI). This paper presents a digital SI cancellation algorithm for an asynchronous IBFD-UWA communication system. We focus on two issues: one is asynchronous operation, unlike IBFD radio communication; the other is nonlinear distortion caused by the power amplifier (PA). First, we discuss the asynchronous IBFD-UWA signal model with the nonlinear distortion of the PA. Then, we design a scheme for asynchronous IBFD-UWA communication that utilizes the non-overlapping region between the SI and the intended signal to estimate the nonlinear SI channel. To cancel the nonlinear distortion caused by the PA, we propose an Over-Parameterization based Recursive Least Squares (RLS) algorithm (OPRLS) to estimate the nonlinear SI channel. Furthermore, we present the OPRLS with a sparse constraint to estimate the SI channel, which reduces the required length of the non-overlapping region. Finally, we verify our concept through simulation and a pool experiment. Results demonstrate that the proposed digital SI cancellation scheme can cancel SI efficiently.
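A minimal sketch of the recursive least squares core on which such an over-parameterized estimator could be built. The regressor basis here (two linear taps plus their cubic terms to mimic PA nonlinearity) and the toy channel coefficients are illustrative assumptions, not the authors' exact over-parameterization or its sparse-constrained variant:

```python
import numpy as np

def rls_update(w, P, phi, d, lam=0.999):
    """One RLS step: weights w, inverse-correlation P, regressor phi, desired sample d."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)      # gain vector
    e = d - (w.T @ phi).item()                 # a priori error (residual self-interference)
    w = w + k * e
    P = (P - k @ phi.T @ P) / lam
    return w, P, e

# Toy SI channel acting on linear and cubic terms of the known transmit signal x
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = 0.8 * x + 0.1 * np.roll(x, 1) + 0.05 * x**3 + 0.01 * rng.standard_normal(2000)

n_coef = 4                                     # [x[n], x[n-1], x[n]^3, x[n-1]^3]
w, P = np.zeros((n_coef, 1)), np.eye(n_coef) * 100.0
for n in range(1, len(x)):
    phi = np.array([x[n], x[n - 1], x[n] ** 3, x[n - 1] ** 3])
    w, P, e = rls_update(w, P, phi, d[n])
print(w.ravel())   # should approach [0.8, 0.1, 0.05, 0.0]
```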
NASA Astrophysics Data System (ADS)
Xie, Xin
Microphysics and convection parameterizations are two key components of a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size or simulations have to be run at different resolutions, scale-aware parameterization is desirable so that model parameters do not have to be tuned for a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due to both the smaller grid size in high latitudes and the larger grid size in low latitudes in the longitude-latitude grid setting of CESM, as well as the variation of the stability of the atmosphere. The single-column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from both the Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate specification. We find that shallow plumes can generate and sustain MJO propagation in the model.
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare to observed flash rates. For the 6 June storm, a preliminary analysis of aircraft observations of storm inflow and outflow is presented in order to place flash rates (and other lightning statistics) in the context of storm chemistry. An approach to a possibly improved LNOx parameterization scheme using different lightning metrics such as flash area will be discussed.
NASA Astrophysics Data System (ADS)
Liu, Yuefeng; Duan, Zhuoyi; Chen, Song
2017-10-01
Aerodynamic shape optimization aimed at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. This approach is simple and efficient, with the FFD technique used for parameterizing the wing shape and RBF interpolation used for updating the wing-body junction region. Furthermore, combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration has been carried out as a test case, and the results show that the approach proposed in this paper is effective.
NASA Technical Reports Server (NTRS)
Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.
2016-01-01
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
Improving microphysics in a convective parameterization: possibilities and limitations
NASA Astrophysics Data System (ADS)
Labbouz, Laurent; Heikenfeld, Max; Stier, Philip; Morrison, Hugh; Milbrandt, Jason; Protat, Alain; Kipling, Zak
2017-04-01
The convective cloud field model (CCFM) is a convective parameterization implemented in the climate model ECHAM6.1-HAM2.2. It represents a population of clouds within each ECHAM-HAM model column, simulating up to 10 different convective cloud types with individual radius, vertical velocities and microphysical properties. Comparisons between CCFM and radar data at Darwin, Australia, show that in order to reproduce both the convective cloud top height distribution and the vertical velocity profile, the effect of aerodynamic drag on the rising parcel has to be considered, along with a reduced entrainment parameter. A new double-moment microphysics (the Predicted Particle Properties scheme, P3) has been implemented in the latest version of CCFM and is compared to the standard single-moment microphysics and the radar retrievals at Darwin. The microphysical process rates (autoconversion, accretion, deposition, freezing, …) and their response to changes in CDNC are investigated and compared to high resolution CRM WRF simulations over the Amazon region. The results shed light on the possibilities and limitations of microphysics improvements in the framework of CCFM and in convective parameterizations in general.
Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology
NASA Astrophysics Data System (ADS)
Jin, Z.; Azzari, G.; Lobell, D. B.
2016-12-01
Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves the SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.
NASA Technical Reports Server (NTRS)
Johnson, Kevin D.; Entekhabi, Dara; Eagleson, Peter S.
1993-01-01
New land-surface hydrologic parameterizations are implemented into the NASA Goddard Institute for Space Studies (GISS) General Circulation Model (GCM). These parameterizations are: 1) runoff and evapotranspiration functions that include the effects of subgrid-scale spatial variability and use physically based equations of hydrologic flux at the soil surface and 2) a realistic soil moisture diffusion scheme for the movement of water and root sink in the soil column. A one-dimensional climate model with a complete hydrologic cycle is used to screen the basic sensitivities of the hydrological parameterizations before implementation into the full three-dimensional GCM. Results of the final simulation with the GISS GCM and the new land-surface hydrology indicate that the runoff rate, especially in the tropics, is significantly improved. As a result, the remaining components of the heat and moisture balance show similar improvements when compared to observations. The validation of model results is carried from the large global (ocean and land-surface) scale to the zonal, continental, and finally the regional river basin scales.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization of industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve the optimization efficiency for industrial dynamic processes, in which costate gradient formulae are employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known benchmark problems in the optimization of industrial dynamic processes are used as illustrations. The research results show that the proposed fast approach performs well, saving at least 90% of the computation time compared with the traditional CVP method, which demonstrates the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
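A minimal sketch of plain control vector parameterization on a toy scalar dynamic system: the control is piecewise constant over a fixed number of segments, the state equation is integrated by forward Euler, and a gradient-free optimizer adjusts the segment values. The system, cost weights and target are arbitrary; this illustrates the CVP idea only, not the fast costate-based gradient scheme proposed in the paper:

```python
import numpy as np
from scipy.optimize import minimize

T, N_SEG, N_STEPS = 1.0, 8, 400          # horizon, control segments, Euler steps

def simulate(u_params):
    """Integrate dx/dt = -x + u(t) with piecewise-constant u; return the total cost."""
    dt = T / N_STEPS
    x, cost = 1.0, 0.0
    for k in range(N_STEPS):
        t = k * dt
        u = u_params[min(int(t / T * N_SEG), N_SEG - 1)]
        cost += dt * (x ** 2 + 0.1 * u ** 2)    # running cost
        x += dt * (-x + u)                      # forward-Euler state update
    return cost + 10.0 * (x - 0.5) ** 2         # penalty for missing the target x(T) = 0.5

res = minimize(simulate, np.zeros(N_SEG), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-8})
print(res.x)     # optimized piecewise-constant control profile
print(res.fun)
```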
GEWEX Cloud Systems Study (GCSS)
NASA Technical Reports Server (NTRS)
Moncrieff, Mitch
1993-01-01
The Global Energy and Water Cycle Experiment (GEWEX) Cloud Systems Study (GCSS) program seeks to improve the physical understanding of sub-grid scale cloud processes and their representation in parameterization schemes. By improving the description and understanding of key cloud system processes, GCSS aims to develop the necessary parameterizations in climate and numerical weather prediction (NWP) models. GCSS will address these issues mainly through the development and use of cloud-resolving or cumulus ensemble models to generate realizations of a set of archetypal cloud systems. The focus of GCSS is on mesoscale cloud systems, including precipitating convectively-driven cloud systems like MCS's and boundary layer clouds, rather than individual clouds, and on their large-scale effects. Some of the key scientific issues confronting GCSS that particularly relate to research activities in the central U.S. are presented.
NASA Astrophysics Data System (ADS)
Sol Galligani, Victoria; Wang, Die; Alvarez Imaz, Milagros; Salio, Paola; Prigent, Catherine
2017-10-01
In the present study, three meteorological events of extreme deep moist convection, characteristic of south-eastern South America, are considered to conduct a systematic evaluation of the microphysical parameterizations available in the Weather Research and Forecasting (WRF) model by undertaking a direct comparison between satellite-based simulated and observed microwave radiances. A research radiative transfer model, the Atmospheric Radiative Transfer Simulator (ARTS), is coupled with the WRF model under three different microphysical parameterizations (WSM6, WDM6 and Thompson schemes). Microwave radiometry has shown a promising ability to characterize frozen hydrometeors. At high microwave frequencies, however, frozen hydrometeors significantly scatter radiation, and the relationship between radiation and hydrometeor populations becomes very complex. The main difficulty in microwave remote sensing of frozen hydrometeors is correctly characterizing this scattering signal, owing to the complex and variable nature of their size, composition and shape. The present study further aims at improving the understanding of the optical properties of frozen hydrometeors characteristic of deep moist convection events in south-eastern South America. In the present study, bulk optical properties are computed by integrating the single-scattering properties of the Liu (2008) discrete dipole approximation (DDA) single-scattering database across the particle size distributions parameterized by the different WRF schemes in a consistent manner, introducing the equal mass approach. The equal mass approach consists of describing the optical properties of the WRF snow and graupel hydrometeors with the optical properties of habits in the DDA database whose dimensions might be different (D
Importance of ensembles in projecting regional climate trends
NASA Astrophysics Data System (ADS)
Arritt, Raymond; Daniel, Ariele; Groisman, Pavel
2016-04-01
We have performed an ensemble of simulations using RegCM4 to examine the ability to reproduce observed trends in precipitation intensity and to project future changes through the 21st century for the central United States. We created a matrix of simulations over the CORDEX North America domain for 1950-2099 by driving the regional model with two different global models (HadGEM2-ES and GFDL-ESM2M, both for RCP8.5), by performing simulations at both 50 km and 25 km grid spacing, and by using three different convective parameterizations. The result is a set of 12 simulations (two GCMs by two resolutions by three convective parameterizations) that can be used to systematically evaluate the influence of simulation design on predicted precipitation. The two global models were selected to bracket the range of climate sensitivity in the CMIP5 models: HadGEM2-ES has the highest ECS of the CMIP5 models, while GFDL-ESM2M has one of the lowest. Our evaluation metrics differ from many other RCM studies in that we focus on the skill of the models in reproducing past trends rather than the mean climate state. Trends in the frequency of extreme precipitation (defined as amounts exceeding 76.2 mm/day) for most simulations are similar to the observed trend, but with notable variations depending on RegCM4 configuration and on the driving GCM. There are complex interactions among resolution, choice of convective parameterization, and the driving GCM that carry over into the future climate projections. We also note that biases in the current climate do not correspond to biases in trends. As an example of these points, the Emanuel scheme is consistently "wet" (positive bias in precipitation) yet it produced the smallest precipitation increase of the three convective parameterizations when used in simulations driven by HadGEM2-ES. However, it produced the largest increase when driven by GFDL-ESM2M. These findings reiterate that ensembles using multiple RCM configurations and driving GCMs are essential for projecting regional climate change, even when a single RCM is used. This research was sponsored by the U.S. Department of Agriculture National Institute of Food and Agriculture.
NASA Astrophysics Data System (ADS)
Newman, James Charles, III
1997-10-01
The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations.
To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
New optimization scheme to obtain interaction potentials for oxide glasses
NASA Astrophysics Data System (ADS)
Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter
2018-05-01
We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of state of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than the previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found a good agreement with the corresponding experimental data.
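A minimal sketch of the kind of fitting objective involved: a weighted mismatch between target (ab initio or experimental) observables and those produced by a trial potential. The Buckingham-type pair form and the placeholder Si-O-like parameters are illustrative assumptions; the actual optimization would re-run molecular dynamics for each parameter set, which is omitted here:

```python
import numpy as np

def buckingham(r, A, rho, C):
    """Short-range Buckingham pair potential often used for oxide glasses (plus Coulomb terms, omitted)."""
    return A * np.exp(-r / rho) - C / r**6

def objective(gr_sim, gr_ref, vdos_sim, vdos_ref, dens_p_sim, dens_p_ref,
              w_gr=1.0, w_vdos=1.0, w_dens=1.0):
    """Weighted squared mismatch of radial distribution function, vibrational DOS and density-pressure curve."""
    return (w_gr * np.mean((np.asarray(gr_sim) - np.asarray(gr_ref)) ** 2)
            + w_vdos * np.mean((np.asarray(vdos_sim) - np.asarray(vdos_ref)) ** 2)
            + w_dens * np.mean((np.asarray(dens_p_sim) - np.asarray(dens_p_ref)) ** 2))

r = np.linspace(1.2, 6.0, 5)
print(buckingham(r, A=18000.0, rho=0.20, C=130.0))   # placeholder Si-O-like parameters, in eV and Angstrom units
```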
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
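The Pinheiro and Bates (1996) idea of building a valid correlation matrix from its Cholesky factor can be sketched directly with the spherical (angle) parameterization: each row of the factor is a point on the unit sphere, so the resulting matrix has unit diagonal and is positive semidefinite by construction. The angle values below are arbitrary examples; the cSigma rule for choosing them from the correlation bounds is not reproduced here:

```python
import numpy as np

def correlation_from_angles(theta):
    """Build an n x n correlation matrix from n*(n-1)/2 angles in (0, pi)."""
    n = int((1 + np.sqrt(1 + 8 * len(theta))) / 2)   # infer n from the number of angles
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    idx = 0
    for i in range(1, n):
        prod_sin = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[idx]) * prod_sin   # each row has unit Euclidean norm
            prod_sin *= np.sin(theta[idx])
            idx += 1
        L[i, i] = prod_sin
    return L @ L.T

theta = np.array([0.4, 1.2, 2.0])        # arbitrary angles for a 3x3 example
R = correlation_from_angles(theta)
print(R)
print(np.diag(R), np.linalg.eigvalsh(R) >= -1e-12)   # unit diagonal, positive semidefinite
```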
NASA Astrophysics Data System (ADS)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in their formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the output of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results that display lower variability compared to observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and has a fine vertical resolution suitable for capturing the effects of wave-mean flow interaction. We compare the results obtained with two versions of the model, the default and a new stochastic version, in which the value of the perturbation field at the launching level is not constant and uniform, but is drawn at each time step and grid point from a given PDF. With this approach we are trying to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven also by gravity waves) and on the variability of the mid-to-high-latitude atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.
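A minimal sketch of the stochastic ingredient is shown below: instead of a single constant launch-level perturbation amplitude, a value is drawn for every grid point at every time step from a prescribed PDF. The lognormal distribution, its parameters, and the function name are assumptions for illustration; the study's actual PDF and its coupling to the Hines scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def stochastic_launch_amplitude(shape, mean_amp=1.0, sigma=0.5):
    """Draw a gravity-wave launch-level perturbation amplitude for every grid
    point at the current time step, instead of using a single constant value.

    A lognormal PDF is used purely for illustration (observed GW momentum
    fluxes are strongly intermittent and long-tailed); the actual distribution
    and its parameters would be constrained by observations or high-resolution
    model output.
    """
    # Parameterize the lognormal so that its mean equals mean_amp.
    mu = np.log(mean_amp) - 0.5 * sigma**2
    return rng.lognormal(mean=mu, sigma=sigma, size=shape)

# One call per model time step returns a 2-D field that would replace the
# uniform launch amplitude passed to the deterministic scheme.
amp = stochastic_launch_amplitude((96, 192))
```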
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Role of Microphysical Parameterizations with Droplet Relative Dispersion in IAP AGCM 4.1
Xie, Xiaoning; Zhang, He; Liu, Xiaodong; ...
2018-01-10
Previous studies have shown that accurate descriptions of the cloud droplet effective radius (Re) and the autoconversion process of cloud droplets to raindrops (Au) can effectively improve simulated clouds and surface precipitation and reduce the uncertainty of aerosol indirect effects in global climate models (GCMs). In this paper, we implement cloud microphysical schemes, including a two-moment Au and an Re that account for the relative dispersion of the cloud droplet size distribution, into version 4.1 of the Institute of Atmospheric Physics atmospheric GCM (IAP AGCM 4.1), which is the atmospheric component of the Chinese Academy of Sciences Earth System Model (CAS-ESM 1.0). An analysis of the effects of the different schemes shows that the newly implemented schemes improve both the simulated shortwave (SWCF) and longwave (LWCF) cloud radiative forcings as compared to the standard scheme in IAP AGCM 4.1. The new schemes also effectively enhance the large-scale precipitation, especially over low latitudes, although the influence on total precipitation is insignificant across the different schemes. Further studies show that similar results can be found with the Community Atmosphere Model 5.1 (CAM5.1).
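To make the role of relative dispersion concrete, the sketch below evaluates a dispersion-dependent effective radius using the widely cited Liu and Daum (2002) form; this is offered as a representative formulation, not necessarily the exact expression implemented in IAP AGCM 4.1.

```python
import numpy as np

RHO_W = 1000.0  # liquid water density [kg m-3]

def effective_radius(lwc, nc, eps):
    """Cloud droplet effective radius [m] including the relative dispersion
    eps (standard deviation / mean radius) of the droplet size distribution.

    Uses the Liu and Daum (2002) form
        re = beta(eps) * (3 L / (4 pi rho_w N))**(1/3),
        beta = (1 + 2 eps**2)**(2/3) / (1 + eps**2)**(1/3).

    lwc : liquid water content [kg m-3]
    nc  : droplet number concentration [m-3]
    """
    beta = (1.0 + 2.0 * eps**2) ** (2.0 / 3.0) / (1.0 + eps**2) ** (1.0 / 3.0)
    rv = (3.0 * lwc / (4.0 * np.pi * RHO_W * nc)) ** (1.0 / 3.0)  # mean volume radius
    return beta * rv

# Example: 0.3 g m-3 of cloud water, 100 cm-3 droplets, dispersion 0.4.
print(effective_radius(0.3e-3, 100e6, 0.4) * 1e6, "micron")
```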
Role of Microphysical Parameterizations with Droplet Relative Dispersion in IAP AGCM 4.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Xiaoning; Zhang, He; Liu, Xiaodong
Previous studies have shown that accurate descriptions of the cloud droplet effective radius (Re) and the autoconversion process of cloud droplets to raindrops (Au) can effectively improve simulated clouds and surface precipitation and reduce the uncertainty of aerosol indirect effects in global climate models (GCMs). In this paper, we implement cloud microphysical schemes, including a two-moment Au and an Re that account for the relative dispersion of the cloud droplet size distribution, into version 4.1 of the Institute of Atmospheric Physics atmospheric GCM (IAP AGCM 4.1), which is the atmospheric component of the Chinese Academy of Sciences Earth System Model (CAS-ESM 1.0). An analysis of the effects of the different schemes shows that the newly implemented schemes improve both the simulated shortwave (SWCF) and longwave (LWCF) cloud radiative forcings as compared to the standard scheme in IAP AGCM 4.1. The new schemes also effectively enhance the large-scale precipitation, especially over low latitudes, although the influence on total precipitation is insignificant across the different schemes. Further studies show that similar results can be found with the Community Atmosphere Model 5.1 (CAM5.1).
NASA Astrophysics Data System (ADS)
Xu, Xin; Tang, Ying; Wang, Yuan; Xue, Ming
2018-03-01
The directional absorption of mountain waves in the Northern Hemisphere is assessed by examining horizontal wind rotation using the 2.5° × 2.5° European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis between 2011 and 2016. In the deep layer spanning the troposphere and stratosphere, the horizontal wind rotates by more than 120° over the primary mountainous areas of the Northern Hemisphere, with the rotation occurring mainly in the troposphere (stratosphere) at lower (middle to high) latitudes. The rotation of the tropospheric wind increases markedly in summer over the Tibetan Plateau and Iranian Plateau, due to the influence of the Asian summer monsoonal circulation. The influence of directional absorption of mountain waves on mountain wave momentum transport is also studied using a new parameterization scheme of orographic gravity wave drag (OGWD) which accounts for the effect of directional wind shear. Owing to the directional absorption, the wave momentum flux is attenuated by more than 50% in the troposphere at lower latitudes, producing considerable orographic gravity wave lift that is normal to the mean wind. Compared with the OGWD produced by traditional schemes assuming a unidirectional wind profile, the OGWD in the new scheme is suppressed in the lower stratosphere but enhanced in the upper stratosphere and lower mesosphere. This is because the directional absorption of mountain waves in the troposphere reduces the wave amplitude in the stratosphere. Consequently, mountain waves are prone to break at higher altitudes, which favors the production of stronger OGWD given the decrease of air density with height.
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
NASA Astrophysics Data System (ADS)
Britt, Darrell Steven, Jr.
Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means of restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
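For orientation, the sketch below solves a constant-coefficient Helmholtz problem on the unit square with the standard second-order 5-point stencil and Dirichlet data. It is deliberately much simpler than the method described above (no fourth-order compact scheme, no difference potentials, grid-aligned boundary only) and only illustrates the basic discretize-and-solve structure.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def helmholtz_dirichlet(n, k, f, g):
    """Solve u_xx + u_yy + k**2 * u = f on the unit square with Dirichlet data
    g on the boundary, using the second-order 5-point stencil on an n x n grid
    of interior points. f and g are callables of (x, y)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")

    A = lil_matrix((n * n, n * n))
    b = np.zeros(n * n)
    idx = lambda i, j: i * n + j

    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            A[p, p] = k**2 - 4.0 / h**2
            b[p] = f(X[i, j], Y[i, j])
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[p, idx(ii, jj)] = 1.0 / h**2
                else:
                    # Known boundary value: move its contribution to the RHS.
                    b[p] -= g((ii + 1) * h, (jj + 1) * h) / h**2
    return X, Y, spsolve(A.tocsc(), b).reshape(n, n)

# Example: zero forcing, oscillatory Dirichlet data, k = 10.
X, Y, U = helmholtz_dirichlet(n=40, k=10.0,
                              f=lambda x, y: 0.0,
                              g=lambda x, y: np.sin(10.0 * x))
```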
Assessment of the turbulence parameterization schemes for the Martian mesoscale simulations
NASA Astrophysics Data System (ADS)
Temel, Orkun; Karatekin, Ozgur; Van Beeck, Jeroen
2016-07-01
Turbulent transport within the Martian atmospheric boundary layer (ABL) is one of the most important physical processes in the Martian atmosphere, due to the very thin structure of the Martian atmosphere and the super-adiabatic conditions during the diurnal cycle [1]. Realistic modeling of turbulent fluxes within the Martian ABL has a crucial effect on many physical phenomena, including dust devils [2], methane dispersion [3], and nocturnal jets [4]. Moreover, the surface heat and mass fluxes, which are related to mass transport within the sub-surface of Mars, are computed by the turbulence parameterization schemes. Therefore, in addition to possible applications within the Martian boundary layer, the parameterization of turbulence has an important bearing on biological research on Mars, including investigation of the water cycle and sub-surface modeling. In terms of the turbulence modeling approaches employed for the Martian ABL, "planetary boundary layer (PBL) schemes" have been applied not only for global circulation modeling but also for mesoscale simulations [5]. The PBL schemes used for Mars are variants of schemes originally developed for the Earth, and are based either on the empirical determination of turbulent fluxes [6] or on solving a one-dimensional turbulent kinetic energy equation [7]. Even though Large Eddy Simulation techniques have also been applied with regional models for Mars, it must be noted that these advanced models still use features of the traditional PBL schemes for sub-grid modeling [8]. Therefore, assessment of these PBL schemes is vital for a better understanding of the atmospheric processes of Mars. In this framework, the present study is devoted to the validation of different turbulence modeling approaches for the Martian ABL against Viking Lander [9] and MSL [10] datasets. The GCM/mesoscale code used is PlanetWRF, the extended version of the WRF model for extraterrestrial atmospheres [11]. Based on the measurements, the performances of different PBL schemes have been evaluated and some improvements have been proposed. [1] Colaïtis, A., Spiga, A., Hourdin, F., Rio, C., Forget, F., & Millour, E. (2013). A thermal plume model for the Martian convective boundary layer. Journal of Geophysical Research: Planets, 118(7), 1468-1487. [2] Balme, M., & Greeley, R. (2006). Dust devils on Earth and Mars. Reviews of Geophysics, 44(3). [3] Olsen, K. S., Cloutis, E., & Strong, K. (2012). Small-scale methane dispersion modelling for possible plume sources on the surface of Mars. Geophysical Research Letters, 39(19). [4] Savijärvi, H., & Siili, T. (1993). The Martian slope winds and the nocturnal PBL jet. Journal of the Atmospheric Sciences, 50(1), 77-88. [5] Fenton, L. K., Toigo, A. D., & Richardson, M. I. (2005). Aeolian processes in Proctor crater on Mars: Mesoscale modeling of dune-forming winds. Journal of Geophysical Research: Planets, 110(E6). [6] Hong, Song-You, Yign Noh, Jimy Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318-2341. [7] Janjic, Zavisa I., 1994: The Step-Mountain Eta Coordinate Model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927-945. [8] Michaels, T. I., & Rafkin, S. C. (2004). Large-eddy simulation of atmospheric convection on Mars. Quarterly Journal of the Royal Meteorological Society, 130(599), 1251-1274.
[9] Hess, S. L., Henry, R. M., Leovy, C. B., Ryan, J. A., & Tillman, J. E. (1977). Meteorological results from the surface of Mars: Viking 1 and 2. Journal of Geophysical Research, 82(28), 4559-4574. [10] Martínez, G., et al. (2015). Likely frost events at Gale crater: Analysis from MSL/REMS measurements. Icarus. [11] Richardson, M. I., Toigo, A. D., & Newman, C. E. (2007). PlanetWRF: A general purpose, local to global numerical model for planetary atmospheric and climate dynamics. Journal of Geophysical Research: Planets, 112(E9).
NASA Astrophysics Data System (ADS)
Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin
2018-03-01
An advanced online-coupled meteorology and chemistry model, WRF-CAM5, has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36-, 12-, and 4-km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance, investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations, and examine aerosol-cloud interactions and their impacts on air quality. A comprehensive evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model predicts major meteorological variables well, such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation, across different resolutions, with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP), due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolution, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations using an alternative aerosol activation scheme (the Fountoukis and Nenes (FN) series scheme) against the default AG scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. Sensitivity simulations using the FN series scheme also have large impacts on radiation, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on meteorology and air quality, but to a lesser extent compared to the differences between the FN series and AG schemes. Relative to those overall differences, the impacts of adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events, indicating their importance in modulating regional climate over East Asia.
Modeling the Surface Energy Balance of the Core of an Old Mediterranean City: Marseille.
NASA Astrophysics Data System (ADS)
Lemonsu, A.; Grimmond, C. S. B.; Masson, V.
2004-02-01
The Town Energy Balance (TEB) model, which parameterizes the local-scale energy and water exchanges between urban surfaces and the atmosphere by treating the urban area as a series of urban canyons, coupled to the Interactions between Soil, Biosphere, and Atmosphere (ISBA) scheme, was run in offline mode for Marseille, France. TEB's performance is evaluated with observations of surface temperatures and surface energy balance fluxes collected during the field experiments to constrain models of atmospheric pollution and transport of emissions (ESCOMPTE) urban boundary layer (UBL) campaign. Particular attention was directed to the influence of different surface databases, used for input parameters, on model predictions. Comparison of simulated canyon temperatures with observations resulted in improvements to TEB parameterizations by increasing the ventilation. Evaluation of the model with wall, road, and roof surface temperatures gave good results. The model succeeds in simulating a sensible heat flux larger than heat storage, as observed. A sensitivity comparison using generic dense city parameters, derived from the Coordination of Information on the Environment (CORINE) land cover database, and those from a surface database developed specifically for the Marseille city center shows the importance of correctly documenting the urban surface. Overall, the TEB scheme is shown to be fairly robust, consistent with results from previous studies.
Simulation of the Atmospheric Boundary Layer for Wind Energy Applications
NASA Astrophysics Data System (ADS)
Marjanovic, Nikola
Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale while local forcing manifests itself on a much smaller scale and could result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components, respectively. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally-driven weather events. The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. 
Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk resulting from the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m) as it does not distribute the forces from the blades over annular elements, but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization using data from two field campaigns that measured the inflow and near-wake regions of a single turbine. The datasets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both fine-resolution GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near-wake physics (such as vorticity shedding and wake expansion) for high-fidelity applications.
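A heavily simplified flavor of the actuator-disk idea is sketched below: total rotor thrust is estimated from a thrust coefficient and spread over the grid cells that intersect the disk as a body force. The real GAD/GAL schemes compute blade-element lift and drag from airfoil tables and distribute them over rotating annular rings or lines; the uniform loading and the function name here are illustrative assumptions only.

```python
import numpy as np

def actuator_disk_thrust_per_cell(u_inf, rho, radius, ct, cell_area_in_disk):
    """Very simplified actuator-disk loading: total rotor thrust from the
    rotor-averaged inflow speed and thrust coefficient, spread uniformly over
    the cells that intersect the disk.

    u_inf             : rotor-averaged inflow wind speed [m s-1]
    rho               : air density [kg m-3]
    radius            : rotor radius [m]
    ct                : thrust coefficient [-]
    cell_area_in_disk : array of disk area contained in each grid cell [m2]
    Returns the axial force to assign to each cell [N]; as a retarding force
    on the flow it would enter the momentum equation with a minus sign.
    """
    area = np.pi * radius**2
    thrust = 0.5 * rho * u_inf**2 * area * ct          # total rotor thrust [N]
    return thrust * np.asarray(cell_area_in_disk) / area

# Example: two cells overlapping the disk of an 80-m-diameter turbine.
forces = actuator_disk_thrust_per_cell(u_inf=8.0, rho=1.2, radius=40.0,
                                        ct=0.75, cell_area_in_disk=[100.0, 250.0])
```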
Preparing CAM-SE for Multi-Tracer Applications: CAM-SE-Cslam
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Taylor, M.; Goldhaber, S.
2014-12-01
The NCAR-DOE spectral element (SE) dynamical core comes from HOMME (the High-Order Modeling Environment; Dennis et al., 2012) and is available in CAM. The CAM-SE dynamical core is designed with intrinsic mimetic properties guaranteeing total energy conservation (to time-truncation errors) and mass conservation, and has demonstrated excellent scalability on massively parallel compute platforms (Taylor, 2011). For applications involving many tracers, such as chemistry and biochemistry modeling, CAM-SE has been found to be significantly more computationally costly than the current "workhorse" model CAM-FV (Finite-Volume; Lin 2004). Hence a multi-tracer efficient scheme, the CSLAM (Conservative Semi-Lagrangian Multi-tracer; Lauritzen et al., 2011) scheme, has been implemented in HOMME (Erath et al., 2012). The CSLAM scheme has recently been cast in flux form in HOMME so that it can be coupled to the SE dynamical core through conventional flux-coupling methods in which the SE dynamical core provides background air mass fluxes to CSLAM. Since the CSLAM scheme makes use of a finite-volume gnomonic cubed-sphere grid and hence does not operate on the SE quadrature grid, the capability of running tracer advection, the physical parameterization suite, and the dynamics on separate grids has been implemented in CAM-SE. The default CAM-SE-CSLAM setup is to run physics on the quasi-equal-area CSLAM grid. The capability of running physics on a different grid than the SE dynamical core may provide a more consistent coupling, since the physics grid option operates with quasi-equal-area cell-average values rather than non-equidistant grid-point (SE quadrature point) values. Preliminary results on the performance of CAM-SE-CSLAM will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, John
Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.
A Minimal Three-Dimensional Tropical Cyclone Model.
NASA Astrophysics Data System (ADS)
Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang
2001-07-01
A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated in σ coordinates on an f plane or β plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate-strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes. The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.
NASA Astrophysics Data System (ADS)
Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.
2014-06-01
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, expressed as ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and compare them with existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.
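The way an ns-type parameterization is typically used in a cloud model can be sketched in a few lines: the ice crystal number produced by an aerosol population follows the standard surface-area scaling N_ice = N_aer [1 - exp(-ns A_p)]. The published AIDA-based fits for ns(T, RHice) are not reproduced here; the particle size, concentration, and ns value in the example are arbitrary.

```python
import numpy as np

def ice_crystal_number(n_aer, d_particle, ns):
    """Number of ice crystals nucleated by an aerosol population, given an
    ice nucleation active surface-site density ns [m-2] (the quantity the
    parameterization above provides as a function of T and RHice).

    Uses the standard surface-area scaling
        N_ice = N_aer * (1 - exp(-ns * A_p)),
    with A_p the surface area of a single (assumed spherical) particle.

    n_aer      : aerosol number concentration [m-3]
    d_particle : particle diameter [m]
    """
    a_p = np.pi * d_particle**2                 # sphere surface area [m2]
    return n_aer * (1.0 - np.exp(-ns * a_p))

# Example: 500 nm hematite-like particles at 1 cm-3 with ns = 1e10 m-2.
print(ice_crystal_number(1e6, 500e-9, 1e10), "crystals per m3")
```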
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiranuma, Naruki; Paukert, Marco; Steinke, Isabelle
2014-12-10
A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 °C to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, expressed as ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water-subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and compare them with existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties such as cloud longevity and initiation when compared to previous parameterizations.
Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana
2008-06-01
Asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for simulation of vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate form the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with an amount of turbulent kinetic energy in layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of a nonlocal convective mixing scheme with varying upward mixing in the atmospheric boundary layer and its impact on the concentration of pollutants calculated with chemical and air-quality models. This scheme was additionally compared versus a local eddy-diffusivity scheme (KSC). Simulated concentrations of NO(2) and the nitrate wet deposition by the CON scheme are closer to the observations when compared to those obtained from using the KSC scheme. Concentrations calculated with the CON scheme are in general higher and closer to the observations than those obtained by the KSC scheme (of the order of 15-20%). Nitrate wet deposition calculated with the CON scheme are in general higher and closer to the observations than those obtained by the KSC scheme. To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO(2)) and nitrate wet deposition was compared for the year 2002. The comparison was made for the whole domain used in simulations performed by the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0) where schemes were incorporated.
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
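As a minimal illustration of the extended state observer idea referenced above, the sketch below implements a discrete (Euler-integrated) third-order linear ESO for a second-order plant, with the observer gains collapsed to a single bandwidth parameter. This is a textbook-style sketch under those assumptions, not a reproduction of the patented algorithms.

```python
import numpy as np

class LinearESO:
    """Minimal discrete linear extended state observer for a plant modeled as
    y'' = f(t, y, y', w) + b0*u, where f is the lumped "total disturbance"
    to be estimated alongside the output and its derivative.

    The gains use the single-parameter bandwidth form
    beta = [3*wo, 3*wo**2, wo**3], so one tuning knob (the observer bandwidth
    wo) sets the whole observer.
    """

    def __init__(self, wo, b0, dt):
        self.beta = np.array([3.0 * wo, 3.0 * wo**2, wo**3])
        self.b0, self.dt = b0, dt
        self.z = np.zeros(3)     # estimates of [y, y', total disturbance]

    def update(self, y_meas, u):
        e = self.z[0] - y_meas
        dz = np.array([self.z[1] - self.beta[0] * e,
                       self.z[2] - self.beta[1] * e + self.b0 * u,
                       -self.beta[2] * e])
        self.z = self.z + self.dt * dz
        return self.z

# One step: measure y, apply control u, read back state and disturbance estimates.
eso = LinearESO(wo=20.0, b0=1.0, dt=0.001)
z_hat = eso.update(y_meas=0.05, u=1.0)
```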
Dynamic downscaling over western Himalayas: Impact of cloud microphysics schemes
NASA Astrophysics Data System (ADS)
Tiwari, Sarita; Kar, Sarat C.; Bhatla, R.
2018-03-01
Due to the lack of observational data over the inhomogeneous terrain of the Himalayas, the detailed climate of the Himalayas is still poorly known. Global reanalysis data are too coarse to represent the hydroclimate over a region of sharp orography gradients such as the western Himalayas. In the present study, dynamic downscaling of the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis-Interim (ERA-I) dataset over the western Himalayas using the high-resolution Weather Research and Forecasting (WRF) model has been carried out. Sensitivity studies have also been carried out using different convection and microphysics parameterization schemes. The WRF model simulations have been compared against ERA-I and available station observations. Analysis of the results suggests that the WRF model simulates the hydroclimate of the region well. It is found that the impact of the convection scheme is larger during summer months than in winter. Examination of simulated results using various microphysics schemes reveals that the WRF single-moment class-6 (WSM6) scheme simulates more precipitation on the upwind side of the high mountains than the Morrison and Thompson schemes during the winter period. The vertical distribution of various hydrometeors shows that there are large differences in the mixing ratios of ice, snow and graupel in the simulations with different microphysics schemes. The ice mixing ratio in the Morrison scheme is larger than in WSM6 above 400 hPa. The Thompson scheme favors formation of more snow than the WSM6 or Morrison schemes, while the Morrison scheme produces more graupel than the other schemes.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
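The noise-injection step can be illustrated with a first-order autoregressive (red-noise) process whose standard deviation and decorrelation time are the two knobs varied in the experiments described above. The sketch below is an assumed minimal form; how the eddy-permitting statistics set sigma, and where the tendency perturbation enters the ocean model, are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_noise_step(eta, dt, tau, sigma, shape=None):
    """Advance a zero-mean, red-noise (AR(1)) perturbation field one time step.

    eta   : current noise field (or None to initialize)
    tau   : decorrelation time [s]
    sigma : target standard deviation of the noise (e.g., an eddy temperature
            tendency amplitude diagnosed from a high-resolution model)
    The stationary standard deviation of the returned field is sigma; adding
    this field to the ocean temperature tendency is the kind of perturbation
    the study above describes.
    """
    phi = np.exp(-dt / tau)
    if eta is None:
        return sigma * rng.standard_normal(shape)
    return phi * eta + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal(eta.shape)

# Daily steps, 30-day decorrelation time, on a (lat, lon) grid.
eta = ar1_noise_step(None, dt=86400.0, tau=30 * 86400.0, sigma=1e-6, shape=(180, 360))
for _ in range(10):
    eta = ar1_noise_step(eta, dt=86400.0, tau=30 * 86400.0, sigma=1e-6)
```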
NASA Astrophysics Data System (ADS)
Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.
2006-12-01
A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high frequency external forcing of the upper mixed layer (accompanying paper Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time series sites: BATS, KERFIX, Papa and HOT. One exception is the northern North Atlantic where lower grazing rates are needed, perhaps related to the dominance of mesozooplankton there. The model is therefore not globally robust in the sense that additional parameterizations are needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem models.
Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.
2014-01-01
The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycle) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows a maximum lower-level (850-950 hPa) wind speed greater than 60 m s-1 and a minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane - a category never achieved by the control configuration. The radius of maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying in the mid- to upper troposphere, along with positive latent heat flux and moistening in the lower troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Samuel S. P.
2013-09-01
The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been an interdisciplinary collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen). The motivation and long-term goal underlying this work is the utilization of stochastic radiative transfer theory (Lane-Veron and Somerville, 2004; Lane et al., 2002) to develop a new class of parametric representations of cloud-radiation interactions and closely related processes for atmospheric models. The theoretical advantage of the stochastic approach is that it can accurately calculate the radiative heating rates through a broken cloud layer without requiring an exact description of the cloud geometry.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
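The core of a Tikhonov-regularized parameter upgrade can be written compactly; the sketch below shows one Gauss-Newton-style step that balances fit to observations against departure from preferred parameter values. It is a bare-bones illustration of the mathematics, not PEST itself (which adds Marquardt damping, subspace and hybrid regularization, observation weighting strategies, and much more).

```python
import numpy as np

def tikhonov_step(J, w, obs, sim0, p0, p_pref, lam):
    """One regularized Gauss-Newton parameter upgrade dp minimizing
        ||W (obs - sim0 - J dp)||^2 + lam * ||(p0 + dp) - p_pref||^2,
    where W = diag(w) holds observation weights and p_pref encodes preferred
    (expert-knowledge) parameter values.

    J    : Jacobian of simulated equivalents with respect to parameters
    sim0 : simulated equivalents at the current parameters p0
    """
    W = np.diag(w)
    A = J.T @ W.T @ W @ J + lam * np.eye(J.shape[1])
    b = J.T @ W.T @ W @ (obs - sim0) + lam * (p_pref - p0)
    return np.linalg.solve(A, b)

# Toy example: 3 observations, 2 parameters.
J = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
dp = tikhonov_step(J, w=np.ones(3), obs=np.array([1.0, 2.0, 0.5]),
                   sim0=np.zeros(3), p0=np.zeros(2), p_pref=np.zeros(2), lam=0.1)
```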
NASA Astrophysics Data System (ADS)
Stanford, M.; Varble, A.; Zipser, E. J.; Strapp, J. W.; Leroy, D.; Schwarzenboeck, A.; Korolev, A.; Potts, R.
2016-12-01
A model intercomparison study is conducted to identify biases in simulated tropical convective core microphysical properties using two popular bulk parameterization schemes (Thompson and Morrison) and the Fast Spectral Bin Microphysics (FSBM) scheme. In-situ aircraft measurements of total condensed water content (TWC) and particle size distributions are compared with output from high-resolution WRF simulations of four mesoscale convective system (MCS) cases during the High Altitude Ice Crystals-High Ice Water Content (HAIC-HIWC) field campaign conducted in Darwin, Australia in 2014 and Cayenne, French Guiana in 2015. Observations of TWC collected using an isokinetic evaporator probe (IKP) optimized for high IWC measurements, in conjunction with particle image processing from two optical array probes aboard the Falcon-20 research aircraft, were used to constrain mass-size relationships in the observational dataset. Hydrometeor mass size distributions are compared between retrievals and simulations, providing insight into the well-known high bias in simulated convective radar reflectivity. For TWC > 1 g m-3 between -10 and -40°C, simulations generally produce significantly greater median mass diameters (MMDs) than observed. Observations indicate that a sharp particle size mode occurs at 300 μm for large TWC values (> 2 g m-3) regardless of temperature. All microphysics schemes fail to reproduce this feature, and the relative contributions of different hydrometeor species to this size bias vary between schemes. Despite far greater sample sizes, the simulations also fail to produce observed high-TWC conditions in which very little of the mass is contributed by large particles, over a range of temperatures. Considering vapor-grown particles alone in comparison with observations fails to correct the bias present in all schemes. Decreasing the horizontal grid spacing from 1 km to 333 m shifts graupel and rain size distributions to slightly smaller sizes, but increased resolution alone will clearly not eliminate model biases. Results instead indicate that biases in both hydrometeor size distribution assumptions and parameterized processes exist and need to be addressed before cloud and precipitation properties of convective systems can be adequately predicted.
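For reference, the median mass diameter used in the comparison above is simply the size at which the cumulative mass of the size distribution reaches half of the total; a minimal calculation is sketched below. The mass-size coefficients in the example call are arbitrary placeholders, whereas the study constrains them with the IKP total-water-content measurements.

```python
import numpy as np

def median_mass_diameter(diam, number_conc, a, b):
    """Median mass diameter of a binned particle size distribution: the size
    at which the cumulative mass crosses half of the total mass.

    diam        : bin-center maximum dimensions [m]
    number_conc : number concentration per bin [m-3]
    a, b        : coefficients of the mass-size relation m = a * D**b
    """
    mass = a * np.asarray(diam) ** b * np.asarray(number_conc)   # mass per bin
    cum = np.cumsum(mass)
    return np.interp(0.5 * cum[-1], cum, diam)

# Example: exponential size distribution with placeholder mass-size coefficients.
D = np.linspace(50e-6, 5e-3, 50)
N = 1e6 * np.exp(-D / 1e-3)
print(median_mass_diameter(D, N, a=0.01, b=2.1) * 1e6, "micron")
```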
NASA Astrophysics Data System (ADS)
Gomes, J. L.; Chou, S. C.; Yaguchi, S. M.
2012-04-01
Physics parameterizations and the model vertical and horizontal resolutions, for example, can contribute significantly to the uncertainty in numerical weather predictions, especially over regions with complex topography. The objective of this study is to assess the influence of the model precipitation production schemes and horizontal resolution on the diurnal cycle of precipitation in the Eta Model. The model was run in hydrostatic mode at 3- and 5-km grid sizes, the vertical resolution was set to 50 layers, and the time steps to 6 and 10 s, respectively. The initial and boundary conditions were taken from ERA-Interim reanalysis. Over the sea, the 0.25-deg sea surface temperature from NOAA was used. The model was set up to run at each resolution over Angra dos Reis, located in the Southeast region of Brazil, for the rainy period between 18 December 2009 and 01 January 2010; the model simulation range was 48 hours. In one set of runs the cumulus parameterization was switched off, so that the model precipitation was fully produced by the cloud microphysics scheme; in the other set the model was run with weak cumulus convection. The results show that as the model horizontal resolution increases from 5 to 3 km, the spatial pattern of the precipitation hardly changes, although the maximum precipitation core increases in magnitude. Daily data from automatic stations were used to evaluate the runs and show that the diurnal cycles of temperature and precipitation were better simulated at 3 km when compared against observations. The model configuration without cumulus convection shows a small contraction of the precipitating area and an increase in the simulated maximum values. The diurnal cycle of precipitation was better simulated with some activity of the cumulus convection scheme. The skill scores for the period and for different forecast ranges are higher for weak and moderate precipitation rates.
The Incorporation and Initialization of Cloud Water/ice in AN Operational Forecast Model
NASA Astrophysics Data System (ADS)
Zhao, Qingyun
Quantitative precipitation forecasts have been one of the weakest aspects of numerical weather prediction models. Theoretical studies show that the errors in precipitation calculation can arise from three sources: errors in the large-scale forecasts of primary variables, errors in the crude treatment of condensation/evaporation and precipitation processes, and errors in the model initial conditions. A new precipitation parameterization scheme has been developed to investigate the forecast value of improved precipitation physics via the introduction of cloud water and cloud ice into a numerical prediction model. The main feature of this scheme is the explicit calculation of cloud water and cloud ice in both the convective and stratiform precipitation parameterization. This scheme has been applied to the eta model at the National Meteorological Center. Four extensive tests have been performed. The statistical results showed a significant improvement in the model precipitation forecasts. Diagnostic studies suggest that the inclusion of cloud ice is important in transferring water vapor to precipitation and in the enhancement of latent heat release; the latter subsequently affects the vertical motion field significantly. Since three-dimensional cloud data is absent from the analysis/assimilation system for most numerical models, a method has been proposed to incorporate observed precipitation and nephanalysis data into the data assimilation system to obtain the initial cloud field for the eta model. In this scheme, the initial moisture and vertical motion fields are also improved at the same time as cloud initialization. The physical initialization is performed in a dynamical initialization framework that uses the Newtonian dynamical relaxation method to nudge the model's wind and mass fields toward analyses during a 12-hour data assimilation period. Results from a case study showed that a realistic cloud field was produced by this method at the end of the data assimilation period. Precipitation forecasts have been significantly improved as a result of the improved initial cloud, moisture and vertical motion fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations, in addition to extreme loads, can be brought into a system engineering optimization.
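A hedged illustration of the kind of fatigue post-processing described: collapsing a binned load spectrum into a damage-equivalent load via Miner's rule. This is a generic sketch, not DriveSE's actual parameterized spectrum; the Wöhler exponent, cycle counts, and reference cycle number are assumptions:

```python
import numpy as np

def damage_equivalent_load(load_ranges, cycle_counts, woehler_exponent=6.0,
                           n_equivalent=1.0e7):
    """Collapse a binned fatigue load spectrum into a single damage-equivalent load
    using Miner's rule: DEL = (sum(n_i * L_i^m) / N_ref)^(1/m).
    Generic illustration only; not the DriveSE formulation."""
    load_ranges = np.asarray(load_ranges, dtype=float)
    cycle_counts = np.asarray(cycle_counts, dtype=float)
    damage_sum = np.sum(cycle_counts * load_ranges**woehler_exponent)
    return (damage_sum / n_equivalent) ** (1.0 / woehler_exponent)

# Hypothetical lifetime spectrum: many small cycles, a few large ones
ranges = [0.5e6, 1.0e6, 2.0e6, 4.0e6]      # load ranges, N*m
counts = [5e8, 1e8, 1e7, 1e5]              # cycles per bin
print(f"DEL = {damage_equivalent_load(ranges, counts):.3e} N*m")
```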
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used to manufacture parts on multi-axis CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters are changed. The method can also reduce the time needed for design and engineering pre-production, in particular for developing control programs for CNC equipment and for control and measuring machines, and it automates the release of design and engineering documentation. Variance parameterization also helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
NASA Astrophysics Data System (ADS)
Ackerman, A. S.; Kelley, M.; Cheng, Y.; Fridlind, A. M.; Del Genio, A. D.; Bauer, S.
2017-12-01
Reduction in cloud-water sedimentation induced by increasing droplet concentrations has been shown in large-eddy simulations (LES) and direct numerical simulation (DNS) to enhance boundary-layer entrainment, thereby reducing cloud liquid water path and offsetting the Twomey effect when the overlying air is sufficiently dry, which is typical. Among recent upgrades to ModelE3, the latest version of the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM), are a two-moment stratiform cloud microphysics treatment with prognostic precipitation and a moist turbulence scheme that includes an option in its entrainment closure of a simple parameterization for the effect of cloud-water sedimentation. Single column model (SCM) simulations are compared to LES results for a stratocumulus case study and show that invoking the sedimentation-entrainment parameterization option indeed reduces the dependence of cloud liquid water path on increasing aerosol concentrations. Impacts of variations of the SCM configuration and the sedimentation-entrainment parameterization will be explored. Its impact on global aerosol indirect forcing in the framework of idealized atmospheric GCM simulations will also be assessed.
Evaluation of scale-aware subgrid mesoscale eddy models in a global eddy-rich model
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank
2017-07-01
Two parameterizations for horizontal mixing of momentum and tracers by subgrid mesoscale eddies are implemented in a high-resolution global ocean model. These parameterizations follow on the techniques of large eddy simulation (LES). The theory underlying one parameterization (2D Leith due to Leith, 1996) is that of enstrophy cascades in two-dimensional turbulence, while the other (QG Leith) is designed for potential enstrophy cascades in quasi-geostrophic turbulence. Simulations using each of these parameterizations are compared with a control simulation using standard biharmonic horizontal mixing. Simulations using the 2D Leith and QG Leith parameterizations are more realistic than those using biharmonic mixing. In particular, the 2D Leith and QG Leith simulations have more energy in resolved mesoscale eddies, have a spectral slope more consistent with turbulence theory (an inertial enstrophy or potential enstrophy cascade), have bottom drag and vertical viscosity as the primary sinks of energy instead of lateral friction, and have isoneutral parameterized mesoscale tracer transport. The parameterization choice also affects mass transports, but the impact varies regionally in magnitude and sign.
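For readers unfamiliar with the Leith family of closures, here is a minimal sketch of a harmonic 2D Leith viscosity, which scales with the resolved vorticity-gradient magnitude and the cube of the grid scale (the QG variant substitutes the quasi-geostrophic potential vorticity gradient). The coefficient value and exact discretization are assumptions, not the implementation evaluated in the paper:

```python
import numpy as np

def leith_viscosity(zeta, dx, dy, c_leith=1.0):
    """2D Leith harmonic eddy viscosity, nu ~ (C * Delta / pi)^3 * |grad(zeta)|.
    zeta: relative (or QG potential) vorticity on a 2D grid [s-1];
    dx, dy: grid spacings [m]. Coefficient and form are illustrative only."""
    dzeta_dy, dzeta_dx = np.gradient(zeta, dy, dx)       # gradients along y (axis 0) and x (axis 1)
    grad_mag = np.hypot(dzeta_dx, dzeta_dy)
    delta = np.sqrt(dx * dy)                             # local grid scale
    return (c_leith * delta / np.pi) ** 3 * grad_mag     # m2 s-1

# Example on an idealized eddying vorticity field
x = np.linspace(0.0, 2.0 * np.pi, 128)
X, Y = np.meshgrid(x, x)
zeta = 1.0e-5 * np.sin(3.0 * X) * np.cos(2.0 * Y)
nu = leith_viscosity(zeta, dx=5.0e3, dy=5.0e3)
print(nu.max())
```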
NASA Astrophysics Data System (ADS)
Boone, Aaron; Samuelsson, Patrick; Gollvik, Stefan; Napoly, Adrien; Jarlan, Lionel; Brun, Eric; Decharme, Bertrand
2017-02-01
Land surface models (LSMs) are pushing towards improved realism owing to an increasing number of observations at the local scale, constantly improving satellite data sets and the associated methodologies to best exploit such data, improved computing resources, and in response to the user community. As a part of the trend in LSM development, there have been ongoing efforts to improve the representation of the land surface processes in the interactions between the soil-biosphere-atmosphere (ISBA) LSM within the EXternalized SURFace (SURFEX) model platform. The force-restore approach in ISBA has been replaced in recent years by multi-layer explicit physically based options for sub-surface heat transfer, soil hydrological processes, and the composite snowpack. The representation of vegetation processes in SURFEX has also become much more sophisticated in recent years, including photosynthesis and respiration and biochemical processes. It became clear that the conceptual limits of the composite soil-vegetation scheme within ISBA had been reached and there was a need to explicitly separate the canopy vegetation from the soil surface. In response to this issue, a collaboration began in 2008 between the high-resolution limited area model (HIRLAM) consortium and Météo-France with the intention to develop an explicit representation of the vegetation in ISBA under the SURFEX platform. A new parameterization has been developed called the ISBA multi-energy balance (MEB) in order to address these issues. ISBA-MEB consists in a fully implicit numerical coupling between a multi-layer physically based snowpack model, a variable-layer soil scheme, an explicit litter layer, a bulk vegetation scheme, and the atmosphere. It also includes a feature that permits a coupling transition of the snowpack from the canopy air to the free atmosphere. It shares many of the routines and physics parameterizations with the standard version of ISBA. This paper is the first of two parts; in part one, the ISBA-MEB model equations, numerical schemes, and theoretical background are presented. In part two (Napoly et al., 2016), which is a separate companion paper, a local scale evaluation of the new scheme is presented along with a detailed description of the new forest litter scheme.
Variability of the Arctic Basin Oceanographic Fields
1996-02-01
... the model a very sophisticated turbulence closure scheme. 9. Imitation of the CO2 doubling: We parameterized the "greenhouse" effect by changing the ... of the Arctic Ocean. A more realistic model of the Arctic Ocean circulation was obtained, and an estimation of the impact of the greenhouse effect on ... greenhouse effect is in freshening of the upper Arctic Basin. Although some shortcomings of the model still exist (an unrealistically high coefficient of ...
Operational Ocean Modelling with the Harvard Ocean Prediction System
2008-11-01
tno.nl, TNO report number TNO-DV2008 A417, November 2008. Author(s): dr. F.P.A. Lam, dr. ir. M.W. Schouten, dr. L.A. te Raa ... area of theory and implementation of numerical schemes and parameterizations, ocean models have grown from experimental tools to full-blown ocean ... sound propagation through mesoscale features using 3-D coupled mode theory, Thesis, Naval Postgraduate School, Monterey, USA, 1992. [9] Robinson
NASA Technical Reports Server (NTRS)
Bowling, Laura C.; Lettenmaier, Dennis P.; Nijssen, Bart; Polcher, Jan; Koster, Randal D.; Lohmann, Dag; Houser, Paul R. (Technical Monitor)
2002-01-01
The Project for Intercomparison of Land Surface Parameterization Schemes (PILPS) Phase 2(e) showed that in cold regions the annual runoff production in Land Surface Schemes (LSSs) is closely related to the maximum snow accumulation, which in turn is controlled in large part by winter sublimation. To help further explain the relationship between snow cover, turbulent exchanges and runoff production, a simple equivalent model (SEM) was devised to reproduce the seasonal and annual fluxes simulated by 13 LSSs that participated in PILPS Phase 2(e). The design of the SEM relates the annual partitioning of precipitation and energy in the LSSs to three primary parameters: snow albedo, effective aerodynamic resistance and evaporation efficiency. Isolation of each of the parameters showed that the annual runoff production was most sensitive to the aerodynamic resistance. The SEM was somewhat successful in reproducing the observed LSS response to a decrease in shortwave radiation and changes in wind speed forcings. SEM parameters derived from the reduced shortwave forcings suggested that increased winter stability suppressed turbulent heat fluxes over snow. Because winter sensible heat fluxes were largely negative, reductions in winter shortwave radiation imply an increase in annual average sensible heat.
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
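A minimal sketch of the central idea, parameterizing perturbations rather than the geometry itself, under simplifying assumptions: Gaussian influence functions stand in for the soft-object animation algorithms, and the same routine can displace CFD or finite element grid points because it is independent of grid topology:

```python
import numpy as np

def deform(points, control_points, weights, radius=0.2):
    """Displace grid points by a weighted sum of smooth basis functions centered on
    control points; the design variables are the perturbation weights, not the
    geometry. Gaussian kernels here are an illustrative stand-in for the
    soft-object animation algorithms used in the paper."""
    displaced = points.copy()
    for cp, w in zip(control_points, weights):
        dist2 = np.sum((points - cp) ** 2, axis=1)
        influence = np.exp(-dist2 / radius**2)        # smooth, localized influence
        displaced += influence[:, None] * w           # w is a 3-vector displacement
    return displaced

# The same routine can deform a CFD surface grid or an FE grid (any point cloud).
grid = np.random.rand(1000, 3)
cps = np.array([[0.25, 0.5, 0.5], [0.75, 0.5, 0.5]])   # hypothetical control points
dvs = np.array([[0.0, 0.0, 0.02], [0.0, 0.0, -0.01]])  # design-variable perturbations
new_grid = deform(grid, cps, dvs)
```

Because the displacement is linear in the design variables, the sensitivity of each grid point to each weight is simply its influence value, which is consistent with the easily computed analytical derivatives mentioned above.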
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Zhang, Feimin; Pu, Zhaoxia
2017-04-01
Accurate forecasting of the intensity changes of hurricanes is an important yet challenging problem in numerical weather prediction. The rapid intensification of Hurricane Katrina (2005) before its landfall in the southern US is studied with the Advanced Research version of the WRF (Weather Research and Forecasting) model. The sensitivity of numerical simulations to two popular planetary boundary layer (PBL) schemes, the Mellor-Yamada-Janjic (MYJ) and the Yonsei University (YSU) schemes, is investigated. It is found that, compared with the YSU simulation, the simulation with the MYJ scheme produces better track and intensity evolution, better vortex structure, and more accurate landfall time and location. Large discrepancies (e.g., over 10 hPa in simulated minimum sea level pressure) are found between the two simulations during the rapid intensification period. Further diagnosis indicates that stronger surface fluxes and vertical mixing in the PBL from the simulation with the MYJ scheme lead to enhanced air-sea interaction, which helps generate more realistic simulations of the rapid intensification process. Overall, the results from this study suggest that improved representation of surface fluxes and vertical mixing in the PBL is essential for accurate prediction of hurricane intensity changes.
A unified spectral parameterization for wave breaking: from the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.
2010-12-01
A new wave-breaking dissipation parameterization designed for spectral wave models is presented. It combines basic physical quantities of wave breaking, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first calculated in physical space before being distributed over the relevant spectral components. This parameterization allows a seamless numerical model from the deep ocean into the surf zone. This transition from deep to shallow water is made possible by a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength, and water depth. The parameterization is further tested in the WAVEWATCH III code, from the global ocean to the beach scale. Model errors are smaller than with most specialized deep- or shallow-water parameterizations.
NASA Astrophysics Data System (ADS)
Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.
2017-12-01
NOAA develops, operates, and maintains an operational global modeling capability for weather, subseasonal, and seasonal prediction for the protection of life and property and for fostering the US economy. In order to substantially improve the overall performance and accelerate advancement of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of the physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open evidence-based governance process for managing the development and evolution of the CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-to-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and the test platform. Additionally, an overview of potential new opportunities for physics developers to engage in the process, from implementing code for CCPP/IPD compliance to testing their development within an operational-like software environment, will be presented. In addition, insight will be given as to how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that numerical weather prediction (NWP) and regional-scale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol, and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting (WRF) model, and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator and examples is presented in this article.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Jiwen; Liu, Yi-Chin; Xu, Kuan-Man
2015-04-27
The ultimate goal of this study is to improve the representation of convective transport by cumulus parameterization for mesoscale and climate models. As Part I of the study, we perform extensive evaluations of cloud-resolving simulations of a squall line and mesoscale convective complexes in mid-latitude continental and tropical regions using the Weather Research and Forecasting (WRF) model with spectral-bin microphysics (SBM) and with two double-moment bulk microphysics schemes: a modified Morrison (MOR) and Milbrandt and Yau (MY2). Compared to observations, in general, SBM gives better simulations of precipitation, vertical velocity of convective cores, and the vertically decreasing trend of radar reflectivity than MOR and MY2, and therefore will be used for analysis of the scale-dependence of eddy transport in Part II. The common features of the simulations for all convective systems are (1) the model tends to overestimate convection intensity in the middle and upper troposphere, but SBM can alleviate much of the overestimation and reproduce the observed convection intensity well; (2) the model greatly overestimates radar reflectivity in convective cores (SBM predicts smaller radar reflectivity but does not remove the large overestimation); and (3) the model performs better for the mid-latitude convective systems than for the tropical system. The modeled mass fluxes of the mid-latitude systems are not sensitive to the microphysics schemes, but are very sensitive for the tropical case, indicating strong modification of convection by microphysics. Cloud microphysical measurements of rain, snow, and graupel in convective cores will be critically important to further elucidate issues within cloud microphysics schemes.
NASA Astrophysics Data System (ADS)
Skamarock, W. C.
2015-12-01
One of the major problems in atmospheric model applications is the representation of deep convection within the models; explicit simulation of deep convection on fine meshes performs much better than sub-grid parameterized deep convection on coarse meshes. Unfortunately, the high cost of explicit convective simulation has meant it has only been used to down-scale global simulations in weather prediction and regional climate applications, typically using traditional one-way interactive nesting technology. We have been performing real-time weather forecast tests using a global non-hydrostatic atmospheric model (the Model for Prediction Across Scales, MPAS) that employs a variable-resolution unstructured Voronoi horizontal mesh (nominally hexagons) to span hydrostatic to nonhydrostatic scales. The smoothly varying Voronoi mesh eliminates many downscaling problems encountered using traditional one- or two-way grid nesting. Our test weather forecasts cover two periods - the 2015 Spring Forecast Experiment conducted at the NOAA Storm Prediction Center during the month of May in which we used a 50-3 km mesh, and the PECAN field program examining nocturnal convection over the US during the months of June and July in which we used a 15-3 km mesh. An important aspect of this modeling system is that the model physics be scale-aware, particularly the deep convection parameterization. These MPAS simulations employ the Grell-Freitas scale-aware convection scheme. Our test forecasts show that the scheme produces a gradual transition in the deep convection, from the deep unstable convection being handled entirely by the convection scheme on the coarse mesh regions (dx > 15 km), to the deep convection being almost entirely explicit on the 3 km NA region of the meshes. We will present results illustrating the performance of critical aspects of the MPAS model in these tests.
NASA Astrophysics Data System (ADS)
D'Alessandro, John J.; Diao, Minghui; Wu, Chenglai; Liu, Xiaohong; Chen, Ming; Morrison, Hugh; Eidhammer, Trude; Jensen, Jorgen B.; Bansemer, Aaron; Zondlo, Mark A.; DiGangi, Josh P.
2017-03-01
Occurrence frequency and dynamical conditions of ice supersaturation (ISS, where relative humidity with respect to ice (RHi) > 100%) are examined in the upper troposphere around convective activity. Comparisons are conducted between in situ airborne observations and the Weather Research and Forecasting model simulations using four double-moment microphysical schemes at temperatures ≤ -40°C. All four schemes capture both clear-sky and in-cloud ISS conditions. However, the clear-sky (in-cloud) ISS conditions are completely (significantly) limited to the RHi thresholds of the Cooper parameterization. In all of the simulations, ISS occurrence frequencies are higher by 3-4 orders of magnitude at higher updraft speeds (>1 m s-1) than those at the lower updraft speeds when ice water content (IWC) > 0.01 g m-3, while observations show smaller differences up to 1-2 orders of magnitude. The simulated ISS also occurs less frequently at weaker updrafts and downdrafts than observed. These results indicate that the simulations have a greater dependence on stronger updrafts to maintain/generate ISS at higher IWC. At lower IWC (≤0.01 g m-3), simulations unexpectedly show lower ISS frequencies at stronger updrafts. Overall, the Thompson aerosol-aware scheme has the closest magnitudes and frequencies of ISS >20% to the observations, and the modified Morrison has the closest correlations between ISS frequencies and vertical velocity at higher IWC and number density. The Cooper parameterization often generates excessive ice crystals and therefore suppresses the frequency and magnitude of ISS, indicating that it should be initiated at higher ISS (e.g., ≥25%).
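For context on the Cooper thresholds mentioned above, here is a sketch of the commonly quoted Cooper (1986) ice-nucleation form, which diagnoses ice crystal number from temperature alone; the constants and the cap are as typically cited, and individual schemes apply their own limits:

```python
import numpy as np

def cooper_ice_number(temperature_k, cap_per_liter=500.0):
    """Cooper (1986) ice crystal number as commonly quoted:
    N = 0.005 * exp(0.304 * (273.15 - T)) per liter, usually capped.
    The cap value is an assumption here; schemes differ in how they limit it."""
    t_c = temperature_k - 273.15
    n_per_liter = 0.005 * np.exp(-0.304 * t_c)               # grows rapidly as T drops
    return np.minimum(n_per_liter, cap_per_liter) * 1.0e3    # L^-1 -> m^-3

for t_c in (-10.0, -25.0, -40.0):
    print(f"{t_c:6.1f} C  ->  {cooper_ice_number(t_c + 273.15):.3e} crystals m^-3")
```

Because this form activates large crystal numbers once temperature is low enough, vapor is rapidly depleted toward ice saturation, which is consistent with the simulated ISS being confined below the scheme's activation thresholds.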
Radiatively driven stratosphere-troposphere interactions near the tops of tropical cloud clusters
NASA Technical Reports Server (NTRS)
Churchill, Dean D.; Houze, Robert A., Jr.
1990-01-01
Results are presented of two numerical simulations of the mechanism involved in the dehydration of air, using the model of Churchill (1988) and Churchill and Houze (1990), which combines the water and ice physics parameterizations and IR and solar-radiation parameterization with a convective adjustment scheme in a kinematic, nondynamic framework. One simulation, a cirrus cloud simulation, was to test the Danielsen (1982) hypothesis of a dehydration mechanism for the stratosphere; the other was to simulate the mesoscale updraft in order to test an alternative mechanism for 'freeze-drying' the air. The results show that the physical processes simulated in the mesoscale updraft differ from those in the thin-cirrus simulation. While in the thin-cirrus case, eddy fluxes occur in response to IR radiative destabilization and, hence, no net transfer occurs between troposphere and stratosphere, the mesoscale updraft case has net upward mass transport into the lower stratosphere.
Stellar Atmospheric Parameterization Based on Deep Learning
NASA Astrophysics Data System (ADS)
Pan, R. Y.; Li, X. R.
2016-07-01
Deep learning is a typical learning method widely studied in machine learning, pattern recognition, and artificial intelligence. This work investigates the stellar atmospheric parameterization problem by constructing a deep neural network with five layers. The proposed scheme is evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS) and theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for the effective temperature (T_{eff}/K), 0.0058 for lg (T_{eff}/K), 0.1706 for surface gravity (lg (g/(cm\cdot s^{-2}))), and 0.1294 dex for metallicity ([Fe/H]), respectively; on the theoretical spectra, the MAEs are 15.34 for T_{eff}/K, 0.0011 for lg (T_{eff}/K), 0.0214 for lg (g/(cm\cdot s^{-2})), and 0.0121 dex for [Fe/H], respectively.
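A minimal sketch of a five-layer fully connected regression network of the general kind described; the layer widths, optimizer, and loss below are assumptions for illustration, not the architecture of the paper:

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes; the paper's actual architecture is not reproduced here.
n_pixels = 3000                            # length of an input spectrum
model = nn.Sequential(
    nn.Linear(n_pixels, 500), nn.ReLU(),
    nn.Linear(500, 200), nn.ReLU(),
    nn.Linear(200, 100), nn.ReLU(),
    nn.Linear(100, 50), nn.ReLU(),
    nn.Linear(50, 3),                      # outputs: Teff, log g, [Fe/H]
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                      # mean absolute error, matching the MAE metric

def train_step(spectra, labels):
    """One optimization step on a batch of (spectra, labels)."""
    optimizer.zero_grad()
    loss = loss_fn(model(spectra), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the expected shapes
spectra = torch.randn(64, n_pixels)
labels = torch.randn(64, 3)
print(train_step(spectra, labels))
```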
Large-eddy simulations of a Salt Lake Valley cold-air pool
NASA Astrophysics Data System (ADS)
Crosman, Erik T.; Horel, John D.
2017-09-01
Persistent cold-air pools are often poorly forecast by mesoscale numerical weather prediction models, in part due to inadequate parameterization of planetary boundary-layer physics in stable atmospheric conditions, and also because of errors in the initialization and treatment of the model surface state. In this study, an improved numerical simulation of the 27-30 January 2011 cold-air pool in Utah's Great Salt Lake Basin is obtained using a large-eddy simulation with more realistic surface state characterization. Compared to a Weather Research and Forecasting model configuration run as a mesoscale model with a planetary boundary-layer scheme where turbulence is highly parameterized, the large-eddy simulation more accurately captured turbulent interactions between the stable boundary-layer and flow aloft. The simulations were also found to be sensitive to variations in the Great Salt Lake temperature and Salt Lake Valley snow cover, illustrating the importance of land surface state in modelling cold-air pools.
NASA Astrophysics Data System (ADS)
Schneider, Tapio; Lan, Shiwei; Stuart, Andrew; Teixeira, João.
2017-12-01
Climate projections continue to be marred by large uncertainties, which originate in processes that need to be parameterized, such as clouds, convection, and ecosystems. But rapid progress is now within reach. New computational tools and methods from data assimilation and machine learning make it possible to integrate global observations and local high-resolution simulations in an Earth system model (ESM) that systematically learns from both and quantifies uncertainties. Here we propose a blueprint for such an ESM. We outline how parameterization schemes can learn from global observations and targeted high-resolution simulations, for example, of clouds and convection, through matching low-order statistics between ESMs, observations, and high-resolution simulations. We illustrate learning algorithms for ESMs with a simple dynamical system that shares characteristics of the climate system; and we discuss the opportunities the proposed framework presents and the challenges that remain to realize it.
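A toy illustration of the proposed idea, calibrating a parameterization parameter by matching a low-order statistic, using the Lorenz-96 system as the simple dynamical system; the grid-search calibration below is a deliberately crude stand-in for the data assimilation and machine learning methods the authors have in mind:

```python
import numpy as np

def lorenz96_mean(forcing, n=40, dt=0.005, steps=20000, seed=0):
    """Time-mean Lorenz-96 state for a given forcing F (forward-Euler stepping)."""
    rng = np.random.default_rng(seed)
    x = forcing + 0.1 * rng.standard_normal(n)
    samples = []
    for k in range(steps):
        dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
        x = x + dt * dx
        if k >= steps // 2:                  # discard spin-up
            samples.append(x.mean())
    return float(np.mean(samples))

# "Observations": a low-order statistic generated with a hidden true forcing
target = lorenz96_mean(8.0, seed=1)

# Calibrate F by matching the statistic (coarse grid search; the recovered value is approximate)
candidates = np.linspace(4.0, 12.0, 17)
errors = [(lorenz96_mean(f) - target) ** 2 for f in candidates]
print("recovered forcing is approximately", candidates[int(np.argmin(errors))])
```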
Passive tracking scheme for a single stationary observer
NASA Astrophysics Data System (ADS)
Chan, Y. T.; Rea, Terry
2001-08-01
While there are many techniques for Bearings-Only Tracking (BOT) in the ocean environment, they do not apply directly to the land situation. Generally, for tactical reasons, the land observer platform is stationary; but it has two sensors, visual and infrared, for measuring bearings, and a laser range finder (LRF) for measuring range. There is a requirement to develop a new BOT data fusion scheme that fuses the two sets of bearing readings and, together with a single LRF measurement, produces a unique track. This paper first develops a parameterized solution for the target speeds prior to the occurrence of the LRF measurement, when the problem is unobservable. At, and after, the LRF measurement, a BOT formulated as a least squares (LS) estimator then produces a unique LS estimate of the target states. Bearing readings from the other sensor serve as instrumental variables in a data fusion setting to eliminate the bias in the BOT estimator. The result is a recursive, unbiased, and decentralized data fusion scheme. Results from two simulation experiments have corroborated the theoretical development and show that the scheme is optimal.
Wind field near complex terrain using numerical weather prediction model
NASA Astrophysics Data System (ADS)
Chim, Kin-Sang
The PennState/NCAR MM5 model was modified to simulate idealized flow past a 3D obstacle in the Micro-Alpha Scale domain. The obstacles used were an idealized Gaussian obstacle and the real topography of Lantau Island, Hong Kong. The Froude numbers under study ranged from 0.22 to 1.5. Regime diagrams for both the idealized Gaussian obstacle and Lantau Island were constructed. This work is divided into five parts. The first part is the problem definition and the literature review of the related publications. The second part briefly discusses the PennState/NCAR MM5 model and includes a case study of long-range transport. The third part is devoted to the modification and the verification of the PennState/NCAR MM5 model on the Micro-Alpha Scale domain. The implementation of the Orlanski (1976) open boundary condition is included, with the method of single-sounding initialization of the model. Moreover, an upper dissipative layer, Klemp and Lilly (1978), is implemented in the model. The simulated result is verified against the Automatic Weather Station (AWS) data and the Wind Profiler data. Four different types of Planetary Boundary Layer (PBL) parameterization schemes have been investigated in order to find the most suitable one for the Micro-Alpha Scale domain in terms of both accuracy and efficiency. The bulk aerodynamic type of PBL parameterization scheme is found to be the most suitable. An investigation of the free-slip lower boundary condition is performed and the simulated result is compared with that with friction. The fourth part is the use of the modified PennState/NCAR MM5 model for an idealized flow simulation. The idealized uniform flow used is nonhydrostatic and has a constant Froude number. Sensitivity tests are performed by varying the Froude number, and the regime diagram is constructed. Moreover, the nondimensional drag is found to be useful for regime identification. The model result is also compared with the analytic results of Miles (1969) and Smith (1980, 1985), and the numerical results of Stein (1992), Miranda and James (1992), and Olaffson and Bougeault (1997). It is found that the simulated result in the present study is comparable with the others. The fifth part is the construction of the regime diagram for Lantau Island of Hong Kong. All eight major wind directions are discussed.
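A small sketch of the governing nondimensional parameter used to organize the regime diagrams, the Froude number Fr = U/(Nh), with a deliberately crude flow-around versus flow-over classification; the single threshold and the obstacle height are illustrative assumptions:

```python
def froude_number(wind_speed, brunt_vaisala, mountain_height):
    """Fr = U / (N * h) for stratified flow past an isolated obstacle."""
    return wind_speed / (brunt_vaisala * mountain_height)

def crude_regime(fr, threshold=1.0):
    """Rough two-way classification: low Fr favors blocking and flow splitting,
    high Fr favors flow over the obstacle with gravity waves. Real regime diagrams
    (including those in this thesis) are considerably richer than one threshold."""
    return "flow-around / blocked" if fr < threshold else "flow-over / wave"

U, N, h = 10.0, 0.01, 930.0   # m s-1, s-1, roughly the height of Lantau Peak (assumed)
fr = froude_number(U, N, h)
print(f"Fr = {fr:.2f}: {crude_regime(fr)}")
```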
Simulated effect of calcification feedback on atmospheric CO2 and ocean acidification
Zhang, Han; Cao, Long
2016-01-01
Ocean uptake of anthropogenic CO2 reduces the pH and the saturation state of calcium carbonate materials in seawater, which could reduce the calcification rate of some marine organisms, triggering a negative feedback on the growth of atmospheric CO2. We quantify the effect of this CO2-calcification feedback by conducting a series of Earth system model simulations that incorporate different parameterization schemes describing the dependence of calcification rate on the saturation state of CaCO3. In a scenario with SRES A2 CO2 emissions until 2100 and zero emissions afterwards, by year 3500, in the simulation without CO2-calcification feedback, the model projects an accumulated ocean CO2 uptake of 1462 PgC, atmospheric CO2 of 612 ppm, and surface pH of 7.9. Inclusion of CO2-calcification feedback increases ocean CO2 uptake by 9 to 285 PgC, reduces atmospheric CO2 by 4 to 70 ppm, and mitigates the reduction in surface pH by 0.003 to 0.06, depending on the form of the parameterization scheme used. It is also found that the effect of CO2-calcification feedback on ocean carbon uptake is comparable to, and could be much larger than, the effect from CO2-induced warming. Our results highlight the potentially important role CO2-calcification feedback plays in the ocean carbon cycle and in projections of future atmospheric CO2 concentrations. PMID:26838480
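An illustration of how different functional dependences of calcification on the saturation state (the key structural choice in the simulations above) translate into different feedback strengths; the linear and power-law forms and reference values are placeholders, not the parameterizations used in the paper:

```python
import numpy as np

def calcification_scaling(omega, form="power", exponent=2.0, omega_ref=3.0):
    """Relative calcification rate as a function of the CaCO3 saturation state Omega,
    normalized to 1 at a reference Omega. The two forms and all constants here are
    illustrative placeholders, not the schemes evaluated in the paper."""
    drive = np.clip(np.asarray(omega, dtype=float) - 1.0, 0.0, None)  # no calcification below saturation
    ref = omega_ref - 1.0
    if form == "linear":
        return drive / ref
    return (drive / ref) ** exponent

# A power-law response amplifies the calcification decline as Omega falls from 3 to 2
for form in ("linear", "power"):
    print(form, calcification_scaling(2.0, form))
```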
NASA Astrophysics Data System (ADS)
van der Ent, R.; Van Beek, R.; Sutanudjaja, E.; Wang-Erlandsson, L.; Hessels, T.; Bastiaanssen, W.; Bierkens, M. F.
2017-12-01
The storage and dynamics of water in the root zone control many important hydrological processes such as saturation excess overland flow, interflow, recharge, capillary rise, soil evaporation and transpiration. These processes are parameterized in hydrological models or land-surface schemes and the effect on runoff prediction can be large. Root zone parameters in global hydrological models are very uncertain as they cannot be measured directly at the scale on which these models operate. In this paper we calibrate the global hydrological model PCR-GLOBWB using a state-of-the-art ensemble of evaporation fields derived by solving the energy balance for satellite observations. We focus our calibration on the root zone parameters of PCR-GLOBWB and derive spatial patterns of maximum root zone storage. We find these patterns to correspond well with previous research. The parameterization of our model allows for the conversion of maximum root zone storage to root zone depth and we find that these correspond quite well to the point observations where available. We conclude that climate and soil type should be taken into account when regionalizing measured root depth for a certain vegetation type. We equally find that using evaporation rather than discharge better allows for local adjustment of root zone parameters within a basin and thus provides orthogonal data to diagnose and optimize hydrological models and land surface schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Minghua
1. Understanding of the observed variability of the ITCZ in the equatorial eastern Pacific. The annual mean precipitation in the eastern Pacific has a maximum zonal band north of the equator in the ITCZ where the maximum SST is located. During the boreal spring (referring to February, March, and April throughout the present paper), because of the accumulated solar radiation heating and oceanic heat transport, a secondary maximum of SST exists in the southeastern equatorial Pacific. Associated with this warm SST is also a seasonal transitional maximum of precipitation in the same region in boreal spring, exhibited as a weak double ITCZ pattern in the equatorial eastern Pacific. This climatological seasonal variation, however, varies greatly from year to year: a double ITCZ in the boreal spring occurs in some years but not in other years; when there is a single ITCZ, it can appear north of, south of, or at the equator. Understanding this observed variability is critical to finding the ultimate cause of the double ITCZ in climate models. Seasonal variation of the ITCZ south of the eastern equatorial Pacific: By analyzing data from satellites, field measurements and atmospheric reanalysis, we have found that in the region where the spurious ITCZ in models occurs, there is a "seasonal cloud transition", from stratocumulus to shallow cumulus and eventually to deep convection, in the South Equatorial Pacific (SEP) from September to April that is similar to the spatial cloud transition from the California coast to the equator. This seasonal transition is associated with increasing sea surface temperature (SST), decreasing lower tropospheric stability and large-scale subsidence. This finding of a seasonal cloud transition points to the same source of model errors in the ITCZ simulations as in the simulation of the stratocumulus-cumulus-deep convection transition. It provides a test for climate models to simulate the relationships between clouds and large-scale atmospheric fields in a region that features a spurious double Inter-tropical Convergence Zone (ITCZ) in most models. This work was recently published in Yu et al. (2016). Interannual variation of the ITCZ south of the eastern equatorial Pacific: By analyzing data from satellites, field measurements and atmospheric reanalysis, we have characterized the interannual variation of boreal spring precipitation in the eastern tropical Pacific and found the cause of the observed interannual variability. We have shown that the ITCZ in this region can occur as a single ITCZ along the Equator, a single ITCZ north of the Equator, a single ITCZ south of the Equator, or a double ITCZ on both sides of the Equator. We have found that convective instability only plays a secondary role in the ITCZ interannual variability. Instead, the remote impact of the Pacific basin-wide SST on the horizontal gradient of surface pressure and wind convergence is the primary driver of this interannual variability. Results point to the need to include moisture convergence in convection schemes to improve the simulation of precipitation in the eastern tropical Pacific. This result has been recently submitted for publication (Yu and Zhang 2016). 2. Improvement of model parameterizations to reduce the double ITCZ bias. We analyzed the current status of climate model performance in simulating precipitation in the equatorial Pacific. We have found that the double ITCZ bias has not been reduced in CMIP5 models relative to CMIP4 models.
We have characterized the dynamic structure of the common bias by using precipitation, sea surface temperature, surface winds, and sea level. Results are published in Zhang et al. (2015). Since cumulus convection plays a significant role in the double ITCZ behavior in models, we have used measurements from ARM and other sources to carry out a systematic analysis of the roles of shallow and deep convection in the CAM. We found that in both CAM4 and CAM5, when the intensity of deep convection decreases as a result of a parameterization change, the intensity of shallow convection increases, leading to very different changes in precipitation partitions but little change in the total precipitation. The different precipitation partitions, however, can manifest themselves in other measures of model performance, including temperature and humidity. This study points to the need to treat model physical parameterizations as an integrated system rather than as individual components. Results from this study are published in Wang and Zhang (2013). Since shallow convection interacts with the deep convection scheme and surface turbulence to trigger the double ITCZ, we studied methods to improve the shallow convection scheme in climate models. We investigated the bulk budgets of the vertical velocity and its parameterization in convective cores, convective updrafts, and clouds by using large-eddy simulation (LES) of four shallow convection cases, including one from ARM. We proposed optimal forms of the Simpson and Wiggert equation to calculate the vertical velocity in bulk mass flux convection schemes for convective cores, convective updrafts, and convective clouds as parameterization schemes. The new scheme is published in Wang and Zhang (2014). By using long-term radar-based ground measurements from ARM, we derived a scale-aware inhomogeneity parameterization of cloud liquid water in climate models. We found a relationship between the inhomogeneity parameter and the model grid size as well as atmospheric stability. This relationship is implemented in the CESM to describe the subgrid-scale cloud inhomogeneity. Relative to the default CESM with the finite-volume dynamic core at 2-degree resolution, the new parameterization leads to smaller cloud inhomogeneity and larger cloud liquid-water path in high latitudes, and the opposite effect in low latitudes, with a regional impact on the shortwave cloud radiative effect of up to 10 W/m2. This is due to both the smaller (larger) grid size in high (low) latitudes in the longitude-latitude grid setting of CESM and the more stable (unstable) atmosphere. This parameterization is expected to lead to more realistic simulation of tropical precipitation in high-resolution models. Results from this study are reported in Xie and Zhang (2015).
NASA Astrophysics Data System (ADS)
Alfieri, J. G.; Kustas, W. P.; Gao, F.; Nieto, H.; Prueger, J. H.; Hipps, L.
2017-12-01
Because the judicious application of water is key to ensuring berry quality, information regarding evapotranspiration (ET) is critical when making irrigation and other crop management decisions for vineyards. Increasingly, wine grape producers seek to use remote sensing-based models to monitor ET and inform management decisions. However, the parameterization schemes used by these models do not fully account for the effects of the highly structured canopy architecture on either the roughness characteristics of the vineyard or the turbulent transport and exchange within and above the vines. To investigate the effects of vineyard structure on the roughness length (zo) and displacement height (do) of vineyards, data collected from 2013 to 2016 as a part of the Grape Remote Sensing Atmospheric Profiling and Evapotranspiration Experiment (GRAPEX), an ongoing multi-agency field campaign conducted in the Central Valley of California, were used. Specifically, vertical profiles (2.5 m, 3.75 m, 5 m, and 8 m AGL) of wind velocity collected under near-neutral conditions were used to estimate do and zo and to characterize how these roughness parameters vary in response to changing environmental conditions. The roughness length was found to vary as a function of wind direction. It increased sigmoidally from a minimum near 0.15 m when the wind direction was parallel to the vine rows to a maximum between 0.3 m and 0.4 m when the winds were perpendicular to the rows. Similarly, do was found to respond strongly to changes in vegetation density as measured via leaf area index (LAI). Although the maximum varied from year to year, do increased rapidly after bud break in all cases and then remained constant for the remainder of the growing season. A comparison of the model output from the remote sensing-based two-source energy balance (TSEB) model using the standard roughness parameterization scheme and the empirical relationships derived from observations indicates that the modeled ET estimates decrease by 10% to 40%. These results not only demonstrate the unique effects of highly structured canopies on aerodynamic characteristics, they also provide well-behaved relationships that may be used to improve the accuracy of the model parameterization of do and zo, and thus the turbulent fluxes, including ET, within vineyards.
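As an illustration of the empirical roughness behavior reported above, here is a sketch of a sigmoidal zo transition between along-row and cross-row flow; the end points follow the quoted range, while the midpoint and steepness are assumptions rather than the GRAPEX fit:

```python
import numpy as np

def row_relative_angle(wind_dir_deg, row_azimuth_deg):
    """Acute angle (0-90 deg) between the wind direction and the vine-row axis."""
    diff = np.abs((wind_dir_deg - row_azimuth_deg + 180.0) % 360.0 - 180.0)
    return np.minimum(diff, 180.0 - diff)

def roughness_length(angle_deg, z0_parallel=0.15, z0_perpendicular=0.35, steepness=0.15):
    """Sigmoidal transition of z0 (m) between along-row and cross-row flow.
    End points follow the ranges quoted in the abstract; the 45-degree midpoint and
    the steepness are illustrative assumptions."""
    s = 1.0 / (1.0 + np.exp(-steepness * (angle_deg - 45.0)))
    return z0_parallel + (z0_perpendicular - z0_parallel) * s

for wind_dir in (0.0, 30.0, 60.0, 90.0):
    print(wind_dir, round(roughness_length(row_relative_angle(wind_dir, 0.0)), 3))
```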
A test harness for accelerating physics parameterization advancements into operations
NASA Astrophysics Data System (ADS)
Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.
2017-12-01
The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.
Remote sensing technology research and instrumentation platform design
NASA Technical Reports Server (NTRS)
1992-01-01
An instrumented pallet concept and definition of an aircraft with performance and payload capability to meet NASA's airborne turbulent flux measurement needs for advanced multiple global climate research and field experiments is presented. The report addresses airborne measurement requirements for general circulation model sub-scale parameterization research, specifies instrumentation capable of making these measurements, and describes a preliminary support pallet design. Also, a review of aircraft types and a recommendation of a manned and an unmanned aircraft capable of meeting flux parameterization research needs is given.
An economical state-dependent telecloning for a multiparticle GHZ state
NASA Astrophysics Data System (ADS)
Meng, Fan-Xu; Yu, Xu-Tao; Zhang, Zai-Chen
2018-03-01
The scheme for a 1-3 economical state-dependent telecloning of a multiparticle GHZ state is proposed. It shows that each of three spatially separated receivers obtains one copy that depends on the original state. The fidelity can reach the optimal value of 5/6. Meanwhile, we also propose a 1-3 asymmetric economical telecloning of a particular multiparticle GHZ state by parameterizing the coefficients of the state in the channel. The three fidelities can reach the best match, which is the same as in the symmetric case. Furthermore, the above two schemes can be generalized to the case of 1-M (M = 2k+1, k > 0) telecloning of a multiparticle GHZ state. When certain conditions are satisfied, optimal fidelities of 1/2 + (M+1)/(4M) can be obtained. Since no ancilla is required in the channel, the number of entangled particles is smaller than in existing schemes, and the fidelities can be optimal if the original state is an equatorial state.
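The quoted optimal fidelity for the generalized 1-to-M case can be checked directly; F(M) = 1/2 + (M+1)/(4M) indeed gives 5/6 for M = 3:

```python
from fractions import Fraction

def optimal_fidelity(m):
    """F(M) = 1/2 + (M+1)/(4M) for 1 -> M economical telecloning (M = 2k+1)."""
    return Fraction(1, 2) + Fraction(m + 1, 4 * m)

for k in range(1, 5):
    m = 2 * k + 1
    print(f"M = {m}: F = {optimal_fidelity(m)} = {float(optimal_fidelity(m)):.4f}")
```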
Parameterization of bulk condensation in numerical cloud models
NASA Technical Reports Server (NTRS)
Kogan, Yefim L.; Martin, William J.
1994-01-01
The accuracy of the moist saturation adjustment scheme has been evaluated using a three-dimensional explicit microphysical cloud model. It was found that the error in saturation adjustment depends strongly on the cloud condensation nuclei (CCN) concentration in the ambient atmosphere. The scheme provides rather accurate results in the case where a sufficiently large number of CCN (on the order of several hundred per cubic centimeter) is available. However, under conditions typical of marine stratocumulus cloud layers with low CCN concentration, the error in the amounts of condensed water vapor and released latent heat may be as large as 40%-50%. A revision of the saturation adjustment scheme is devised that employs the CCN concentration, dynamical supersaturation, and cloud water content as additional variables in the calculation of the condensation rate. The revised condensation model reduced the error in maximum updraft and cloud water content in the climatically significant case of marine stratocumulus cloud layers by an order of magnitude.
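For reference, a minimal sketch of the classical moist saturation adjustment being evaluated: excess vapor above saturation is condensed isobarically and the latent heating is fed back into temperature until equilibrium. Constants and the Tetens saturation formula are textbook values; the revised CCN-dependent scheme proposed in the paper is not reproduced here:

```python
import math

LV, CP, RV, EPS = 2.5e6, 1004.0, 461.5, 0.622   # J kg-1, J kg-1 K-1, J kg-1 K-1, Rd/Rv

def q_sat(t_k, p_pa):
    """Saturation mixing ratio over liquid water (Tetens formula)."""
    es = 610.78 * math.exp(17.27 * (t_k - 273.15) / (t_k - 35.86))   # Pa
    return EPS * es / (p_pa - es)

def saturation_adjustment(t_k, qv, qc, p_pa, iterations=5):
    """Condense (or evaporate) just enough water that the parcel ends up saturated
    (or cloud-free and subsaturated), feeding the latent heating back into T."""
    for _ in range(iterations):
        qs = q_sat(t_k, p_pa)
        dqs_dt = qs * LV / (RV * t_k**2)          # Clausius-Clapeyron approximation
        dq = (qv - qs) / (1.0 + (LV / CP) * dqs_dt)
        dq = max(dq, -qc)                         # cannot evaporate more cloud than exists
        qv, qc, t_k = qv - dq, qc + dq, t_k + (LV / CP) * dq
    return t_k, qv, qc

print(saturation_adjustment(t_k=285.0, qv=0.012, qc=0.0, p_pa=9.0e4))
```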
A scheme for computing surface layer turbulent fluxes from mean flow surface observations
NASA Technical Reports Server (NTRS)
Hoffert, M. I.; Storch, J.
1978-01-01
A physical model and computational scheme are developed for generating turbulent surface stress, sensible heat flux and humidity flux from mean velocity, temperature and humidity at some fixed height in the atmospheric surface layer, where conditions at this reference level are presumed known from observations or the evolving state of a numerical atmospheric circulation model. The method is based on coupling the Monin-Obukhov surface layer similarity profiles, which include buoyant stability effects on mean velocity, temperature and humidity, to a force-restore formulation for the evolution of surface soil temperature to yield the local values of shear stress, heat flux and surface temperature. A self-contained formulation is presented, including parameterizations for solar and infrared radiant fluxes at the surface. Additional parameters needed to implement the scheme are the thermal heat capacity of the soil per unit surface area, surface aerodynamic roughness, latitude, solar declination, surface albedo, surface emissivity and atmospheric transmissivity to solar radiation.
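A hedged sketch of the surface-layer part of such a scheme: given wind and temperature at a reference height and the surface temperature, iterate on the Obukhov length with Businger-Dyer stability functions to obtain the friction velocity and sensible heat flux. Constants are standard textbook values, and the radiative and force-restore soil components of the full scheme are omitted:

```python
import math

KARMAN, GRAV, CP, RHO = 0.4, 9.81, 1004.0, 1.2

def psi_m(zeta):
    """Integrated Businger-Dyer stability correction for momentum."""
    if zeta >= 0.0:                              # stable side
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25              # unstable side
    return (2.0 * math.log((1.0 + x) / 2.0) + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def psi_h(zeta):
    """Integrated Businger-Dyer stability correction for heat."""
    if zeta >= 0.0:
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25
    return 2.0 * math.log((1.0 + x * x) / 2.0)

def surface_fluxes(u_ref, t_ref, t_sfc, z_ref=10.0, z0=0.1, iterations=20):
    """Friction velocity (m s-1) and sensible heat flux (W m-2) from mean wind and
    temperature at z_ref plus the surface temperature, iterating on the Obukhov length."""
    L = 1.0e6                                    # start from near-neutral
    ustar = tstar = 0.0
    for _ in range(iterations):
        ustar = KARMAN * u_ref / (math.log(z_ref / z0) - psi_m(z_ref / L) + psi_m(z0 / L))
        tstar = KARMAN * (t_ref - t_sfc) / (math.log(z_ref / z0) - psi_h(z_ref / L) + psi_h(z0 / L))
        L = ustar**2 * t_ref / (KARMAN * GRAV * tstar) if tstar != 0.0 else 1.0e6
    return ustar, -RHO * CP * ustar * tstar      # upward flux positive when the surface is warmer

print(surface_fluxes(u_ref=5.0, t_ref=293.0, t_sfc=298.0))
```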
Enhanced representation of soil NO emissions in the ...
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
Explicit simulation of a midlatitude Mesoscale Convective System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, G.D.; Cotton, W.R.
1996-04-01
We have explicitly simulated the mesoscale convective system (MCS) observed on 23-24 June 1985 during PRE-STORM, the Preliminary Regional Experiment for the Stormscale Operational and Research Meteorology Program. Stensrud and Maddox (1988), Johnson and Bartels (1992), and Bernstein and Johnson (1994) are among the researchers who have investigated various aspects of this MCS event. We have performed this MCS simulation (and a similar one of a tropical MCS; Alexander and Cotton 1994) in the spirit of the Global Energy and Water Cycle Experiment Cloud Systems Study (GCSS), in which cloud-resolving models are used to assist in the formulation and testing of cloud parameterization schemes for larger-scale models. In this paper, we describe (1) the nature of our 23-24 June MCS simulation and (2) our efforts to date in using our explicit MCS simulations to assist in the development of a GCM parameterization for mesoscale flow branches. The paper is organized as follows. First, we discuss the synoptic situation surrounding the 23-24 June PRE-STORM MCS, followed by a discussion of the model setup and the results of our simulation. We then discuss the use of our MCS simulations in developing a GCM parameterization for mesoscale flow branches and summarize our results.
Selection and parameterization of cortical neurons for neuroprosthetic control.
Wahnoun, Remy; He, Jiping; Helms Tillery, Stephen I
2006-06-01
When designing neuroprosthetic interfaces for motor function, it is crucial to have a system that can extract reliable information from available neural signals and produce an output suitable for real life applications. Systems designed to date have relied on establishing a relationship between neural discharge patterns in motor cortical areas and limb movement, an approach not suitable for patients who require such implants but who are unable to provide proper motor behavior to initially tune the system. We describe here a method that allows rapid tuning of a population vector-based system for neural control without arm movements. We trained highly motivated primates to observe a 3D center-out task as the computer played it very slowly. Based on only 10-12 s of neuronal activity observed in M1 and PMd, we generated an initial mapping between neural activity and device motion that the animal could successfully use for neuroprosthetic control. Subsequent tunings of the parameters led to improvements in control, but the initial selection of neurons and estimated preferred direction for those cells remained stable throughout the remainder of the day. Using this system, we have observed that the contribution of individual neurons to the overall control of the system is very heterogeneous. We thus derived a novel measure of unit quality and an indexing scheme that allowed us to rate each neuron's contribution to the overall control. In offline tests, we found that fewer than half of the units made positive contributions to the performance. We tested this experimentally by having the animals control the neuroprosthetic system using only the 20 best neurons. We found that performance in this case was better than when the entire set of available neurons was used. Based on these results, we believe that, with careful task design, it is feasible to parameterize control systems without any overt behaviors and that subsequent control system design will be enhanced with cautious unit selection. These improvements can lead to systems demanding lower bandwidth and computational power, and will pave the way for more feasible clinical systems.
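To illustrate the population-vector approach referred to above, here is a toy sketch in which cosine-tuning parameters are fitted by least squares from a short period of observed firing rates and then used to decode a movement direction. The synthetic data, function names and the normalization by modulation depth are simplifying assumptions, not the authors' exact selection or indexing scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_preferred_directions(rates, directions):
    """Least-squares cosine-tuning fit: rate ~ b0 + b . d for each unit.

    rates:      (n_trials, n_units) observed firing rates
    directions: (n_trials, 3) unit-length target directions
    Returns preferred directions (n_units, 3), modulation depths, and baselines.
    """
    X = np.hstack([np.ones((directions.shape[0], 1)), directions])
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)   # rows: [baseline, bx, by, bz]
    baseline = coef[0]
    b = coef[1:].T                                      # (n_units, 3)
    depth = np.linalg.norm(b, axis=1)
    return b / depth[:, None], depth, baseline

def population_vector(rates, pref_dirs, baseline, depth):
    """Weighted sum of preferred directions gives the decoded movement direction."""
    w = (rates - baseline) / depth
    v = w @ pref_dirs
    return v / np.linalg.norm(v)

# Synthetic demo: 40 cosine-tuned units observed over 60 "observation" trials
n_units, n_trials = 40, 60
true_pd = rng.normal(size=(n_units, 3))
true_pd /= np.linalg.norm(true_pd, axis=1, keepdims=True)
dirs = rng.normal(size=(n_trials, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rates = 10.0 + 5.0 * dirs @ true_pd.T + rng.normal(scale=1.0, size=(n_trials, n_units))

pd_hat, depth, base = fit_preferred_directions(rates, dirs)
decoded = population_vector(rates[0], pd_hat, base, depth)
print(decoded, dirs[0])   # decoded direction vs. the actual first-trial direction
```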
NASA Technical Reports Server (NTRS)
Beagley, Stephen R.; Degrandpre, Jean; Mcconnell, John C.; Laprise, Rene; Mcfarlane, Norman
1994-01-01
The Canadian Climate Center (CCC) GCM has been modified to allow its use for studies in atmospheric chemistry. The initial experiments reported here have been run to test and allow sensitivity studies of the new transport module. The impact of different types of parameterization of convective mixing has been studied based on the large-scale evolution of Rn-222 and Pb-210. Preliminary results have shown that the use of a scheme which mixes unstable columns over a very short time scale produces a global distribution of lead that agrees in some aspects with observations. The local impact of different mixing schemes on a short-lived tracer like radon is very important.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.
Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R2), and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential.
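For reference, a minimal implementation of the Nash-Sutcliffe efficiency used above as the goodness-of-fit criterion; the example numbers are purely illustrative and are not the study's data.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means the model is no better than the observed mean."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Example with annualized loads (illustrative numbers only)
observed = [12.0, 30.0, 8.0, 22.0, 15.0]
simulated = [10.0, 27.0, 11.0, 25.0, 14.0]
print(round(nash_sutcliffe(observed, simulated), 2))
```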
NASA Astrophysics Data System (ADS)
Garratt, J. R.
1993-03-01
Aspects of the land-surface and boundary-layer treatments in some 20 or so atmospheric general circulation models (GCMs) are summarized. In only a small fraction of these have significant sensitivity studies been carried out and published. Predominantly, the sensitivity studies focus upon the parameterization of land-surface processes and the specification of land-surface properties, the most important of which include albedo, roughness length, soil moisture status, and vegetation density. The impacts of surface albedo and soil moisture upon the climate simulated in GCMs with bare-soil land surfaces are well known. Continental evaporation and precipitation tend to decrease with increased albedo and decreased soil moisture availability. For example, results from numerous studies give an average decrease in continental precipitation of 1 mm day-1 in response to an average albedo increase of 0.13. Few conclusive studies have been carried out on the impact of a gross roughness-length change; the primary study included an important statistical assessment of the impact upon the mean July climate around the globe of a decreased continental roughness (by three orders of magnitude). For example, such a decrease reduced the precipitation over Amazonia by 1 to 2 mm day-1. The inclusion of a canopy scheme in a GCM ensures the combined impacts of roughness (canopies tend to be rougher than bare soil), albedo (canopies tend to be less reflective than bare soil), and soil-moisture availability (canopies prevent the near-surface soil region from drying out and can access the deep soil moisture) upon the simulated climate. The most revealing studies to date involve the regional impact of Amazonian deforestation. The results of four such studies show that replacing tropical forest with a degraded pasture results in decreased evaporation (about 1 mm day-1) and precipitation (1-2 mm day-1), and increased near-surface air temperatures (2 K). Sensitivity studies as a whole suggest the need for a realistic surface representation in general circulation models of the atmosphere. It is not yet clear how detailed this representation needs to be, but even allowing for the importance of surface processes, the parameterization of boundary-layer and convective clouds probably represents a greater challenge to improved climate simulations. This is illustrated in the case of surface net radiation for Amazonia, which is not well simulated and tends to be overestimated, leading to evaporation rates that are too large. Underestimates in cloudiness, cloud albedo, and clear-sky shortwave absorption, rather than in surface albedo, appear to be the main culprits. There are three major tasks that confront the researcher so far as the development and validation of atmospheric boundary-layer (ABL) and surface schemes in GCMs are concerned: (i) There is a need to assess critically the impact of 'improved' parameterization schemes on GCM simulations, taking into account the problem of natural variability and hence the statistical significance of the induced changes. (ii) There is a need to compare GCM simulations of surface and ABL behavior (particularly regarding the diurnal cycle of surface fluxes, air temperature, and ABL depth) with observations over a range of surface types (vegetation, desert, ocean). In this context, area-average values of surface fluxes will be required to calibrate directly the ABL/land-surface scheme in the GCM. (iii) There is a need for intercomparisons of ABL and land-surface schemes used in GCMs, both for one-dimensional stand-alone models and for GCMs that incorporate the respective schemes.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
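A schematic of the online correction idea described above: the time-mean analysis increment over the assimilation window provides an estimated bias tendency, which is then added as a forcing term to the model tendency. Array shapes, variable names and the toy numbers are placeholders, not the GFS implementation.

```python
import numpy as np

def estimate_bias_tendency(increments, window_hours=6.0):
    """Time-mean analysis increment divided by the assimilation window length,
    assuming the initial model error grows roughly linearly over the window."""
    return np.mean(increments, axis=0) / (window_hours * 3600.0)  # units per second

def corrected_tendency(physics_tendency, bias_tendency):
    """Add the empirical correction as a forcing term in the model tendency equation."""
    return physics_tendency + bias_tendency

# Toy example: 120 archived 6-h temperature increments on a small grid
rng = np.random.default_rng(1)
increments = 0.2 + 0.05 * rng.normal(size=(120, 10, 10))   # [K per 6 h]
bias_tend = estimate_bias_tendency(increments)

T_tendency_model = rng.normal(scale=1e-5, size=(10, 10))   # [K/s] from physics/dynamics
T_tendency_total = corrected_tendency(T_tendency_model, bias_tend)
print(bias_tend.mean(), T_tendency_total.mean())
```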
Hierarchical Object Recognition Using Libraries of Parameterized Model Sub-Parts.
1987-06-01
Sketch; Structure Hierarchy; Constrained Search. This thesis describes the... these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to...with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape
2012-07-06
layer affected by ground interference. Using this approach for measurements acquired over the Salinas Valley , we showed that additional range gates...demonstrated the benefits of the two-step approach using measurements acquired over the Salinas Valley in central California. The additional range gates...four hours of data between the surface and 3000 m MSL along a 40 km segment of the Salinas Valley during this day. The airborne lidar measurements
Stochastic Models for Precipitable Water in Convection
NASA Astrophysics Data System (ADS)
Leung, Kimberly
Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility made the first systematic and high-resolution observations of PWV at Darwin, Australia since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. This important observed data at high resolution can capture the fractal behavior of PWV and enables stochastic exploration into the next generation of climate models which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends. Results indicate that simulation of a highly volatile time series of observed PWV is certainly achievable and has potential to improve prediction capabilities in climate modeling. Further, this study demonstrates the feasibility of adding a mathematics- and statistics-based stochastic scheme to an existing deterministic parameterization to simulate observed fractal behavior.
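A minimal Euler-Maruyama sketch of a stochastic water-balance equation for PWV of the kind described above, with a constant moisture source, a linear sink and additive noise. The drift and noise terms, parameter values and function name are illustrative assumptions, not the thesis's calibrated point model.

```python
import numpy as np

def simulate_pwv(days=25, dt=20.0, w0=40.0, source=55.0, tau=2.0, sigma=1.5, seed=0):
    """Euler-Maruyama integration of dW = (S - W/tau) dt + sigma dB.

    W     : precipitable water vapor [mm]
    S     : moisture source [mm/day]
    tau   : relaxation (sink) time scale [days]
    sigma : noise amplitude [mm/day^0.5]
    dt    : time step [s] (20-s resolution, matching the ARM observations)
    """
    rng = np.random.default_rng(seed)
    dt_days = dt / 86400.0
    n = int(days / dt_days)
    w = np.empty(n)
    w[0] = w0
    for i in range(1, n):
        drift = source - w[i - 1] / tau
        w[i] = w[i - 1] + drift * dt_days + sigma * np.sqrt(dt_days) * rng.normal()
        w[i] = max(w[i], 0.0)          # PWV cannot be negative
    return w

series = simulate_pwv()
print(series.mean(), series.std(), series.size)
```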
Structural test of the parameterized-backbone method for protein design.
Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom
2004-09-03
Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.
Knowledge-based system for detailed blade design of turbines
NASA Astrophysics Data System (ADS)
Goel, Sanjay; Lamson, Scott
1994-03-01
A design optimization methodology that couples optimization techniques to CFD analysis for design of airfoils is presented. This technique optimizes 2D airfoil sections of a blade by minimizing the deviation of the actual Mach number distribution on the blade surface from a smooth fit of the distribution. The airfoil is not reverse engineered by specification of a precise distribution of the desired Mach number plot, only general desired characteristics of the distribution are specified for the design. Since the Mach number distribution is very complex, and cannot be conveniently represented by a single polynomial, it is partitioned into segments, each of which is characterized by a different order polynomial. The sum of the deviation of all the segments is minimized during optimization. To make intelligent changes to the airfoil geometry, it needs to be associated with features observed in the Mach number distribution. Associating the geometry parameters with independent features of the distribution is a fairly complex task. Also, for different optimization techniques to work efficiently the airfoil geometry needs to be parameterized into independent parameters, with enough degrees of freedom for adequate geometry manipulation. A high-pressure, low reaction steam turbine blade section was optimized using this methodology. The Mach number distribution was partitioned into pressure and suction surfaces and the suction surface distribution was further subdivided into leading edge, mid section and trailing edge sections. Two different airfoil representation schemes were used for defining the design variables of the optimization problem. The optimization was performed by using a combination of heuristic search and numerical optimization. The optimization results for the two schemes are discussed in the paper. The results are also compared to a manual design improvement study conducted independently by an experienced airfoil designer. The turbine blade optimization system (TBOS) is developed using the described methodology of coupling knowledge engineering with multiple search techniques for blade shape optimization. TBOS removes a major bottleneck in the design cycle by performing multiple design optimizations in parallel, and improves design quality at the same time. TBOS not only improves the design but also the designers' quality of work by taking the mundane repetitive task of design iterations away and leaving them more time for innovative design.
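The objective described above, the summed deviation of the surface Mach number distribution from smooth per-segment polynomial fits, can be sketched compactly as follows; the segment boundaries, polynomial orders and the synthetic distribution are illustrative assumptions, not the TBOS implementation.

```python
import numpy as np

def segment_deviation(s, mach, segments):
    """Sum of squared deviations between the Mach distribution and smooth per-segment fits.

    s        : arc-length coordinate along the blade surface
    mach     : surface Mach number at each s
    segments : list of (start_index, end_index, polynomial_order)
    """
    total = 0.0
    for i0, i1, order in segments:
        coeffs = np.polyfit(s[i0:i1], mach[i0:i1], order)   # smooth fit for this segment
        fit = np.polyval(coeffs, s[i0:i1])
        total += np.sum((mach[i0:i1] - fit) ** 2)
    return total

# Toy surface distribution: suction side split into leading edge, mid section, trailing edge
s = np.linspace(0.0, 1.0, 120)
mach = 0.3 + 0.9 * np.sin(np.pi * s) + 0.02 * np.random.default_rng(2).normal(size=s.size)
segments = [(0, 30, 3), (30, 90, 4), (90, 120, 2)]
print(segment_deviation(s, mach, segments))   # quantity an optimizer would minimize
```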
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2017-04-01
In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10.1175/JCLI-D-15-0746.1
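A minimal sketch of injecting zero-mean, temporally correlated (AR(1)) noise into an ocean temperature tendency, with the noise amplitude and decorrelation time as the tunable parameters varied in the experiments above. The class name, the grid and the numerical values are assumptions for illustration; the spatial structure of the eddy statistics diagnosed from the high-resolution model is not represented.

```python
import numpy as np

class EddyNoise:
    """Zero-mean AR(1) noise with prescribed amplitude and decorrelation time."""

    def __init__(self, shape, amplitude, decorrelation_steps, seed=0):
        self.rng = np.random.default_rng(seed)
        self.alpha = np.exp(-1.0 / decorrelation_steps)   # lag-1 autocorrelation
        self.amplitude = amplitude
        self.state = np.zeros(shape)

    def step(self):
        innovation = self.rng.normal(size=self.state.shape)
        # Unit-variance AR(1) update, then scaled to the desired amplitude
        self.state = (self.alpha * self.state
                      + np.sqrt(1.0 - self.alpha**2) * innovation)
        return self.amplitude * self.state                 # [K/s], zero mean

# Usage: perturb a (lat, lon) temperature tendency each time step
noise = EddyNoise(shape=(90, 180), amplitude=2.0e-7, decorrelation_steps=30)
dT_dt_resolved = np.zeros((90, 180))            # tendency from the resolved dynamics
for _ in range(10):
    dT_dt_total = dT_dt_resolved + noise.step()
print(dT_dt_total.mean(), dT_dt_total.std())
```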
Numerical Simulations of a Multiscale Model of Stratified Langmuir Circulation
NASA Astrophysics Data System (ADS)
Malecha, Ziemowit; Chini, Gregory; Julien, Keith
2012-11-01
Langmuir circulation (LC), a prominent form of wind and surface-wave driven shear turbulence in the ocean surface boundary layer (BL), is commonly modeled using the Craik-Leibovich (CL) equations, a phase-averaged variant of the Navier-Stokes (NS) equations. Although surface-wave filtering renders the CL equations more amenable to simulation than are the instantaneous NS equations, simulations in wide domains, hundreds of times the BL depth, currently earn the "grand challenge" designation. To facilitate simulations of LC in such spatially-extended domains, we have derived multiscale CL equations by exploiting the scale separation between submesoscale and BL flows in the upper ocean. The numerical algorithm for simulating this multiscale model resembles super-parameterization schemes used in meteorology, but retains a firm mathematical basis. We have validated our algorithm and here use it to perform multiscale simulations of the interaction between LC and upper ocean density stratification. ZMM, GPC, KJ gratefully acknowledge funding from NSF CMG Award 0934827.
NASA Technical Reports Server (NTRS)
Branscome, Lee E.; Bleck, Rainer; Obrien, Enda
1990-01-01
The project objectives are to develop process models to investigate the interaction of planetary and synoptic-scale waves including the effects of latent heat release (precipitation), nonlinear dynamics, physical and boundary-layer processes, and large-scale topography; to determine the importance of latent heat release for temporal variability and time-mean behavior of planetary and synoptic-scale waves; to compare the model results with available observations of planetary and synoptic wave variability; and to assess the implications of the results for monitoring precipitation in oceanic-storm tracks by satellite observing systems. Researchers have utilized two different models for this project: a two-level quasi-geostrophic model to study intraseasonal variability, anomalous circulations and the seasonal cycle, and a 10-level, multi-wave primitive equation model to validate the two-level Q-G model and examine effects of convection, surface processes, and spherical geometry. It explicitly resolves several planetary and synoptic waves and includes specific humidity (as a predicted variable), moist convection, and large-scale precipitation. In the past year researchers have concentrated on experiments with the multi-level primitive equation model. The dynamical part of that model is similar to the spectral model used by the National Meteorological Center for medium-range forecasts. The model includes parameterizations of large-scale condensation and moist convection. To test the validity of results regarding the influence of convective precipitation, researchers can use either one of two different convective schemes in the model, a Kuo convective scheme or a modified Arakawa-Schubert scheme which includes downdrafts. By choosing one or the other scheme, they can evaluate the impact of the convective parameterization on the circulation. In the past year researchers performed a variety of initial-value experiments with the primitive-equation model. Using initial conditions typical of climatological winter conditions, they examined the behavior of synoptic and planetary waves growing in moist and dry environments. Surface conditions were representative of a zonally averaged ocean. They found that moist convection associated with baroclinic wave development was confined to the subtropics.
A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone
NASA Astrophysics Data System (ADS)
Filipot, J.-F.; Ardhuin, F.
2012-11-01
A new wave-breaking dissipation parameterization designed for phase-averaged spectral wave models is presented. It combines wave breaking basic physical quantities, namely, the breaking probability and the dissipation rate per unit area. The energy lost by waves is first explicitly calculated in physical space before being distributed over the relevant spectral components. The transition from deep to shallow water is made possible by using a dissipation rate per unit area of breaking waves that varies with the wave height, wavelength and water depth. This parameterization is implemented in the WAVEWATCH III modeling framework, which is applied to a wide range of conditions and scales, from the global ocean to the beach scale. Wave height, peak and mean periods, and spectral data are validated using in situ and remote sensing data. Model errors are comparable to those of other specialized deep or shallow water parameterizations. This work shows that it is possible to have a seamless parameterization from the deep ocean to the surf zone.
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
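A minimal illustration of the parent-Gaussian idea described above: a correlated standard-normal AR(1) series is mapped through its CDF and the inverse CDF of a target marginal (here a gamma distribution). The paper's parametric correlation transformation functions, which adjust the parent correlation so that the target correlation structure is matched exactly, are not reproduced; the marginal, the lag-1 correlation and the function names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def gaussian_ar1(n, rho, seed=0):
    """Standard-normal AR(1) 'parent' process with lag-1 correlation rho."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.normal()
    for i in range(1, n):
        z[i] = rho * z[i - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return z

def transform_to_target(z, marginal):
    """Map the parent Gaussian through Phi and the target inverse CDF."""
    u = stats.norm.cdf(z)
    return marginal.ppf(u)

# Example: a skewed, positively correlated "discharge-like" series
parent = gaussian_ar1(n=10_000, rho=0.8)
target = transform_to_target(parent, stats.gamma(a=2.0, scale=3.0))
print(target.mean(), np.corrcoef(target[:-1], target[1:])[0, 1])
```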
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, David L.
It is well known that cirrus clouds play a major role in regulating the earth's climate, but the details of how this works are just beginning to be understood. This project targeted the main property of cirrus clouds that influences climate processes: the ice fall speed. That is, this project improves the representation of the mass-weighted ice particle fall velocity, Vm, in climate models used to predict future climate on global and regional scales. Prior to 2007, the dominant sizes of ice particles in cirrus clouds were poorly understood, making it virtually impossible to predict how cirrus clouds interact with sunlight and thermal radiation. Due to several studies investigating the performance of optical probes used to measure the ice particle size distribution (PSD), as well as the remote sensing results from our last ARM project, it is now well established that the anomalously high concentrations of small ice crystals often reported prior to 2007 were measurement artifacts. Advances in the design and data processing of optical probes have greatly reduced these ice artifacts that resulted from the shattering of ice particles on the probe tips and/or inlet tube, and PSD measurements from one of these improved probes (the 2-dimensional Stereo or 2D-S probe) are utilized in this project to parameterize Vm for climate models. Our original plan in the proposal was to parameterize the ice PSD (in terms of temperature and ice water content) and ice particle mass and projected area (in terms of mass- and area-dimensional power laws or m-D/A-D expressions), since these are the microphysical properties that determine Vm, and then proceed to calculate Vm from these parameterized properties. But the 2D-S probe directly measures ice particle projected area and indirectly estimates ice particle mass for each size bin. It soon became apparent that the original plan would introduce more uncertainty in the Vm calculations than simply using the 2D-S measurements to directly calculate Vm. By calculating Vm directly from the measured PSD, ice particle projected area and estimated mass, more accurate estimates of Vm are obtained. These Vm values were then parameterized for climate models by relating them to (1) sampling temperature and ice water content (IWC) and (2) the effective diameter (De) of the ice PSD. Parameterization (1) is appropriate for climate models having single-moment microphysical schemes, whereas (2) is appropriate for double-moment microphysical schemes and yields more accurate Vm estimates. These parameterizations were developed for tropical cirrus clouds, Arctic cirrus, mid-latitude synoptic cirrus and mid-latitude anvil cirrus clouds based on field campaigns in these regions. An important but unexpected result of this research was the discovery of microphysical evidence indicating the mechanisms by which ice crystals are produced in cirrus clouds. This evidence, derived from PSD measurements, indicates that homogeneous freezing ice nucleation dominates in mid-latitude synoptic cirrus clouds, whereas heterogeneous ice nucleation processes dominate in mid-latitude anvil cirrus. Based on these findings, De was parameterized in terms of temperature (T) for conditions dominated by (1) homogeneous and (2) heterogeneous ice nucleation. From this, an experiment was designed for global climate models (GCMs).
The net radiative forcing from cirrus clouds may be affected by the means by which ice is produced (homo- or heterogeneously), and this net forcing contributes to climate sensitivity (i.e. the change in mean global surface temperature resulting from a doubling of CO2). The objective of this GCM experiment was to determine how a change in ice nucleation mode affects the predicted global radiation balance. In the first simulation (Run 1), the De-T relationship for homogeneous nucleation is used at all latitudes, while in the second simulation (Run 2), the De-T relationship for heterogeneous nucleation is used at all latitudes. For both runs, Vm is calculated from De. Two GCMs were used: the Community Atmosphere Model version 5 (CAM5) and a European GCM known as ECHAM5 (thanks to our European colleagues who collaborated with us). Similar results were obtained from both GCMs in the Northern Hemisphere mid-latitudes, with a net cooling of ~1.0 W m-2 due to heterogeneous nucleation, relative to Run 1. The mean global net cooling was 2.4 W m-2 for the ECHAM5 GCM, while CAM5 produced a mean global net cooling of about 0.8 W m-2. This dependence of the radiation balance on nucleation mode is substantial when one considers that the direct radiative forcing from a CO2 doubling is 4 W m-2. The differences between GCMs in mean global net cooling estimates may demonstrate a need for improving the representation of cirrus clouds in GCMs, including the coupling between microphysical and radiative properties. Unfortunately, after completing this GCM experiment, we learned from the company that provided the 2D-S microphysical data that the data was corrupted due to a computer program coding problem. Therefore the microphysical data had to be reprocessed and reanalyzed, and the GCM experiments were redone under our current ASR project but using an improved experimental design.
NASA Astrophysics Data System (ADS)
Monicke, A.; Katajisto, H.; Leroy, M.; Petermann, N.; Kere, P.; Perillo, M.
2012-07-01
For many years, layered composites have proven essential for the successful design of high-performance space structures, such as launchers or satellites. A generic cylindrical composite structure for a launcher application was optimized with respect to objectives and constraints typical for space applications. The studies included the structural stability, laminate load response and failure analyses. Several types of cylinders (with and without stiffeners) were considered and optimized using different lay-up parameterizations. Results for the best designs are presented and discussed. The simulation tools, ESAComp [1] and modeFRONTIER [2], employed in the optimization loop are elucidated and their value for the optimization process is explained.
NASA Astrophysics Data System (ADS)
Salah, Zeinab; Shalaby, Ahmed; Steiner, Allison L.; Zakey, Ashraf S.; Gautam, Ritesh; Abdel Wahab, Mohamed M.
2018-02-01
This study assesses the direct and indirect effects of natural and anthropogenic aerosols (e.g., black carbon and sulfate) over West and Central Africa during the West African monsoon (WAM) period (June-July-August). We investigate the impacts of aerosols on the amount of cloudiness, the influences on the precipitation efficiency of clouds, and the associated radiative forcing (direct and indirect). Our study includes the implementation of three new formulations of auto-conversion parameterization [namely, the Beheng (BH), Tripoli and Cotton (TC) and Liu and Daum (R6) schemes] in RegCM4.4.1, besides the default model's auto-conversion scheme (Kessler). Among the new schemes, BH reduces the precipitation wet bias by more than 50% over West Africa and achieves a bias reduction of around 25% over Central Africa. Results from detailed sensitivity experiments suggest a significant path forward in terms of addressing the long-standing issue of the characteristic wet bias in RegCM. In terms of aerosol-induced radiative forcing, the impact of the various schemes is found to vary considerably (ranging from -5 to -25 W m-2).
The Tropical Subseasonal Variability Simulated in the NASA GISS General Circulation Model
NASA Technical Reports Server (NTRS)
Kim, Daehyun; Sobel, Adam H.; DelGenio, Anthony D.; Chen, Yonghua; Camargo, Suzana J.; Yao, Mao-Sung; Kelley, Maxwell; Nazarenko, Larissa
2012-01-01
The tropical subseasonal variability simulated by the Goddard Institute for Space Studies general circulation model, Model E2, is examined. Several versions of Model E2 were developed with changes to the convective parameterization in order to improve the simulation of the Madden-Julian oscillation (MJO). When the convective scheme is modified to have a greater fractional entrainment rate, Model E2 is able to simulate MJO-like disturbances with proper spatial and temporal scales. Increasing the rate of rain reevaporation has additional positive impacts on the simulated MJO. The improvement in MJO simulation comes at the cost of increased biases in the mean state, consistent in structure and amplitude with those found in other GCMs when tuned to have a stronger MJO. By reinitializing a relatively poor-MJO version with restart files from a relatively better-MJO version, a series of 30-day integrations is constructed to examine the impacts of the parameterization changes on the organization of tropical convection. The poor-MJO version with smaller entrainment rate has a tendency to allow convection to be activated over a broader area and to reduce the contrast between dry and wet regimes so that tropical convection becomes less organized. Besides the MJO, the number of tropical-cyclone-like vortices simulated by the model is also affected by changes in the convection scheme. The model simulates a smaller number of such storms globally with a larger entrainment rate, while the number increases significantly with a greater rain reevaporation rate.
Climate Impacts of Fire-Induced Land-Surface Changes
NASA Astrophysics Data System (ADS)
Liu, Y.; Hao, X.; Qu, J. J.
2017-12-01
One of the consequences of wildfires is changes in land-surface properties, such as removal of vegetation. These changes alter local and regional climate by modifying the land-air heat and water fluxes. This study investigates the mechanism by developing a parameterization of fire-induced land-surface property changes and applying it to modeling of the climate impacts of large wildfires in the United States. Satellite remote sensing was used to quantitatively evaluate the land-surface changes from large fires provided by the Monitoring Trends in Burn Severity (MTBS) dataset. It was found that the changes in land-surface properties induced by fires are very complex, depending on vegetation type and coverage, climate type, season, and time since the fire. The changes in LAI are remarkable only if the actual values meet a threshold. Large albedo changes occur in winter for fires in cool climate regions, with opposite signs between the first post-fire year and the following years. Summer daytime temperature increases after fires, while night-time temperature changes in various patterns. The changes are larger in forested lands than in shrub/grassland. In the parameterization scheme, the detected post-fire changes are decomposed into trends, described by natural exponential functions, and fluctuations of periodic variations whose amplitudes are also determined by natural exponential functions. The final algorithm is a combination of the trend, period, and amplitude functions. This scheme is used with Earth system models to simulate the local and regional climate effects of wildfires.
Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
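A generic sketch of a single (unaccelerated) proximal-gradient update with a transform-domain soft threshold and a nonnegativity projection, in the spirit of the density-map step described above. The quadratic data term, the identity "wavelet" transform and all names are stand-in assumptions; the polychromatic likelihood, the B-spline mass-attenuation expansion, the Nesterov acceleration and the active-set spectrum step are not reproduced.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_gradient_step(x, grad_f, step, lam, synth, analysis):
    """One proximal-gradient update for
    min f(x) + lam * ||analysis(x)||_1   subject to x >= 0."""
    z = x - step * grad_f(x)                          # gradient step on the data term
    coeffs = soft_threshold(analysis(z), step * lam)  # sparsify in the transform domain
    return np.maximum(synth(coeffs), 0.0)             # back-transform, project onto x >= 0

# Toy quadratic data term ||A x - y||^2 / 2 with an orthonormal "wavelet" (identity here)
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 100))
y = A @ np.maximum(rng.normal(size=100), 0.0)
x = np.zeros(100)
grad = lambda v: A.T @ (A @ v - y)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1 / Lipschitz constant of grad
for _ in range(200):
    x = prox_gradient_step(x, grad, step, lam=0.1, synth=lambda c: c, analysis=lambda v: v)
print(np.count_nonzero(x), np.linalg.norm(A @ x - y))
```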
NASA Astrophysics Data System (ADS)
Calvo, N.; Garcia, R. R.; Kinnison, D. E.
2017-04-01
The latest version of the Whole Atmosphere Community Climate Model (WACCM), which includes a new chemistry scheme and an updated parameterization of orographic gravity waves, produces temperature trends in the Antarctic lower stratosphere in excellent agreement with radiosonde observations for 1969-1998 as regards magnitude, location, timing, and persistence. The maximum trend, reached in November at 100 hPa, is -4.4 ± 2.8 K decade-1, which is a third smaller than the largest trend in the previous version of WACCM. Comparison with a simulation without the updated orographic gravity wave parameterization, together with analysis of the model's thermodynamic budget, reveals that the reduced trend is due to the effects of a stronger Brewer-Dobson circulation in the new simulations, which warms the polar cap. The effects are both direct (a trend in adiabatic warming in late spring) and indirect (a smaller trend in ozone, hence a smaller reduction in shortwave heating, due to the warmer environment).
NASA Astrophysics Data System (ADS)
Arnold, N.; Barahona, D.
2017-12-01
Atmospheric general circulation models (AGCMs) have long struggled to realistically represent tropical intraseasonal variability. Here we report progress in simulating the Madden Julian Oscillation (MJO) with the NASA Goddard Earth Observing System (GEOS) AGCM, in free-running simulations utilizing a new two-moment microphysics scheme and the University of Washington shallow cumulus parameterization. Lag composites of intraseasonal signals show significantly improved eastward propagation over the Indian Ocean and maritime region, with increased eastward precipitation variance and more coherent large-scale structure. The dynamics of the MJO are analyzed using a vertically resolved moisture budget, assuming weak temperature gradient conditions. We find that positive longwave radiative heating anomalies associated with high clouds contribute to low-level ascent and moistening, coincident with intraseasonal precipitation anomalies. Horizontal advection generally damps intraseasonal moisture anomalies, but at some longitudes contributes to their eastward tendency. Shallow convection is enhanced to the east of the intraseasonal precipitation maximum, and its associated moistening of the lower free troposphere encourages eastward propagation of deep convection.
Speeding up the learning of robot kinematics through function decomposition.
Ruiz de Angulo, Vicente; Torras, Carme
2005-11-01
The main drawback of using neural networks or other example-based learning procedures to approximate the inverse kinematics (IK) of robot arms is the high number of training samples (i.e., robot movements) required to attain an acceptable precision. We propose here a trick, valid for most industrial robots, that greatly reduces the number of movements needed to learn or relearn the IK to a given accuracy. This trick consists in expressing the IK as a composition of learnable functions, each having half the dimensionality of the original mapping. Off-line and on-line training schemes to learn these component functions are also proposed. Experimental results obtained by using nearest neighbors and parameterized self-organizing map, with and without the decomposition, show that the time savings granted by the proposed scheme grow polynomially with the precision required.
A Bulk Microphysics Parameterization with Multiple Ice Precipitation Categories.
NASA Astrophysics Data System (ADS)
Straka, Jerry M.; Mansell, Edward R.
2005-04-01
A single-moment bulk microphysics scheme with multiple ice precipitation categories is described. It has 2 liquid hydrometeor categories (cloud droplets and rain) and 10 ice categories that are characterized by habit, size, and density—two ice crystal habits (column and plate), rimed cloud ice, snow (ice crystal aggregates), three categories of graupel with different densities and intercepts, frozen drops, small hail, and large hail. The concept of riming history is implemented for conversions among the graupel and frozen drops categories. The multiple precipitation ice categories allow a range of particle densities and fall velocities for simulating a variety of convective storms with minimal parameter tuning. The scheme is applied to two cases—an idealized continental multicell storm that demonstrates the ice precipitation process, and a small Florida maritime storm in which the warm rain process is important.
Issues and recent advances in optimal experimental design for site investigation (Invited)
NASA Astrophysics Data System (ADS)
Nowak, W.
2013-12-01
This presentation provides an overview of issues and recent advances in model-based experimental design for site exploration. The addressed issues and advances are (1) how to provide an adequate envelope to prior uncertainty, (2) how to define the information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how to perform best the optimization of the data collection plan. Among other shortcomings of the state of the art, it is identified that there is a lack of demonstrator studies where exploration schemes based on expert judgment are compared to exploration schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations and parameter values, and to related surprises that data often bring about in field studies, but never in synthetic-data-based studies. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling the surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. In order to meet surprises accordingly and adapt to them, there needs to be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty. Thus, the goal setting of the optimization should include the objective to reduce conceptual uncertainty. A possible way out is to upgrade experimental design theory towards real-time interaction with the ongoing site investigation, such that surprises in the data are immediately accounted for to restrict the conceptual uncertainty and update the optimization of the plan.
NASA Astrophysics Data System (ADS)
Cholakian, Arineh; Beekmann, Matthias; Colette, Augustin; Coll, Isabelle; Siour, Guillaume; Sciare, Jean; Marchand, Nicolas; Couvidat, Florian; Pey, Jorge; Gros, Valerie; Sauvage, Stéphane; Michoud, Vincent; Sellegri, Karine; Colomb, Aurélie; Sartelet, Karine; Langley DeWitt, Helen; Elser, Miriam; Prévot, André S. H.; Szidat, Sonke; Dulac, François
2018-05-01
The simulation of fine organic aerosols with CTMs (chemistry-transport models) in the western Mediterranean basin has not been studied until recently. The ChArMEx (the Chemistry-Aerosol Mediterranean Experiment) SOP 1b (Special Observation Period 1b) intensive field campaign in summer of 2013 gathered a large and comprehensive data set of observations, allowing the study of different aspects of the Mediterranean atmosphere including the formation of organic aerosols (OAs) in 3-D models. In this study, we used the CHIMERE CTM to perform simulations for the duration of the SAFMED (Secondary Aerosol Formation in the MEDiterranean) period (July to August 2013) of this campaign. In particular, we evaluated four schemes for the simulation of OA, including the CHIMERE standard scheme, the VBS (volatility basis set) standard scheme with two parameterizations including aging of biogenic secondary OA, and a modified version of the VBS scheme which includes fragmentation and formation of nonvolatile OA. The results from these four schemes are compared to observations at two stations in the western Mediterranean basin, located on Ersa, Cap Corse (Corsica, France), and at Cap Es Pinar (Mallorca, Spain). These observations include OA mass concentration, PMF (positive matrix factorization) results of different OA fractions, and 14C observations showing the fossil or nonfossil origins of carbonaceous particles. Because of the complex orography of the Ersa site, an original method for calculating an orographic representativeness error (ORE) has been developed. It is concluded that the modified VBS scheme is close to observations in all three aspects mentioned above; the standard VBS scheme without BSOA (biogenic secondary organic aerosol) aging also has a satisfactory performance in simulating the mass concentration of OA, but not for the source origin analysis comparisons. In addition, the OA sources over the western Mediterranean basin are explored. OA shows a major biogenic origin, especially at several hundred meters height from the surface; however over the Gulf of Genoa near the surface, the anthropogenic origin is of similar importance. A general assessment of other species was performed to evaluate the robustness of the simulations for this particular domain before evaluating OA simulation schemes. It is also shown that the Cap Corse site presents important orographic complexity, which makes comparison between model simulations and observations difficult. A method was designed to estimate an orographic representativeness error for species measured at Ersa and yields an uncertainty of between 50 and 85 % for primary pollutants, and around 2-10 % for secondary species.
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
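For orientation, a small sketch of the standard (non-clustered) binomial LQAS calculation that the clustered designs discussed above extend: for a sample size n and decision rule d, it computes the risks of misclassifying a "good" and a "bad" lot. The thresholds and sample size in the example are illustrative, not the paper's designs.

```python
from scipy.stats import binom

def lqas_risks(n, d, p_good, p_bad):
    """Classify a lot as acceptable if the number of 'failures' among n sampled
    subjects is <= d.  Returns (alpha, beta) error risks under the binomial model.

    alpha: P(reject lot | true prevalence = p_good)
    beta : P(accept lot | true prevalence = p_bad)
    """
    alpha = 1.0 - binom.cdf(d, n, p_good)
    beta = binom.cdf(d, n, p_bad)
    return alpha, beta

# Example: vaccination survey with n = 19; "good" lot = 20% unvaccinated,
# "bad" lot = 50% unvaccinated (illustrative thresholds only)
for d in range(4, 9):
    a, b = lqas_risks(n=19, d=d, p_good=0.20, p_bad=0.50)
    print(f"d={d}: alpha={a:.3f}, beta={b:.3f}")
```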
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.
Hund, Lauren; Bedrick, Edward J; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
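For the simple-random-sampling case mentioned above, an LQAS decision rule is just a sample size n and a threshold d: the lot is "accepted" if the number of failures does not exceed d, and the binomial distribution gives the operating characteristics. The sketch below shows how the two error risks of an (n, d) design can be evaluated; the thresholds and sample size are illustrative assumptions, not values from the paper.

from scipy.stats import binom

def lqas_risks(n, d, p_high, p_low):
    """Operating characteristics of a standard binomial LQAS rule:
    'accept' the lot if the number of failures in n samples is <= d.
    alpha: probability of rejecting a lot whose true failure rate is p_low (good lot).
    beta:  probability of accepting a lot whose true failure rate is p_high (bad lot)."""
    alpha = 1.0 - binom.cdf(d, n, p_low)
    beta = binom.cdf(d, n, p_high)
    return alpha, beta

# Illustrative vaccination-style example: 'failure' = unvaccinated child,
# good lot if coverage >= 80% (failure rate 0.20), bad lot if coverage <= 50%.
alpha, beta = lqas_risks(n=19, d=6, p_high=0.50, p_low=0.20)
print(f"alpha (misclassify good lot): {alpha:.3f}, beta (misclassify bad lot): {beta:.3f}")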
NASA Astrophysics Data System (ADS)
Bertram, Sascha; Bechtold, Michel; Hendriks, Rob; Piayda, Arndt; Regina, Kristiina; Myllys, Merja; Tiemeyer, Bärbel
2017-04-01
Peat soils form a major share of the soil suitable for agriculture in northern Europe. Successful agricultural production depends on hydrological and pedological conditions, local climate, and agricultural management. Climate change impact assessment on food production and the development of mitigation and adaptation strategies require reliable yield forecasts under given emission scenarios. Coupled soil hydrology and crop growth models, driven by regionalized future climate scenarios, are a valuable tool and widely used for this purpose. Parameterization of local peat soil conditions and of crop breed or grassland species performance, however, remains a major challenge. The objective of this study is to evaluate the performance and sensitivity of the SWAP-WOFOST coupled soil hydrology and plant growth model with respect to its application on peat soils under different regional conditions across northern Europe. Further, the parameterization of region-specific crop and grass species is discussed. First results of the model application and parameterization at deep peat sites in southern Finland are presented. The model performed very well in reproducing two years of observed daily groundwater level data at four hydrologically contrasting sites. Naturally dry and wet sites could be modelled with the same performance as sites with active water table management by regulated drains aimed at improving peat conservation. A simultaneous multi-site calibration scheme was used to estimate plant growth parameters of the local oat breed. Cross-site validation of the modelled yields against two years of observations demonstrated the robustness of the chosen parameter set and gave no indication of possible overparameterization. This study demonstrates the suitability of the coupled SWAP-WOFOST model for the prediction of crop yields and water table dynamics of peat soils in agricultural use under given climate conditions.
Internal wave emission from baroclinic jets: experimental results
NASA Astrophysics Data System (ADS)
Borcia, Ion D.; Rodda, Costanza; Harlander, Uwe
2016-04-01
Large-scale balanced flows can spontaneously radiate mesoscale inertia-gravity waves (IGWs) and are thus in fact unbalanced. While flow-dependent parameterizations for the radiation of IGWs from orographic and convective sources do exist, the situation is less developed for spontaneously emitted IGWs. Observations identify increased IGW activity in the vicinity of jet exit regions. A direct interpretation of this activity based on geostrophic adjustment might be tempting. However, directly applying this concept to the parameterization of spontaneous imbalance is difficult, since the dynamics itself continuously re-establishes an unbalanced flow which then sheds imbalances by IGW radiation. Examining spontaneous IGW emission in the atmosphere and validating parameterization schemes confronts the scientist with particular challenges. Due to its extreme complexity, IGW emission will always be embedded in the interaction of a multitude of interdependent processes, many of which are hardly detectable from analysis or campaign data. The benefits of repeated and more detailed measurements, while representing the only source of information about the real atmosphere, are limited by the non-repeatability of an atmospheric situation: the same event never occurs twice. This argues for complementary laboratory experiments, which can provide a more focused dialogue between experiment and theory. Indeed, such life cycles are also examined in rotating-annulus laboratory experiments. Thus, these experiments might form a useful empirical benchmark for theoretical and modeling work that is also independent of any sort of subgrid model. In addition, the more direct correspondence between experimental and model data and the data reproducibility make laboratory experiments a powerful testbed for parameterizations. Here we show first results from a small rotating-annulus experiment and further present our new experimental facility for studying wave emission from jets and fronts.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passage of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred from the radar reflectivity at C- and W-band. Specified constants for the distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
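In single-moment bulk schemes of the kind evaluated here, snow follows an inverse-exponential size distribution N(D) = N0 exp(-lambda D) with a fixed intercept N0 and bulk density, so the slope lambda is diagnosed from the predicted snow mixing ratio. The sketch below illustrates that diagnostic relation; the constants are typical literature values used for illustration, not necessarily those of the Goddard scheme.

import numpy as np

def exponential_psd_slope(q_snow, rho_air, n0=2.0e7, rho_s=100.0):
    """Diagnose the slope of N(D) = n0 * exp(-lambda * D) from the snow mixing
    ratio, assuming spherical particles of constant bulk density rho_s:
      rho_air * q_snow = integral (pi/6) rho_s D^3 N(D) dD = pi * rho_s * n0 / lambda^4
    Units: q_snow [kg kg-1], rho_air [kg m-3], n0 [m-4], rho_s [kg m-3]."""
    lam = (np.pi * rho_s * n0 / (rho_air * q_snow)) ** 0.25   # [m-1]
    return lam

lam = exponential_psd_slope(q_snow=0.5e-3, rho_air=1.0)
print(f"slope lambda = {lam:.1f} m-1, mean diameter 1/lambda = {1.0e3/lam:.2f} mm")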
NASA Astrophysics Data System (ADS)
Ciarelli, Giancarlo; Aksoyoglu, Sebnem; El Haddad, Imad; Bruns, Emily A.; Crippa, Monica; Poulain, Laurent; Äijälä, Mikko; Carbone, Samara; Freney, Evelyn; O'Dowd, Colin; Baltensperger, Urs; Prévôt, André S. H.
2017-06-01
We evaluated a modified VBS (volatility basis set) scheme to treat biomass-burning-like organic aerosol (BBOA) implemented in CAMx (Comprehensive Air Quality Model with extensions). The updated scheme was parameterized with novel wood combustion smog chamber experiments using a hybrid VBS framework which accounts for a mixture of wood burning organic aerosol precursors and their further functionalization and fragmentation in the atmosphere. The new scheme was evaluated for one of the winter EMEP intensive campaigns (February-March 2009) against aerosol mass spectrometer (AMS) measurements performed at 11 sites in Europe. We found a considerable improvement for the modelled organic aerosol (OA) mass compared to our previous model application with the mean fractional bias (MFB) reduced from -61 to -29 %. We performed model-based source apportionment studies and compared results against positive matrix factorization (PMF) analysis performed on OA AMS data. Both model and observations suggest that OA was mainly of secondary origin at almost all sites. Modelled secondary organic aerosol (SOA) contributions to total OA varied from 32 to 88 % (with an average contribution of 62 %) and absolute concentrations were generally under-predicted. Modelled primary hydrocarbon-like organic aerosol (HOA) and primary biomass-burning-like aerosol (BBPOA) fractions contributed to a lesser extent (HOA from 3 to 30 %, and BBPOA from 1 to 39 %) with average contributions of 13 and 25 %, respectively. Modelled BBPOA fractions were found to represent 12 to 64 % of the total residential-heating-related OA, with increasing contributions at stations located in the northern part of the domain. Source apportionment studies were performed to assess the contribution of residential and non-residential combustion precursors to the total SOA. Non-residential combustion and road transportation sector contributed about 30-40 % to SOA formation (with increasing contributions at urban and near industrialized sites), whereas residential combustion (mainly related to wood burning) contributed to a larger extent, around 60-70 %. Contributions to OA from residential combustion precursors in different volatility ranges were also assessed: our results indicate that residential combustion gas-phase precursors in the semivolatile range (SVOC) contributed from 6 to 30 %, with higher contributions predicted at stations located in the southern part of the domain. On the other hand, the oxidation products of higher-volatility precursors (the sum of intermediate-volatility compounds (IVOCs) and volatile organic compounds (VOCs)) contribute from 15 to 38 % with no specific gradient among the stations. Although the new parameterization leads to a better agreement between model results and observations, it still under-predicts the SOA fraction, suggesting that uncertainties in the new scheme and other sources and/or formation mechanisms remain to be elucidated. Moreover, a more detailed characterization of the semivolatile components of the emissions is needed.
NASA Astrophysics Data System (ADS)
Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.
2017-12-01
Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.
Offline GCSS Intercomparison of Cloud-Radiation Interaction and Surface Fluxes
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Johnson, D.; Krueger, S.; Zulauf, M.; Donner, L.; Seman, C.; Petch, J.; Gregory, J.
2004-01-01
Simulations of deep tropical clouds by both cloud-resolving models (CRMs) and single-column models (SCMs) in the GEWEX Cloud System Study (GCSS) Working Group 4 (WG4; Precipitating Convective Cloud Systems), Case 2 (19-27 December 1992, TOGA-COARE IFA) have produced large differences in the mean heating and moistening rates (-1 to -5 K and -2 to 2 grams per kilogram, respectively). Since the large-scale advective temperature and moisture "forcing" are prescribed for this case, a closer examination of two of the remaining external types of "forcing", namely radiative heating and air/sea heat and moisture transfer, is warranted. This paper examines the current radiation and surface flux parameterizations used in the cloud models participating in GCSS WG4 by executing the models "offline" for one time step (12 s) for a prescribed atmospheric state, then examining the surface and radiation fluxes from each model. The dynamic, thermodynamic, and microphysical fields are provided by GCE-derived model output for Case 2 during a period of very active deep convection (a westerly wind burst). The surface and radiation fluxes produced by the models are then divided into prescribed convective, stratiform, and clear regions in order to examine the role that clouds play in the flux parameterizations. The results suggest that the differences between the models are attributed more to the surface flux parameterizations than to the radiation schemes.
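Most surface flux schemes compared in this kind of offline exercise are variants of the bulk aerodynamic formulae, differing mainly in how the exchange coefficients are computed. A hedged sketch of the generic bulk form follows; the constant exchange coefficients are an illustrative simplification, whereas the participating models use stability-dependent coefficients.

def bulk_surface_fluxes(u_wind, t_sfc, t_air, q_sfc, q_air,
                        rho=1.2, cp=1004.0, lv=2.5e6, ch=1.2e-3, ce=1.2e-3):
    """Bulk aerodynamic estimates of air/sea heat and moisture transfer.
    u_wind: near-surface wind speed [m s-1]; t_* [K]; q_*: specific humidity [kg kg-1].
    ch, ce: sensible/latent heat exchange coefficients (held constant here)."""
    sensible = rho * cp * ch * u_wind * (t_sfc - t_air)   # [W m-2]
    latent = rho * lv * ce * u_wind * (q_sfc - q_air)     # [W m-2]
    return sensible, latent

# Example: warm tropical ocean under a westerly wind burst
h, le = bulk_surface_fluxes(u_wind=10.0, t_sfc=302.0, t_air=300.5,
                            q_sfc=0.025, q_air=0.018)
print(f"sensible heat flux ~ {h:.0f} W m-2, latent heat flux ~ {le:.0f} W m-2")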
Quality Assessment of the Cobel-Isba Numerical Forecast System of Fog and Low Clouds
NASA Astrophysics Data System (ADS)
Bergot, Thierry
2007-06-01
Short-term forecasting of fog is a difficult issue which can have a large societal impact. Fog appears in the surface boundary layer and is driven by the interactions between the land surface and the lower layers of the atmosphere. These interactions are still not well parameterized in current operational NWP models, and a new methodology based on local observations, an adaptive assimilation scheme and a local numerical model is tested. The proposed numerical forecast method for foggy conditions was run for three years at Paris-CdG international airport. This test over a long time period allows an in-depth evaluation of the forecast quality. This study demonstrates that detailed 1-D models, including detailed physical parameterizations and high vertical resolution, can reasonably represent the major features of the life cycle of fog (onset, development and dissipation) up to +6 h. The error on the forecast onset and burn-off times is typically 1 h. The major weakness of the methodology is related to the evolution of low clouds (stratus lowering). Even if the occurrence of fog is well forecast, the value of the horizontal visibility is only crudely forecast. Improvements in the microphysical parameterization and in the translation algorithm converting NWP prognostic variables into a corresponding horizontal visibility seem necessary to accurately forecast the value of the visibility.
NASA Astrophysics Data System (ADS)
Mazoyer, M.; Roehrig, R.; Nuissier, O.; Duffourg, F.; Somot, S.
2017-12-01
Most regional climate system models (RCSMs) face difficulties in representing a reasonable precipitation probability density function in the Mediterranean area, especially over land. Small amounts of rain are too frequent, preventing any realistic representation of droughts or heat waves, while the intensity of heavy precipitating events is underestimated and not well located by most state-of-the-art RCSMs using parameterized convection (resolutions from 10 to 50 km). Convective parameterization is a key point for the representation of such events and, recently, the new physics implemented in the CNRM-RCSM has been shown to remarkably improve it, even at a 50-km scale. The present study seeks to further analyse the representation of heavy precipitating events by this new version of CNRM-RCSM using a process-oriented approach. We focus on one particular event in the south-east of France, over the Cévennes. Two hindcast experiments with the CNRM-RCSM (12 and 50 km) are performed and compared with a simulation based on the convection-permitting model Meso-NH, which makes use of a very similar setup as the CNRM-RCSM hindcasts. The role of small-scale features of the regional topography and its interaction with the impinging large-scale flow in triggering the convective event are investigated. This study provides guidance in the ongoing implementation and use of a specific parameterization dedicated to accounting for subgrid-scale orography in the triggering and closure conditions of the CNRM-RCSM convection scheme.
NASA Technical Reports Server (NTRS)
Han, Qingyuan; Rossow, William B.; Chou, Joyce; Welch, Ronald M.
1997-01-01
Cloud microphysical parameterizations have attracted a great deal of attention in recent years due to their effect on cloud radiative properties and cloud-related hydrological processes in large-scale models. The parameterization of cirrus particle size has been demonstrated to be an indispensable component of climate feedback analysis. Therefore, global-scale, long-term observations of cirrus particle sizes are required both as a basis for and as a validation of parameterizations for climate models. While there is a global-scale, long-term survey of water cloud droplet sizes (Han et al. 1994), there is no comparable study for cirrus ice crystals. In this paper a near-global survey of cirrus ice crystal sizes is conducted using ISCCP satellite data analysis. The retrieval scheme uses phase functions based upon hexagonal crystals calculated by a ray tracing technique. The results show that global mean values of D(e) are about 60 μm. This study also investigates the possible reasons for the significant difference between satellite-retrieved effective radii (approx. 60 μm) and aircraft-measured particle sizes (approx. 200 μm) during the FIRE I IFO experiment. They are (1) vertical inhomogeneity of cirrus particle sizes; (2) the lower detection limit of the instruments used in aircraft measurements; (3) different definitions of effective particle sizes; and (4) possibly inappropriate phase functions used in the satellite retrieval.
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with a boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of the design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
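As a concrete picture of the surface design parameterization described above, the sketch below evaluates a bicubic Bezier patch from a control net and perturbs boundary control points linearly in a design parameter. The control net and the perturbation pattern are illustrative assumptions, not the examples from the report.

import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_surface(ctrl, u, v):
    """Evaluate a Bezier patch at (u, v) from an (n+1, m+1, 3) control net."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * ctrl[i, j]
    return point

# Flat 4x4 control net for a bicubic patch (z = 0)
ctrl = np.array([[[i, j, 0.0] for j in range(4)] for i in range(4)], dtype=float)

# Shape design variable b: boundary control points move linearly with b,
# so the resulting surface perturbation also depends linearly on b.
b = 0.2
ctrl_perturbed = ctrl.copy()
ctrl_perturbed[0, :, 2] += b          # lift one boundary row by b

print(bezier_surface(ctrl, 0.5, 0.5), bezier_surface(ctrl_perturbed, 0.5, 0.5))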
On the Specification of Smoke Injection Heights for Aerosol Forecasting
NASA Astrophysics Data System (ADS)
da Silva, A.; Schaefer, C.; Randles, C. A.
2014-12-01
The proper forecasting of biomass burning (BB) aerosols in global or regional transport models requires not only the specification of emission rates with sufficient temporal resolution but also the injection layers of such emissions. While current near-real-time biomass burning inventories such as GFAS, QFED, FINN, GBBEP and FLAMBE provide such emission rates, it is left to each modeling system to come up with its own scheme for distributing these emissions in the vertical. A number of operational aerosol forecasting models deposit BB emissions in the near-surface model layers, relying on the model's parameterization of turbulent and convective transport to determine the vertical mass distribution of BB aerosols. Despite their simplicity, such schemes have been relatively successful in reproducing the vertical structure of BB aerosols, except for those large fires that produce enough buoyancy to puncture the PBL and deposit the smoke at higher layers. Plume rise models such as the so-called 'Freitas model' parameterize this sub-grid buoyancy effect, but require the specification of fire size and heat fluxes, neither of which is readily available in near real time from current remotely sensed products. In this talk we will introduce a Bayesian algorithm for estimating fire size and heat fluxes from MODIS brightness temperatures. For small to moderate fires the Freitas model driven by these heat flux estimates produces plume tops that are highly correlated with the GEOS-5 model estimate of PBL height. Comparison to MINX plume height estimates from MISR indicates moderate skill of this scheme in predicting the injection height of large fires. As an alternative, we make use of OMPS UV aerosol index data in combination with estimates of overshooting convective tops (from MODIS and geostationary satellites) to detect PyCu events and specify the BB emission vertical mass distribution in such cases. We will present a discussion of case studies during the SEAC4RS field campaign in August-September 2013.
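The operational choice described above, whether smoke is mixed near the surface or injected aloft when a fire punctures the PBL, can be captured by a very simple vertical-allocation rule. The sketch below is a hypothetical illustration of such a rule; the layer structure, thresholds, and uniform in-plume distribution are assumptions for illustration, not the GEOS-5 or Freitas-model implementation.

import numpy as np

def distribute_bb_emissions(total_emission, z_edges, pbl_height, plume_top):
    """Distribute a column biomass-burning emission [kg m-2 s-1] over model layers.
    If the diagnosed plume top stays below the PBL top, mix uniformly within the PBL;
    otherwise place the emission uniformly between the PBL top and the plume top."""
    z_bot, z_top = (0.0, pbl_height) if plume_top <= pbl_height else (pbl_height, plume_top)
    layer_lo, layer_hi = z_edges[:-1], z_edges[1:]
    # Overlap of each model layer with the injection layer [z_bot, z_top]
    overlap = np.clip(np.minimum(layer_hi, z_top) - np.maximum(layer_lo, z_bot), 0.0, None)
    weights = overlap / max(overlap.sum(), 1e-12)
    return total_emission * weights

z_edges = np.linspace(0.0, 5000.0, 11)           # ten 500 m layers (illustrative)
profile = distribute_bb_emissions(1.0e-7, z_edges, pbl_height=1500.0, plume_top=3200.0)
print(np.round(profile, 10))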
NASA Astrophysics Data System (ADS)
Campbell, Lucy J.; Shepherd, Theodore G.
2005-12-01
This study examines the effect of combining equatorial planetary wave drag and gravity wave drag in a one-dimensional zonal mean model of the quasi-biennial oscillation (QBO). Several different combinations of planetary wave and gravity wave drag schemes are considered in the investigations, with the aim being to assess which aspects of the different schemes affect the nature of the modeled QBO. Results show that it is possible to generate a realistic-looking QBO with various combinations of drag from the two types of waves, but there are some constraints on the wave input spectra and amplitudes. For example, if the phase speeds of the gravity waves in the input spectrum are large relative to those of the equatorial planetary waves, critical level absorption of the equatorial planetary waves may occur. The resulting mean-wind oscillation, in that case, is driven almost exclusively by the gravity wave drag, with only a small contribution from the planetary waves at low levels. With an appropriate choice of wave input parameters, it is possible to obtain a QBO with a realistic period and to which both types of waves contribute. This is the regime in which the terrestrial QBO appears to reside. There may also be constraints on the initial strength of the wind shear, and these are similar to the constraints that apply when gravity wave drag is used without any planetary wave drag. In recent years, it has been observed that, in order to simulate the QBO accurately, general circulation models require parameterized gravity wave drag, in addition to the drag from resolved planetary-scale waves, and that even if the planetary wave amplitudes are incorrect, the gravity wave drag can be adjusted to compensate. This study provides a basis for knowing that such a compensation is possible.
NASA Astrophysics Data System (ADS)
Mo, Jingyue; Huang, Tao; Zhang, Xiaodong; Zhao, Yuan; Liu, Xiao; Li, Jixiang; Gao, Hong; Ma, Jianmin
2017-12-01
As a renewable and clean energy source, wind power has become the most rapidly growing energy resource worldwide in the past decades. Wind power has been thought not to exert any negative impacts on the environment. However, since a wind farm can alter local meteorological conditions and increase surface roughness lengths, it may affect air pollutants that pass through and over the wind farm after being released from their sources. In the present study, we simulated the nitrogen dioxide (NO2) air concentration within and around the world's largest wind farm (the Jiuquan wind farm in Gansu Province, China) using the coupled meteorology and atmospheric chemistry model WRF-Chem. The results revealed an edge effect, which featured higher NO2 levels at the immediate upwind and border region of the wind farm and lower NO2 concentrations within the wind farm and in the immediate downwind transition area of the wind farm. A surface roughness length scheme and a wind turbine drag force scheme were employed to parameterize the wind farm in this model investigation. Modeling results show that both parameterization schemes yield higher concentrations in the immediate upstream of the wind farm and lower concentrations within the wind farm compared to the case without the wind farm. We infer this edge effect and the spatial distribution of air pollutants to be the result of the internal boundary layer induced by the changes in wind speed and turbulence intensity driven by the rotation of the wind turbine rotor blades and the enhancement of surface roughness length over the wind farm. The step change in roughness length from the smooth to the rough surface (overshooting) in the upstream of the wind farm decelerates the atmospheric transport of air pollutants, leading to their accumulation. The step change from the rough to the smooth surface (undershooting) in the downstream of the wind farm accelerates the atmospheric transport of air pollutants, resulting in lower concentration levels.
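The two wind-farm treatments compared above correspond to the two standard ways of representing turbines in mesoscale models: an enhanced surface roughness length, or an explicit elevated momentum sink in the rotor layer. A minimal sketch of the drag-force form (a Fitch-type tendency) is given below; the turbine parameters and grid-cell geometry are illustrative assumptions, not the configuration of this study.

import numpy as np

def turbine_drag_tendency(u, n_turbines, cell_area, layer_depth,
                          rotor_diameter=100.0, c_t=0.8):
    """Momentum tendency [m s-2] from wind turbines treated as an elevated drag:
    du/dt = -0.5 * C_T * N * A_rotor * |u| * u / (cell_area * layer_depth),
    i.e. the thrust on the rotor-swept area distributed over the grid-cell volume."""
    a_rotor = np.pi * (rotor_diameter / 2.0) ** 2
    return -0.5 * c_t * n_turbines * a_rotor * np.abs(u) * u / (cell_area * layer_depth)

# 9 turbines in a 3 km x 3 km cell, 60 m deep rotor layer, 8 m/s hub-height wind
u_hub = 8.0
dudt = turbine_drag_tendency(u=u_hub, n_turbines=9, cell_area=3000.0**2, layer_depth=60.0)
print(f"wind tendency ~ {dudt:.2e} m s-2; momentum e-folding time ~ {abs(u_hub / dudt):.0f} s")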
NASA Astrophysics Data System (ADS)
Rothenberg, Daniel; Avramov, Alexander; Wang, Chien
2018-06-01
Interactions between aerosol particles and clouds contribute a great deal of uncertainty to the scientific community's understanding of anthropogenic climate forcing. Aerosol particles serve as the nucleation sites for cloud droplets, establishing a direct linkage between anthropogenic particulate emissions and clouds in the climate system. To resolve this linkage, the community has developed parameterizations of aerosol activation which can be used in global climate models to interactively predict cloud droplet number concentrations (CDNCs). However, different activation schemes can exhibit different sensitivities to aerosol perturbations in different meteorological or pollution regimes. To assess the impact these different sensitivities have on climate forcing, we have coupled three different core activation schemes and variants with the CESM-MARC (two-Moment, Multi-Modal, Mixing-state-resolving Aerosol model for Research of Climate (MARC) coupled with the National Center for Atmospheric Research's (NCAR) Community Earth System Model (CESM; version 1.2)). Although the model produces a reasonable present-day CDNC climatology when compared with observations regardless of the scheme used, ΔCDNCs between the present and preindustrial era regionally increase by over 100 % in zonal mean when using the most sensitive parameterization. These differences in activation sensitivity may lead to a different evolution of the model meteorology, and ultimately to a spread of over 0.8 W m-2 in global average shortwave indirect effect (AIE) diagnosed from the model, a range which is as large as the inter-model spread from the AeroCom intercomparison. Model-derived AIE strongly scales with the simulated preindustrial CDNC burden, and those models with the greatest preindustrial CDNC tend to have the smallest AIE, regardless of their ΔCDNC. This suggests that present-day evaluations of aerosol-climate models may not provide useful constraints on the magnitude of the AIE, which will arise from differences in model estimates of the preindustrial aerosol and cloud climatology.
Reimers, Jeffrey R; Cai, Zheng-Li; Bilić, Ante; Hush, Noel S
2003-12-01
As molecular electronics advances, efficient and reliable computation procedures are required for the simulation of the atomic structures of actual devices, as well as for the prediction of their electronic properties. Density-functional theory (DFT) has had widespread success throughout chemistry and solid-state physics, and it offers the possibility of fulfilling these roles. In its modern form it is an empirically parameterized approach that cannot be extended toward exact solutions in a prescribed way, ab initio. Thus, it is essential that the weaknesses of the method be identified and likely shortcomings anticipated in advance. We consider four known systematic failures of modern DFT: dispersion, charge transfer, extended pi conjugation, and bond cleavage. Their ramifications for molecular electronics applications are outlined, and we suggest that great care is required when using modern DFT to partition charge flow across electrode-molecule junctions, screen applied electric fields, position molecular orbitals with respect to electrode Fermi energies, and evaluate the distance dependence of through-molecule conductivity. The causes of these difficulties are traced to errors inherent in the types of density functionals in common use, associated with their inability to treat very long-range electron correlation effects. Heuristic enhancements of modern DFT designed to eliminate individual problems are outlined, as are three new schemes that each represent significant departures from modern DFT implementations designed to provide a priori improvements in at least one and possibly all problem areas. Finally, fully semiempirical schemes based on both Hartree-Fock and Kohn-Sham theory are described that, in the short term, offer the means to avoid the inherent problems of modern DFT and, in the long term, offer competitive accuracy at dramatically reduced computational costs.
NASA Astrophysics Data System (ADS)
Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.
2017-09-01
How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allocates C to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.-iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C-LAI relationship at our sites. Although the four C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. For deciduous forests, D-Litton gave more realistic Cstem / Cleaf ratios and strongly reduced the overestimation of initial aboveground biomass and aboveground NPP for deciduous forests by D-CLM4.5. We identified key structural and parameterization deficits that need refinement to improve the accuracy of LSMs in the near future. These include changing how C is allocated in fixed and dynamic schemes based on data from current forest syntheses and different parameterization of allocation schemes for different forest types. Our results highlight the utility of using measurements of aboveground biomass to evaluate and constrain the C allocation scheme in LSMs, and suggest that stem turnover is overestimated by CLM4.5 for these AmeriFlux sites. Understanding the controls of turnover will be critical to improving long-term C processes in LSMs.
Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes
NASA Technical Reports Server (NTRS)
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-01-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
Lidar Ice nuclei estimates and how they relate with airborne in-situ measurements
NASA Astrophysics Data System (ADS)
Marinou, Eleni; Amiridis, Vassilis; Ansmann, Albert; Nenes, Athanasios; Balis, Dimitris; Schrod, Jann; Binietoglou, Ioannis; Solomos, Stavros; Mamali, Dimitra; Engelmann, Ronny; Baars, Holger; Kottas, Michael; Tsekeri, Alexandra; Proestakis, Emmanouil; Kokkalis, Panagiotis; Goloub, Philippe; Cvetkovic, Bojan; Nichovic, Slobodan; Mamouri, Rodanthi; Pikridas, Michael; Stavroulas, Iasonas; Keleshis, Christos; Sciare, Jean
2018-04-01
By means of available ice nucleating particle (INP) parameterization schemes we compute profiles of dust INP number concentration utilizing Polly-XT and CALIPSO lidar observations during the INUIT-BACCHUS-ACTRIS 2016 campaign. The polarization-lidar photometer networking (POLIPHON) method is used to separate dust and non-dust aerosol backscatter, extinction, mass concentration, particle number concentration (for particles with radius > 250 nm) and surface area concentration. The INP final products are compared with aerosol samples collected from unmanned aircraft systems (UAS) and analyzed using the ice nucleus counter FRIDGE.
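The INP parameterizations referred to above typically map a lidar- or in-situ-derived number concentration of large aerosol particles and the ambient temperature onto an INP concentration. The sketch below uses the functional form of DeMott et al. (2010) as an example; the coefficients are the commonly quoted published values, but both the coefficients and the input profile here should be treated as illustrative and verified against the original reference before use.

import numpy as np

def inp_demott2010(t_kelvin, n_aer05):
    """Ice nucleating particle concentration [per standard litre] following the
    form of DeMott et al. (2010): n_INP = a*(T0 - T)^b * n_aer05^(c*(T0 - T) + d),
    where n_aer05 is the number concentration of aerosol particles with
    diameter > 0.5 um [per standard cm3]."""
    a, b, c, d = 5.94e-5, 3.33, 0.0264, 0.0033   # commonly quoted values; verify before use
    dt = 273.16 - np.asarray(t_kelvin)
    return a * dt**b * np.asarray(n_aer05)**(c * dt + d)

# Illustrative dust-dominated profile point: n_aer(>0.5 um) = 20 scm-3 at -25 C
print(f"n_INP ~ {inp_demott2010(248.16, 20.0):.1f} per standard litre")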
Assimilation of MODIS and VIIRS AOD to improve aerosols forecasts with FV3-GOCART
NASA Astrophysics Data System (ADS)
Pagowski, M.
2017-12-01
In 2016 NOAA chose the FV3 dynamical core as the basis for its future global modeling system. We present an implementation of an aerosol module in the FV3 model and its assimilation framework. The parameterization of aerosols is based on the GOCART scheme. The assimilation methodology relies on hybrid 3D-Var and EnKF methods. Aerosol observations include aerosol optical depth at 550 nm from the VIIRS satellite. Results and an evaluation of the system against independent observations and NASA's MERRA-2 are shown.
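The hybrid aspect mentioned above amounts to blending a static and an ensemble-derived background-error covariance before computing the analysis increment. The toy sketch below illustrates that blending on a tiny synthetic problem; the dimensions, the identity static covariance, and the linear observation operator are illustrative assumptions, not the operational assimilation system.

import numpy as np

rng = np.random.default_rng(0)
n, n_ens, n_obs = 8, 20, 3            # state size, ensemble size, number of AOD obs

x_b = rng.normal(0.0, 1.0, n)                 # background aerosol state (illustrative)
ens_pert = rng.normal(0.0, 1.0, (n, n_ens))
B_ens = ens_pert @ ens_pert.T / (n_ens - 1)   # ensemble-derived covariance
B_static = np.eye(n)                          # static covariance (illustrative)

beta_s, beta_e = 0.5, 0.5                     # hybrid weights
B = beta_s * B_static + beta_e * B_ens

H = rng.random((n_obs, n))                    # linear(ized) AOD observation operator (toy)
R = 0.1 * np.eye(n_obs)                       # observation-error covariance
y = H @ x_b + rng.normal(0.0, 0.3, n_obs)     # synthetic 550 nm AOD observations

# Analysis increment: x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print(np.round(x_a - x_b, 3))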
Freeform lens generation for quasi-far-field successive illumination targets
NASA Astrophysics Data System (ADS)
Zhuang, Zhenfeng; Thibault, Simon
2018-07-01
A predefined mapping to tailor one or more freeform surfaces is employed to build a freeform illumination system. The emergent rays from the light source corresponding to the prescribed target mesh for a pre-determined lighting distance are mapped by a point-to-point algorithm with respect to the freeform optics, which limits design flexibility. To tackle this design limitation and find optimum design results, a freeform lens is exploited to produce the desired rectangular illumination distribution at successive target planes at quasi-far-field lighting distances. It is generated using numerical solutions to find an initial starting point, and an appropriate approach for obtaining the variables that parameterize the freeform surface is introduced. The relative standard deviation, which is a useful figure of merit for the analysis, is set up as the merit function with respect to illumination non-uniformity at the successively sampled target planes. Therefore, the irradiance distribution over the specified lighting distance range can be ensured by the proposed scheme. A design example of a freeform illumination system, composed of a spherical surface and a freeform surface, is given to produce the desired irradiance distribution within the lighting distance range. Optical performance with low non-uniformity and high efficiency is achieved. Compared with the conventional approach, the uniformity of the sampled targets is dramatically enhanced; meanwhile, a design with a large tolerance to LED size is obtained.
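The merit function described above can be written directly: for each sampled target plane, compute the relative standard deviation (RSD) of the irradiance over the prescribed rectangle and combine the planes into one figure of merit. The sketch below is an illustrative implementation of that metric on gridded irradiance maps; combining planes by simple averaging is an assumption, not necessarily the weighting used in the paper.

import numpy as np

def relative_std(irradiance_map):
    """Relative standard deviation (RSD) of an irradiance distribution:
    std/mean over the target region; lower values mean better uniformity."""
    e = np.asarray(irradiance_map, dtype=float)
    return e.std() / e.mean()

def merit_function(irradiance_maps):
    """Combine the RSD of several successive target planes into one scalar
    merit value (here a plain average over the sampled lighting distances)."""
    return float(np.mean([relative_std(e) for e in irradiance_maps]))

# Illustrative maps at three lighting distances: nearly uniform to strongly non-uniform
rng = np.random.default_rng(0)
maps = [np.ones((50, 50)) + s * rng.standard_normal((50, 50)) for s in (0.02, 0.05, 0.10)]
print(f"merit (mean RSD over target planes) = {merit_function(maps):.3f}")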
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, Larry K.; Gustafson, William I.; Kassianov, Evgueni I.
A new treatment for shallow clouds has been introduced into the Weather Research and Forecasting (WRF) model. The new scheme, called the cumulus potential (CuP) scheme, replaces the ad hoc trigger function used in the Kain-Fritsch cumulus parameterization with a trigger function related to the distribution of temperature and humidity in the convective boundary layer via probability density functions (PDFs). An additional modification to the default version of WRF is the computation of a cumulus cloud fraction based on the time scales relevant for shallow cumuli. Results from three case studies over the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) site in north central Oklahoma are presented. These days were selected because of the presence of shallow cumuli over the ARM site. The modified version of WRF does a much better job predicting the cloud fraction and the downwelling shortwave irradiance than control simulations utilizing the default Kain-Fritsch scheme. The modified scheme includes a number of additional free parameters, including the number and size of bins used to define the PDF, the minimum frequency of a bin within the PDF before that bin is considered for shallow clouds to form, and the critical cumulative frequency of bins required to trigger deep convection. A series of tests was undertaken to evaluate the sensitivity of the simulations to these parameters. Overall, the scheme was found to be relatively insensitive to each of the parameters.
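The trigger logic described above can be sketched as bookkeeping on a joint PDF: bin the boundary-layer temperature and humidity, lift a representative parcel from each sufficiently populated bin, and trigger convection according to the cumulative frequency of bins whose parcels are convective. The code below is only a schematic of that bookkeeping; the parcel test is reduced to a supplied boolean function and the thresholds are placeholders, not the values used in the CuP scheme.

import numpy as np

def cup_style_trigger(theta, q, parcel_is_buoyant, n_bins=10,
                      min_bin_freq=0.01, deep_cum_freq=0.05):
    """Schematic of a PDF-based convective trigger.
    theta, q: samples of boundary-layer potential temperature and humidity.
    parcel_is_buoyant(theta_bin, q_bin) -> bool: placeholder parcel test.
    Returns the cumulative frequency of 'convective' bins and a trigger flag."""
    hist, t_edges, q_edges = np.histogram2d(theta, q, bins=n_bins)
    freq = hist / hist.sum()
    t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
    q_mid = 0.5 * (q_edges[:-1] + q_edges[1:])
    cum = 0.0
    for i, tm in enumerate(t_mid):
        for j, qm in enumerate(q_mid):
            # Only bins populated above a minimum frequency are lifted as parcels
            if freq[i, j] >= min_bin_freq and parcel_is_buoyant(tm, qm):
                cum += freq[i, j]
    return cum, cum >= deep_cum_freq

# Toy example: warm and moist parcels in the tail of the PDF are deemed buoyant
rng = np.random.default_rng(1)
theta = 300.0 + rng.standard_normal(5000)
q = 0.012 + 0.001 * rng.standard_normal(5000)
cum, deep = cup_style_trigger(theta, q, lambda t, m: (t > 300.5) and (m > 0.0125))
print(f"cumulative frequency of convective bins: {cum:.3f}, deep convection triggered: {deep}")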
NASA Astrophysics Data System (ADS)
Cuchiara, G. C.; Li, X.; Carvalho, J.; Rappenglück, B.
2014-10-01
With over 6 million inhabitants, the Houston metropolitan area is the fourth-largest in the United States. Ozone concentration in this southeast Texas region frequently exceeds the National Ambient Air Quality Standard (NAAQS). For this reason our study employed the Weather Research and Forecasting model with Chemistry (WRF/Chem) to quantify meteorological prediction differences produced by four widely used PBL schemes and analyzed their impact on ozone predictions. The model results were compared to observational data in order to identify a superior PBL scheme better suited for the area. The four PBL schemes include two first-order closure schemes, the Yonsei University (YSU) and the Asymmetric Convective Model version 2 (ACM2), as well as two turbulent kinetic energy closure schemes, the Mellor-Yamada-Janjic (MYJ) and Quasi-Normal Scale Elimination (QNSE). Four 24 h forecasts were performed, one for each PBL scheme. Simulated vertical profiles for temperature, potential temperature, relative humidity, water vapor mixing ratio, and the u-v components of the wind were compared to measurements collected during the Second Texas Air Quality Study (TexAQS-II) Radical and Aerosol Measurements Project (TRAMP) experiment in summer 2006. Simulated ozone was compared against TRAMP data and against data from Continuous Ambient Monitoring Stations (CAMS). Also, the evolution of the PBL height and of vertical mixing properties within the PBL was explored for the four simulations. Although the results yielded high correlation coefficients and small biases in almost all meteorological variables, the overall results did not indicate any preferred PBL scheme for the Houston case. However, for ozone prediction the YSU scheme showed the greatest agreement with observed values.
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and the software application, and is developed from research on parameterized hardware description. The establishment of a stable link between hardware and software, which is the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java are described.
NASA Astrophysics Data System (ADS)
Schirrer, A.; Westermayer, C.; Hemedi, M.; Kozek, M.
2013-12-01
This paper shows control design results, performance, and limitations of robust lateral control law designs based on the DGK-iteration mixed-μ-synthesis procedure for a large, flexible blended wing body (BWB) passenger aircraft. The aircraft dynamics is preshaped by a low-complexity inner loop control law providing stabilization, basic response shaping, and flexible mode damping. The μ controllers are designed to further improve vibration damping of the main flexible modes by exploiting the structure of the arising significant parameter-dependent plant variations. This is achieved by utilizing parameterized Linear Fractional Representations (LFR) of the aircraft rigid and flexible dynamics. Designs with various levels of LFR complexity are carried out and discussed, showing the achieved performance improvement over the initial controller and their robustness and complexity properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, Kuo-Nan
2016-02-09
Under the support of the aforementioned DOE Grant, we have made two fundamental contributions to atmospheric and climate sciences: (1) development of an efficient 3-D radiative transfer parameterization for application to intense and intricate inhomogeneous mountain/snow regions; and (2) a novel stochastic parameterization for light absorption by internally mixed black carbon and dust particles in snow grains, providing understanding and physical insight into snow albedo reduction in climate models. With reference to item (1), we divided solar fluxes reaching mountain surfaces into five components: direct and diffuse fluxes, direct- and diffuse-reflected fluxes, and the coupled mountain-mountain flux. "Exact" 3D Monte Carlo photon tracing computations can then be performed for these solar flux components to compare with those calculated from the conventional plane-parallel (PP) radiative transfer program readily available in climate models. Subsequently, parameterizations of the deviations of the 3D from the PP results for the five flux components were carried out by means of multiple linear regression analysis associated with topographic information, including elevation, solar incident angle, sky view factor, and terrain configuration factor. We derived five regression equations with high statistical correlations for the flux deviations and successfully incorporated this efficient parameterization into the WRF model, which was used as the testbed in connection with the Fu-Liou-Gu PP radiation scheme that has been included in the WRF physics package. Incorporating this 3D parameterization program, we conducted simulations with WRF and CCSM4 to understand and evaluate the mountain/snow effect on snow albedo reduction during the seasonal transition and the interannual variability of snowmelt, cloud cover, and precipitation over the Western United States, as presented in the final report. With reference to item (2), we developed in our previous research a geometric-optics surface-wave approach (GOS) for the computation of light absorption and scattering by complex and inhomogeneous particles for application to aggregates and snow grains with external and internal mixing structures. We demonstrated that a small black carbon (BC) particle on the order of 1 μm internally mixed with snow grains could effectively reduce visible snow albedo by as much as 5-10%. Following this work and within the context of DOE support, we have made two key accomplishments, presented in the attached final report.
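The parameterization step described in item (1), regressing the deviation of the 3-D from the plane-parallel flux components onto topographic predictors, amounts to an ordinary multiple linear regression. A hedged sketch of that step is below; the predictor set mirrors the variables listed in the report, but the synthetic data and the plain least-squares fit are illustrative assumptions.

import numpy as np

# Synthetic training data standing in for Monte Carlo minus plane-parallel fluxes
rng = np.random.default_rng(42)
n = 2000
cos_sza = rng.uniform(0.2, 1.0, n)       # cosine of solar incident angle
elevation = rng.uniform(0.0, 4000.0, n)  # terrain elevation [m]
sky_view = rng.uniform(0.5, 1.0, n)      # sky view factor
terrain_cfg = rng.uniform(0.0, 0.3, n)   # terrain configuration factor
flux_deviation = (20.0 * (1.0 - sky_view) - 15.0 * terrain_cfg
                  + 0.002 * elevation * cos_sza + rng.normal(0.0, 2.0, n))

# Multiple linear regression: deviation ~ intercept + topographic predictors
X = np.column_stack([np.ones(n), cos_sza, elevation, sky_view, terrain_cfg])
coeffs, *_ = np.linalg.lstsq(X, flux_deviation, rcond=None)
predicted = X @ coeffs

r = np.corrcoef(predicted, flux_deviation)[0, 1]
print("regression coefficients:", np.round(coeffs, 4))
print(f"correlation of fit: {r:.3f}")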
NASA Astrophysics Data System (ADS)
Neggers, R.
2017-12-01
Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of applying size-filtering to parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask). Power-law scaling is still evident, but with a reduced exponent, suggesting that this behavior could be parameterized.
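A minimal sketch of the sub-domain analysis described above, under the assumption (ours) that the reported inverse-linear scaling refers to the variability of cloud number per unit area; the Poisson counts are synthetic placeholders, not LES data.

```python
import numpy as np

def variability_exponent(counts_by_subdomain_size):
    """Fit a power law sigma(L) ~ L**(-b) to cloud-number variability.

    counts_by_subdomain_size: dict mapping subdomain size L to an array of
    cloud counts (one per subdomain of that size) for a chosen cloud-size bin.
    Variability is taken here as the std of counts per unit area, one plausible
    reading of the scaling discussed in the abstract.
    """
    L = np.array(sorted(counts_by_subdomain_size))
    sigma = np.array([np.std(counts_by_subdomain_size[l] / l**2) for l in L])
    b, _ = np.polyfit(np.log(L), np.log(sigma), 1)   # slope in log-log space
    return -b

# Hypothetical counts: Poisson cloud numbers in subdomains of 8-64 km.
rng = np.random.default_rng(1)
counts = {L: rng.poisson(lam=0.05 * L**2, size=400).astype(float)
          for L in (8, 16, 32, 64)}
print(variability_exponent(counts))   # close to 1 for an unorganized (Poisson-like) field
```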
NASA Astrophysics Data System (ADS)
Melas, Evangelos
2011-07-01
The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-DeWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.
On the stability of the Atlantic meridional overturning circulation.
Hofmann, Matthias; Rahmstorf, Stefan
2009-12-08
One of the most important large-scale ocean current systems for Earth's climate is the Atlantic meridional overturning circulation (AMOC). Here we review its stability properties and present new model simulations to study the AMOC's hysteresis response to freshwater perturbations. We employ seven different versions of an Ocean General Circulation Model by using a highly accurate tracer advection scheme, which minimizes the problem of numerical diffusion. We find that a characteristic freshwater hysteresis also exists in the predominantly wind-driven, low-diffusion limit of the AMOC. However, the shape of the hysteresis changes, indicating that a convective instability rather than the advective Stommel feedback plays a dominant role. We show that model errors in the mean climate can make the hysteresis disappear, and we investigate how model innovations over the past two decades, like new parameterizations and mixing schemes, affect the AMOC stability. Finally, we discuss evidence that current climate models systematically overestimate the stability of the AMOC.
Incorporation of UK Met Office's radiation scheme into CPTEC's global model
NASA Astrophysics Data System (ADS)
Chagas, Júlio C. S.; Barbosa, Henrique M. J.
2009-03-01
The current parameterization of radiation in CPTEC's (Center for Weather Forecast and Climate Studies, Cachoeira Paulista, SP, Brazil) operational AGCM has its origins in the work of Harshvardhan et al. (1987) and uses the formulation of Ramaswamy and Freidenreich (1992) for the short-wave absorption by water vapor. The UK Met Office's radiation code (Edwards and Slingo, 1996) was incorporated into CPTEC's global model, initially for the short-wave only, and some impacts of that were shown by Chagas and Barbosa (2006). The present paper presents some impacts of the complete incorporation (both short-wave and long-wave) of the UK Met Office's scheme. Selected results from off-line comparisons with line-by-line benchmark calculations are shown. Impacts on the AGCM's climate are assessed by comparing output of climate runs of the current and modified AGCM with products from the GEWEX/SRB (Surface Radiation Budget) project.
Limited Rank Matrix Learning, discriminative dimension reduction and visualization.
Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael
2012-02-01
We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank corresponding to low-dimensional representations of the data. This makes it possible to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, the limitation of the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real-world data sets serve as an illustration and demonstrate the usefulness of the suggested method.
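For orientation, a minimal sketch of the limited-rank distance at the core of the method; the adaptive training of the prototypes and of the rank-limited matrix (gradient updates on a cost function) is omitted, and all arrays here are hypothetical.

```python
import numpy as np

def lr_gmlvq_distance(x, w, omega):
    """Adaptive squared distance d(x, w) = ||Omega (x - w)||^2.

    omega has shape (rank, n_features); limiting the rank to 2 or 3 both
    reduces the number of adaptive parameters and yields a discriminative
    projection that can be used directly for visualizing labeled data.
    """
    diff = omega @ (x - w)
    return float(diff @ diff)

# Hypothetical illustration: a rank-2 matrix projects 10-D data to a plane.
rng = np.random.default_rng(0)
omega = rng.normal(size=(2, 10))
x, w = rng.normal(size=10), rng.normal(size=10)
print(lr_gmlvq_distance(x, w, omega))
projection = omega @ x   # 2-D coordinates usable for plotting the sample
```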
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Hairong; Zhang, Jianyun; Zeng, Xiaofan; Ye, Lei; Liu, Yi; Tayyab, Muhammad; Chen, Yufan
2017-07-01
Accurate flood forecasting with a long lead time can be of great value for flood prevention and utilization. This paper develops a one-way coupled hydro-meteorological modeling system consisting of the mesoscale numerical Weather Research and Forecasting (WRF) model and the Chinese Xinanjiang hydrological model to extend the flood forecasting lead time in the Jinshajiang River Basin, which is the largest hydropower base in China. Focusing on four typical precipitation events, the combinations and configurations of WRF parameterization schemes suitable for simulating precipitation in the Jinshajiang River Basin were first investigated. Then, the Xinanjiang model was established, calibrated, and validated to complete the hydro-meteorological system. It was found that the selection of the cloud microphysics scheme and the boundary layer scheme has a great impact on precipitation simulation, that only a proper combination of the two schemes yields accurate simulations in the Jinshajiang River Basin, and that the hydro-meteorological system can provide instructive flood forecasts with long lead times. On the whole, the one-way coupled hydro-meteorological model can be used for precipitation simulation and flood prediction in the Jinshajiang River Basin because of its relatively high precision and long lead time.
NASA Astrophysics Data System (ADS)
Madala, Srikanth; Srinivas, C. V.; Satyanarayana, A. N. V.
2018-01-01
Land-sea breezes (LSBs) play an important role in transporting air pollution from urban areas on the coast. In this study, the Advanced Research WRF (ARW) mesoscale model is used to predict boundary layer features and to understand the transport of pollution in different seasons over the coastal region of Chennai in Southern India. Sensitivity experiments are conducted with two non-local [Yonsei University (YSU) and Asymmetric Convective Model version 2 (ACM2)] and three turbulence kinetic energy (TKE) closure [Mellor-Yamada-Nakanishi-Niino Level 2.5 (MYNN2), Mellor-Yamada-Janjic (MYJ), and quasi-normal scale elimination (QNSE)] planetary boundary layer (PBL) parameterization schemes for simulating the thermodynamic structure and low-level atmospheric flow in different seasons. Comparison of the simulations with observations from a global positioning system (GPS) radiosonde, a meteorological tower, automated weather stations, and Doppler weather radar (DWR)-derived wind data reveals that the characteristics of the LSBs vary widely across seasons and are more prominent during the pre-monsoon and monsoon seasons (March-September), with larger horizontal and vertical extents than in the post-monsoon and winter seasons. The qualitative and quantitative results indicate that simulations with ACM2, followed by MYNN2 and YSU, reproduce the various features of the LSBs, the boundary layer parameters, and the thermodynamic structure in better agreement with observations than the other tested parameterization schemes. The simulations also reveal seasonal variations in the onset time and vertical extent of the LSBs and in the mixed layer depth, which would influence air pollution dispersion in different seasons over the study region.
Parameterizations of Dry Deposition for the Industrial Source Complex Model
NASA Astrophysics Data System (ADS)
Wesely, M. L.; Doskey, P. V.; Touma, J. S.
2002-05-01
Improved algorithms have been developed to simulate the dry deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex model system. The dry deposition velocities are described in conventional resistance schemes, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways for uptake of gases at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. Standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed to provide a means to evaluate the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves. The dry deposition velocities of particulate HAPs are simulated with a resistance scheme in which the deposition velocity is described for two size modes: a fine mode with particles less than about 2.5 microns in diameter and a coarse mode with larger particles, but excluding very coarse particles larger than about 10 microns in diameter. For the fine mode, the deposition velocity is calculated with a parameterization based on observations of sulfate dry deposition. For the coarse mode, a representative settling velocity is assumed. Then the total deposition velocity is estimated as the sum of the two deposition velocities weighted according to the amount of mass expected in the two modes.
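A schematic rendering of the two calculations described above, with placeholder resistances and mass fractions; the actual resistance formulas, land use tables, and sulfate-based fine-mode parameterization of the scheme are not reproduced here.

```python
def gas_deposition_velocity(r_a, r_b, r_c):
    """Series-resistance form v_d = 1 / (r_a + r_b + r_c), in m s-1.

    r_a: aerodynamic resistance, r_b: quasi-laminar sublayer resistance,
    r_c: bulk surface (canopy) resistance; all in s m-1.
    """
    return 1.0 / (r_a + r_b + r_c)

def particle_deposition_velocity(v_fine, v_settle_coarse, fine_mass_fraction):
    """Two-mode particle scheme: mass-weighted sum of a fine-mode deposition
    velocity (e.g. based on sulfate observations) and a coarse-mode settling
    velocity, as described in the abstract. Fractions here are assumptions."""
    f = fine_mass_fraction
    return f * v_fine + (1.0 - f) * v_settle_coarse

# Hypothetical numbers: r_a = 50, r_b = 30, r_c = 120 s/m  ->  v_d = 0.005 m/s
print(gas_deposition_velocity(50.0, 30.0, 120.0))
print(particle_deposition_velocity(v_fine=0.002, v_settle_coarse=0.01,
                                   fine_mass_fraction=0.7))
```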
Objective calibration of numerical weather prediction models
NASA Astrophysics Data System (ADS)
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of NWP model quality to the model parameter space has to be clarified, and the overall procedure optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or when the same model implementation is customized for different climatological areas.
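A compact sketch of the meta-model idea, assuming a hypothetical calibration ensemble: fit a quadratic response surface of a skill score over the free parameters, then search the cheap surrogate for an optimum. Parameter names, scores, and the grid search below are illustrative only, not the published procedure.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_quadratic_metamodel(params, scores):
    """Fit score ~ c0 + sum_i c_i p_i + sum_{i<=j} c_ij p_i p_j by least squares.

    params: (n_runs, n_params) free-parameter settings of the calibration
    ensemble; scores: (n_runs,) forecast-skill measure per run. Returns a
    callable meta-model (MM) that can be evaluated cheaply.
    """
    params = np.asarray(params, float)
    n, m = params.shape
    pairs = list(combinations_with_replacement(range(m), 2))
    X = np.column_stack([np.ones(n), params] +
                        [params[:, i] * params[:, j] for i, j in pairs])
    c, *_ = np.linalg.lstsq(X, np.asarray(scores, float), rcond=None)

    def mm(p):
        p = np.asarray(p, float)
        feats = np.concatenate(([1.0], p, [p[i] * p[j] for i, j in pairs]))
        return float(feats @ c)
    return mm

# Hypothetical ensemble over two turbulence parameters (normalized to [0, 1]).
rng = np.random.default_rng(2)
p = rng.uniform(0, 1, size=(20, 2))
s = -(p[:, 0] - 0.3)**2 - 2 * (p[:, 1] - 0.6)**2 + rng.normal(0, 0.01, 20)
mm = fit_quadratic_metamodel(p, s)
grid = np.mgrid[0:1:51j, 0:1:51j].reshape(2, -1).T
best = grid[np.argmax([mm(q) for q in grid])]   # MM-optimal parameter setting
print(best)
```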
NASA Astrophysics Data System (ADS)
Zhong, Efang; Li, Qian; Sun, Shufen; Chen, Wen; Chen, Shangfeng; Nath, Debashis
2017-11-01
The presence of light-absorbing aerosols (LAA) in snow profoundly influences the surface energy balance and water budget. However, most snow-process schemes in land-surface and climate models currently do not take this into consideration. To better represent the snow process and to evaluate the impacts of LAA on snow, this study presents an improved snow albedo parameterization in the Snow-Atmosphere-Soil Transfer (SAST) model, which includes the impacts of LAA on snow. Specifically, the Snow, Ice and Aerosol Radiation (SNICAR) model is incorporated into the SAST model with an LAA mass stratigraphy scheme. The new coupled model is validated against in-situ measurements at the Swamp Angel Study Plot (SASP), Colorado, USA. Results show that the snow albedo and snow depth are better reproduced than in the original SAST, particularly during the period of snow ablation. Furthermore, the impacts of LAA on snow are estimated in the coupled model through case comparisons of the snowpack with or without LAA. The LAA particles directly absorb extra solar radiation, which accelerates the growth rate of the snow grain size. Meanwhile, these larger snow particles favor more radiative absorption. The average total radiative forcing of the LAA at the SASP is 47.5 W m-2. This extra radiative absorption enhances the snowmelt rate. As a result, the peak runoff time and "snow all gone" day are shifted 18 and 19.5 days earlier, respectively, which could further impose substantial impacts on the hydrologic cycle and atmospheric processes.
Reintroducing radiometric surface temperature into the Penman-Monteith formulation
NASA Astrophysics Data System (ADS)
Mallick, Kaniska; Boegh, Eva; Trebs, Ivonne; Alfieri, Joseph G.; Kustas, William P.; Prueger, John H.; Niyogi, Dev; Das, Narendra; Drewry, Darren T.; Hoffmann, Lucien; Jarvis, Andrew J.
2015-08-01
Here we demonstrate a novel method to physically integrate radiometric surface temperature (TR) into the Penman-Monteith (PM) formulation for estimating the terrestrial sensible and latent heat fluxes (H and λE) in the framework of a modified Surface Temperature Initiated Closure (STIC). It combines TR data with standard energy balance closure models to derive a hybrid scheme that does not require parameterization of the surface (or stomatal) and aerodynamic conductances (gS and gB). STIC is formed by the simultaneous solution of four state equations and uses TR as an additional data source for retrieving the "near surface" moisture availability (M) and the Priestley-Taylor coefficient (α). The performance of STIC is tested using high-temporal-resolution TR observations collected from different international surface energy flux experiments in conjunction with corresponding net radiation (RN), ground heat flux (G), air temperature (TA), and relative humidity (RH) measurements. A comparison of the STIC outputs with eddy covariance measurements of λE and H revealed RMSDs of 7-16% and 40-74% in half-hourly λE and H estimates. These statistics were 5-13% and 10-44% in daily λE and H. The errors and uncertainties in both surface fluxes are comparable to those of models that typically use land surface parameterizations for determining the unobserved components (gS and gB) of the surface energy balance. However, the scheme is simpler, capable of generating spatially explicit surface energy fluxes, and independent of submodels for boundary layer development.
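For context, the conventional big-leaf Penman-Monteith closure into which the retrieved conductances enter can be written in the standard form below (our notation; STIC itself obtains gB and gS analytically from its four state equations rather than from this expression):

```latex
\lambda E = \frac{\Delta\,(R_N - G) + \rho\, c_p\, D_a\, g_B}
                 {\Delta + \gamma\,\bigl(1 + g_B / g_S\bigr)},
\qquad
H = (R_N - G) - \lambda E
```

with Δ the slope of the saturation vapor pressure curve, γ the psychrometric constant, ρ the air density, c_p the specific heat of air at constant pressure, and D_a the air vapor pressure deficit.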
NASA Astrophysics Data System (ADS)
Chen, Xuelong; Su, Bob
2017-04-01
Remote sensing provides an opportunity to observe the Earth's land surface at a much higher resolution than any GCM simulation. Owing to the scarcity of information on land surface physical parameters, up-to-date GCMs still have large uncertainties in coupled land surface process modeling. One critical issue is the large number of parameters used in their land surface models. Remotely sensed land surface spectral information can therefore be used to provide information on these parameters or be assimilated to decrease model uncertainties. Satellite imagers observe the land surface in optical, thermal, and microwave bands. Basic land surface state variables (land surface temperature, canopy height, canopy leaf area index, soil moisture, etc.) have been produced with remote sensing techniques, which already help scientists understand land-atmosphere interaction more precisely. However, there are challenges in applying remote sensing variables to calculate global land-air heat and water exchange fluxes. Firstly, a global turbulent exchange parameterization scheme needs to be developed and verified, especially for the calculation of global momentum and heat roughness lengths from remote sensing information. Secondly, a strategy is needed to overcome the spatial-temporal gaps in remote sensing variables so that remote-sensing-based land surface fluxes become applicable for GCM verification and comparison. A flux network data library (more than 200 flux towers) was collected to verify the designed method. Important progress in remote sensing of global land fluxes and evaporation will be presented, and its benefits for GCM models will be discussed. Some in-situ studies on the Tibetan Plateau and problems of land surface process simulation will also be discussed.
Normalized Implicit Radial Models for Scattered Point Cloud Data without Normal Vectors
2009-03-23
Aircraft applications of fault detection and isolation techniques
NASA Astrophysics Data System (ADS)
Marcos Esteban, Andres
In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H∞ LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements on the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H∞ LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.
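The Youla/LFT machinery referred to above can be summarized in a standard textbook form (sign and partition conventions vary between references): the set of admissible controllers is generated by a lower linear fractional transformation of a fixed interconnection J with a free stable parameter Q,

```latex
K(Q) = \mathcal{F}_{\ell}(J, Q)
     = J_{11} + J_{12}\, Q\,\bigl(I - J_{22}\, Q\bigr)^{-1} J_{21},
\qquad Q \ \text{stable},
```

and the dual Youla parameter plays the analogous role on the plant side, which is what enables the iterative identification/redesign loop described in the thesis.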
NASA Astrophysics Data System (ADS)
Skamarock, W. C.
2017-12-01
We have performed week-long full-physics simulations with the MPAS global model at 15 km cell spacing using vertical mesh spacings of 800, 400, 200 and 100 meters in the mid-troposphere through the mid-stratosphere. We find that the horizontal kinetic energy spectra in the upper troposphere and stratosphere do not converge with increasing vertical resolution until we reach 200 meter level spacing. Examination of the solutions indicates that significant inertia-gravity waves are not vertically resolved at the lower vertical resolutions. Diagnostics from the simulations indicate that the primary kinetic energy dissipation results from the vertical mixing within the PBL parameterization and from the gravity-wave drag parameterization, with smaller but significant contributions from damping in the vertical transport scheme and from the horizontal filters in the dynamical core. Most of the kinetic energy dissipation in the free atmosphere occurs within breaking mid-latitude baroclinic waves. We will briefly review these results and their implications for atmospheric model configuration and for atmospheric dynamics, specifically that related to the mesoscale kinetic energy spectrum.
NASA Astrophysics Data System (ADS)
Sullivan, Sylvia; Hoose, Corinna; Nenes, Athanasios
2016-04-01
Measurements of in-cloud ice crystal number concentrations can be three or four orders of magnitude greater than the in-cloud ice nuclei number concentrations. This discrepancy can be explained by various secondary ice formation processes, which occur after initial ice nucleation, but the relative importance of these processes, and even the exact physics of each, is still unclear. A simple bin microphysics model (2IM) is constructed to investigate these knowledge gaps. 2IM extends the time-lag collision parameterization of Yano and Phillips (2011) to include rime splintering, ice-ice aggregation, and droplet shattering, and to incorporate the aspect ratio evolution as in Jensen and Harrington (2015). The relative contributions of the secondary processes under various conditions are shown. In particular, temperature-dependent efficiencies are adjusted for ice-ice aggregation versus collision around -15°C, when rime splintering is no longer active, and the effect of aspect ratio on the process weighting is explored. The resulting simulations are intended to guide secondary ice formation parameterizations in larger-scale mixed-phase cloud schemes.
NASA Astrophysics Data System (ADS)
Gruber, Simon; Unterstrasser, Simon; Bechtold, Jan; Vogel, Heike; Jung, Martin; Pak, Henry; Vogel, Bernhard
2018-05-01
A high-resolution regional-scale numerical model was extended by a parameterization that allows for both the generation and the life cycle of contrails and contrail cirrus to be calculated. The life cycle of contrails and contrail cirrus is described by a two-moment cloud microphysical scheme that was extended by a separate contrail ice class for a better representation of the high concentration of small ice crystals that occur in contrails. The basic input data set contains the spatially and temporally highly resolved flight trajectories over Central Europe derived from real-time data. The parameterization provides aircraft-dependent source terms for contrail ice mass and number. A case study was performed to investigate the influence of contrails and contrail cirrus on the shortwave radiative fluxes at the earth's surface. Accounting for contrails produced by aircraft enabled the model to simulate high clouds that were otherwise missing on this day. The effect of these extra clouds was to reduce the incoming shortwave radiation at the surface as well as the production of photovoltaic power by up to 10 %.
NASA Astrophysics Data System (ADS)
Zhang, G. J.; Song, X.
2017-12-01
The double ITCZ bias has been a long-standing problem in coupled atmosphere-ocean models. A previous study indicates that uncertainty in the projection of global warming due to doubling of CO2 is closely related to the double ITCZ biases in global climate models. Thus, reducing the double ITCZ biases is not only important to getting the current climate features right, but also important to narrowing the uncertainty in future climate projection. In this work, we will first review the possible factors contributing to the ITCZ problem. Then, we will focus on atmospheric convection, presenting recent progress in alleviating the double ITCZ problem and its sensitivity to details of convective parameterization, including trigger conditions for convection onset, convective memory, entrainment rate, updraft model and closure in the NCAR CESM1. These changes together can result in dramatic improvements in the simulation of ITCZ. Results based on both atmospheric only and coupled simulations with incremental changes of convection scheme will be shown to demonstrate the roles of convection parameterization and coupled interaction between convection, atmospheric circulation and ocean circulation in the simulation of ITCZ.
Planning energy-efficient bipedal locomotion on patterned terrain
NASA Astrophysics Data System (ADS)
Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad
2016-05-01
Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones, using energy-efficient primitives. A model of Cornell Ranger (a passive-dynamics-inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT, defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, and ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the locations of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
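A toy sketch of how a per-step TCOT lookup table, once produced by the trajectory optimization, could drive the stepping-stone planning; the stone positions, feasibility tolerance, and TCOT values below are invented for illustration and the actual planner may differ.

```python
import numpy as np

def plan_steps(stone_positions, tcot_table, step_lengths, weight=9.81 * 10.0):
    """Energy-minimal stepping plan over a line of stones via dynamic programming.

    tcot_table[i] is a dimensionless cost of transport for a step of length
    step_lengths[i] at its energy-optimal speed (the outcome of the per-step
    optimization). Energy of a step of length d is then TCOT(d) * weight * d.
    All numbers are placeholders; weight assumes a hypothetical 10 kg robot.
    """
    stones = np.asarray(stone_positions, float)
    n = len(stones)
    cost = np.full(n, np.inf); cost[0] = 0.0
    prev = np.full(n, -1)
    for j in range(1, n):
        for i in range(j):
            d = stones[j] - stones[i]
            k = np.argmin(np.abs(np.asarray(step_lengths) - d))
            if abs(step_lengths[k] - d) > 0.05:   # no feasible primitive for this gap
                continue
            c = cost[i] + tcot_table[k] * weight * d
            if c < cost[j]:
                cost[j], prev[j] = c, i
    path, j = [], n - 1
    while j >= 0:
        path.append(j); j = prev[j]
    return cost[-1], path[::-1]

# Hypothetical stones and TCOT lookup (cost rises for long steps).
stones = [0.0, 0.35, 0.75, 1.05, 1.5]
lengths = [0.3, 0.4, 0.5, 0.7]
tcot = [0.02, 0.022, 0.028, 0.045]
print(plan_steps(stones, tcot, lengths))
```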
Atmospheric form drag over Arctic sea ice derived from high-resolution IceBridge elevation data
NASA Astrophysics Data System (ADS)
Petty, A.; Tsamados, M.; Kurtz, N. T.
2016-02-01
Here we present a detailed analysis of atmospheric form drag over Arctic sea ice, using high-resolution, three-dimensional surface elevation data from the NASA Operation IceBridge Airborne Topographic Mapper (ATM) laser altimeter. Surface features in the sea ice cover are detected using a novel feature-picking algorithm. We derive information regarding the height, spacing and orientation of unique surface features from 2009-2014 across both first-year and multiyear ice regimes. The topography results are used to explicitly calculate atmospheric form drag coefficients, utilizing existing form drag parameterizations. The atmospheric form drag coefficients show strong regional variability, mainly due to variability in ice type/age. The transition from a perennial to a seasonal ice cover therefore suggests a decrease in the atmospheric form drag coefficients over Arctic sea ice in recent decades. These results are also being used to calibrate a recent form drag parameterization scheme included in the sea ice model CICE, to improve the representation of form drag over Arctic sea ice in global climate models.
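A heavily simplified sketch of the generic ingredient such obstacle-drag schemes share: the neutral form-drag coefficient grows with the ratio of feature height to spacing, modulated by sheltering and a logarithmic-profile weighting. The constants below are placeholders, not the calibrated values used with CICE or in the study.

```python
import numpy as np

def form_drag_coefficient(height, spacing, c_obstacle=0.2, sheltering=0.5,
                          z0=5e-4, z_ref=10.0):
    """Simplified neutral form-drag coefficient from feature statistics.

    height: mean obstacle (ridge/feature) height in m; spacing: mean distance
    between features in m; c_obstacle and sheltering are placeholder tuning
    factors; z0 is a skin roughness length and z_ref the reference height for
    the log-profile weighting. Not the published formulation, only its shape.
    """
    aspect = height / spacing
    log_correction = (np.log(height / z0) / np.log(z_ref / z0)) ** 2
    return 0.5 * c_obstacle * sheltering * aspect * log_correction

# Hypothetical ATM-derived statistics: 0.6 m features spaced every 120 m.
print(form_drag_coefficient(height=0.6, spacing=120.0))
```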
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
NASA Astrophysics Data System (ADS)
Prein, A. F.; Langhans, W.; Fosser, G.; Ferrone, A.; Ban, N.; Goergen, K.; Keller, M.; Tölle, M.; Gutjahr, O.; Feser, F.; Brisson, E.; Kollet, S. J.; Schmidli, J.; Van Lipzig, N. P. M.; Leung, L. R.
2015-12-01
Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, first CPM climate simulations only appeared a decade ago. We aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.
Prein, Andreas F; Langhans, Wolfgang; Fosser, Giorgia; Ferrone, Andrew; Ban, Nikolina; Goergen, Klaus; Keller, Michael; Tölle, Merja; Gutjahr, Oliver; Feser, Frauke; Brisson, Erwan; Kollet, Stefan; Schmidli, Juerg; van Lipzig, Nicole P M; Leung, Ruby
2015-06-01
Regional climate modeling using convection-permitting models (CPMs; horizontal grid spacing <4 km) emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large-scale models (LSMs; horizontal grid spacing >10 km). CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, first CPM climate simulations only appeared a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs.
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
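The reported convergence rate can be recovered from pairs of runs by the usual log-log slope; a minimal sketch with invented error values chosen to give a rate near the paper's 0.4:

```python
import numpy as np

def observed_convergence_rate(dt_coarse, err_coarse, dt_fine, err_fine):
    """Convergence rate p from errors at two time steps, assuming err ~ C * dt**p."""
    return np.log(err_coarse / err_fine) / np.log(dt_coarse / dt_fine)

# Hypothetical solution errors (relative to a 1 s reference run) at two
# process-coupling time steps; a rate near 0.4 would mirror the paper's finding,
# while 1.0 is the formally expected first-order rate.
print(observed_convergence_rate(1800.0, 2.0e-2, 450.0, 1.15e-2))
```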
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
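A sketch of the principal-component proposal step described above, with a hypothetical three-layer model and covariance; the trans-dimensional birth/death moves, the AR error parameters, and parallel tempering are not shown.

```python
import numpy as np

def principal_component_proposal(model, model_cov, step_scale=0.1, rng=None):
    """Perturb model parameters along principal components of a model covariance.

    Eigen-decomposition of the (unit-lag) covariance gives directions that are
    approximately uncorrelated; proposing along them with eigenvalue-scaled
    steps reduces the effect of inter-parameter correlations on acceptance.
    This is only the proposal step of a much larger rjMCMC algorithm.
    """
    rng = rng or np.random.default_rng()
    eigval, eigvec = np.linalg.eigh(model_cov)
    coeffs = rng.normal(scale=step_scale * np.sqrt(np.maximum(eigval, 0.0)))
    return model + eigvec @ coeffs

# Hypothetical 3-layer model (log10 resistivities) and unit-lag covariance.
m = np.array([2.0, 1.2, 2.5])
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
print(principal_component_proposal(m, C, rng=np.random.default_rng(3)))
```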
Griffin, Brian M.; Larson, Vincent E.
2016-11-25
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
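Schematically (our notation), the analytic-integration step amounts to evaluating moments of the microphysical source term S over the assumed multivariate subgrid distribution P of the cloud- and precipitation-related variables χ, e.g.

```latex
\overline{S} = \int S(\boldsymbol{\chi})\, P(\boldsymbol{\chi})\, \mathrm{d}\boldsymbol{\chi},
\qquad
\overline{w' S'} = \int \bigl(w - \overline{w}\bigr)\,
                   \bigl(S(\boldsymbol{\chi}) - \overline{S}\bigr)\,
                   P(\boldsymbol{\chi})\, \mathrm{d}\boldsymbol{\chi},
```

which admits closed-form results when S comes from a sufficiently simple warm-rain scheme and P has an analytically convenient form.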
NASA Astrophysics Data System (ADS)
Mariani, S.; Casaioli, M.; Lastoria, B.; Accadia, C.; Flavoni, S.
2009-04-01
The Institute for Environmental Protection and Research - ISPRA (formerly the Agency for Environmental Protection and Technical Services - APAT) has run operationally since 2000 an integrated meteo-marine forecasting chain, named the Hydro-Meteo-Marine Forecasting System (Sistema Idro-Meteo-Mare - SIMM), formed by a cascade of four numerical models telescoping from the Mediterranean basin to the Venice Lagoon and initialized by means of analyses and forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). The operational integrated system consists of a meteorological model, the parallel version of the BOlogna Limited Area Model (BOLAM), coupled over the Mediterranean Sea with a WAve Model (WAM), a high-resolution shallow-water model of the Adriatic and Ionian Seas, namely the Princeton Ocean Model (POM), and a finite-element version of the same model (VL-FEM) on the Venice Lagoon, aimed at forecasting acqua alta events. Recently, the physically based, fully distributed, rainfall-runoff TOPographic Kinematic APproximation and Integration (TOPKAPI) model has been integrated into the system, coupled to BOLAM, over two river basins located in the central and northeastern parts of Italy, respectively. At present, however, this latter part of the forecasting chain is not operational and is used in a research configuration. BOLAM was originally implemented in 2000 on the Quadrics parallel supercomputer (and for this reason is also referred to as QBOLAM), and only at the end of 2006 was it ported (together with the other operational marine models of the forecasting chain) onto a Silicon Graphics Inc. (SGI) Altix 8-processor machine. Due to the Quadrics implementation, the Kuo scheme was initially used in QBOLAM for the cumulus convection parameterization. When porting SIMM onto the Altix Linux cluster, it became possible to implement in QBOLAM the more advanced convection parameterization of Kain and Fritsch. A fully updated serial version of the BOLAM code has recently been acquired. Code improvements include a more accurate advection scheme (Weighted Average Flux), explicit advection of five hydrometeor species, and state-of-the-art parameterization schemes for radiation, convection, boundary layer turbulence, and soil processes (with a possible choice among different available schemes). The operational implementation of the new code into the SIMM model chain, which requires the development of a parallel version, will be achieved during 2009. In view of this goal, the comparative verification of the skill of the different model versions represents a fundamental task. For this purpose, it has been decided to evaluate the performance improvement of the new BOLAM code (in the available serial version, hereinafter BOLAM 2007) with respect to the version with the Kain-Fritsch scheme (hereinafter KF version) and to the older one employing the Kuo scheme (hereinafter Kuo version). In the present work, verification of precipitation forecasts from the three BOLAM versions is carried out in a case-study approach. The intense rainfall episode that occurred on 10-17 December 2008 over Italy is considered; this event indeed produced severe damage in Rome and its surrounding areas. Objective and subjective verification methods have been employed in order to evaluate model performance against an observational dataset including rain gauge observations and satellite imagery.
Subjective comparison of observed and forecast precipitation fields is suitable for giving an overall description of forecast quality. Spatial errors (e.g., shifting and pattern errors) and the rainfall volume error can be assessed quantitatively by means of object-oriented methods. By comparing satellite images with model forecast fields, it is possible to investigate the differences between the evolution of the observed weather system and the predicted one, and the sensitivity of those differences to the improvements in the model code. Finally, the error in forecasting the cyclone evolution can be tentatively related to the precipitation forecast error.
Importance of convective parameterization in ENSO predictions
NASA Astrophysics Data System (ADS)
Zhu, Jieshun; Kumar, Arun; Wang, Wanqiu; Hu, Zeng-Zhen; Huang, Bohua; Balmaseda, Magdalena A.
2017-06-01
This letter explored the influence of atmospheric convection scheme on El Niño-Southern Oscillation (ENSO) predictions using a set of hindcast experiments. Specifically, a low-resolution version of the Climate Forecast System version 2 is used for 12 month hindcasts starting from each April during 1982-2011. The hindcast experiments are repeated with three atmospheric convection schemes. All three hindcasts apply the identical initialization with ocean initial conditions taken from the European Centre for Medium-Range Weather Forecasts and atmosphere/land initial states from the National Centers for Environmental Prediction. Assessments indicate a substantial sensitivity of the sea surface temperature prediction skill to the different convection schemes, particularly over the eastern tropical Pacific. For the Niño 3.4 index, the anomaly correlation skill can differ by 0.1-0.2 at lead times longer than 2 months. Long-term simulations are further conducted with the three convection schemes to understand the differences in prediction skill. By conducting heat budget analyses for the mixed-layer temperature anomalies, it is suggested that the convection scheme having the highest skill simulates stronger and more realistic coupled feedbacks related to ENSO. Particularly, the strength of the Ekman pumping feedback is better represented, which is traced to more realistic simulation of surface wind stress. Our results imply that improving the mean state simulations in coupled (ocean-atmosphere) general circulation model (e.g., ameliorating the Intertropical Convergence Zone simulation) might further improve our ENSO prediction capability.
NASA Astrophysics Data System (ADS)
Song, Hwan-Jin; Sohn, Byung-Ju
2018-05-01
The Korean peninsula is a region that distinctly exhibits heavy rain associated with relatively low storm heights and small ice water contents in the upper part of the cloud system (so-called warm-type heavy rainfall). Satellite observations of the warm-type rain over Korea led to a conjecture that cloud microphysics parameterizations suited to continental deep convection may not work well for the warm-type heavy rainfall over the Korean peninsula. Therefore, there is a growing need to examine the performance of cloud microphysics schemes in simulating warm-type heavy rain structures over the Korean peninsula. This study evaluates how well eight microphysics schemes in the Weather Research and Forecasting (WRF) model simulate warm-type heavy rain structures, with reference to Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) reflectivity measurements. The results indicate that the WRF Double Moment 6-class (WDM6) scheme best simulated the vertical structure of warm-type heavy rain, by virtue of a reasonable collision-coalescence process between liquid droplets and the smallest amount of snow. Nonetheless, the WDM6 scheme appears to have limitations that need to be improved upon for a realistic reflectivity structure, in terms of the reflectivity slope below the melting layer, the discontinuity in reflectivity profiles around the melting layer, and the overestimation of upper-level reflectivity due to high graupel content.
NASA Astrophysics Data System (ADS)
Meinke, I.
2003-04-01
A new method is presented to validate cloud parameterization schemes in numerical atmospheric models using satellite data from scanning radiometers. The method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). The limited reliability of former validations motivated the development of a new validation method: up to now, differences between simulated and measured cloud properties have mostly been declared deficiencies of the cloud parameterization scheme without further investigation. Other uncertainties connected with the model or with the measurements have not been taken into account, so changes to the cloud parameterization scheme based on such validations might not be realistic. The new method estimates the uncertainties of the model and of the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies, simulated clouds are classified according to their parameterization. With this classification, the localized model deficiencies are attributed to a particular parameterization scheme. Applying this method to the regional model HRM, the quality of cloud property forecasts is estimated in detail. The overestimation of simulated clouds at low emissivity heights, especially during the night, is identified as a model deficiency. It is caused by subscale cloudiness; as the simulation of subscale clouds in the HRM is described by a relative humidity parameterization, these deficiencies are connected with that parameterization.
Mapping the Martian Meteorology
NASA Technical Reports Server (NTRS)
Allison, Michael; Ross, J. D.; Soloman, N.
1999-01-01
The Mars-adapted version of the NASA/GISS general circulation model (GCM) has been applied to the hourly/daily simulation of the planet's meteorology over several seasonal orbits. The current running version of the model includes a diurnal solar cycle, CO2 sublimation, and a mature parameterization of upper level wave drag with a vertical domain extending from the surface up to the 6 μbar level. The benchmark simulations provide a four-dimensional archive for the comparative evaluation of various schemes for the retrieval of winds from anticipated polar orbiter measurements of temperatures by the Pressure Modulator Infrared Radiometer.
NASA Astrophysics Data System (ADS)
Chawla, Ila; Osuri, Krishna K.; Mujumdar, Pradeep P.; Niyogi, Dev
2018-02-01
Reliable estimates of extreme rainfall events are necessary for an accurate prediction of floods. Most of the global rainfall products are available at a coarse resolution, rendering them less desirable for extreme rainfall analysis. Therefore, regional mesoscale models such as the advanced research version of the Weather Research and Forecasting (WRF) model are often used to provide rainfall estimates at fine grid spacing. Modelling heavy rainfall events is an enduring challenge, as such events depend on multi-scale interactions and on model configuration choices such as grid spacing, physical parameterizations, and initialization. With this background, the WRF model is implemented in this study to investigate the impact of different processes on extreme rainfall simulation, by considering a representative event that occurred during 15-18 June 2013 over the Ganga Basin in India, which is located at the foothills of the Himalayas. This event is simulated with ensembles involving four different microphysics (MP) schemes, two cumulus (CU) parameterizations, two planetary boundary layer (PBL) schemes, and two land surface physics options, as well as different resolutions (grid spacing) within the WRF model. The simulated rainfall is evaluated against the observations from 18 rain gauges and the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis (TMPA) 3B42RT version 7 data. From the analysis, it is noted that the choice of MP scheme influences the spatial pattern of rainfall, while the choice of PBL and CU parameterizations influences the magnitude of rainfall in the model simulations. Further, the WRF run with Goddard MP, Mellor-Yamada-Janjic PBL and Betts-Miller-Janjic CU scheme is found to perform best
in simulating this heavy rain event. The selected configuration is evaluated for several heavy to extremely heavy rainfall events that occurred across different months of the monsoon season in the region. The model performance improved through incorporation of detailed land surface processes involving prognostic soil moisture evolution in Noah scheme compared to the simple Slab model. To analyse the effect of model grid spacing, two sets of downscaling ratios - (i) 1 : 3, global to regional (G2R) scale and (ii) 1 : 9, global to convection-permitting scale (G2C) - are employed. Results indicate that a higher downscaling ratio (G2C) causes higher variability and consequently large errors in the simulations. Therefore, G2R is adopted as a suitable choice for simulating heavy rainfall event in the present case study. Further, the WRF-simulated rainfall is found to exhibit less bias when compared with the NCEP FiNaL (FNL) reanalysis data.
Vertical Transport Processes for Inert and Scavenged Species: TRACE-A Measurements
NASA Technical Reports Server (NTRS)
Chatfield, Robert B.; Chan, K. Roland (Technical Monitor)
1997-01-01
The TRACE-A mission of the NASA DC-8 aircraft made a large-scale survey of the tropical and subtropical atmosphere in September and October of 1992. Both in-situ measurements of CO (G. Sachse, NASA Langley) and aerosol size (J. Browell group, NASA Langley) provide excellent data sets with which to constrain vertical transport by planetary boundary layer mixing and deep-cloud cumulus convection. Lidar profiles of aerosol-induced scattering and ozone (also by Browell) require somewhat more subtle interpretation as tracers, but the vertical information on layering largely compensates for these complexities. The reason this DC-8 dataset is so useful is that very large areas of biomass burning over Africa and South America provide surface sources of appropriate sizes with which to characterize vertical and horizontal motions; the major limitation of our source description is that biomass burning patterns move considerably every few days, and daily burning inventories are a matter of concurrent, intensive research. We use the Penn State / NCAR MM5 model in an assimilation mode on the synoptic and intercontinental scale, and assess the success it shows in vertical transport descriptions. We find that the general level of emissions suggested by the climatological approach (Will. Has, U. of Montana) appears to be approximately correct, possibly a bit low, for this October 1992 time period. Vertical transport by planetary boundary layer mixing to 5.5 km was observed and reproduced in our simulations. Furthermore, we find evidence that a Blackadar "transilient" or matrix-transport scheme is needed, but may require some adaptation in our tracer model: CO exhibits very high values at the top of the planetary boundary layer, a feature that stretches the eddy-diffusion parameterization. We will report on progress in improving the deep convective transport of carbon monoxide: the Grell scheme as we used it at 100 km resolution did not transport enough material to the upper troposphere. We expect to be able to attribute this either to parameterization reasons (inadequacy of this parameterization at the large 100 km scale) or to other reasons. Nevertheless, the qualitative nature of deep transport by clouds shows up well in the simulations. As for scavengable species, the simulations predict tens of micrograms per standard cubic meter of smoke aerosol in the boundary layer and, in a straightforward illustration of our simple bulk-mass scavenging parameterization, one to two micrograms per standard cubic meter in the free troposphere just above the source regions: very high concentrations for the free troposphere. We expect to report on comparisons of these predictions to a variety of observations.
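To illustrate why a matrix (transilient) transport operator behaves differently from local eddy diffusion near the boundary layer top, here is a minimal sketch with an idealized CO column. The grid, diffusivity, and mixing matrix are assumptions chosen for illustration, not values from the TRACE-A analysis or the MM5 tracer model.

```python
# Minimal sketch contrasting local eddy diffusion with transilient (matrix) mixing
# for an idealized CO column. All numerical values are illustrative assumptions.
import numpy as np

nz, dz, dt = 20, 250.0, 60.0             # 20 levels, 250 m spacing, 60 s time step
K = np.full(nz - 1, 50.0)                # assumed constant eddy diffusivity (m^2/s)

def eddy_diffusion_step(c):
    """Local (down-gradient) mixing: flux only between adjacent levels."""
    flux = -K * np.diff(c) / dz          # F_{k+1/2} = -K dC/dz at layer interfaces
    dcdt = np.zeros_like(c)
    dcdt[:-1] -= flux / dz               # flux leaving each level upward
    dcdt[1:]  += flux / dz               # flux arriving from the level below
    return c + dt * dcdt

def transilient_step(c, M):
    """Non-local mixing: M[i, j] is the fraction of level j's air moved to level i per step."""
    return M @ c

# Crude transilient matrix: each of the lowest 8 levels exchanges 10% of its air
# directly with every level of the boundary layer; columns sum to 1 (mass conserving).
M = np.eye(nz)
M[0:8, 0:8] = 0.9 * np.eye(8) + 0.1 / 8

co_local = np.zeros(nz); co_local[0] = 400.0   # assumed surface CO enhancement (ppbv)
co_trans = co_local.copy()
for _ in range(60):                             # one hour of mixing
    co_local = eddy_diffusion_step(co_local)
    co_trans = transilient_step(co_trans, M)

print("CO at BL top, eddy diffusion:", round(co_local[7], 2), "ppbv")
print("CO at BL top, transilient   :", round(co_trans[7], 2), "ppbv")
```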
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.
2002-01-01
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or pixels, or to change the number of clusters. Such changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation. It has the added advantage that the parameterized design compiles approximately three times faster than the original.
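A software sketch of the same parameterization idea follows. It is not the reconfigurable-hardware design; it only illustrates a k-means routine in which pixel count, channel count, bit depth, and cluster count are all run-time parameters. The distance metric and the example image dimensions are assumptions for illustration.

```python
# Software illustration (not the hardware design) of k-means clustering parameterized
# by number of pixels, channels per pixel, bits per channel, and number of clusters.
import numpy as np

def kmeans(pixels, n_clusters, bits_per_channel=12, n_iters=20, seed=0):
    """Cluster (n_pixels, n_channels) multispectral data; samples are assumed to be
    unsigned integers of the given bit depth. Returns (labels, cluster centers)."""
    rng = np.random.default_rng(seed)
    data = pixels.astype(np.float64)
    centers = data[rng.choice(len(data), n_clusters, replace=False)]  # initial centers
    for _ in range(n_iters):
        # Plain Euclidean distance here; a hardware design might use a cheaper metric.
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = data[labels == k].mean(axis=0)
    return labels, np.clip(centers, 0, 2 ** bits_per_channel - 1)

# Example: a 10-channel image with 12-bit samples and 8 clusters (sizes are assumptions).
img = np.random.default_rng(1).integers(0, 2 ** 12, size=(4096, 10))
labels, centers = kmeans(img, n_clusters=8)
print(labels.shape, centers.shape)   # (4096,) (8, 10)
```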