NASA Astrophysics Data System (ADS)
Li, Y.; McDougall, T. J.
2016-02-01
Coarse-resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Previous work has shown that these spatial correlations contribute to the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and can be evaluated directly from model data. In this study, data from a high-resolution ocean model are used to estimate the accuracy of this approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine-grid fields. The transport calculated on the coarse grid is then compared to the transport on the original high-resolution grid accumulated over the corresponding number of grid boxes. Preliminary results show that the estimates on coarse-resolution grids roughly match the corresponding transports on high-resolution grids.
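A minimal sketch of the coarsening and comparison step described above, assuming the fine-grid fields are available as NumPy arrays; the field names, grid sizes, and box-car factor are illustrative rather than taken from the study.

```python
import numpy as np

def boxcar_coarsen(field, factor):
    """Box-car average a 2-D fine-grid field onto a coarse grid whose cells
    each cover `factor` x `factor` fine cells (grid sizes assumed divisible)."""
    ny, nx = field.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = field[:ny_c * factor, :nx_c * factor]
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

# Illustrative fine-grid meridional velocity v and temperature T
rng = np.random.default_rng(0)
v = rng.normal(size=(512, 512))
T = rng.normal(loc=15.0, scale=2.0, size=(512, 512))

factor = 8
v_c, T_c = boxcar_coarsen(v, factor), boxcar_coarsen(T, factor)

# Heat-transport proxy per coarse cell: resolved part (coarse v*T) versus the
# fine-grid v*T accumulated over the corresponding fine cells.
resolved = v_c * T_c
accumulated = boxcar_coarsen(v * T, factor)
eddy_part = accumulated - resolved   # contribution of sub-grid spatial correlations
print(float(np.abs(eddy_part).mean()))
```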
NASA Astrophysics Data System (ADS)
Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.
2012-12-01
This study compares two grid refinement approaches for high-resolution regional climate modeling: a global variable-resolution model and nesting. The global variable-resolution model, the Model for Prediction Across Scales (MPAS), and the limited-area model, the Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform-resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and with a variable-resolution domain in which a high-resolution region at 0.25 degree is configured inside a coarse-resolution (1 degree) global domain. Similarly, WRF has been configured to run on coarse (1 degree) and high (0.25 degree) resolution tropical channel domains, as well as on a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse-resolution (1 degree) tropical channel. The variable-resolution or nested simulations are compared against the high-resolution simulations, which serve as a virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and the sensitivity of model physics to grid resolution. This study highlights the need for "scale-aware" parameterizations in variable-resolution and nested regional models.
Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling
NASA Astrophysics Data System (ADS)
Reed, Seann M.
2003-09-01
The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT) has been developed to automatically, accurately, and efficiently assign flow directions to any coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency for producing diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
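The outlet-tracing idea can be sketched as follows. This is a simplified illustration (no area threshold, an arbitrary direction coding, and an assumed cycle-free fine-grid flow-direction field), not the published COTAT algorithm.

```python
import numpy as np

# D8 offsets keyed by an illustrative direction code (not the ESRI/NWS convention)
D8 = {0: (-1, 0), 1: (-1, 1), 2: (0, 1), 3: (1, 1),
      4: (1, 0), 5: (1, -1), 6: (0, -1), 7: (-1, -1)}

def coarse_flow_direction(fdir_fine, facc_fine, factor):
    """Assign one D8 direction per coarse cell by tracing downstream on the
    fine grid from the fine cell with the largest flow accumulation.
    Assumes grid sizes divisible by `factor` and a fine flow-direction field
    that drains off-grid (no cycles); cells left at -1 drain off the domain."""
    ny, nx = fdir_fine.shape
    nyc, nxc = ny // factor, nx // factor
    fdir_coarse = np.full((nyc, nxc), -1, dtype=int)
    for jc in range(nyc):
        for ic in range(nxc):
            block = facc_fine[jc*factor:(jc+1)*factor, ic*factor:(ic+1)*factor]
            dj, di = np.unravel_index(np.argmax(block), block.shape)
            j, i = jc*factor + dj, ic*factor + di          # outlet cell of the block
            while 0 <= j < ny and 0 <= i < nx and j // factor == jc and i // factor == ic:
                dy, dx = D8[fdir_fine[j, i]]
                j, i = j + dy, i + dx                      # trace one fine cell downstream
            if 0 <= j < ny and 0 <= i < nx:
                # direction points toward the coarse cell the downstream path entered
                vec = (np.sign(j // factor - jc), np.sign(i // factor - ic))
                fdir_coarse[jc, ic] = next(k for k, v in D8.items() if v == vec)
    return fdir_coarse
```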
Downscaling soil moisture over regions that include multiple coarse-resolution grid cells
USDA-ARS?s Scientific Manuscript database
Many applications require soil moisture estimates over large spatial extents (30-300 km) and at fine-resolutions (10-30 m). Remote-sensing methods can provide soil moisture estimates over very large spatial extents (continental to global) at coarse resolutions (10-40 km), but their output must be d...
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Benard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach can feasibly generate fine-scale soil moisture fields across large extents from coarse-scale observations. A likely application of this approach is the generation of fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
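A minimal sketch of the nudging construction, assuming a generic fine-grid state advanced by some model tendency f; the piecewise-constant interpolation operator, the relaxation coefficient mu, and the time step are illustrative choices, not those of the cited work.

```python
import numpy as np

def interp_coarse_to_fine(field_coarse, factor):
    """Piecewise-constant interpolant of a coarse field onto the fine grid."""
    return np.kron(field_coarse, np.ones((factor, factor)))

def coarsen(field_fine, factor):
    """Block average a fine field onto the coarse observation grid
    (grid sizes assumed divisible by `factor`)."""
    ny, nx = field_fine.shape
    return field_fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def cda_step(theta, f, obs_coarse, factor, mu, dt):
    """One continuous-data-assimilation step: model tendency plus a nudging
    term proportional to the misfit between interpolants of the coarse
    observations and of the coarsened model state."""
    misfit = interp_coarse_to_fine(obs_coarse - coarsen(theta, factor), factor)
    return theta + dt * (f(theta) + mu * misfit)
```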
Coarsening of three-dimensional structured and unstructured grids for subsurface flow
NASA Astrophysics Data System (ADS)
Aarnes, Jørg Espen; Hauge, Vera Louise; Efendiev, Yalchin
2007-11-01
We present a generic, semi-automated algorithm for generating non-uniform coarse grids for modeling subsurface flow. The method is applicable to arbitrary grids and does not impose smoothness constraints on the coarse grid. One therefore avoids the conventional smoothing procedures that are commonly used to ensure that grids obtained with standard coarsening procedures are not too rough. The coarsening algorithm is very simple and essentially involves only two parameters that specify the level of coarsening. Consequently, the algorithm allows the user to specify the simulation grid dynamically to fit available computer resources and, e.g., use the original geomodel as input for flow simulations. This is of great importance since coarse-grid generation is normally the most time-consuming part of an upscaling phase, and therefore the main obstacle that has prevented simulation workflows with user-defined resolution. We apply the coarsening algorithm to a series of two-phase flow problems on both structured (Cartesian) and unstructured grids. The numerical results demonstrate that one consistently obtains significantly more accurate results using the proposed non-uniform coarsening strategy than with corresponding uniform coarse grids with roughly the same number of cells.
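A sketch of the first two steps of such a coarsening strategy (bin a flow indicator, then take connected components within each bin as candidate coarse blocks); the indicator, the number of bins, and the omission of the subsequent block-merging/refinement steps are simplifications of the published algorithm.

```python
import numpy as np
from scipy import ndimage

def initial_coarse_blocks(velocity_magnitude, n_bins=10):
    """Segment a Cartesian fine grid into candidate coarse blocks by binning a
    log-velocity flow indicator and labelling connected components per bin."""
    g = np.log10(np.maximum(velocity_magnitude, 1e-12))
    edges = np.linspace(g.min(), g.max() + 1e-12, n_bins + 1)
    bins = np.digitize(g, edges) - 1          # bin index 0..n_bins-1 per fine cell
    blocks = np.zeros_like(bins)
    next_label = 1
    for b in range(n_bins):
        labels, n = ndimage.label(bins == b)  # connected regions within this bin
        mask = labels > 0
        blocks[mask] = labels[mask] + next_label - 1
        next_label += n
    return blocks                             # integer block index per fine cell
```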
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support of the acquired projections be reconstructed, thus precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm in which the volume is parameterized as a union of fine and coarse voxel grids and detector pixels are selectively binned. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling makes the reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field of view. PMID:27694701
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ models, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model in order to assess grid independence. Three different grid resolutions (fine, medium and coarse) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, with nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, the disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran
2014-04-23
The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.
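The way a filtered correction of this kind typically enters a coarse-grid simulation can be sketched as below; both the microscopic drag coefficient and, especially, the correction function are hypothetical placeholders, since the actual fitted forms are given in the cited papers.

```python
def heterogeneity_correction(filter_size, solids_fraction):
    """Hypothetical placeholder for a fitted sub-grid correction factor h
    (0 = no correction, approaching 1 = drag strongly reduced by clustering)."""
    return min(0.9, 0.5 * (solids_fraction / 0.6) * (1.0 - 1.0 / (1.0 + filter_size)))

def filtered_drag_coefficient(beta_microscopic, filter_size, solids_fraction):
    """Filtered (coarse-grid) interphase drag coefficient: the resolved
    microscopic drag scaled down by a sub-grid heterogeneity correction."""
    h = heterogeneity_correction(filter_size, solids_fraction)
    return (1.0 - h) * beta_microscopic

# Example: a coarse cell with 20% solids and a dimensionless filter size of 4
print(filtered_drag_coefficient(beta_microscopic=1200.0, filter_size=4.0, solids_fraction=0.2))
```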
Spatial scaling of net primary productivity using subpixel landcover information
NASA Astrophysics Data System (ADS)
Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.
2008-10-01
Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimates of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is performed on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP values obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates from a coupled carbon-hydrology model (BEPS-TerrainLab) made at 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as of its spatial variability.
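A minimal sketch of the subpixel correction idea: scale the coarse-pixel NPP by the ratio of the cover-fraction-weighted NPP to the NPP of the cover type assumed in the homogeneous calculation. The cover fractions and per-cover NPP values below are invented for illustration, and the simple ratio stands in for the correction functions of the study.

```python
def subpixel_npp_correction(npp_coarse, cover_fractions, npp_by_cover, dominant_cover):
    """Correct a coarse-pixel NPP estimate for sub-pixel land-cover variability.

    npp_coarse      : NPP computed assuming the pixel is homogeneous (dominant cover)
    cover_fractions : dict cover_type -> areal fraction within the pixel
    npp_by_cover    : dict cover_type -> representative NPP for that cover
    dominant_cover  : cover type assumed in the coarse calculation
    """
    weighted = sum(f * npp_by_cover[c] for c, f in cover_fractions.items())
    correction = weighted / npp_by_cover[dominant_cover]
    return npp_coarse * correction

# Illustrative 1-km pixel: 60% forest, 30% cropland, 10% grassland
print(subpixel_npp_correction(
    npp_coarse=620.0,                       # g C m-2 yr-1, computed as if all forest
    cover_fractions={"forest": 0.6, "cropland": 0.3, "grassland": 0.1},
    npp_by_cover={"forest": 620.0, "cropland": 450.0, "grassland": 300.0},
    dominant_cover="forest"))
```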
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a merging vortex pair, a double shear layer, decaying turbulence, and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine-resolution vorticity field.
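A schematic of the CGP kinematic step under simplifying assumptions: a periodic FFT Poisson solver, 2x2 averaging in place of full weighting, and piecewise-constant prolongation in place of bilinear interpolation. It illustrates the restrict-solve-prolong structure rather than the authors' exact operators.

```python
import numpy as np

def poisson_fft(rhs, dx):
    """Solve laplacian(psi) = rhs on a periodic square grid via FFT (zero-mean solution)."""
    n = rhs.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero for the mean mode
    psi_hat = np.fft.fft2(rhs) / (-k2)
    psi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(psi_hat))

def restrict(f):   # fine -> coarse by 2x2 averaging (stand-in for full weighting)
    return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2])

def prolong(f):    # coarse -> fine by repetition (stand-in for bilinear interpolation)
    return np.kron(f, np.ones((2, 2)))

def cgp_poisson(vorticity, dx, levels=1):
    """Coarse-grid projection of the kinematic step: solve the Poisson equation
    for the streamfunction on a grid coarsened `levels` times, then map back.
    Assumes an even-sized periodic grid at every level."""
    rhs, h = -vorticity, dx              # streamfunction equation: laplacian(psi) = -omega
    for _ in range(levels):
        rhs, h = restrict(rhs), 2.0 * h
    psi = poisson_fft(rhs, h)
    for _ in range(levels):
        psi = prolong(psi)
    return psi
```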
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
NASA Astrophysics Data System (ADS)
Yu, Karen; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Miller, Christopher C.; Travis, Katherine R.; Zhu, Lei; Yantosca, Robert M.; Sulprizio, Melissa P.; Cohen, Ron C.; Dibb, Jack E.; Fried, Alan; Mikoviny, Tomas; Ryerson, Thomas B.; Wennberg, Paul O.; Wisthaler, Armin
2016-04-01
Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the lower free troposphere are also insensitive to grid resolution. The overall low sensitivity of modeled concentrations to grid resolution implies that coarse resolution is adequate when modeling continental boundary layer chemistry for global applications.
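The resolution dependence of the high-NOx fraction arises from averaging a nonlinear function of spatially segregated fields. A toy illustration of that averaging effect is given below; the branching function, its half-saturation constant, and the anti-correlated NO/isoprene fields are purely hypothetical and are not GEOS-Chem chemistry.

```python
import numpy as np

def high_nox_fraction(no_ppt, k_half=50.0):
    """Hypothetical branching fraction of isoprene peroxy radicals reacting
    with NO rather than HO2 (k_half is an illustrative half-saturation level)."""
    return no_ppt / (no_ppt + k_half)

rng = np.random.default_rng(1)
# Anti-correlated fine-grid NO and isoprene within one coarse cell (illustrative)
no_fine = rng.lognormal(mean=np.log(40.0), sigma=1.0, size=1000)
isoprene_fine = 5.0 / (1.0 + no_fine / 40.0)

# Isoprene-weighted fraction oxidized by the high-NOx path at fine resolution
fine = np.average(high_nox_fraction(no_fine), weights=isoprene_fine)
# The same quantity computed from the coarse-cell mean NO concentration
coarse = high_nox_fraction(no_fine.mean())
print(f"fine-resolution fraction {fine:.2f} vs coarse-cell fraction {coarse:.2f}")
```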
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4, with coarse- and fine-grid based computations performed with the Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes, generated for an identical initial problem formulation but with different optimization path options (coarse or fine grid, Euler or TLNS), have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions show a significant bearing on the optimization results. Also, the results demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, initially be employed to improve the optimization problem definition, the design space and the initial shape. Optimized shapes should subsequently be analyzed using a high-fidelity (viscous with fine-grid resolution) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.
NASA Astrophysics Data System (ADS)
Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.
2017-12-01
Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression derived from a stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile, based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high resolution maps of basic soil properties such as soil texture and bulk density, the information needed to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations from the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
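Applying such a look-up table amounts to a simple per-cell interpolation; a minimal sketch follows, with an invented table (the real values would come from the closed-form expression driven by SoilGrids1km texture data).

```python
import numpy as np

# Hypothetical look-up table for one satellite grid cell:
# column 0 = mean soil moisture (m3/m3), column 1 = predicted sub-grid std. dev.
lut = np.array([[0.05, 0.010],
                [0.15, 0.028],
                [0.25, 0.040],
                [0.35, 0.034],
                [0.45, 0.020]])

def subgrid_std(mean_sm, table=lut):
    """Interpolate the sub-grid soil moisture standard deviation
    for a given coarse-cell mean soil moisture."""
    return np.interp(mean_sm, table[:, 0], table[:, 1])

print(subgrid_std(0.22))   # std. dev. associated with a 0.22 m3/m3 retrieval
```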
Petrovskaya, Natalia B.; Forbes, Emily; Petrovskii, Sergei V.; Walters, Keith F. A.
2018-01-01
Studies addressing many ecological problems require accurate evaluation of the total population size. In this paper, we revisit a sampling procedure used for the evaluation of the abundance of an invertebrate population from assessment data collected on a spatial grid of sampling locations. We first discuss how insufficient information about the spatial population density obtained on a coarse sampling grid may affect the accuracy of an evaluation of total population size. Such an information deficit in field data can arise because a coarse grid provides inadequate spatial resolution of a spatially variable population density, which is especially true when a strongly heterogeneous spatial population density is sampled. We then argue that the average trap count (the quantity routinely used to quantify abundance), if obtained from a sampling grid that is too coarse, is a random variable because of the uncertainty in sampling spatial data. Finally, we show that a probabilistic approach similar to bootstrapping techniques can be an efficient tool to quantify the uncertainty in the evaluation procedure in the presence of a spatial pattern reflecting a patchy distribution of invertebrates within the sampling grid. PMID:29495513
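The probabilistic evaluation can be sketched with an ordinary bootstrap over the trap counts; the counts, field area, and per-trap sampling area below are invented for illustration.

```python
import numpy as np

def bootstrap_abundance(trap_counts, area_total, area_per_trap, n_boot=10_000, seed=0):
    """Bootstrap the total-population estimate obtained by scaling the mean
    trap count up to the full sampled area; returns the 2.5/50/97.5 percentiles."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(trap_counts, dtype=float)
    scale = area_total / area_per_trap
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(counts, size=counts.size, replace=True)
        estimates[b] = resample.mean() * scale
    return np.percentile(estimates, [2.5, 50, 97.5])

# e.g. counts from a coarse 4x4 grid of traps, each sampling ~1 m2 of a 1 ha field
counts = [0, 2, 1, 0, 7, 3, 0, 1, 12, 4, 0, 2, 1, 0, 5, 3]
print(bootstrap_abundance(counts, area_total=10_000.0, area_per_trap=1.0))
```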
Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations
NASA Technical Reports Server (NTRS)
Singer, Bart A.
1996-01-01
Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.
The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental Uni...
SoilGrids250m: Global gridded soil information based on machine learning.
Hengl, Tomislav; Mendes de Jesus, Jorge; Heuvelink, Gerard B M; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A; Batjes, Niels H; Leenaars, Johan G B; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas
2017-01-01
This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods-random forest and gradient boosting and/or multinomial logistic regression-as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Data Base License.
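The models in the paper are fitted with the R packages named above; below is a rough Python analogue of the ensemble idea (a random forest and a gradient-boosting model averaged for one numeric property), using scikit-learn with invented covariates and response.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                                          # stand-in for remote-sensing covariates
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=5000)  # stand-in for a soil property, e.g. pH

rf = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=0).fit(X, y)
gb = GradientBoostingRegressor(n_estimators=300, random_state=0).fit(X, y)

def ensemble_predict(X_new):
    """Average the two learners, mirroring the ensemble-of-methods idea."""
    return 0.5 * (rf.predict(X_new) + gb.predict(X_new))

print(ensemble_predict(X[:3]))
```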
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)
NASA Astrophysics Data System (ADS)
Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni
2016-11-01
We present a numerical platform named the Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded, via downscaling, in any region of the large-scale Mediterranean Forecasting System (MFS). We employ CTD data collected during a campaign around Elba Island to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model relative to the coarse-resolution MFS model.
Effect of elevation resolution on evapotranspiration simulations using MODFLOW.
Kambhammettu, B V N P; Schmid, Wolfgang; King, James P; Creel, Bobby J
2012-01-01
Surface elevations represented in MODFLOW head-dependent packages are usually derived from digital elevation models (DEMs) that are available at much higher resolution. Conventional grid refinement techniques for simulating the model at DEM resolution increase computational time and input file size, and in many cases are not feasible for regional applications. This research aims at utilizing the increasingly available high-resolution DEMs for effective simulation of evapotranspiration (ET) in MODFLOW as an alternative to grid refinement techniques. The source code of the evapotranspiration package is modified to account for the effect of variability in elevation data on ET estimates, for a fixed MODFLOW grid resolution and a range of DEM resolutions. The piezometric head at each DEM cell location is corrected by considering the gradients along the row and column directions. Applicability of the research is tested for the lower Rio Grande (LRG) Basin in southern New Mexico. The DEM at 10 m resolution is aggregated to resampled DEM grid resolutions that are integer multiples of the MODFLOW grid resolution. Cumulative outflows and ET rates are compared at different coarse grid resolutions. The analysis concludes that variability in depth to groundwater within a MODFLOW cell is a major contributor to ET outflows in shallow groundwater regions. DEM aggregation for the LRG Basin resulted in decreased volumetric outflow due to a smoothing error, which lowered the position of the water table to a level below the extinction depth.
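A sketch of the modified ET calculation under two assumptions: the standard MODFLOW linear ET relationship between the land surface and the extinction depth, and a head surface reconstructed within the cell from the cell-centre head and its row/column gradients (a simplified reading of the correction described above). All argument names are illustrative.

```python
import numpy as np

def et_linear(depth_to_water, et_max, extinction_depth):
    """MODFLOW-style linear ET: full rate when the water table is at the
    surface, zero at the extinction depth, linear in between."""
    frac = np.clip(1.0 - depth_to_water / extinction_depth, 0.0, 1.0)
    return et_max * frac

def cell_et_from_dem(dem_block, head_center, grad_row, grad_col,
                     et_max, extinction_depth, dem_dx):
    """Aggregate ET over one MODFLOW cell using a DEM-resolution land surface
    (square n x n block) and a head surface reconstructed from the cell-centre
    head plus its row/column gradients; returns the mean ET rate for the cell."""
    n = dem_block.shape[0]
    offsets = (np.arange(n) - (n - 1) / 2.0) * dem_dx
    rr, cc = np.meshgrid(offsets, offsets, indexing="ij")
    head = head_center + grad_row * rr + grad_col * cc
    depth = dem_block - head                      # depth to groundwater per DEM cell
    return et_linear(depth, et_max, extinction_depth).mean()
```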
NASA Astrophysics Data System (ADS)
Jin, Meibing; Deal, Clara; Maslowski, Wieslaw; Matrai, Patricia; Roberts, Andrew; Osinski, Robert; Lee, Younjoo J.; Frants, Marina; Elliott, Scott; Jeffery, Nicole; Hunke, Elizabeth; Wang, Shanlin
2018-01-01
The current coarse-resolution global Community Earth System Model (CESM) can reproduce major and large-scale patterns but is still missing some key biogeochemical features in the Arctic Ocean, e.g., low surface nutrients in the Canada Basin. We incorporated the CESM Version 1 ocean biogeochemical code into the Regional Arctic System Model (RASM) and coupled it with a sea-ice algal module to investigate model limitations. Four ice-ocean hindcast cases are compared with various observations: two on a global 1° (40–60 km in the Arctic) grid, G1deg and G1deg-OLD, with/without new sea-ice processes incorporated; and two on RASM's 1/12° (~9 km) grid, R9km and R9km-NB, with/without a subgrid-scale brine rejection parameterization that improves ocean vertical mixing under sea ice. Higher resolution and new sea-ice processes contributed to lower model errors in sea-ice extent, ice thickness, and ice algae. In the Bering Sea shelf, only higher resolution contributed to lower model errors in salinity, nitrate (NO3), and chlorophyll-a (Chl-a). In the Arctic Basin, model errors in mixed layer depth (MLD) were reduced 36% by the brine rejection parameterization, 20% by new sea-ice processes, and 6% by higher resolution. The NO3 concentration biases were caused by both MLD bias and coarse resolution, because of excessive horizontal mixing of high NO3 from the Chukchi Sea into the Canada Basin in coarse-resolution models. R9km showed improvements over G1deg on NO3, but not on Chl-a, likely due to light limitation under snow and ice cover in the Arctic Basin.
Spatial heterogeneity of leaf area index across scales from simulation and remote sensing
NASA Astrophysics Data System (ADS)
Reichenau, Tim G.; Korres, Wolfgang; Montzka, Carsten; Schneider, Karl
2016-04-01
Leaf area index (LAI, single-sided leaf area per ground area) influences the mass and energy exchange of vegetated surfaces. LAI is therefore an input variable for many land surface schemes of coupled large-scale models, which do not simulate LAI themselves. Since these models typically run on rather coarse-resolution grids, LAI is often inferred from coarse-resolution remote sensing. However, especially in agriculturally used areas, a grid cell of these products often covers more than a single land use. In that case, the given LAI does not apply to any single land use. Therefore, the overall spatial heterogeneity in these datasets differs from that at resolutions high enough to distinguish areas with differing land use. Detailed process-based plant growth models simulate LAI for separate plant functional types or specific species. However, limited availability of observations reduces the spatial heterogeneity of model input data (soil, weather, land use). Since LAI is strongly heterogeneous in space and time, and since processes depend on LAI in a nonlinear way, a correct representation of LAI spatial heterogeneity is also desirable at coarse resolutions. The current study assesses this issue by comparing the spatial heterogeneity of LAI from remote sensing (RapidEye) and process-based simulations (DANUBIA simulation system) across scales. Spatial heterogeneity is assessed by analyzing LAI frequency distributions (spatial variability) and semivariograms (spatial structure). The test case is the arable land in the fertile loess plain of the Rur catchment near the Germany-Netherlands border.
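The spatial structure of an LAI raster can be summarized with an empirical semivariogram along the grid axes; a minimal sketch follows, assuming a regularly spaced grid and using an invented correlated field in place of RapidEye or simulated LAI.

```python
import numpy as np

def semivariogram(field, max_lag):
    """Empirical semivariance gamma(h) for integer pixel lags along rows and
    columns of a regular raster: 0.5 * mean squared difference at lag h."""
    gamma = []
    for h in range(1, max_lag + 1):
        d_row = field[h:, :] - field[:-h, :]
        d_col = field[:, h:] - field[:, :-h]
        sq = np.concatenate([d_row.ravel() ** 2, d_col.ravel() ** 2])
        gamma.append(0.5 * sq.mean())
    return np.array(gamma)

rng = np.random.default_rng(2)
lai = np.cumsum(np.cumsum(rng.normal(size=(200, 200)), axis=0), axis=1)  # toy correlated field
lai = 3.0 + 2.0 * (lai - lai.mean()) / lai.std()                          # rescale to LAI-like values
lai = np.clip(lai, 0.0, None)
print(semivariogram(lai, max_lag=5))
```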
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0
NASA Astrophysics Data System (ADS)
Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.
2015-11-01
The purpose of this study is to assess proton dose distributions in dual-resolution phantoms using MCNPX 2.6.0. A dual-resolution phantom uses higher resolution near the Bragg peak, in areas of large dose gradient, or at heterogeneous interfaces, and lower resolution elsewhere. MCNPX 2.6.0 was installed in Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, a mesh tally designed for voxel phantoms that converts fluence to dose deposition, were used to record the energy deposition. Narrow 60 and 120 MeV proton beams were incident on coarse-, dual- and fine-resolution phantoms with pure-water, water-bone-water and water-air-water setups. The doses in the coarse-resolution phantoms are underestimated owing to the partial-volume effect. The dose distributions in the dual- and high-resolution phantoms agreed well with each other, and the dual-resolution phantoms were at least 10 times more efficient than the fine-resolution one. Because the secondary particle range is much longer in air than in water, the dose in low-density regions may be underestimated if the resolution or calculation grid is not small enough.
Global climate models (GCMs) are currently used to obtain information about future changes in the large-scale climate. However, such simulations are typically done at coarse spatial resolutions, with model grid boxes on the order of 100 km on a horizontal side. Therefore, techniq...
Two- and three-dimensional natural and mixed convection simulation using modular zonal models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, E.; Nataf, J.M.; Winkelmann, F.
We demonstrate the use of the zonal model approach, which is a simplified method for calculating natural and mixed convection in rooms. Zonal models use a coarse grid and use balance equations, state equations, hydrostatic pressure drop equations, and power-law equations of the form m = CΔP^n. The advantages of the zonal approach and its modular implementation are discussed. The zonal model resolution of nonlinear equation systems is demonstrated for three cases: a 2-D room, a 3-D room, and a pair of 3-D rooms separated by a partition with an opening. A sensitivity analysis with respect to physical parameters and grid coarseness is presented. Results are compared to computational fluid dynamics (CFD) calculations and experimental data.
Sub-grid drag models for horizontal cylinder arrays immersed in gas-particle multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran
2013-09-08
Immersed cylindrical tube arrays often are used as heat exchangers in gas-particle fluidized beds. In multiphase computational fluid dynamics (CFD) simulations of large fluidized beds, explicit resolution of small cylinders is computationally infeasible. Instead, the cylinder array may be viewed as an effective porous medium in coarse-grid simulations. The cylinders' influence on the suspension as a whole, manifested as an effective drag force, and on the relative motion between gas and particles, manifested as a correction to the gas-particle drag, must be modeled via suitable sub-grid constitutive relationships. In this work, highly resolved unit-cell simulations of flow around an array of horizontal cylinders, arranged in a staggered configuration, are filtered to construct sub-grid, or 'filtered', drag models, which can be implemented in coarse-grid simulations. The force on the suspension exerted by the cylinders is comprised of, as expected, a buoyancy contribution, and a kinetic component analogous to fluid drag on a single cylinder. Furthermore, the introduction of tubes also is found to enhance segregation at the scale of the cylinder size, which, in turn, leads to a reduction in the filtered gas-particle drag.
NASA Astrophysics Data System (ADS)
Vennam, L. P.; Vizuete, W.; Talgo, K.; Omary, M.; Binkowski, F. S.; Xing, J.; Mathur, R.; Arunachalam, S.
2017-12-01
Aviation is a unique anthropogenic source with four-dimensional varying emissions, peaking at cruise altitudes (9-12 km). Aircraft emission budgets in the upper troposphere lower stratosphere region and their potential impacts on upper troposphere and surface air quality are not well understood. Our key objective is to use chemical transport models (with prescribed meteorology) to predict aircraft emissions impacts on the troposphere and surface air quality. We quantified the importance of including full-flight intercontinental emissions and increased horizontal grid resolution. The full-flight aviation emissions in the Northern Hemisphere contributed 1.3% (mean, min-max: 0.46, 0.3-0.5 ppbv) and 0.2% (0.013, 0.004-0.02 μg/m3) of total O3 and PM2.5 concentrations at the surface, with Europe showing slightly higher impacts (1.9% (O3 0.69, 0.5-0.85 ppbv) and 0.5% (PM2.5 0.03, 0.01-0.05 μg/m3)) than North America (NA) and East Asia. We computed seasonal aviation-attributable mass flux vertical profiles and aviation perturbations along isentropic surfaces to quantify the transport of cruise altitude emissions at the hemispheric scale. The comparison of coarse (108 × 108 km2) and fine (36 × 36 km2) grid resolutions in NA showed 70 times and 13 times higher aviation impacts for O3 and PM2.5 in coarser domain. These differences are mainly due to the inability of the coarse resolution simulation to capture nonlinearities in chemical processes near airport locations and other urban areas. Future global studies quantifying aircraft contributions should consider model resolution and perhaps use finer scales near major aviation source regions.
Quantitative Comparisons of a Coarse-Grid LES with Experimental Data for Backward-Facing Step Flow
NASA Astrophysics Data System (ADS)
McDonough, J. M.
1999-11-01
A novel approach to LES employing an additive decomposition of both solutions and governing equations (similar to the "multi-level" approaches of Dubois et al., Dynamic Multilevel Methods and the Simulation of Turbulence, Cambridge University Press, 1999) is presented; its main structural features are a lack of filtering of the governing equations (instead, solutions are filtered to remove aliasing due to under-resolution) and direct modeling of subgrid-scale primitive variables (rather than modeling their correlations) in the manner proposed by Hylin and McDonough (Int. J. Fluid Mech. Res. 26, 228-256, 1999). A 2-D implementation of this formalism is applied to the backward-facing step flow studied experimentally by Driver and Seegmiller (AIAA J. 23, 163-171, 1985) and Driver et al. (AIAA J. 25, 914-919, 1987), and run on grids sufficiently coarse to permit easy extension to 3-D, industrially realistic problems. Comparisons of computed and experimental mean quantities (velocity profiles, turbulence kinetic energy, reattachment lengths, etc.) and effects of grid refinement will be presented.
The Effects of Dissipation and Coarse Grid Resolution for Multigrid in Flow Problems
NASA Technical Reports Server (NTRS)
Eliasson, Peter; Engquist, Bjoern
1996-01-01
The objective of this paper is to investigate the effects of the numerical dissipation and the resolution of the solution on coarser grids for multigrid with the Euler equation approximations. The convergence is accomplished by multi-stage explicit time-stepping to steady state accelerated by FAS multigrid. A theoretical investigation is carried out for linear hyperbolic equations in one and two dimensions. The spectra reveal that, for stability and hence robustness of spatial discretizations with a small amount of numerical dissipation, the grid transfer operators have to be sufficiently accurate and the smoother of low temporal accuracy. Numerical results give grid independent convergence in one dimension. For two-dimensional problems with a small amount of numerical dissipation, however, only a few grid levels contribute to an increased speed of convergence. This is explained by the small numerical dissipation leading to dispersion. Increasing the mesh density and hence making the problem over-resolved increases the number of mesh levels contributing to an increased speed of convergence. If the steady state equations are elliptic, all grid levels contribute to the convergence regardless of the mesh density.
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Paulwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case, observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even when the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
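A simple way to picture the coarse-to-fine mapping described above is an observation operator that block-averages the 1 km model state to the 25 km observation pixels. The sketch below is an illustration only; the 25:1 aggregation factor follows the grids in the abstract, but the synthetic SWE field and the function names are assumptions, not the authors' EnKF code.

import numpy as np

# Illustrative observation operator: a coarse-scale SWE "observation" modeled as the
# block average of the fine-scale state it overlaps (synthetic data, assumed names).
def block_average(fine_field, factor):
    """Aggregate a fine-grid field to coarse pixels by box-car averaging."""
    ny, nx = fine_field.shape
    assert ny % factor == 0 and nx % factor == 0
    return fine_field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine_swe = np.random.default_rng(0).gamma(shape=2.0, scale=50.0, size=(100, 100))  # mm of SWE
coarse_obs = block_average(fine_swe, factor=25)   # yields a 4 x 4 field of 25 km pixels
print(coarse_obs.shape)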
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach
NASA Astrophysics Data System (ADS)
Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.
2017-08-01
Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (whose resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated at the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations from three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves and the wave-induced flows such as the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a realistic coastal region.
Ray Drapek; John B. Kim; Ronald P. Neilson
2015-01-01
Land managers need to include climate change in their decisionmaking, but the climate models that project future climates operate at spatial scales that are too coarse to be of direct use. To create a dataset more useful to managers, soil and historical climate were assembled for the United States and Canada at a 5-arcminute grid resolution. Nine CMIP3 future climate...
Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation
Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane
2017-06-02
In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag model based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected-component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but at a large increase in computational cost.
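The bubble-detection step mentioned above can be illustrated with a few lines of array code. The sketch below is a generic stand-in for the authors' algorithm: cells whose solids volume fraction falls below an assumed threshold are flagged and grouped by connected-component labeling; the threshold value and the random field are illustrative only.

import numpy as np
from scipy import ndimage

# Generic illustration of bubble detection by connected-component labeling; the 0.2
# threshold and the synthetic solids-fraction field are assumptions, not the authors' settings.
solids_fraction = np.random.default_rng(1).uniform(0.0, 0.6, size=(64, 64))
bubble_mask = solids_fraction < 0.2                    # candidate bubble (low-solids) cells
labels, n_bubbles = ndimage.label(bubble_mask)         # group connected cells into bubbles
sizes = ndimage.sum(bubble_mask, labels, index=range(1, n_bubbles + 1))  # cells per bubble
print(n_bubbles, sizes.max() if n_bubbles else 0)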
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by Galerkin spectral elements using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected onto the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented.
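The grid-transfer pieces of the CGP idea, direct injection onto the coarse grid and linear interpolation back, can be sketched in one dimension as below. This is a toy illustration under assumed grid sizes; the actual method applies these operators to finite-element pressure and velocity fields around a coarse-grid Poisson solve.

import numpy as np

# Toy 1D sketch of the two CGP transfer operators described above (assumed grid sizes).
def restrict_injection(fine):
    """Direct injection: keep every other fine-grid value on the coarse grid."""
    return fine[::2]

def prolong_linear(coarse):
    """Linear interpolation from the coarse grid back onto the fine grid."""
    x_coarse = np.linspace(0.0, 1.0, coarse.size)
    x_fine = np.linspace(0.0, 1.0, 2 * (coarse.size - 1) + 1)
    return np.interp(x_fine, x_coarse, coarse)

p_fine = np.sin(np.pi * np.linspace(0.0, 1.0, 33))   # stand-in for a predicted field
p_coarse = restrict_injection(p_fine)                # the coarse Poisson solve would happen here
p_back = prolong_linear(p_coarse)                    # coarse solution prolonged to the fine grid
print(np.max(np.abs(p_back - p_fine)))               # interpolation error for this smooth field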
Bathymetric terrain model of the Atlantic margin for marine geological investigations
Andrews, Brian D.; Chaytor, Jason D.; ten Brink, Uri S.; Brothers, Daniel S.; Gardner, James V.; Lobecker, Elizabeth A.; Calder, Brian R.
2016-01-01
A bathymetric terrain model of the Atlantic margin covering almost 725,000 square kilometers of seafloor from the New England Seamounts in the north to the Blake Basin in the south is compiled from existing multibeam bathymetric data for marine geological investigations. Although other terrain models of the same area are extant, they are produced either from satellite-derived bathymetry at coarse resolution (ETOPO1) or from older bathymetric data collected using a combination of single-beam and multibeam sonars (Coastal Relief Model). The new multibeam data used to produce this terrain model have been edited using hydrographic data processing software to maximize the quality, usability, and cartographic presentation of the combined 100-meter resolution grid. The final grid provides the largest high-resolution, seamless terrain model of the Atlantic margin.
Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows
NASA Astrophysics Data System (ADS)
Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.
2017-12-01
Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Coppola, Erika; Giorgi, Filippo
2010-05-01
Since anthropogenic climate change is an important factor for future human life all over the planet and its effects are not globally uniform, climate information at regional or local scales becomes increasingly important for an accurate assessment of the potential impact of climate change on societies and ecosystems. High-resolution information at scales fine enough to resolve complex geographical features can be a critical factor for a successful linkage between climate models and impact assessment studies. However, the scale mismatch between them remains a major problem. One method for overcoming the resolution limitations of global climate models and for adding regional details to coarse-grid global projections is to use dynamical downscaling by means of a regional climate model. In this study, the ECHAM5/MPI-OM (1.875 degree) A1B scenario simulation has been dynamically downscaled using two different approaches within the framework of the RegCM3 modeling system. First, a mosaic-type parameterization of subgrid-scale topography and land use (Sub-BATS) is applied over the European Alpine region. The Sub-BATS system is composed of 15 km coarse-grid cells and 3 km sub-grid cells. Second, we developed a RegCM3 one-way double-nested system, with the mother domain encompassing the eastern regions of Asia at 60 km grid spacing and the nested domain covering the Korean Peninsula at 20 km grid spacing. By comparing the regional climate model output with the driving ECHAM5/MPI-OM output, it is possible to estimate the added value of physically based dynamical downscaling when, for example, impact studies at the hydrological scale are performed.
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions
Mehl, S.; Hill, M.C.
2010-01-01
This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
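For readers unfamiliar with the conductance term, the traditional lumped formulation (as used, for example, in MODFLOW stream or river packages) computes a per-cell conductance from streambed properties and the stream geometry within each cell. The sketch below uses that standard formula with made-up numbers to show how the value is tied to the grid-cell reach length, which is what changes under refinement; the specific conductance formulations compared in the paper are not reproduced here.

# Standard lumped streambed-conductance formula C = Kv * L * W / b (illustrative values only):
# Kv = vertical hydraulic conductivity of the streambed, L = reach length within the cell,
# W = stream width, b = streambed thickness.
def streambed_conductance(Kv, reach_length, width, thickness):
    return Kv * reach_length * width / thickness

Kv, width, thickness = 0.5, 10.0, 1.0           # m/d, m, m (assumed)
coarse_cell = streambed_conductance(Kv, reach_length=100.0, width=width, thickness=thickness)
fine_cells = [streambed_conductance(Kv, reach_length=25.0, width=width, thickness=thickness)
              for _ in range(4)]                 # same reach split across four refined cells
print(coarse_cell, sum(fine_cells))              # the lumped total is unchanged by this simple scaling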
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
NASA Astrophysics Data System (ADS)
Wild, Oliver; Prather, Michael J.
2006-06-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
NASA Astrophysics Data System (ADS)
Gruber, S.; Fiddes, J.
2013-12-01
In mountainous topography, the difference in scale between atmospheric reanalyses (typically tens of kilometres) and relevant processes and phenomena near the Earth surface, such as permafrost or snow cover (metres to tens of metres), is most obvious. This contrast of scales is one of the major obstacles to using reanalysis data for the simulation of surface phenomena and to confronting reanalyses with independent observations. Using the example of modelling permafrost in mountain areas (though readily generalised to other phenomena and heterogeneous environments), we present and test methods against measurements for (A) scaling atmospheric data from the reanalysis to the ground level and (B) smart sampling of the heterogeneous landscape in order to set up a lumped land-surface model (LSM) simulation that represents the high-resolution land surface. TopoSCALE (Part A, see http://dx.doi.org/10.5194/gmdd-6-3381-2013) is a scheme which scales coarse-grid climate fields to fine-grid topography using pressure-level data. In addition, it applies the necessary topographic corrections, e.g. for those variables required to compute radiation fields. This provides the necessary driving fields for the LSM. Tested against independent ground data, this scheme has been shown to improve the scaling and distribution of meteorological parameters in complex terrain compared with conventional methods, e.g. lapse-rate-based approaches. TopoSUB (Part B, see http://dx.doi.org/10.5194/gmd-5-1245-2012) is a surface pre-processor designed to sample a fine-grid domain (defined by a digital elevation model) along important topographical (or other) dimensions through a clustering scheme. This allows constructing a lumped model representing the main sources of fine-grid variability and applying a 1D LSM efficiently over large areas. Results can be processed to derive (i) summary statistics at the coarse-scale reanalysis grid resolution, (ii) high-resolution data fields spatialised to, e.g., the fine-scale digital elevation model grid, or (iii) validation products only for locations at which measurements exist. The ability of TopoSUB to approximate results simulated by a 2D distributed numerical LSM with a factor of ~10,000 fewer computations is demonstrated by comparison of 2D and lumped simulations. Successful application of the combined scheme in the European Alps is reported and, based on its results, open issues for future research are outlined.
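As a rough illustration of the TopoSUB sampling idea (not the published implementation), the sketch below clusters fine-grid cells by assumed topographic attributes, evaluates a placeholder 1D model once per cluster, and maps the results back to the fine grid. The attribute choice, cluster count, and the toy model are all assumptions.

import numpy as np
from sklearn.cluster import KMeans

# Illustrative TopoSUB-style lumping (assumed attributes, cluster count, and toy model).
rng = np.random.default_rng(0)
elevation = rng.uniform(200.0, 3800.0, size=10000)   # m, one value per fine-grid cell
slope = rng.uniform(0.0, 45.0, size=10000)           # degrees
aspect = rng.uniform(0.0, 360.0, size=10000)         # degrees

features = np.column_stack([elevation, slope, np.cos(np.radians(aspect))])
features = (features - features.mean(axis=0)) / features.std(axis=0)   # normalise dimensions
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(features)

def lumped_model(mean_elevation):
    """Placeholder for an expensive 1D land-surface model (vectorised over cluster means)."""
    return 10.0 - 0.006 * mean_elevation              # crude temperature proxy, degC

cluster_elev = np.array([elevation[kmeans.labels_ == k].mean() for k in range(50)])
cluster_result = lumped_model(cluster_elev)           # one model evaluation per cluster
fine_grid_result = cluster_result[kmeans.labels_]     # spatialised back to the fine grid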
VP Structure of Mount St. Helens, Washington, USA, imaged with local earthquake tomography
Waite, G.P.; Moran, S.C.
2009-01-01
We present a new P-wave velocity model for Mount St. Helens using local earthquake data recorded by the Pacific Northwest Seismograph Stations and Cascades Volcano Observatory since the 18 May 1980 eruption. These data were augmented with records from a dense array of 19 temporary stations deployed during the second half of 2005. Because the distribution of earthquakes in the study area is concentrated beneath the volcano and within two nearly linear trends, we used a graded inversion scheme to compute a coarse-grid model that focused on the regional structure, followed by a fine-grid inversion to improve spatial resolution directly beneath the volcanic edifice. The coarse-grid model results are largely consistent with earlier geophysical studies of the area; we find high-velocity anomalies NW and NE of the edifice that correspond with igneous intrusions and a prominent low-velocity zone NNW of the edifice that corresponds with the linear zone of high seismicity known as the St. Helens Seismic Zone. This low-velocity zone may continue past Mount St. Helens to the south at depths below 5 km. Directly beneath the edifice, the fine-grid model images a low-velocity zone between about 2 and 3.5 km below sea level that may correspond to a shallow magma storage zone. And although the model resolution is poor below about 6 km, we found low velocities that correspond with the aseismic zone between about 5.5 and 8 km that has previously been modeled as the location of a large magma storage volume. © 2009 Elsevier B.V.
Initialization of high resolution surface wind simulations using NWS gridded data
J. Forthofer; K. Shannon; Bret Butler
2010-01-01
WindNinja is a standalone computer model designed to provide the user with simulations of surface wind flow. It is deterministic and steady state. It is currently being modified to allow the user to initialize the flow calculation using National Digital Forecast Database. It essentially allows the user to downscale the coarse scale simulations from meso-scale models to...
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of liquid draining inside a tank is easily accessible using numerical simulation. However, numerical simulation is expensive when the draining involves a multi-phase problem. Since an accurate numerical simulation can only be obtained if a proper method for error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is freely available and ready to use. In this study, three grid resolutions are used: coarse, medium, and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%, and the value obtained from Richardson extrapolation lies within the range indicated by the GCI.
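The GCI procedure summarised above can be written out in a few lines. The sketch below follows the standard Roache formulation for three systematically refined grids; the sample solution values and the refinement ratio r = 2 are made up for illustration and are not taken from the paper.

import numpy as np

# Standard GCI calculation for three grids (coarse, medium, fine) with refinement ratio r.
# The three sample values and r = 2 are assumptions for illustration only.
def grid_convergence_index(f_fine, f_medium, f_coarse, r=2.0, Fs=1.25):
    """Return the observed order p, the Richardson-extrapolated value, and GCI_fine in %."""
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
    f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    e21 = abs((f_fine - f_medium) / f_fine)        # relative difference, two finest grids
    gci_fine = Fs * e21 / (r**p - 1.0) * 100.0
    return p, f_extrapolated, gci_fine

p, f_ext, gci = grid_convergence_index(f_fine=12.10, f_medium=12.18, f_coarse=12.50)
print(f"observed order p = {p:.2f}, extrapolated = {f_ext:.3f}, GCI_fine = {gci:.2f}%")
# Monotonic convergence holds here because the three values approach the limit from one side.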
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires describing processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is to be studied. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources combined with the intrinsic non-linearity of the processes involved can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5° European domain, run A) and fine (0.1° Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution simulations (with and without nudging) and the fine-resolution simulation.
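The nudging step itself amounts to Newtonian relaxation of the coarse-grid concentrations toward the remapped high-resolution fields inside the chosen region. The sketch below shows that idea in generic form; the field names, mask, time step, and relaxation timescale are assumptions, not BOLCHEM variables.

import numpy as np

# Generic Newtonian-relaxation nudging of a coarse-grid field toward a remapped
# high-resolution field inside a masked region (all names and values are illustrative).
def nudge(c_coarse, c_remapped, mask, dt, tau):
    """Relax c_coarse toward c_remapped with timescale tau where mask is True."""
    increment = (dt / tau) * (c_remapped - c_coarse)
    return np.where(mask, c_coarse + increment, c_coarse)

c_coarse = np.full((20, 20), 40.0)      # e.g. ozone on the 0.5 degree grid, ppbv
c_remapped = np.full((20, 20), 48.0)    # 0.1 degree run box-car averaged onto the coarse grid
mask = np.zeros((20, 20), dtype=bool)
mask[8:14, 6:16] = True                 # nudging region (standing in for the Po Valley)
c_new = nudge(c_coarse, c_remapped, mask, dt=600.0, tau=3600.0)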
Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure
NASA Astrophysics Data System (ADS)
Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.
2017-12-01
The terra incognita (TI), or gray zone, in simulations is a range of grid spacings comparable to the diameter of the most energetic eddies. Grid spacing in mesoscale simulations is much larger than these eddies, and turbulence is parameterized with one-dimensional vertical mixing. Large eddy simulations (LES) have grid spacings much smaller than the energetic eddies and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which are in the TI. Neither mesoscale turbulence models nor LES models are designed for the TI, so TI turbulence parameterization needs to be addressed. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures, i.e. the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, the use of these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When the model resolution is coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce a large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities: the eddy diffusivity of the DRM is on the order of 1 for the coarse-resolution simulations, whereas that of the Smagorinsky and TKE models is on the order of 100. Splitting the eddy viscosity/diffusivity scalars into vertical and horizontal components by using different length scales and strain-rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse-resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomov, I; Pember, R; Greenough, J
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model
NASA Astrophysics Data System (ADS)
Pouliot, George Antoine
2000-10-01
The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
A Cell-Centered Multigrid Algorithm for All Grid Sizes
NASA Technical Reports Server (NTRS)
Gjesdal, Thor
1996-01-01
Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions on the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and a choice of intergrid transfer operators that can handle grids with either an even or an odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.
Land surface modeling in convection permitting simulations
NASA Astrophysics Data System (ADS)
van Heerwaarden, Chiel; Benedict, Imme
2017-04-01
The next generation of weather and climate models permits convection, albeit at a grid spacing that is not sufficient to resolve all details of the clouds. Whereas much attention is being devoted to the correct simulation of convective clouds and the associated precipitation, the role of the land surface has received far less interest. In our view, convection-permitting simulations pose a set of problems that need to be solved before accurate weather and climate prediction is possible. The heart of the problem lies in the direct runoff and in the nonlinearity of the surface stress as a function of soil moisture. In coarse-resolution simulations, where convection is not permitted, precipitation that reaches the land surface is uniformly distributed over the grid cell. Subsequently, a fraction of this precipitation is intercepted by vegetation or leaves the grid cell via direct runoff, whereas the remainder infiltrates into the soil. As soon as we move to convection-permitting simulations, this precipitation often falls locally in large amounts. If the same land-surface model is used as in simulations with parameterized convection, this leads to an increase in direct runoff. Furthermore, spatially non-uniform infiltration leads to a very different surface stress when scaled up to the coarse resolution of simulations without convection. Based on large-eddy simulation of realistic convection events on a large domain, this study presents a quantification of the errors made at the land surface in convection-permitting simulations. It compares the magnitude of these errors to those made in the convection itself due to the coarse resolution of the simulation. We find that convection-permitting simulations have less evaporation than simulations with parameterized convection, resulting in an unrealistic drying of the atmosphere. We present solutions to resolve this problem.
A Critical Study of Agglomerated Multigrid Methods for Diffusion
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2011-01-01
Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Ma, Po-Lun; Xiao, Heng
2013-08-29
The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale aware or, more specifically, resolution aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution dependence of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or as "resolution dependence of the second type," RA2, which requires that the parameterization reproduces the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to limitations of parameterizations to accurately estimate reality with coarse grid spacing. The type of resolution awareness developers should target in their development depends upon the particular modeler's application.
A Critical Study of Agglomerated Multigrid Methods for Diffusion
NASA Technical Reports Server (NTRS)
Thomas, James L.; Nishikawa, Hiroaki; Diskin, Boris
2009-01-01
Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and highly stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Actual cycle results are verified using quantitative analysis methods in which parts of the cycle are replaced by their idealized counterparts.
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
Computation of Flow Over a Drag Prediction Workshop Wing/Body Transport Configuration Using CFL3D
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Biedron, Robert T.
2001-01-01
A Drag Prediction Workshop was held in conjunction with the 19th AIAA Applied Aerodynamics Conference in June 2001. The purpose of the workshop was to assess the prediction of drag by computational methods for a wing/body configuration (DLR-F4) representative of subsonic transport aircraft. This report details computed results submitted to this workshop using the Reynolds-averaged Navier-Stokes code CFL3D. Two supplied grids were used: a point-matched 1-to-1 multi-block grid, and an overset multi-block grid. The 1-to-1 grid, generally of much poorer quality and with less streamwise resolution than the overset grid, is found to be too coarse to adequately resolve the surface pressures. However, the global forces and moments are nonetheless similar to those computed using the overset grid. The effect of three different turbulence models is assessed using the 1-to-1 grid. Surface pressures are very similar overall, and the drag variation due to turbulence model is 18 drag counts. Most of this drag variation is in the friction component, and is attributed in part to insufficient grid resolution of the 1-to-1 grid. The misnomer of 'fully turbulent' computations is discussed; comparisons are made using different transition locations and their effects on the global forces and moments are quantified. Finally, the effect of two different versions of a widely used one-equation turbulence model is explored.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
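For readers unfamiliar with the SBP framework, the sketch below builds the classical (non-optimised) second-order first-derivative operator D = H^-1 Q and checks the defining property Q + Q^T = diag(-1, 0, ..., 0, 1). It is included only to make the SBP structure concrete; it is not one of the optimised operators constructed in the paper.

import numpy as np

# Classical second-order SBP first-derivative operator (illustration of the SBP structure only).
def sbp_first_derivative(n, h):
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h                 # boundary-modified norm matrix
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))  # central differencing in the interior
    Q[0, 0], Q[0, 1] = -0.5, 0.5                  # one-sided boundary closures
    Q[-1, -2], Q[-1, -1] = -0.5, 0.5
    return np.linalg.inv(H) @ Q, H, Q

n, h = 11, 0.1
D, H, Q = sbp_first_derivative(n, h)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)                    # the summation-by-parts property
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x - 1.0)))                # exact derivative of a linear function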
On a turbulent wall model to predict hemolysis numerically in medical devices
NASA Astrophysics Data System (ADS)
Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung
2017-11-01
Analyzing the degradation of red blood cells is very important for medical devices with blood flows. The blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall. When predicting hemolysis numerically, this can require a very fine mesh and large computational resources. To resolve this issue, the purpose of this study is to develop a turbulent wall model to predict hemolysis more efficiently. In order to decrease the numerical error of hemolysis prediction at coarse grid resolution, we divided the computational domain into two regions and applied a different approach to each region. In the near-wall region with a steep velocity gradient, an analytic approach using a modeled velocity profile is applied to reduce the numerical error and allow a coarse grid resolution. We adopt the Van Driest law as a model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for a few turbulent flows inside a cannula and centrifugal pumps. The results show that the proposed turbulent wall model improves the computational efficiency of hemolysis prediction significantly for engineering applications.
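The abstract adopts the Van Driest law for the modeled near-wall velocity profile. A minimal sketch of how such a profile can be reconstructed is given below, integrating the mixing-length relation du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)) with l+ = kappa y+ (1 - exp(-y+/A+)); the constants and the integration details are the textbook choices, not necessarily those used by the authors.

import numpy as np

# Reconstructing a near-wall mean velocity profile from the Van Driest mixing length
# (textbook constants kappa = 0.41, A+ = 26; illustration only, not the authors' code).
def van_driest_profile(y_plus_max=300.0, kappa=0.41, A_plus=26.0, n=3000):
    y_plus = np.linspace(0.0, y_plus_max, n)
    dy = y_plus[1] - y_plus[0]
    u_plus = np.zeros_like(y_plus)
    for i in range(1, n):
        y_mid = 0.5 * (y_plus[i] + y_plus[i - 1])                    # midpoint rule
        l_plus = kappa * y_mid * (1.0 - np.exp(-y_mid / A_plus))     # damped mixing length
        u_plus[i] = u_plus[i - 1] + dy * 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * l_plus**2))
    return y_plus, u_plus

y_plus, u_plus = van_driest_profile()
print(u_plus[-1])   # approaches the log law (1/kappa) ln(y+) + B far from the wall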
The Sensitivity of Numerical Simulations of Cloud-Topped Boundary Layers to Cross-Grid Flow
NASA Astrophysics Data System (ADS)
Wyant, Matthew C.; Bretherton, Christopher S.; Blossey, Peter N.
2018-02-01
In mesoscale and global atmospheric simulations with large horizontal domains, strong horizontal flow across the grid is often unavoidable, but its effects on cloud-topped boundary layers have received comparatively little study. Here the effects of cross-grid flow on large-eddy simulations of stratocumulus and trade-cumulus marine boundary layers are studied across a range of grid resolutions (horizontal × vertical) between 500 m × 20 m and 35 m × 5 m. Three cases are simulated: DYCOMS nocturnal stratocumulus, BOMEX trade cumulus, and a GCSS stratocumulus-to-trade cumulus case. Simulations are performed with a stationary grid (with 4-8 m s-1 horizontal winds blowing through the cyclic domain) and a moving grid (equivalent to subtracting off a fixed vertically uniform horizontal wind) approximately matching the mean boundary-layer wind speed. For stratocumulus clouds, cross-grid flow produces two primary effects on stratocumulus clouds: a filtering of fine-scale resolved turbulent eddies, which reduces stratocumulus cloud-top entrainment, and a vertical broadening of the stratocumulus-top inversion which enhances cloud-top entrainment. With a coarse (20 m) vertical grid, the former effect dominates and leads to strong increases in cloud cover and LWP, especially as horizontal resolution is coarsened. With a finer (5 m) vertical grid, the latter effect is stronger and leads to small reductions in cloud cover and LWP. For the BOMEX trade cumulus case, cross-grid flow tends to produce fewer and larger clouds with higher LWP, especially for coarser vertical grid spacing. The results presented are robust to choice of scalar advection scheme and Courant number.
A two-way nesting procedure for the WAM model: Application to the Spanish coast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lahoz, M.G.; Albiach, J.C.C.
1997-02-01
The performance of the standard one-way nesting procedure for a regional application of a third-generation wave model is investigated. It is found that this nesting procedure is not applicable when the resolution has to be enhanced drastically, unless intermediate grids are placed between the coarse and the fine grid areas. This solution, in turn, requires an excess of computing resources. A two-way nesting procedure is developed and implemented in the model. Advantages and disadvantages of both systems are discussed. The model output for a test case is compared with observed data and the results are discussed in the paper.
Coarse-grained hydrodynamics from correlation functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, Bruce
This paper will describe a formalism for using correlation functions between different grid cells as the basis for determining coarse-grained hydrodynamic equations for modeling the behavior of mesoscopic fluid systems. Configurations from a molecular dynamics simulation are projected onto basis functions representing grid cells in a continuum hydrodynamic simulation. Equilibrium correlation functions between different grid cells are evaluated from the molecular simulation and used to determine the evolution operator for the coarse-grained hydrodynamic system. The formalism is applied to some simple hydrodynamic cases to determine the feasibility of applying this to realistic nanoscale systems.
Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases
NASA Astrophysics Data System (ADS)
Vilhelmsen, T. N.; Christensen, S.
2009-12-01
In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as a comparative analysis of simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as “true” values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows increased numbers of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model was much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results are contradictory to those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough and case-specific considerations regarding the choice of correction method. References: Mehl, S., and M. C. Hill (2005), "MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package," U.S. Geological Survey Techniques and Methods 6-A12.
Barriers to Achieving Textbook Multigrid Efficiency (TME) in CFD
NASA Technical Reports Server (NTRS)
Brandt, Achi
1998-01-01
As a guide to attaining this optimal performance for general CFD problems, the table below lists every foreseen kind of computational difficulty for achieving that goal, together with the possible ways for resolving that difficulty, their current state of development, and references. Included in the table are staggered and nonstaggered, conservative and nonconservative discretizations of viscous and inviscid, incompressible and compressible flows at various Mach numbers, as well as a simple (algebraic) turbulence model and comments on chemically reacting flows. The listing of associated computational barriers involves: non-alignment of streamlines or sonic characteristics with the grids; recirculating flows; stagnation points; discretization and relaxation on and near shocks and boundaries; far-field artificial boundary conditions; small-scale singularities (meaning important features, such as the complete airplane, which are not visible on some of the coarse grids); large grid aspect ratios; boundary layer resolution; and grid adaption.
Turner, D.P.; Dodson, R.; Marks, D.
1996-01-01
Spatially distributed biogeochemical models may be applied over grids at a range of spatial resolutions; however, evaluation of potential errors and loss of information at relatively coarse resolutions is rare. In this study, a georeferenced database at the 1-km spatial resolution was developed to initialize and drive a process-based model (Forest-BGC) of water and carbon balance over a gridded 54976 km2 area covering two river basins in mountainous western Oregon. Corresponding data sets were also prepared at 10-km and 50-km spatial resolutions using commonly employed aggregation schemes. Estimates were made at each grid cell for climate variables including daily solar radiation, air temperature, humidity, and precipitation. The topographic structure, water holding capacity, vegetation type and leaf area index were likewise estimated for initial conditions. The daily time series for the climatic drivers was developed from interpolations of meteorological station data for the water year 1990 (1 October 1989-30 September 1990). Model outputs at the 1-km resolution showed good agreement with observed patterns in runoff and productivity. The ranges for model inputs at the 10-km and 50-km resolutions tended to contract because of the smoothed topography. Estimates for mean evapotranspiration and runoff were relatively insensitive to changing the spatial resolution of the grid, whereas estimates of mean annual net primary production varied by 11%. The designation of a vegetation type and leaf area at the 50-km resolution often subsumed significant heterogeneity in vegetation, and this factor accounted for much of the difference in the mean values for the carbon flux variables. Although area-wide means for model outputs were generally similar across resolutions, difference maps often revealed large areas of disagreement. Relatively high spatial resolution analyses of biogeochemical cycling are desirable from several perspectives and may be particularly important in the study of the potential impacts of climate change.
NASA Astrophysics Data System (ADS)
Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin
2017-10-01
Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected enhanced vegetation index (EVI, 250 m) data from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple spatial resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of spring season (SOS) at the various spatial resolutions. Further, the SOS variation across scales was examined for each coarse-resolution grid cell (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influences of satellite spatial resolution on SOS retrievals and the related impact factors. Specifically, SOS varied significantly, linearly or logarithmically, across scales, although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m over large ecosystem regions were generally similar, with a difference of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in the northeastern than in the southern and western regions. The SOS scaling effect was significantly associated with the heterogeneity of vegetation properties characterized using landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, with the latter being the most important impact factor.
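The resampling step described above amounts to block-averaging the native 250 m cells. A minimal sketch, assuming the EVI scene is a plain NumPy array and a nominal fill value (both our assumptions), is:

```python
# Hedged sketch: aggregate 250 m EVI to k x 250 m cells by block averaging,
# ignoring fill values. Function names and fill value are illustrative.
import numpy as np

def block_average(evi, k, fill_value=-3000):
    """Aggregate a 2-D EVI array to k-by-k blocks of the native 250 m cells."""
    rows = (evi.shape[0] // k) * k
    cols = (evi.shape[1] // k) * k
    tile = evi[:rows, :cols].astype(float)
    tile[tile == fill_value] = np.nan
    blocks = tile.reshape(rows // k, k, cols // k, k)
    return np.nanmean(blocks, axis=(1, 3))

# e.g. the ~8 km reference grid from 250 m cells: block_average(evi_250m, k=35)
```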
Compact cell-centered discretization stencils at fine-coarse block structured grid interfaces
NASA Astrophysics Data System (ADS)
Pletzer, Alexander; Jamroz, Ben; Crockett, Robert; Sides, Scott
2014-03-01
Different strategies for coupling fine-coarse grid patches are explored in the context of the adaptive mesh refinement (AMR) method. We show that applying linear interpolation to fill in the fine grid ghost values can produce a finite volume stencil of comparable accuracy to quadratic interpolation provided the cell volumes are adjusted. The volume of fine cells expands whereas the volume of neighboring coarse cells contracts. The amount by which the cells contract/expand depends on whether the interface is a face, an edge, or a corner. It is shown that quadratic or better interpolation is required when the conductivity is spatially varying, anisotropic, the refinement ratio is other than two, or when the fine-coarse interface is concave.
Evaluation of tropical channel refinement using MPAS-A aquaplanet simulations
Martini, Matus N.; Gustafson, Jr., William I.; O'Brien, Travis A.; ...
2015-09-13
Climate models with variable-resolution grids offer a computationally less expensive way to provide more detailed information at regional scales and increased accuracy for processes that cannot be resolved by a coarser grid. This study uses the Model for Prediction Across Scales–Atmosphere (MPAS-A), consisting of a nonhydrostatic dynamical core and a subset of Advanced Research Weather Research and Forecasting (ARW-WRF) model atmospheric physics that have been modified to include the Community Atmosphere Model version 5 (CAM5) cloud fraction parameterization, to investigate the potential benefits of using increased resolution in a tropical channel. The simulations are performed with an idealized aquaplanet configuration using two quasi-uniform grids, with 30 km and 240 km grid spacing, and two variable-resolution grids spanning the same grid spacing range; one with a narrow (20°S–20°N) and one with a wide (30°S–30°N) tropical channel refinement. Results show that increasing resolution in the tropics impacts both the tropical and extratropical circulation. Compared to the quasi-uniform coarse grid, the narrow-channel simulation exhibits stronger updrafts in the Ferrel cell as well as in the middle of the upward branch of the Hadley cell. The wider tropical channel has a closer correspondence to the 30 km quasi-uniform simulation. However, the total atmospheric poleward energy transports are similar in all simulations. The largest differences are in the low-level cloudiness. The refined channel simulations show improved tropical and extratropical precipitation relative to the global 240 km simulation when compared to the global 30 km simulation. All simulations have a single ITCZ. Furthermore, the relatively small differences in mean global and tropical precipitation rates among the simulations are a promising result, and the evidence points to the tropical channel being an effective method for avoiding the extraneous numerical artifacts seen in earlier studies that refined only a portion of the tropics.
High-resolution surface analysis for extended-range downscaling with limited-area atmospheric models
NASA Astrophysics Data System (ADS)
Separovic, Leo; Husain, Syed Zahid; Yu, Wei; Fernig, David
2014-12-01
High-resolution limited-area model (LAM) simulations are frequently employed to downscale coarse-resolution objective analyses over a specified area of the globe using high-resolution computational grids. When LAMs are integrated over extended time frames, from months to years, they are prone to deviations in land surface variables that can be harmful to the quality of the simulated near-surface fields. Nudging of the prognostic surface fields toward a reference gridded data set is therefore devised in order to prevent the atmospheric model from diverging from the expected values. This paper presents a method to generate high-resolution analyses of land-surface variables, such as surface canopy temperature, soil moisture, and snow conditions, to be used for the relaxation of lower boundary conditions in extended-range LAM simulations. The proposed method is based on performing offline simulations with an external surface model, forced with near-surface meteorological fields derived from short-range forecasts, operational analyses, and observed temperatures and humidity. Results show that the outputs of the surface model obtained in the present study have the potential to improve the near-surface atmospheric fields in extended-range LAM integrations.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Milesi, C.; Votava, P.; Nemani, R. R.
2013-12-01
An unresolved issue with coarse-to-medium resolution satellite-based forest carbon mapping over regional to continental scales is the high level of uncertainty in above ground biomass (AGB) estimates caused by the absence of forest cover information at a high enough spatial resolution (the current spatial resolution is limited to 30-m). To put confidence in existing satellite-derived AGB density estimates, it is imperative to create continuous fields of tree cover at a sufficiently high resolution (e.g. 1-m) such that large uncertainties in forested area are reduced. The proposed work will provide means to reduce uncertainty in present satellite-derived AGB maps and Forest Inventory and Analysis (FIA) based regional estimates. Our primary objective will be to create Very High Resolution (VHR) estimates of tree cover at a spatial resolution of 1-m for the Continental United States using all available National Agriculture Imaging Program (NAIP) color-infrared imagery from 2010 to 2012. We will leverage the existing capabilities of the NASA Earth Exchange (NEX) high performance computing and storage facilities. The proposed 1-m tree cover map can be further aggregated to provide percent tree cover at any medium-to-coarse resolution spatial grid, which will aid in reducing uncertainties in AGB density estimation at the respective grid and overcome current limitations imposed by medium-to-coarse resolution land cover maps. We have implemented a scalable and computationally efficient parallelized framework for tree-cover delineation; the core components of the algorithm include a feature extraction process, a Statistical Region Merging image segmentation algorithm, and a classification algorithm based on a Deep Belief Network and a Feedforward Backpropagation Neural Network. An initial pilot exercise has been performed over the state of California (~11,000 scenes) to create a wall-to-wall 1-m tree cover map, and the classification accuracy has been assessed. Results show an improvement in the accuracy of tree-cover delineation as compared to existing forest cover maps from NLCD, especially over fragmented, heterogeneous and urban landscapes. Estimates of VHR tree cover will complement and enhance the accuracy of present remote-sensing based AGB modeling approaches and forest inventory based estimates at both national and local scales. A requisite step will be to characterize the inherent uncertainties in tree cover estimates and propagate them to estimate AGB.
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
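To make the grid-transfer idea concrete, here is a minimal sketch of one CGP-style pressure-increment step on a periodic 2-D grid; the box-average restriction, piecewise-constant prolongation, and spectral Poisson solve are our illustrative choices, not the finite-element operators used in IFEi-CGP.

```python
# Hedged sketch of the coarse grid projection idea: restrict div(u*) to a grid
# coarsened by 2, solve the pressure-increment Poisson problem there, prolong back.
import numpy as np

def restrict(f):
    """2x2 box average onto the coarse grid (assumes even dimensions)."""
    return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2])

def prolong(c):
    """Piecewise-constant prolongation back to the fine grid."""
    return np.kron(c, np.ones((2, 2)))

def solve_poisson_periodic(rhs, h):
    """Spectral solve of lap(p) = rhs on a periodic square grid with spacing h."""
    n = rhs.shape[0]
    k = np.fft.fftfreq(n, d=h) * 2.0 * np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    denom = -(kx**2 + ky**2)
    denom[0, 0] = 1.0                      # avoid division by zero for the mean mode
    p_hat = np.fft.fft2(rhs) / denom
    p_hat[0, 0] = 0.0                      # enforce zero-mean solution
    return np.real(np.fft.ifft2(p_hat))

def cgp_pressure_increment(div_u_star, h_fine, dt):
    """Solve for the pressure increment on the coarse grid and map it back."""
    rhs_coarse = restrict(div_u_star) / dt
    phi_coarse = solve_poisson_periodic(rhs_coarse, 2.0 * h_fine)
    return prolong(phi_coarse)             # used to correct u* and update the pressure
```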
Assessment of the effects of horizontal grid resolution on long ...
The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trend analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions.
TopoSCALE v.1.0: downscaling gridded climate data in complex terrain
NASA Astrophysics Data System (ADS)
Fiddes, J.; Gruber, S.
2014-02-01
Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method, in comparison with unscaled grid-level data and a set of reference methods, against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to a lack of observations (i.e. remote areas or future periods).
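The pressure-level interpolation step can be illustrated with a minimal sketch, assuming coarse-grid temperature and geopotential height on a few pressure levels and a vector of sub-grid DEM elevations; the names and the simple linear interpolation in height are our assumptions.

```python
# Hedged sketch: interpolate coarse-grid T(z) from pressure levels to sub-grid elevations.
import numpy as np

def downscale_temperature(level_heights, level_temps, dem_elevations):
    """Interpolate temperature from pressure-level data to fine-scale DEM elevations.

    level_heights : (n_levels,) geopotential heights of the levels [m], increasing
    level_temps   : (n_levels,) temperature on those levels [K]
    dem_elevations: (n_points,) elevations of the fine-scale sub-grid [m]
    """
    return np.interp(dem_elevations, level_heights, level_temps)

# example: one column and a sub-grid ranging from valley floor to ridge
z_levels = np.array([500.0, 1500.0, 3000.0, 5500.0])
t_levels = np.array([288.0, 281.0, 271.0, 254.0])
print(downscale_temperature(z_levels, t_levels, np.array([700.0, 1200.0, 2400.0])))
```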
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding in the June and July 2007 floods. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
NASA Astrophysics Data System (ADS)
Poll, Stefan; Shrestha, Prabhakar; Simmer, Clemens
2017-04-01
Land heterogeneity influences the atmospheric boundary layer (ABL) structure, including organized (secondary) circulations which feed back on land-atmosphere exchange fluxes. Especially the latter effects cannot be incorporated explicitly in regional and climate models due to their coarse computational grids, but must be parameterized. Current parameterizations lead, however, to uncertainties in modeled surface fluxes and boundary layer evolution, which feed back on cloud initiation and precipitation. This study analyzes the impact of different horizontal grid resolutions on the simulated boundary layer structures in terms of stability, height and induced secondary circulations. The ICON-LES (Icosahedral Nonhydrostatic model in LES mode), developed by the MPI-M and the German weather service (DWD) within the framework of HD(CP)2, is used. ICON is dynamically downscaled through multiple scales of 20 km, 7 km, 2.8 km, 625 m, 312 m, and 156 m grid spacing for several days over Germany and parts of neighboring countries for different synoptic conditions. We examined the entropy spectrum of the land surface heterogeneity at these grid resolutions for several locations close to measurement sites, such as Lindenberg, Jülich, Cabauw and Melpitz, and studied its influence on the surface fluxes and the evolution of the boundary layer profiles.
Coarse Grid CFD for underresolved simulation
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.
2010-11-01
CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly huge computational resources, so this brute-force approach has not yet been pursued. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all of the physics, so additional volumetric source terms are required to model viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S. R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
Coarse-grained hydrodynamics from correlation functions
NASA Astrophysics Data System (ADS)
Palmer, Bruce
2018-02-01
This paper will describe a formalism for using correlation functions between different grid cells as the basis for determining coarse-grained hydrodynamic equations for modeling the behavior of mesoscopic fluid systems. Configurations from a molecular dynamics simulation or other atomistic simulation are projected onto basis functions representing grid cells in a continuum hydrodynamic simulation. Equilibrium correlation functions between different grid cells are evaluated from the molecular simulation and used to determine the evolution operator for the coarse-grained hydrodynamic system. The formalism is demonstrated on a discrete particle simulation of diffusion with a spatially dependent diffusion coefficient. Correlation functions are calculated from the particle simulation and the spatially varying diffusion coefficient is recovered using a fitting procedure.
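A minimal sketch of the projection and correlation step described above follows: particle configurations are binned onto grid-cell indicator basis functions and the time-lagged cell-to-cell correlation matrix is estimated. The 1-D setting, toy trajectory, and function names are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: project particle configurations onto grid cells and estimate
# equilibrium (lagged) correlation functions between cells.
import numpy as np

def project_to_cells(positions, n_cells, box_length):
    """Number of particles in each grid cell for one configuration (indicator basis)."""
    edges = np.linspace(0.0, box_length, n_cells + 1)
    counts, _ = np.histogram(positions, bins=edges)
    return counts.astype(float)

def lagged_correlation(fields, lag):
    """C_ij(lag) = <da_i(t+lag) da_j(t)>, averaged over time origins."""
    da = fields - fields.mean(axis=0)              # fluctuations, shape (n_times, n_cells)
    return da[lag:].T @ da[:len(da) - lag] / (len(da) - lag)

# toy trajectory: 200 particles diffusing in a periodic box of length 10
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
frames = []
for _ in range(500):
    x = (x + rng.normal(0, 0.1, size=x.size)) % 10.0
    frames.append(project_to_cells(x, n_cells=20, box_length=10.0))
fields = np.array(frames)
C0, C5 = lagged_correlation(fields, 0), lagged_correlation(fields, 5)
# an evolution (transfer) operator could then be estimated, e.g. from C5 and C0.
```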
Morris, Ralph; Koo, Bonyoung; Yarwood, Greg
2005-11-01
Version 4.10s of the comprehensive air-quality model with extensions (CAMx) photochemical grid model has been developed, which includes two options for representing particulate matter (PM) size distribution: (1) a two-section representation that consists of fine (PM2.5) and coarse (PM2.5-10) modes that has no interactions between the sections and assumes all of the secondary PM is fine; and (2) a multisectional representation that divides the PM size distribution into N sections (e.g., N = 10) and simulates the mass transfer between sections because of coagulation, accumulation, evaporation, and other processes. The model was applied to Southern California using the two-section and multisection representation of PM size distribution, and we found that allowing secondary PM to grow into the coarse mode had a substantial effect on PM concentration estimates. CAMx was then applied to the Western United States for the 1996 annual period with a 36-km grid resolution using both the two-section and multisection PM representation. The Community Multiscale Air Quality (CMAQ) and Regional Modeling for Aerosol and Deposition (REMSAD) models were also applied to the 1996 annual period. Similar model performance was exhibited by the four models across the Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network monitoring networks. All four of the models exhibited fairly low annual bias for secondary PM sulfate and nitrate but with a winter overestimation and summer underestimation bias. The CAMx multisectional model estimated that coarse mode secondary sulfate and nitrate typically contribute <10% of the total sulfate and nitrate when averaged across the more rural IMPROVE monitoring network.
2017-01-01
Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
The R package 'icosa' for coarse resolution global triangular and penta-hexagonal gridding
NASA Astrophysics Data System (ADS)
Kocsis, Adam T.
2017-04-01
With the development of the internet and the computational power of personal computers, open source programming environments have become indispensable for science in the past decade. This includes the increase of the GIS capacity of the free R environment, which was originally developed for statistical analyses. The flexibility of R made it a preferred programming tool in a multitude of disciplines from the area of the biological and geological sciences. Many of these subdisciplines operate with incidence (occurrence) data that are in a large number of cases to be grained before further analyses can be conducted. This graining is executed mostly by gridding data to cells of a Gaussian grid of various resolutions to increase the density of data in a single unit of the analyses. This method has obvious shortcomings despite the ease of its application: well-known systematic biases are induced to cell sizes and shapes that can interfere with the results of statistical procedures, especially if the number of incidence points influences the metrics in question. The 'icosa' package employs a common method to overcome this obstacle by implementing grids with roughly equal cell sizes and shapes that are based on tessellated icosahedra. These grid objects are essentially polyhedra with xyz Cartesian vertex data that are linked to tables of faces and edges. At its current developmental stage, the package uses a single method of tessellation which balances grid cell size and shape distortions, but its structure allows the implementation of various other types of tessellation algorithms. The resolution of the grids can be set by the number of breakpoints inserted into a segment forming an edge of the original icosahedron. Both the triangular and their inverted penta-hexagonal grids are available for creation with the package. The package also incorporates functions to look up coordinates in the grid very effectively and data containers to link data to the grid structure. The classes defined in the package are communicating with classes of the 'sp' and 'raster' packages and functions are supplied that allow resolution change and type conversions. Three-dimensional rendering is made available with the 'rgl' package and two-dimensional projections can be calculated using 'sp' and 'rgdal'. The package was developed as part of a project funded by the Deutsche Forschungsgemeinschaft (KO - 5382/1-1).
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Durlofsky, L. J.
2016-10-01
A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
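As a small illustration of the two-point flux, connectivity-list representation described above, the following sketch computes the transmissibility of a single cell-to-cell connection from half-transmissibilities; the geometry and permeability values are hypothetical.

```python
# Hedged sketch: two-point flux transmissibility between two connected control volumes.
def half_transmissibility(perm, area, dist):
    """k * A / d for one cell, seen from the shared interface."""
    return perm * area / dist

def connection_transmissibility(perm_i, dist_i, perm_j, dist_j, area):
    """Harmonic combination of the two half-transmissibilities."""
    t_i = half_transmissibility(perm_i, area, dist_i)
    t_j = half_transmissibility(perm_j, area, dist_j)
    return t_i * t_j / (t_i + t_j)

# matrix cell (k = 1e-15 m^2) connected to a thin fracture cell (k = 1e-10 m^2)
print(connection_transmissibility(1e-15, 0.5, 1e-10, 5e-4, area=1.0))
```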
NASA Astrophysics Data System (ADS)
Foo, Kam Keong
A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.
Three-dimensional elliptic grid generation for an F-16
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds averaged Navier-Stokes equations near the surface of the aircraft, and the Euler equations in regions removed from the aircraft. A body conforming global grid, suitable for the Euler equation, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.
Single-Grid-Pair Fourier Telescope for Imaging in Hard-X Rays and gamma Rays
NASA Technical Reports Server (NTRS)
Campbell, Jonathan
2008-01-01
This instrument, a proposed Fourier telescope for imaging in hard-x rays and gamma rays, would contain only one pair of grids made of an appropriate radiation-absorbing/scattering material, in contradistinction to the multiple pairs of such grids in prior Fourier x- and gamma-ray telescopes. This instrument would also include a relatively coarse grid-like image detector appropriate to the radiant flux to be imaged. Notwithstanding the smaller number of grids and the relative coarseness of the imaging detector, the images produced by the proposed instrument would be of higher quality.
NASA Astrophysics Data System (ADS)
Feng, Wenqiang; Guo, Zhenlin; Lowengrub, John S.; Wise, Steven M.
2018-01-01
We present a mass-conservative full approximation storage (FAS) multigrid solver for cell-centered finite difference methods on block-structured, locally cartesian grids. The algorithm is essentially a standard adaptive FAS (AFAS) scheme, but with a simple modification that comes in the form of a mass-conservative correction to the coarse-level force. This correction is facilitated by the creation of a zombie variable, analogous to a ghost variable, but defined on the coarse grid and lying under the fine grid refinement patch. We show that a number of different types of fine-level ghost cell interpolation strategies could be used in our framework, including low-order linear interpolation. In our approach, the smoother, prolongation, and restriction operations need never be aware of the mass conservation conditions at the coarse-fine interface. To maintain global mass conservation, we need only modify the usual FAS algorithm by correcting the coarse-level force function at points adjacent to the coarse-fine interface. We demonstrate through simulations that the solver converges geometrically, at a rate that is h-independent, and we show the generality of the solver, applying it to several nonlinear, time-dependent, and multi-dimensional problems. In several tests, we show that second-order asymptotic (h → 0) convergence is observed for the discretizations, provided that (1) at least linear interpolation of the ghost variables is employed, and (2) the mass conservation corrections are applied to the coarse-level force term.
POLARIS: A 30-meter probabilistic soil series map of the contiguous United States
Chaney, Nathaniel W; Wood, Eric F; McBratney, Alexander B; Hempel, Jonathan W; Nauman, Travis; Brungard, Colby W.; Odgers, Nathan P
2016-01-01
A new complete map of soil series probabilities has been produced for the contiguous United States at a 30 m spatial resolution. This innovative database, named POLARIS, is constructed using available high-resolution geospatial environmental data and a state-of-the-art machine learning algorithm (DSMART-HPC) to remap the Soil Survey Geographic (SSURGO) database. This 9 billion grid cell database is possible using available high performance computing resources. POLARIS provides a spatially continuous, internally consistent, quantitative prediction of soil series. It offers potential solutions to the primary weaknesses in SSURGO: 1) unmapped areas are gap-filled using survey data from the surrounding regions, 2) the artificial discontinuities at political boundaries are removed, and 3) the use of high resolution environmental covariate data leads to a spatial disaggregation of the coarse polygons. The geospatial environmental covariates that have the largest role in assembling POLARIS over the contiguous United States (CONUS) are fine-scale (30 m) elevation data and coarse-scale (~ 2 km) estimates of the geographic distribution of uranium, thorium, and potassium. A preliminary validation of POLARIS using the NRCS National Soil Information System (NASIS) database shows variable performance over CONUS. In general, the best performance is obtained at grid cells where DSMART-HPC is most able to reduce the chance of misclassification. The important role of environmental covariates in limiting prediction uncertainty suggests including additional covariates is pivotal to improving POLARIS' accuracy. This database has the potential to improve the modeling of biogeochemical, water, and energy cycles in environmental models; enhance availability of data for precision agriculture; and assist hydrologic monitoring and forecasting to ensure food and water security.
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
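A minimal sketch of the source-term construction is given below, assuming the resolved hole-exit solution is available as arrays of density, velocity components, and temperature on exit-face cells; the array names and the uniform distribution over coarse cells are our assumptions, not the paper's implementation.

```python
# Hedged sketch: integrate resolved hole-exit fluxes and spread them as uniform
# volumetric source terms over selected coarse cells near the wall.
import numpy as np

def hole_exit_sources(rho, u, v, w, T, cell_areas, cp=1005.0):
    """Integrated mass, momentum, and energy fluxes through the resolved hole exit."""
    mdot = np.sum(rho * w * cell_areas)                         # w = exit-normal velocity
    momentum = np.array([np.sum(rho * w * q * cell_areas) for q in (u, v, w)])
    energy = np.sum(rho * w * cp * T * cell_areas)
    return mdot, momentum, energy

def volumetric_source(source_total, coarse_cell_volumes):
    """Uniform per-unit-volume source over the coarse cells receiving the injection."""
    return source_total / np.sum(coarse_cell_volumes)
```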
Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Kurtz, Jason Patrick
We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.
Modeling a three-dimensional river plume over continental shelf using a 3D unstructured grid model
Cheng, R.T.; Casulli, V.; ,
2004-01-01
River-derived fresh water discharging into an adjacent continental shelf forms a trapped river plume that propagates in a narrow region along the coast. These river plumes are real and have been observed in the field. Many previous investigations have reported on aspects of river plume properties, which are sensitive to stratification, Coriolis acceleration, winds (upwelling or downwelling), coastal currents, and river discharge. Numerical modeling of the dynamics of river plumes is very challenging, because the complete problem involves a wide range of vertical and horizontal scales. Proper simulations of river plume dynamics cannot be achieved without a realistic representation of the flow and salinity structure near the river mouth that controls the initial formation and propagation of the plume in the coastal ocean. In this study, an unstructured grid model was used for simulations of river plume dynamics, allowing fine grid resolution in the river and in regions near the coast with a coarse grid in the far field of the river plume in the coastal ocean. In the vertical, fine fixed levels were used near the free surface, and coarse vertical levels were used over the continental shelf. The simulations have demonstrated the uniquely important role played by Coriolis acceleration. Without Coriolis acceleration, no trapped river plume can be formed no matter how favorable the ambient conditions might be. The simulation results show properties of the river plume and the characteristics of flow and salinity within the estuary; they are completely consistent with the physics of estuaries and coastal oceans.
NASA Astrophysics Data System (ADS)
López López, Patricia; Wanders, Niko; Sutanudjaja, Edwin; Renzullo, Luigi; Sterk, Geert; Schellekens, Jaap; Bierkens, Marc
2015-04-01
The coarse spatial resolution of global hydrological models (typically > 0.25°) often limits their ability to resolve key water balance processes for many river basins and thus compromises their suitability for water resources management, especially when compared to locally-tuned river models. A possible solution to the problem may be to drive the coarse resolution models with high-resolution meteorological data as well as to assimilate ground-based and remotely-sensed observations of key water cycle variables. While this would improve the modelling resolution of the global model, the impact on prediction accuracy remains largely an open question. In this study we investigated the impact that assimilating streamflow and satellite soil moisture observations has on global hydrological model estimates, driven by coarse- and high-resolution meteorological observations, for the Murrumbidgee river basin in Australia. The PCR-GLOBWB global hydrological model is forced with downscaled global climatological data (from 0.5° downscaled to 0.1° resolution) obtained from the WATCH Forcing Data (WFDEI) and with local high-resolution gauging-station-based gridded datasets (0.05°) sourced from the Australian Bureau of Meteorology. Downscaled satellite-derived soil moisture (from 0.5° downscaled to 0.1° resolution) from AMSR-E and streamflow observations collected from 25 gauging stations are assimilated using an ensemble Kalman filter. Several scenarios are analysed to explore the added value of data assimilation considering both local and global climatological data. Results show that the assimilation of streamflow observations results in the largest improvement of the model estimates. The joint assimilation of both streamflow and downscaled soil moisture observations leads to further improvements in streamflow simulations (10% reduction in RMSE), mainly in the headwater catchments (up to 10,000 km2). Results also show that the added contribution of data assimilation, for both soil moisture and streamflow, is more pronounced when the global meteorological data are used to force the models. This is caused by the higher uncertainty and coarser resolution of the global forcing. This study demonstrates that it is possible to improve hydrological simulations forced by coarse resolution meteorological data with downscaled satellite soil moisture and streamflow observations and bring them closer to a hydrological model forced with local climatological data. These findings are important in light of the efforts currently underway to move toward global hyper-resolution modelling and can significantly help to advance this research.
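For readers unfamiliar with the update step, a minimal sketch of a stochastic ensemble Kalman filter of the kind referred to above follows; the state layout, observation operator, and error values are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch: one stochastic EnKF analysis step for an ensemble of model states.
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_error_std, rng):
    """ensemble: (n_state, n_members); obs: (n_obs,); obs_operator H: (n_obs, n_state)."""
    n_obs, n_members = obs.size, ensemble.shape[1]
    hx = obs_operator @ ensemble                                    # model-predicted observations
    x_anom = ensemble - ensemble.mean(axis=1, keepdims=True)
    hx_anom = hx - hx.mean(axis=1, keepdims=True)
    p_xy = x_anom @ hx_anom.T / (n_members - 1)                     # state-obs covariance
    p_yy = hx_anom @ hx_anom.T / (n_members - 1) + np.diag(np.full(n_obs, obs_error_std**2))
    gain = p_xy @ np.linalg.inv(p_yy)
    perturbed = obs[:, None] + rng.normal(0.0, obs_error_std, size=(n_obs, n_members))
    return ensemble + gain @ (perturbed - hx)

rng = np.random.default_rng(0)
states = rng.normal(1.0, 0.3, size=(50, 32))          # e.g. storages along a river reach
H = np.zeros((1, 50)); H[0, 0] = 1.0                  # observe one streamflow-related state
updated = enkf_update(states, np.array([1.2]), H, 0.05, rng)
```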
The interpretation of remotely sensed cloud properties from a model parameterization perspective
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Wielicki, Bruce A.; Ginger, Kathryn M.
1994-01-01
A study has been made of the relationship between mean cloud radiative properties and cloud fraction in stratocumulus cloud systems. The analysis is of several Land Resources Satellite System (LANDSAT) images and three-hourly International Satellite Cloud Climatology Project (ISCCP) C-1 data during daylight hours for two grid boxes covering an area typical of a general circulation model (GCM) grid increment. Cloud properties were inferred from the LANDSAT images using two thresholds and several pixel resolutions ranging from roughly 0.0625 km to 8 km. At the finest resolution, the analysis shows that mean cloud optical depth (or liquid water path) increases somewhat with increasing cloud fraction up to 20% cloud coverage. More striking, however, is the lack of correlation between the two quantities for cloud fractions between roughly 0.2 and 0.8. When the scene is essentially overcast, the mean cloud optical depth tends to be higher. Coarse resolution LANDSAT analysis and the ISCCP 8-km data show a lack of correlation between mean cloud optical depth and cloud fraction for coverage less than about 90%. This study shows that there is perhaps a local mean liquid water path (LWP) associated with partly cloudy areas of stratocumulus clouds. A method has been suggested to use this property to construct the cloud fraction parameterization in a GCM when the model computes a grid-box-mean LWP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Avik; Milioli, Fernando E.; Ozarkar, Shailesh
2016-10-01
The accuracy of fluidized-bed CFD predictions using the two-fluid model can be improved significantly, even when using coarse grids, by replacing the microscopic kinetic-theory-based closures with coarse-grained constitutive models. These coarse-grained constitutive relationships, called filtered models, account for the unresolved gas-particle structures (clusters and bubbles) via sub-grid corrections. Following the previous 2-D approaches of Igci et al. [AIChE J., 54(6), 1431-1448, 2008] and Milioli et al. [AIChE J., 59(9), 3265-3275, 2013], new filtered models are constructed from highly-resolved 3-D simulations of gas-particle flows. Although qualitatively similar to the older 2-D models, the new 3-D relationships exhibit noticeable quantitative and functional differences. In particular, the filtered stresses are strongly dependent on the gas-particle slip velocity. Closures for the filtered inter-phase drag, gas- and solids-phase pressures and viscosities are reported. A new model for solids stress anisotropy is also presented. These new filtered 3-D constitutive relationships are better suited to practical coarse-grid 3-D simulations of large, commercial-scale devices.
NASA Astrophysics Data System (ADS)
Liu, Q.; Chiu, L. S.; Hao, X.
2017-10-01
The abundance or lack of rainfall affects people's lives and activities. As rainfall is a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations of it at various spatial and temporal scales are crucial for many decision-making processes. Climate models show a warmer and wetter climate due to increases in greenhouse gases (GHG). However, the models' resolutions are often too coarse to be directly applicable to the local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) 0.25-degree, 3-hourly gridded rainfall data for one summer are aggregated to 0.5-, 1.0-, 2.0- and 2.5-degree and to 6-, 12- and 24-hourly, pentad (five-day) and monthly resolutions. The probability distributions (PDFs) and cumulative distribution functions (CDFs) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distributions. These distributions are shown to be distinct. The marginal distributions are fitted with Lognormal and Gamma distributions, and it is found that the Gamma distributions fit much better than the Lognormal.
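The marginal-distribution fitting and comparison can be sketched with SciPy as below; the synthetic sample stands in for the positive rain amounts at one resolution, and the fixed-location fits are our simplification.

```python
# Hedged sketch: fit Gamma and Lognormal distributions to positive rain amounts and
# compare the fits with the Kolmogorov-Smirnov statistic. Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rain = rng.gamma(shape=0.7, scale=4.0, size=5000)      # stand-in for 3-hourly rain > 0

gamma_params = stats.gamma.fit(rain, floc=0.0)
lognorm_params = stats.lognorm.fit(rain, floc=0.0)

ks_gamma = stats.kstest(rain, "gamma", args=gamma_params)
ks_lognorm = stats.kstest(rain, "lognorm", args=lognorm_params)
print("Gamma KS:", ks_gamma.statistic, " Lognormal KS:", ks_lognorm.statistic)
```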
High-resolution subgrid models: background, grid generation, and implementation
NASA Astrophysics Data System (ADS)
Sehili, Aissa; Lang, Günther; Lippert, Christoph
2014-04-01
The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on a standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
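The core of the subgrid idea, evaluating a coarse cell's wet volume from the high-resolution bathymetry beneath it, can be sketched as follows; the array layout and the numbers are illustrative, not the Elbe model's data.

```python
# Hedged sketch: wet volume of one coarse cell from its subgrid bathymetry, so the
# cell can be wet, partially wet, or dry without any drying threshold.
import numpy as np

def wet_volume(eta, subgrid_bed_elevation, subgrid_cell_area):
    """Volume of water in one coarse cell for free-surface elevation eta."""
    depth = np.maximum(eta - subgrid_bed_elevation, 0.0)   # per subgrid pixel
    return np.sum(depth) * subgrid_cell_area

bed = np.array([[0.2, 0.6], [1.1, 1.8]])                 # subgrid bathymetry in one coarse cell
print(wet_volume(1.0, bed, subgrid_cell_area=25.0))      # partially wet cell
```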
Multigrid methods for differential equations with highly oscillatory coefficients
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Luo, Erding
1993-01-01
New coarse grid multigrid operators for problems with highly oscillatory coefficients are developed. These types of operators are necessary when the character of the differential equations on coarser grids, or at longer wavelengths, differs from that on the fine grid. Elliptic problems for composite materials and different classes of hyperbolic problems are practical examples. The new coarse grid operators can be constructed directly based on the homogenized differential operators or hierarchically computed from the finest grid. Convergence analysis based on the homogenization theory is given for elliptic problems with periodic coefficients and some hyperbolic problems. These are classes of equations for which there exists a fairly complete theory for the interaction between shorter and longer wavelengths in the problems. Numerical examples are presented.
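A 1-D toy of the homogenization idea behind such coarse operators, with an invented oscillatory coefficient: for -(a(x)u')' = f, the effective coarse coefficient over each coarse cell is the harmonic mean of a, not its arithmetic average.

```python
import numpy as np

def harmonic_coarse_coefficient(a_fine, factor):
    """Harmonic mean of the fine-grid coefficient over blocks of `factor` cells."""
    return factor / np.sum(1.0 / a_fine.reshape(-1, factor), axis=1)

x = np.linspace(0.0, 1.0, 1024, endpoint=False)
a = 1.0 + 0.9 * np.sin(2 * np.pi * 64 * x)         # highly oscillatory coefficient
a_eff = harmonic_coarse_coefficient(a, factor=64)  # each coarse cell spans 4 periods

# Arithmetic average (what naive coarsening would use) vs homogenized value.
print(round(a.mean(), 3), round(a_eff.mean(), 3))  # ~1.0 vs ~sqrt(1 - 0.9**2) ≈ 0.44
```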
High Resolution Surface Geometry and Albedo by Combining Laser Altimetry and Visible Images
NASA Technical Reports Server (NTRS)
Morris, Robin D.; vonToussaint, Udo; Cheeseman, Peter C.; Clancy, Daniel (Technical Monitor)
2001-01-01
The need for accurate geometric and radiometric information over large areas has become increasingly important. Laser altimetry is one of the key technologies for obtaining this geometric information. However, there are important application areas where the observing platform has its orbit constrained by the other instruments it is carrying, and so the spatial resolution that can be recorded by the laser altimeter is limited. In this paper we show how information recorded by one of the other instruments commonly carried, a high-resolution imaging camera, can be combined with the laser altimeter measurements to give a high resolution estimate both of the surface geometry and its reflectance properties. This estimate has an accuracy unavailable from other interpolation methods. We present the results from combining synthetic laser altimeter measurements on a coarse grid with images generated from a surface model to re-create the surface model.
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
NASA Astrophysics Data System (ADS)
Hutter, Nils; Losch, Martin; Menemenlis, Dimitris
2017-04-01
Sea ice models with the traditional viscous-plastic (VP) rheology and very high grid resolution can resolve leads and deformation rates that are localised along Linear Kinematic Features (LKF). In a 1-km pan-Arctic sea ice-ocean simulation, the small scale sea-ice deformations in the Central Arctic are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS). A new coupled scaling analysis for data on Eulerian grids determines the spatial and the temporal scaling as well as the coupling between temporal and spatial scales. The spatial scaling of the modelled sea ice deformation implies multi-fractality. The spatial scaling is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling and its coupling to temporal scales with satellite observations and models with the modern elasto-brittle rheology challenges previous results with VP models at coarse resolution where no such scaling was found. The temporal scaling analysis, however, shows that the VP model does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.
Impact of Variable SST on Simulated Warm Season Precipitation
NASA Astrophysics Data System (ADS)
Saleeby, S. M.; Cotton, W. R.
2007-05-01
The Colorado State University - Regional Atmospheric Modeling System (CSU-RAMS) is being used to examine the variability in monsoon-related warm season precipitation over Mexico and the United States due to variability in SST. Given recent improvements and increased resolution in satellite-derived SSTs, it is pertinent to examine the sensitivity of the RAMS model to the variety of SST data sources that are available. In particular, we are examining this dependence across continental scales over the full warm season, as well as across the regional scale centered around the Gulf of California on time scales of individual surge events. In this study we performed an ensemble of simulations that include the 2002, 2003, and 2004 warm seasons with use of the Climatology, Reynolds, AVHRR, and MODIS SSTs. From the seasonal 90-day simulations with 30 km grid spacing, it was found that variations in surface latent heat flux are directly linked to differences in SST. Regions with cooler (warmer) SST have decreased (increased) moisture flux from the ocean which is in proportion to the magnitude of the SST difference. Over the eastern Pacific, differences in low-level horizontal moisture flux show a general trend toward reduced fluxes over cooler waters and very little inland impact. Over the Gulf of Mexico, however, there is substantial variability for each dataset comparison, despite having only limited variability among the SST data. Causes of this unexpected variability are not straightforward. Precipitation impacts are greatest near the southern coast of Mexico and along the Sierra Madres. Precipitation variability over the CONUS is rather chaotic and is limited to areas impacted by the Gulf of Mexico or monsoon convection. Another unexpected outcome is the lack of variability in areas near the northern Gulf of California where SST and latent heat flux variability is a maximum. From the 7-day surge period simulations at 7 km grid spacing, we found that SST differences on the higher resolution nested grid reveal fine scale variability that is otherwise smoothed out or unapparent on the coarser grid. Unlike the coarse grid, the latent heat flux, temperature, and moisture transport differences on the fine grid reveal an inland impact. This is likely due to fine scale variability in onshore moisture transport and sea-breeze circulations which may alter monsoonal convection and precipitation. However, only the largest SST differences (spatially and in magnitude) tend to invoke large, coherent responses in moisture flux. The SST variability at high resolution produces relatively large differences in precipitation that are focused along the slopes of the SMO, with a tendency toward greater variability along the western slope adjacent to the coast. The precipitation differences are of fine resolution, with variability of +/- 30 mm (over 5 days) along the length of the SMO. Variability on the fine grid also invokes precipitation changes over AZ/NM that are not resolved on the coarse grid. Vertical cross-sections examined along the GoC during the surge episode revealed variations in the moisture and temperature structure of the surge. The cooler SSTs in the climatological dataset produced the greatest variability compared to the other datasets. The surge produced from climatology SSTs was nearly 5 g/kg drier and up to 4°C cooler compared to surges influenced by the SST datasets. The overall northward propagation of the surge appeared unaffected by the SSTs.
KINETIC ENERGY FROM SUPERNOVA FEEDBACK IN HIGH-RESOLUTION GALAXY SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, Christine M.; Bryan, Greg L.; Ostriker, Jeremiah P.
We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells' mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (∼10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10⁹ M⊙ dwarf halo. We find that in high-density media (≳50 cm⁻³) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
NASA Astrophysics Data System (ADS)
Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas
2017-06-01
The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, simplicity of approach, and divisibility of tasks open up the possibility for citizen science, in which crowdsourcing large grid-based mapping areas could be applied.
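A schematic sketch of the tick-box bookkeeping (coordinates, grid origin, and landform labels are invented): each landform type gets a boolean grid of 20 km squares, and a square is ticked whenever that landform is identified while scanning it at full resolution.

```python
import numpy as np

CELL = 20_000.0                      # grid-square size in metres
x0, y0 = 0.0, 0.0                    # origin of the mapping grid
nx, ny = 50, 50

def grid_index(x, y):
    """Column/row of the grid square containing map coordinates (x, y)."""
    return int((x - x0) // CELL), int((y - y0) // CELL)

landforms = ["viscous_flow_feature", "latitude_dependent_mantle", "polygonised_ground"]
presence = {name: np.zeros((ny, nx), dtype=bool) for name in landforms}

# A mapper ticks a box whenever a landform is identified while scanning a square.
observations = [("polygonised_ground", 153_200.0, 421_700.0),
                ("viscous_flow_feature", 610_000.0, 95_000.0)]
for name, x, y in observations:
    col, row = grid_index(x, y)
    presence[name][row, col] = True

# Each boolean grid can be overlain on a context map as a coarse digital landform map.
print({name: int(grid.sum()) for name, grid in presence.items()})
```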
Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions
NASA Astrophysics Data System (ADS)
Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.
2010-12-01
Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only observed temporal variability on a point-by-point basis, not spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data, and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
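The fragment below is a loose sketch of the general idea, not the authors' algorithm: a Gaussian field with an exponential spatial correlation (standing in for the structure fitted to gridded observations) is generated by Cholesky factorization and rescaled so the fine cells inside one GCM cell reproduce the coarse daily mean; all lengths, variances, and the truncation handling are assumptions.

```python
import numpy as np

def correlated_field(coords, corr_length, rng):
    """Zero-mean Gaussian field with exponential spatial correlation, via Cholesky."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-d / corr_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
    return L @ rng.standard_normal(len(coords))

rng = np.random.default_rng(1)
n = 12                                                        # fine cells per side of one GCM cell
xy = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1).reshape(-1, 2) * 12.0  # km spacing
field = correlated_field(xy, corr_length=60.0, rng=rng)

gcm_daily_mean = 8.0        # mm, the coarse-cell daily rainfall to be disaggregated
sigma_obs = 5.0             # mm, spatial standard deviation taken from gridded observations
precip = np.maximum(gcm_daily_mean + sigma_obs * field, 0.0)  # no negative rainfall
precip *= gcm_daily_mean / precip.mean()                      # re-impose the coarse-cell mean
print(precip.reshape(n, n).round(1))
```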
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range high-resolution mesoscale simulations with limited-area atmospheric models, when applied to downscale regional analysis fields over large spatial domains, can provide valuable information for many applications, including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values, leading to significant inaccuracies in the predicted surface-layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
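A toy 1-D illustration of spectral nudging (the cutoff scale, time step, and relaxation time are invented, and a real configuration nudges selected horizontal wavenumbers with height-dependent strength): only wavenumbers below the cutoff are relaxed toward the driving field, leaving the small scales generated by the high-resolution model free.

```python
import numpy as np

def spectral_nudge(field, driving, dx, cutoff_length, dt, tau):
    """Relax only the scales longer than cutoff_length toward the driving field."""
    k = np.fft.rfftfreq(field.size, d=dx)          # spatial frequency (cycles per metre)
    large_scale = k < 1.0 / cutoff_length
    f_hat, d_hat = np.fft.rfft(field), np.fft.rfft(driving)
    f_hat[large_scale] += (dt / tau) * (d_hat[large_scale] - f_hat[large_scale])
    return np.fft.irfft(f_hat, n=field.size)

x = np.linspace(0.0, 2.0e6, 512, endpoint=False)            # 2000 km periodic domain
driving = np.sin(2 * np.pi * x / 1.0e6)                      # coarse-model large-scale wave
field = driving + 0.3 * np.sin(2 * np.pi * x / 5.0e4) + 0.1  # hi-res detail plus a drift
nudged = spectral_nudge(field, driving, dx=x[1] - x[0],
                        cutoff_length=3.0e5, dt=600.0, tau=3600.0)
# The large-scale drift is pulled back toward the driving field; the 50 km detail is untouched.
print(abs(nudged.mean() - driving.mean()) < abs(field.mean() - driving.mean()))
```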
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
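For reference, a minimal sketch of the standard Richardson-extrapolation and grid-convergence-index (GCI) estimate from three systematically refined grids; the three values below are invented for illustration.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy for a constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, safety=1.25):
    """Grid convergence index (relative uncertainty) on the fine grid."""
    return safety * abs((f_medium - f_fine) / f_fine) / (r**p - 1.0)

# Time-averaged quantity from three grids refined by a factor of 2 (invented values).
f3, f2, f1 = 0.92, 0.97, 0.99                     # coarse, medium, fine
p = observed_order(f3, f2, f1, r=2.0)
f_exact = f1 + (f1 - f2) / (2.0**p - 1.0)         # Richardson-extrapolated estimate
print(f"order ≈ {p:.2f}, extrapolated {f_exact:.3f}, "
      f"GCI_fine ≈ {100 * gci_fine(f2, f1, 2.0, p):.1f}%")
```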
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.
2002-01-01
The variable-resolution stretched-grid (SG) GEOS (Goddard Earth Observing System) GCM has been used for limited ensemble integrations with a relatively coarse, 60 to 100 km, regional resolution over the U.S. The experiments have been run for the 12-year period, 1987-1998, that includes the recent ENSO cycles. Initial conditions 1-2 days apart are used for ensemble members. The goal of the experiments is to analyze the long-term SG-GCM ensemble integrations in terms of their potential for reducing the uncertainties of regional climate simulation while producing realistic mesoscales. The ensemble integration results are analyzed for both prognostic and diagnostic fields. Special attention is devoted to analyzing the variability of precipitation over the U.S. The internal variability of the SG-GCM has been assessed. The ensemble means appear to be closer to the verifying analyses than the individual ensemble members. The ensemble means capture realistic mesoscale patterns, especially those induced by orography. Two ENSO cycles have been analyzed in terms of their impact on the U.S. climate, especially on precipitation. The ability of the SG-GCM simulations to produce regional climate anomalies has been confirmed. However, the optimal size of the ensembles, depending on the fine regional resolution used, is still to be determined. The SG-GCM ensemble simulations are performed as a preparation for, and preliminary stage of, the international SGMIP (Stretched-Grid Model Intercomparison Project) that is under way with participation of the major centers and groups employing the SG approach for regional climate modeling.
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
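Illustrative 1-D analogues of the transfer operators named above (the program itself works on 2-D/3-D structured multi-block grids): direct injection for solution variables, full weighting for residuals, and linear interpolation as the 1-D analogue of the bilinear prolongation of the coarse-grid correction.

```python
import numpy as np

def restrict_injection(v):
    """Direct injection: keep every other fine-grid value (used for solution variables)."""
    return v[::2].copy()

def restrict_full_weighting(r):
    """Full weighting of residuals onto the coarse grid (1/4, 1/2, 1/4 stencil)."""
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong_linear(vc, n_fine):
    """Linear interpolation of the coarse-grid correction back to the fine grid."""
    xf = np.linspace(0.0, 1.0, n_fine)
    xc = np.linspace(0.0, 1.0, vc.size)
    return np.interp(xf, xc, vc)

fine_residual = np.sin(np.linspace(0.0, np.pi, 33))
coarse_residual = restrict_full_weighting(fine_residual)
correction = prolong_linear(coarse_residual, fine_residual.size)
print(coarse_residual.size, correction.size)       # 17 coarse points, back to 33 fine points
```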
A highly parallel multigrid-like method for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Tuminaro, Ray S.
1989-01-01
We consider a highly parallel multigrid-like method for the solution of the two-dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid), while the oscillatory component is used for a fine grid subproblem. The primary advantage of the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.
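A small sketch of the residual split that drives the filtering idea (1-D and illustrative, not the paper's implementation): the smooth part is what the coarse grid can represent (restriction followed by prolongation), and the oscillatory remainder is what would be handed to the concurrent fine-grid subproblem.

```python
import numpy as np

def split_residual(r):
    """Return the (smooth, oscillatory) parts of a fine-grid residual: the smooth part
    is a full-weighting restriction followed by interpolation back to the fine grid."""
    coarse = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]
    xf = np.linspace(0.0, 1.0, r.size)
    xc = xf[1:-1:2]                                # coarse points sit under odd fine indices here
    smooth = np.interp(xf, xc, coarse)
    return smooth, r - smooth

x = np.linspace(0.0, 1.0, 65)
r = np.sin(2 * np.pi * x) + 0.2 * np.sin(30 * np.pi * x)   # long wave plus oscillatory part
smooth, oscillatory = split_residual(r)
print(round(float(np.abs(smooth).max()), 2), round(float(np.abs(oscillatory).max()), 2))
```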
Schwalm, C.; Huntzinger, Deborah N.; Cook, Robert B.; ...
2015-03-11
Significant changes in the water cycle are expected under current global environmental change. Robust assessment of present-day water cycle dynamics at continental to global scales is confounded by shortcomings in the observed record. Modeled assessments also yield conflicting results, which are linked to differences in model structure and simulation protocol. Here we compare simulated gridded (1° spatial resolution) runoff from six terrestrial biosphere models (TBMs), seven reanalysis products, and one gridded surface station product in the contiguous United States (CONUS) from 2001 to 2005. We evaluate the consistency of these 14 estimates with stream gauge data, both as depleted flow and corrected for net withdrawals (2005 only), at the CONUS and water resource region scale, as well as examining similarity across TBMs and reanalysis products at the grid cell scale. Mean runoff across all simulated products and regions varies widely (range: 71 to 356 mm yr⁻¹) relative to observed continental-scale runoff (209 or 280 mm yr⁻¹ when corrected for net withdrawals). Across all 14 products, 8 exhibit Nash-Sutcliffe efficiency values in excess of 0.8 and three are within 10% of the observed value. Region-level mismatch exhibits a weak pattern of overestimation in western and underestimation in eastern regions, although two products are systematically biased across all regions, and the mismatch largely scales with water use. Although gridded composite TBM and reanalysis runoff show some regional similarities, individual product values are highly variable. At the coarse scales used here we find that progress in better constraining simulated runoff requires standardized forcing data and the explicit incorporation of human effects (e.g., water withdrawals by source, fire, and land use change).
Analysis Tools for CFD Multigrid Solvers
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Diskin, Boris
2004-01-01
Analysis tools are needed to guide the development and evaluate the performance of multigrid solvers for the fluid flow equations. Classical analysis tools, such as local mode analysis, often fail to accurately predict performance. Two-grid analysis tools, herein referred to as Idealized Coarse Grid and Idealized Relaxation iterations, have been developed and evaluated within a pilot multigrid solver. These new tools are applicable to general systems of equations and/or discretizations and point to problem areas within an existing multigrid solver. Idealized Relaxation and Idealized Coarse Grid are applied in developing textbook-efficient multigrid solvers for incompressible stagnation flow problems.
NASA Astrophysics Data System (ADS)
Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.
2015-12-01
The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate column integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are related to their monthly temporal and coarse spatial resolution (around 330 km at the equator), and to the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model, in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses specific recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major recent advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of gridded GRACE-TWS data product with scaling factors that are specifically derived for data assimilation purposes only; 2) the assimilation is performed through a 3D assimilation scheme, in which reasonable spatial and temporal error standard deviations and correlations are exploited; 3) the analysis step uses an optimized calculation and application of the analysis increments; 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that even if they are characterized by a coarse spatial and temporal resolution, the observed column integrated GRACE-TWS data have potential for improving our understanding of soil moisture and shallow groundwater variations.
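As a much-simplified cartoon of how a single column-integrated TWS observation can update individual storage components (all numbers and error covariances below are invented, and the real system is an ensemble-based 3D scheme with spatially correlated errors): a Kalman-type gain distributes the TWS innovation across soil moisture and groundwater in proportion to their background error covariances with the observed total.

```python
import numpy as np

x_b = np.array([120.0, 80.0, 300.0])          # background: surface SM, root-zone SM, groundwater (mm)
H = np.array([[1.0, 1.0, 1.0]])               # GRACE "observes" the column-integrated total
B = np.diag([4.0, 16.0, 100.0])               # background error covariance (mm^2), assumed
R = np.array([[25.0]])                        # GRACE-TWS observation error variance (mm^2), assumed

y = np.array([530.0])                         # observed TWS re-based to the model climatology (mm)
innovation = y - H @ x_b                      # 30 mm of "missing" water in the column
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman gain
x_a = x_b + (K @ innovation).ravel()
print(x_a.round(1))                           # most of the increment goes to groundwater
```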
NASA Astrophysics Data System (ADS)
Ziegler, Hannes Moritz
Planners and managers often rely on coarse population distribution data from the census for addressing various social, economic, and environmental problems. In the analysis of physical vulnerabilities to sea-level rise, census units such as blocks or block groups are coarse relative to the required decision-making application. This study explores the benefits offered from integrating image classification and dasymetric mapping at the household level to provide detailed small area population estimates at the scale of residential buildings. In a case study of Boca Raton, FL, a sea-level rise inundation grid based on mapping methods by NOAA is overlaid on the highly detailed population distribution data to identify vulnerable residences and estimate population displacement. The enhanced spatial detail offered through this method has the potential to better guide targeted strategies for future development, mitigation, and adaptation efforts.
NASA Astrophysics Data System (ADS)
Sun, K.; Zhu, L.; Gonzalez Abad, G.; Nowlan, C. R.; Miller, C. E.; Huang, G.; Liu, X.; Chance, K.; Yang, K.
2017-12-01
It has been well demonstrated that regridding Level 2 products (satellite observations from individual footprints, or pixels) from multiple sensors/species onto regular spatial and temporal grids makes the data more accessible for scientific studies and can even lead to additional discoveries. However, synergizing multiple species retrieved from multiple satellite sensors faces many challenges, including differences in spatial coverage, viewing geometry, and data filtering criteria. These differences will lead to errors and biases if not treated carefully. Operational gridded products are often at 0.25°×0.25° resolution on a global scale, which is too coarse for local heterogeneous emission sources (e.g., urban areas), and at fixed temporal intervals (e.g., daily or monthly). We propose a consistent framework to fully use and properly weight the information of all possible individual satellite observations. A key aspect of this work is an accurate knowledge of the spatial response function (SRF) of the satellite Level 2 pixels. We found that the conventional overlap-area-weighting method (tessellation) is accurate only when the SRF is homogeneous within the parameterized pixel boundary and zero outside the boundary. There will be a tessellation error if the SRF is a smooth distribution and this distribution is not properly considered. On the other hand, discretizing the SRF at the destination grid will also induce errors. By balancing these error sources, we found that the SRF should be used when gridding OMI data to fine resolutions such as 0.2°. Case studies merging multiple species and wind data onto a 0.01° grid will be shown in the presentation.
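A minimal sketch of the conventional overlap-area (tessellation) weighting for one destination cell, with invented pixel corner coordinates and values; SRF-based gridding replaces the implicit top-hat weight with the pixel's spatial response function. The use of shapely here for the polygon bookkeeping is an assumption, not part of the described framework.

```python
from shapely.geometry import Polygon, box

# Two Level 2 pixels described by (lon, lat) corner coordinates and retrieved values (invented).
pixels = [
    (Polygon([(10.00, 40.00), (10.30, 40.02), (10.28, 40.22), (9.98, 40.20)]), 3.2e15),
    (Polygon([(10.28, 40.02), (10.58, 40.04), (10.56, 40.24), (10.26, 40.22)]), 4.1e15),
]

cell = box(10.1, 40.0, 10.3, 40.2)            # one 0.2-degree destination grid cell

num = den = 0.0
for footprint, value in pixels:
    w = footprint.intersection(cell).area     # overlap area is the (top-hat) weight
    num += w * value
    den += w

print(num / den if den > 0.0 else float("nan"))
```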
NASA Astrophysics Data System (ADS)
Tseng, Chien-Yung; Chou, Yi-Ju
2018-04-01
A three-dimensional nonhydrostatic coastal model SUNTANS is used to study hyperpycnal plumes on sloping continental shelves with idealized domain setup. The study aims to examine the nonhydrostatic effect of the plunging hyperpycnal plume and the associated flow structures on different shelf slopes. The unstructured triangular grid in SUNTANS allows for local refinement of the grid size for regions in which the flow varies abruptly, while retaining low-cost computation using the coarse grid resolution for regions in which the flow is more uniform. These nonhydrostatic simulations reveal detailed three-dimensional flow structures in both transient and steady states. Via comparison with the hydrostatic simulation, we show that the nonhydrostatic effect is particularly important before plunging, when the plume is subject to significant changes in both the along-shore and vertical directions. After plunging, where the plume becomes an undercurrent that is more spatially uniform, little difference is found between the hydrostatic and nonhydrostatic simulations in the present gentle- and mild-slope cases. A grid-dependence study shows that the nonhydrostatic effect can be seen only when the grid resolution is sufficiently fine that the calculation is not overly diffusive. A depth-integrated momentum budget analysis is then conducted to show that the flow convergence due to plunging is an important factor in the three-dimensional flow structures. Moreover, it shows that the nonhydrostatic effect becomes more important as the slope increases, and in the steep-slope case, neglect of transport of the vertical momentum during plunging in the hydrostatic case further leads to an erroneous prediction for the undercurrent.
Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.
2012-06-21
We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.
The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time, and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain, and coastal regions. This dataset is associated with the following publication.
High-resolution field shaping utilizing a masked multileaf collimator.
Williams, P C; Cooper, P
2000-08-01
Multileaf collimators (MLCs) have become an important tool in the modern radiotherapy department. However, the current limit of resolution (1 cm at isocentre) can be too coarse for acceptable shielding of all fields. A number of mini- and micro-MLCs have been developed, with thinner leaves to achieve improved resolution. Currently however, such devices are limited to modest field sizes and stereotactic applications. This paper proposes a new method of high-resolution beam collimation by use of a tertiary grid collimator situated below the conventional MLC. The width of each slit in the grid is a submultiple of the MLC width. A composite shaped field is thus built up from a series of subfields, with the main MLC defining the length of each strip within each subfield. Presented here are initial findings using a prototype device. The beam uniformity achievable with such a device was examined by measuring transmission profiles through the grid using a diode. Profiles thus measured were then copied and superposed to generate composite beams, from which the uniformity achievable could be assessed. With the average dose across the profile normalized to 100%, hot spots up to 5.0% and troughs of 3% were identified for a composite beam of 2 x 5.0 mm grids, as measured at Dmax for a 6 MV beam. For a beam composed from 4 x 2.5 mm grids, the maximum across the profile was 3.0% above the average, and the minimum 2.5% below. Actual composite profiles were also formed using the integrating properties of film, with the subfield indexing performed using an engineering positioning stage. The beam uniformity for these fields compared well with that achieved in theory using the diode measurements. Finally, sine wave patterns were generated to demonstrate the potential improvements in field shaping and conformity using this device as opposed to the conventional MLC alone. The scalloping effect on the field edge commonly seen on MLC fields was appreciably reduced by use of 2 x 5.0 mm grids, and still further by the use of 4 x 2.5 mm grids, as would be expected. This was also achieved with a small or negligible broadening of the beam penumbra as measured at Dmax.
Fully implicit moving mesh adaptive algorithm
NASA Astrophysics Data System (ADS)
Serazio, C.; Chacon, L.; Lapenta, G.
2006-10-01
In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006).
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Broxton, P. D.; Brunke, M.; Gochis, D.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.
2015-12-01
The terrestrial hydrological system, including surface and subsurface water, is an essential component of the Earth's climate system. Over the past few decades, land surface modelers have built one-dimensional (1D) models resolving the vertical flow of water through the soil column for use in Earth system models (ESMs). These models generally have a relatively coarse model grid size (~25-100 km) and only account for sub-grid lateral hydrological variations using simple parameterization schemes. At the same time, hydrologists have developed detailed high-resolution (~0.1-10 km grid size) three-dimensional (3D) models and showed the importance of accounting for the vertical and lateral redistribution of surface and subsurface water on soil moisture, the surface energy balance, and ecosystem dynamics at these smaller scales. However, computational constraints have limited the implementation of the high-resolution models for continental and global scale applications. The current work presents a hybrid-3D hydrological approach, where the 1D vertical soil column model (available in many ESMs) is coupled with a high-resolution lateral flow model (h2D) to simulate subsurface flow and overland flow. H2D accounts for both local-scale hillslope and regional-scale unconfined aquifer responses (i.e. riparian zone and wetlands). This approach was shown to give results comparable to those obtained by an explicit 3D Richards model for the subsurface, but improves runtime efficiency considerably. The h3D approach is implemented for the Delaware River basin, where the Noah-MP land surface model (LSM) is used to calculate vertical energy and water exchanges with the atmosphere using a 10 km grid resolution. Noah-MP was coupled within the WRF-Hydro infrastructure with the lateral 1 km grid resolution h2D model, for which the average depth-to-bedrock, hillslope width function, and soil parameters were estimated from digital datasets. The ability of this h3D approach to simulate the hydrological dynamics of the Delaware River basin will be assessed by comparing the model results (both hydrological performance and numerical efficiency) with the standard setup of the Noah-MP model and a high-resolution (1 km) version of Noah-MP, which also explicitly accounts for lateral subsurface and overland flow.
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
NASA Astrophysics Data System (ADS)
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
BIOMAP simulates competition between two Plant Functional Types (PFT) at any given point in the conterminous U.S. using a time series of daily temperature (mean, minimum, maximum), precipitation, humidity, light and nutrients, with PFT-specific rooting within a multi-layer soil. The model employs a 2-layer canopy biophysics, Farquhar photosynthesis, the Beer-Lambert Law for light attenuation and a mechanistic soil hydrology. In essence, BIOMAP is a re-built version of the biogeochemistry model, BIOME-BGC, into the form of the MAPSS biogeography model. Specific enhancements are: 1) the 2-layer canopy biophysics of Dolman (1993); 2) the unique MAPSS-based hydrology, which incorporates canopy evaporation, snow dynamics, infiltration and saturated and unsaturated percolation with ‘fast’ flow and base flow and a ‘tunable aquifer’ capacity, a metaphor of Darcy’s Law; and, 3) a unique MAPSS-based stomatal conductance algorithm, which simultaneously incorporates vapor pressure and soil water potential constraints, based on physiological information and many other improvements. Over small domains the PFTs can be parameterized as individual species to investigate fundamental vs. potential niche theory; while, at more coarse scales the PFTs can be rendered as more general functional groups. Since all of the model processes are intrinsically leaf to plot scale (physiology to PFT competition), it essentially has no ‘intrinsic’ scale and can be implemented on a grid of any size, taking on the characteristics defined by the homogeneous climate of each grid cell. Currently, the model is implemented on the VEMAP 1/2 degree, daily grid over the conterminous U.S. Although both the thermal and water-limited ecotones are dynamic, following climate variability, the PFT distributions remain fixed. Thus, the model is currently being fitted with a ‘reproduction niche’ to allow full dynamic operation as a Dynamic General Vegetation Model (DGVM). While global simulations of both climate and ecosystems must be done at coarse grid resolutions, smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8 km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions, providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are treated as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
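A schematic sketch of the splitting step (the data layout, weights, and cubic refinement pattern are assumptions): a coarse macro-particle entering a refined region is replaced by refinement_rate³ sub-particles that conserve total statistical weight and keep the parent velocity, so the velocity distribution function is unchanged and no velocity averaging is needed.

```python
import numpy as np

def split_particle(position, velocity, weight, refinement_rate, cell_size):
    """Split one coarse macro-particle into refinement_rate**3 refined particles."""
    n = refinement_rate
    sub = cell_size / n
    offsets = (np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"), -1)
               .reshape(-1, 3) + 0.5) * sub - cell_size / 2.0
    positions = position + offsets                    # one sub-particle per refined cell
    weights = np.full(len(offsets), weight / n**3)    # conserve the total statistical weight
    velocities = np.tile(velocity, (len(offsets), 1)) # keep the parent velocity (no averaging)
    return positions, velocities, weights

pos, vel, w = split_particle(np.array([10.0, 5.0, 3.0]),
                             np.array([400.0, 0.0, -20.0]),
                             weight=8.0, refinement_rate=2, cell_size=1.0)
print(pos.shape, w.sum())                             # (8, 3) sub-particles, total weight 8.0
```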
NASA Astrophysics Data System (ADS)
Strong, Courtenay; Khatri, Krishna B.; Kochanski, Adam K.; Lewis, Clayton S.; Allen, L. Niel
2017-05-01
The main objective of this study was to investigate whether dynamically downscaled high-resolution (4-km) climate data from the Weather Research and Forecasting (WRF) model provide physically meaningful additional information for reference evapotranspiration (E) calculation compared to the recently published GridET framework, which uses interpolation from coarser-scale simulations run at 32-km resolution. The analysis focuses on the complex terrain of Utah in the western United States for the years 1985-2010, and comparisons were made statewide with supplemental analyses specifically for regions with irrigated agriculture. E was calculated using the standardized equation and procedures proposed by the American Society of Civil Engineers from hourly data, and climate inputs from WRF and GridET were debiased relative to the same set of observations. For annual mean values, E from WRF (EW) and E from GridET (EG) both agreed well with E derived from observations (r² = 0.95, bias < 2 mm). Domain-wide, EW and EG were well correlated spatially (r² = 0.89); however, local differences ΔE = EW − EG were as large as +439 mm year⁻¹ (+26%) in some locations, and ΔE averaged +36 mm year⁻¹. After linearly removing the effects of contrasts in solar radiation and wind speed, which are characteristically less reliable under downscaling in complex terrain, approximately half the residual variance was accounted for by contrasts in temperature and humidity between GridET and WRF. These contrasts stemmed from GridET interpolating using an assumed lapse rate of Γ = 6.5 K km⁻¹, whereas WRF produced a thermodynamically driven lapse rate closer to 5 K km⁻¹, as observed in mountainous terrain. The primary conclusions are that observed lapse rates in complex terrain differ markedly from the commonly assumed Γ = 6.5 K km⁻¹, that these lapse rates can be realistically resolved via dynamical downscaling, and that use of a constant Γ produces differences in E of order 10² mm year⁻¹.
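A trivial numerical illustration of why the assumed lapse rate matters when extrapolating coarse-grid temperature to a high-elevation site (the station values are invented): the ~1.5 K km⁻¹ difference between the assumed and the resolved lapse rate translates into roughly 2 °C at 1300 m of elevation difference, which feeds directly into the E calculation.

```python
def temp_at_elevation(t_ref, z_ref, z_target, lapse_rate):
    """Adjust temperature (deg C) from a reference elevation using a constant lapse rate (K/km)."""
    return t_ref - lapse_rate * (z_target - z_ref) / 1000.0

t_coarse, z_coarse = 18.0, 1500.0     # coarse-grid air temperature and grid-cell elevation (m)
z_site = 2800.0                       # target site elevation in complex terrain (m)

t_assumed = temp_at_elevation(t_coarse, z_coarse, z_site, lapse_rate=6.5)
t_resolved = temp_at_elevation(t_coarse, z_coarse, z_site, lapse_rate=5.0)
print(round(t_assumed, 1), round(t_resolved, 1), round(t_resolved - t_assumed, 1))
```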
Development and application of GIS-based PRISM integration through a plugin approach
NASA Astrophysics Data System (ADS)
Lee, Woo-Seop; Chun, Jong Ahn; Kang, Kwangmin
2014-05-01
A PRISM (Parameter-elevation Regressions on Independent Slopes Model) QGIS-plugin was developed on the Quantum GIS platform in this study. This Quantum GIS plugin system provides user-friendly graphic user interfaces (GUIs) so that users can obtain gridded meteorological data of high resolution (1 km × 1 km). Also, this software is designed to run on a personal computer, so it does not require internet access or a sophisticated computer system. The module is a user-friendly system with which a user can generate PRISM data with ease. The proposed PRISM QGIS-plugin is a hybrid statistical-geographic model system that uses coarse-resolution datasets (APHRODITE datasets in this study) with digital elevation data to generate fine-resolution gridded precipitation. To validate the performance of the software, the Prek Thnot River Basin in Kandal, Cambodia, is selected for application. Overall statistical analysis shows promising outputs generated by the proposed plugin. Error measures such as RMSE (Root Mean Square Error) and MAPE (Mean Absolute Percentage Error) were used to evaluate the performance of the developed PRISM QGIS-plugin. Evaluation results using RMSE and MAPE were 2.76 mm and 4.2%, respectively. This study suggested that the plugin can be used to generate high-resolution precipitation datasets for hydrological and climatological studies at a watershed where observed weather datasets are limited.
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral-element large-eddy simulations (LES) of a turbulent channel flow as a function of spatial resolution, compared to direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered, based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS one, provided predictions of the friction velocity within a 2.0% accuracy interval.
Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method
Kalchev, Delyan Z.; Lee, C. S.; Villa, U.; ...
2016-09-22
Here, we propose two multilevel spectral techniques for constructing coarse discretization spaces for saddle-point problems corresponding to PDEs involving a divergence constraint, with a focus on mixed finite element discretizations of scalar self-adjoint second order elliptic equations on general unstructured grids. We use element agglomeration algebraic multigrid (AMGe), which employs coarse elements that can have nonstandard shape since they are agglomerates of fine-grid elements. The coarse basis associated with each agglomerated coarse element is constructed by solving local eigenvalue problems and local mixed finite element problems. This construction leads to stable upscaled coarse spaces and guarantees the inf-sup compatibility of the upscaled discretization. Also, the approximation properties of these upscaled spaces improve by adding more local eigenfunctions to the coarse spaces. The higher accuracy comes at the cost of additional computational effort, as the sparsity of the resulting upscaled coarse discretization (referred to as operator complexity) deteriorates when we introduce additional functions in the coarse space. We also provide an efficient solver for the coarse (upscaled) saddle-point system by employing hybridization, which leads to a symmetric positive definite (s.p.d.) reduced system for the Lagrange multipliers, and to solve the latter s.p.d. system, we use our previously developed spectral AMGe solver. Numerical experiments, in both two and three dimensions, are provided to illustrate the efficiency of the proposed upscaling technique.
Spatial Downscaling of Alien Species Presences using Machine Learning
NASA Astrophysics Data System (ADS)
Daliakopoulos, Ioannis N.; Katsanevakis, Stelios; Moustakas, Aristides
2017-07-01
Large scale, high-resolution data on alien species distributions are essential for spatially explicit assessments of their environmental and socio-economic impacts, and for management interventions for mitigation. However, these data are often unavailable. This paper presents a method that relies on Random Forest (RF) models to distribute alien species presence counts at a finer-resolution grid, thus achieving spatial downscaling. A sufficiently large number of RF models are trained using random subsets of the dataset as predictors, in a bootstrapping approach to account for the uncertainty introduced by the subset selection. The method is tested with an approximately 8×8 km² grid containing floral alien species presences and several indices of climatic, habitat, and land-use covariates for the Mediterranean island of Crete, Greece. Alien species presence is aggregated at 16×16 km² and used as a predictor of presence at the original resolution, thus simulating spatial downscaling. Potential explanatory variables included habitat types, land cover richness, endemic species richness, soil type, temperature, precipitation, and freshwater availability. Uncertainty assessment of the spatial downscaling of alien species' occurrences was also performed and true/false presences and absences were quantified. The approach is promising for downscaling alien species datasets of larger spatial scale but coarse resolution, where the underlying environmental information is available at a finer resolution than the alien species data. Furthermore, the RF architecture allows for tuning towards operationally optimal sensitivity and specificity, thus providing a decision support tool for designing a resource-efficient alien species census.
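A loosely hedged sketch of the bootstrapped RF downscaling idea using scikit-learn (the feature names, grid sizes, target construction, and hyper-parameters are assumptions, not the authors' configuration): coarse-cell counts are broadcast to their member fine cells as the training target, fine-grid covariates are the predictors, and repeating the fit over random predictor subsets yields an ensemble whose spread expresses the subset-selection uncertainty.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_fine = 4000                                          # fine-resolution grid cells
X = np.column_stack([
    rng.normal(15.0, 3.0, n_fine),                     # mean temperature
    rng.gamma(2.0, 200.0, n_fine),                     # annual precipitation
    rng.integers(0, 10, n_fine),                       # land-cover richness
    rng.integers(0, 4, n_fine),                        # soil type (coded)
])
coarse_id = rng.integers(0, 250, n_fine)               # coarse cell that each fine cell belongs to
y_fine_true = rng.poisson(np.exp(0.15 * (X[:, 0] - 15.0)))  # synthetic "true" fine-grid counts
y_coarse = np.bincount(coarse_id, weights=y_fine_true)      # what is actually observed (coarse)

# Broadcast coarse-cell mean counts to member fine cells as the target, over predictor subsets.
target = y_coarse[coarse_id] / np.bincount(coarse_id)[coarse_id]
predictions = []
for _ in range(20):
    cols = rng.choice(X.shape[1], size=3, replace=False)
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[:, cols], target)
    predictions.append(rf.predict(X[:, cols]))

downscaled = np.mean(predictions, axis=0)              # ensemble-mean fine-grid estimate
spread = np.std(predictions, axis=0)                   # uncertainty from the subset selection
print(downscaled[:5].round(2), spread[:5].round(2))
```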
Meng, Xi; Nguyen, Bao D.; Ridge, Clark; Shaka, A. J.
2009-01-01
High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to “reduced-dimensionality” strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the Filter Diagonalization Method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths. PMID:18926747
NASA Astrophysics Data System (ADS)
Venable, N. B. H.; Fassnacht, S. R.; Adyabadam, G.
2014-12-01
Precipitation data in semi-arid and mountainous regions are often spatially and temporally sparse, yet they are a key variable needed to drive hydrological models. Gridded precipitation datasets provide a spatially and temporally coherent alternative to the use of point-based station data, but in the case of Mongolia, may not be constructed from all data available from government data sources, or may only be available at coarse resolutions. To examine the uncertainty associated with the use of gridded and/or point precipitation data, monthly water balance models of three river basins across forest steppe (the Khoid Tamir River at Ikhtamir), steppe (the Baidrag River at Bayanburd), and desert steppe (the Tuin River at Bogd) ecozones in the Khangai Mountain Region of Mongolia were compared. The models were forced over the 10-year period 2001-2010, with gridded temperature and precipitation data at a 0.5 x 0.5 degree resolution. These results were compared to modeling using an interpolated hybrid of the gridded data and additional point data recently gathered from government sources; and with point data from the nearest meteorological station to the streamflow gage of choice. Goodness-of-fit measures including the Nash-Sutcliffe efficiency statistic, the percent bias, and the RMSE-observations standard deviation ratio were used to assess model performance. The results were mixed, with smaller differences between the two gridded products as compared to the differences between gridded products and station data. The largest differences in precipitation inputs and modeled runoff amounts occurred between the two gridded datasets and station data in the desert steppe (Tuin), and the smallest differences occurred in the forest steppe (Khoid Tamir) and steppe (Baidrag). Mean differences between water balance model results are generally smaller than mean differences in the initial input data over the period of record. Seasonally, larger differences in gridded versus station-based precipitation products and modeled outputs occur in summer in the desert-steppe, and in spring in the forest steppe. Choice of precipitation data source in terms of gridded or point-based data directly affects model outcomes with greater uncertainty noted on a seasonal basis across ecozones of the Khangai.
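For readers unfamiliar with the three goodness-of-fit measures named above, a short sketch of their usual definitions (assuming plain numpy arrays of simulated and observed monthly runoff; sign conventions for percent bias vary between authors) is:

    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect, values below 0 are worse than the observed mean."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(sim, obs):
        """Percent bias; positive means overestimation under this convention."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 100.0 * np.sum(sim - obs) / np.sum(obs)

    def rsr(sim, obs):
        """RMSE-observations standard deviation ratio."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return np.sqrt(np.mean((sim - obs) ** 2)) / np.std(obs)

    obs = np.array([1.2, 3.4, 5.1, 2.2, 0.8, 0.5])   # hypothetical observed runoff
    sim = np.array([1.0, 3.9, 4.6, 2.5, 1.1, 0.4])   # hypothetical modeled runoff
    print(nse(sim, obs), pbias(sim, obs), rsr(sim, obs))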
Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation.
Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B
2006-04-15
Modeling air pollutant transport and dispersion in urban environments is especially challenging due to complex ground topography. In this study, we describe a large eddy simulation (LES) tool including a new dynamic subgrid closure and boundary treatment to model urban dispersion problems. The numerical model is developed, validated, and extended to a realistic urban layout. In such applications fairly coarse grids must be used, in which each building can be represented by only a relatively small number of grid points. By carrying out LES of flow around a square cylinder and of flow over surface-mounted cubes, the coarsest resolution required to resolve the bluff body's cross section while still producing meaningful results is established. Specifically, we perform grid refinement studies showing that at least 6-8 grid points across the bluff body are required for reasonable results. The performance of several subgrid models is also compared. Although effects of the subgrid models on the mean flow are found to be small, dynamic Lagrangian models give a physically more realistic subgrid-scale (SGS) viscosity field. When scale-dependence is taken into consideration, these models lead to more realistic resolved fluctuating velocities and spectra. These results set the minimum grid resolution and subgrid model requirements needed to apply LES in simulations of neutral atmospheric boundary layer flow and scalar transport over a realistic urban geometry. The results also illustrate the advantages of LES over traditional modeling approaches, particularly its ability to take into account the complex boundary details and the unsteady nature of atmospheric boundary layer flow. Thus LES can be used to evaluate probabilities of extreme events (such as probabilities of exceeding threshold pollutant concentrations). Some comments about computer resources required for LES are also included.
NASA Astrophysics Data System (ADS)
Bartlett, Kevin S.
Mineral dust aerosols can impact air quality, climate change, biological cycles, tropical cyclone development and flight operations due to reduced visibility. Dust emissions are primarily limited to the extensive arid regions of the world, yet can negatively impact local to global scales, and are extremely complex to model accurately. Within this dissertation, the Dust Entrainment And Deposition (DEAD) model was adapted to run, for the first known time, using high temporal (hourly) and spatial (0.3°x0.3°) resolution data to methodically interrogate the key parameters and factors influencing global dust emissions. The dependence of dust emissions on key parameters under various conditions has been quantified and it has been shown that dust emissions within DEAD are largely determined by wind speeds, vegetation extent, soil moisture and topographic depressions. Important findings were that grid degradation from 0.3°x0.3° to 1°x1°, 2°x2.5°, and 4°x5° of key meteorological, soil, and surface input parameters greatly reduced emissions by approximately 13%, 29%, and 64%, respectively, as a result of the loss of sub-grid detail within these key parameters at coarse grids. After running high resolution DEAD emissions globally for 2 years, two severe dust emission cases were chosen for an in-depth investigation of the root causes of the events and evaluation of the 2°x2.5° Goddard Earth Observing System (GEOS)-Chem and 0.3°x0.3° DEAD model capabilities to simulate the events: one over South West Asia (SWA) in June 2008 and the other over the Middle East in July 2009. The two-year lack of rain over SWA preceding June 2008, with a 43% decrease in mean rainfall, yielded less-than-normal plant growth, a 28% increase in Aerosol Optical Depth (AOD), and a 24% decrease in Meteorological Aerodrome Report (METAR) observed visibility (VSBY) compared to average years. GEOS-Chem captured the observed higher AOD over SWA in June 2008. More detailed comparisons of GEOS-Chem predicted AOD and visibility over SWA with those observed at surface stations and from satellites revealed overall success of the model, although substantial regional differences exist. Within the extended drought, the study area was zoomed into the Middle East (ME) for July 2009 where multi-grid DEAD dust emissions using hourly CFSR meteorological input were compared with observations. The high resolution input yielded the best spatial and temporal dust patterns compared with Defense Meteorological Satellite Program (DMSP), Moderate Resolution Imaging Spectroradiometer (MODIS) and METAR VSBY observations and definitively revealed Syria as a major dust source for the region. The coarse resolution dust emissions degraded or missed daily dust emissions entirely. This readily showed that the spatial scale degradation of the input data can significantly impair DEAD dust emissions and offers a strong argument for adapting higher resolution dust emission schemes into future global models for improvements of dust simulations.
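The sensitivity to grid degradation reported above follows from the threshold nonlinearity of saltation; a schematic numpy sketch (the flux law, threshold and wind statistics here are assumptions for illustration, not the DEAD formulation) shows how box-car averaging of the wind field suppresses emissions:

    import numpy as np

    def emission(u, u_t=7.0):
        """Schematic saltation-style flux: ~u^3 * (1 - u_t^2/u^2) above threshold u_t, zero below."""
        u = np.asarray(u, float)
        f = u ** 3 * (1.0 - (u_t / np.maximum(u, 1e-9)) ** 2)
        return np.where(u > u_t, f, 0.0)

    def boxcar(field, n):
        """Average n x n blocks of the fine grid (grid degradation)."""
        m = field.shape[0] // n * n
        return field[:m, :m].reshape(m // n, n, m // n, n).mean(axis=(1, 3))

    rng = np.random.default_rng(1)
    u_fine = rng.gamma(shape=4.0, scale=1.6, size=(240, 240))   # synthetic fine-grid wind speeds

    for n in (1, 3, 8, 16):   # progressively coarser grids
        print(n, emission(boxcar(u_fine, n)).mean())   # area-mean flux drops as wind peaks are averaged out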
Advanced Turbulence Modeling Concepts
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing
2005-01-01
The ZCET program developed at NASA Glenn Research Center studies hydrogen/air injection concepts for aircraft gas turbine engines that meet conventional gas turbine performance levels and provide low levels of harmful NOx emissions. A CFD study for the ZCET program has been successfully carried out. It uses the most recently enhanced National Combustion Code (NCC) to perform CFD simulations for two configurations of hydrogen fuel injectors (the GRC and Sandia injectors). The results can be used to assist experimental studies to provide quick mixing, low emission and high performance fuel injector designs. The work started with the configuration of the single-hole injector. The computational models were taken from the experimental designs. For example, the GRC single-hole injector consists of one air tube (0.78 inches long and 0.265 inches in diameter) and two hydrogen tubes (0.3 inches long and 0.0226 inches in diameter, opposed at 180 degrees). The hydrogen tubes are located 0.3 inches upstream from the exit of the air element (the inlet location for the combustor). For the simulation, the single-hole injector is connected to a combustor model (8.16 inches long and 0.5 inches in diameter). The inlet conditions for air and hydrogen elements are defined according to actual experimental designs. Two crossing jets of hydrogen/air are simulated in detail in the injector. The cold flow, reacting flow, flame temperature, combustor pressure and possible flashback phenomena are studied. Two grid resolutions of the numerical model have been adopted. The first computational grid contains 0.52 million elements; the second contains over 1.3 million elements. The CFD results have shown only about 5% difference between the two grid resolutions. Therefore, the CFD result obtained from the 1.3-million-element grid can be considered a grid-independent numerical solution. Turbulence models built into NCC are consolidated and well tested. They can handle both coarse and fine grids near the wall. They can model the effect of anisotropy of turbulent stresses and the effect of swirling. The Magnussen model and the ILDM method were both used for the chemical reactions in this study.
Subgrid Modeling Geomorphological and Ecological Processes in Salt Marsh Evolution
NASA Astrophysics Data System (ADS)
Shi, F.; Kirby, J. T., Jr.; Wu, G.; Abdolali, A.; Deb, M.
2016-12-01
Numerical modeling of the long-term evolution of salt marshes is challenging because it requires extensive computational resources. Due to the presence of narrow tidal creeks, variations of salt marsh topography can be significant over spatial length scales on the order of a meter. With growing availability of high-resolution bathymetry measurements, like LiDAR-derived DEM data, it is increasingly desirable to run a high-resolution model in a large domain and for a long period of time to get trends of sedimentation patterns, morphological change and marsh evolution. However, high spatial resolution poses a major challenge in both computational time and memory storage when simulating a salt marsh with dimensions of up to O(100 km^2) with a small time step. In this study, we have developed a so-called Pre-storage, Sub-grid Model (PSM, Wu et al., 2015) for simulating flooding and draining processes in salt marshes. The simulation of Brokenbridge salt marsh, Delaware, shows that, with the combination of the sub-grid model and the pre-storage method, over 2 orders of magnitude computational speed-up can be achieved with minimal loss of model accuracy. We recently extended PSM to include a sediment transport component and models for biomass growth and sedimentation in the sub-grid model framework. The sediment transport model is formulated based on a newly derived sub-grid sediment concentration equation following Defina's (2000) area-averaging procedure. Suspended sediment transport is modeled by the advection-diffusion equation at the coarse grid level, but the local erosion and sedimentation rates are integrated over the sub-grid level. The morphological model is based on the existing morphological model in NearCoM (Shi et al., 2013), extended to include organic production from the biomass model. The vegetation biomass is predicted by a simple logistic equation model proposed by Marani et al. (2010). The biomass component is loosely coupled with hydrodynamic and sedimentation models owing to the different time scales of the physical and ecological processes. The coupled model is being applied to Delaware marsh evolution in response to rising sea level and changing sediment supplies.
Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D
2017-09-11
Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate the coarse spatial resolution soil exchangeable potassium (Kex) and soil total nitrogen (TN) base maps into fine spatial resolution downscaled soil maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the soil downscaled maps not only conserve the spatial information of coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and fine spatial resolution base maps is smaller than the difference between coarse spatial resolution base maps and fine spatial resolution base maps. The appropriate and economical strategy to promote the DSM technique in smallholder farms is to develop the relatively coarse spatial resolution soil prediction maps or utilize available coarse spatial resolution soil maps at the regional scale and to disaggregate these maps to the fine spatial resolution downscaled soil maps at farm scale.
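A toy version of the disaggregation step can be sketched as follows; note that the paper uses weighted generalized additive models, whereas this stand-in uses ordinary linear regression, and the data, spectral index and rescaling step are invented purely to show the idea of predicting fine-resolution values from covariates while conserving each coarse-cell mean:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n_coarse, f = 50, 10                           # 50 coarse cells, each covering 10 fine cells
    ndvi = rng.normal(size=(n_coarse, f))          # hypothetical fine-resolution spectral index
    coarse_val = rng.uniform(50, 300, n_coarse)    # coarse-resolution base map values (e.g. Kex, mg/kg)

    # Train at the coarse scale: regress the coarse soil value on the aggregated index.
    model = LinearRegression().fit(ndvi.mean(axis=1).reshape(-1, 1), coarse_val)

    # Predict at the fine scale, then rescale within each coarse cell to conserve its mean.
    fine_pred = model.predict(ndvi.reshape(-1, 1)).reshape(n_coarse, f)
    fine_pred *= (coarse_val / fine_pred.mean(axis=1))[:, None]
    print(np.allclose(fine_pred.mean(axis=1), coarse_val))   # True: the coarse map information is conserved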
Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution
NASA Astrophysics Data System (ADS)
Wild, O.; Prather, M. J.
2005-12-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.
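The convergence diagnostic mentioned above can be illustrated with a small fit; the numbers below are invented and scipy's curve_fit is assumed, the point being only to show how a converged value and an apparent order can be estimated from a resolution sequence:

    import numpy as np
    from scipy.optimize import curve_fit

    dx = np.array([5.6, 2.8, 1.87, 1.1])      # grid spacings (degrees), T21- to T106-like
    p  = np.array([12.7, 11.3, 10.8, 10.5])   # made-up regional ozone production values

    def model(dx, p_inf, c, q):
        """Assume the error shrinks as a power of the grid spacing: p(dx) = p_inf + c*dx**q."""
        return p_inf + c * dx ** q

    (p_inf, c, q), _ = curve_fit(model, dx, p, p0=(10.0, 0.5, 1.0))
    print(f"estimated converged value {p_inf:.2f}, apparent order {q:.2f}")
    print(f"estimated remaining error at the finest grid {model(dx[-1], p_inf, c, q) - p_inf:.2f}")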
NASA Astrophysics Data System (ADS)
Lovette, J. P.; Lenhardt, W. C.; Blanton, B.; Duncan, J. M.; Stillwell, L.
2017-12-01
The National Water Model (NWM) has provided a novel framework for near real time flood inundation mapping across CONUS at a 10m resolution. In many regions, this spatial scale is quickly being surpassed through the collection of high resolution lidar (1 - 3m). As one of the leading states in data collection for flood inundation mapping, North Carolina is currently improving its previously available 20 ft statewide elevation product to a Quality Level 2 (QL2) product with a nominal point spacing of 0.7 meters. This QL2 elevation product increases the ground points by roughly ten times over the previous statewide lidar product, and by over 250 times when compared to the 10m NED elevation grid. When combining these new lidar data with the discharge estimates from the NWM, we can further improve statewide flood inundation maps and predictions of at-risk areas. In the context of flood risk management, predictions based on higher-resolution elevation models consistently improve on those from coarser products. Additionally, the QL2 lidar includes coarse land cover classification data for each point return, opening the possibility for expanding analysis beyond the use of only digital elevation models (e.g. improving estimates of surface roughness, identifying anthropogenic features in floodplains, characterizing riparian zones, etc.). Using the NWM Height Above Nearest Drainage approach, we compare flood inundation extents derived from multiple lidar-derived grid resolutions to assess the tradeoff between precision and computational load in North Carolina's coastal river basins. The elevation data distributed through the state's new lidar collection program provide spatial resolutions ranging from 5-50 feet, with most inland areas also including a 3 ft product. Data storage increases by almost two orders of magnitude across this range, as does processing load. In order to further assess the validity of the higher resolution elevation products on flood inundation, we examine the NWM outputs from Hurricane Matthew, which devastated southeastern North Carolina in October 2016. When compared with numerous surveyed high water marks across the coastal plain, this assessment provides insight on the impacts of grid resolution on flood inundation extent.
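To make the inundation-mapping step concrete, a minimal sketch of the Height Above Nearest Drainage idea is given below; the HAND grid, cell size and stages are synthetic (in practice the HAND raster comes from a flow-direction analysis of the DEM and the stage from the NWM discharge via a rating curve):

    import numpy as np

    rng = np.random.default_rng(3)
    hand = np.abs(rng.normal(3.0, 2.0, size=(500, 500)))   # synthetic HAND values (m)
    cell_area = 10.0 * 10.0                                # 10 m cells, m^2

    def inundation(hand, stage_m):
        """Cells whose height above the nearest drainage is below the stage are flagged as wet."""
        mask = hand <= stage_m
        return mask, mask.sum() * cell_area / 1e6          # mask and flooded area in km^2

    for stage in (0.5, 1.0, 2.0):
        _, area = inundation(hand, stage)
        print(f"stage {stage} m -> {area:.2f} km^2 inundated")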
Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect correction alone has some slowly converging iterations on grids of medium density. The convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids. Certain downstream-boundary modes are very slowly damped on high-density grids. The multigrid scheme accelerates convergence of the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most of the regions, but slow convergence is noted for near-sonic and very low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation. Certain convergence difficulties have been encountered within stagnation regions. Nonetheless, for the airfoil flow with a sharp trailing edge, residuals converged quickly for a subcritical flow on a sequence of grids. For supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.
Climate Simulations based on a different-grid nested and coupled model
NASA Astrophysics Data System (ADS)
Li, Dan; Ji, Jinjun; Li, Yinpeng
2002-05-01
An atmosphere-vegetation interaction model (AVIM) has been coupled with a nine-layer General Circulation Model (GCM) of the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (IAP/LASG), which is rhomboidally truncated at zonal wave number 15, to simulate global climatic mean states. AVIM is a model having inter-feedback between land surface processes and eco-physiological processes on land. As the first step to couple land with atmosphere completely, the physiological processes are fixed and only the physical part (generally named the SVAT (soil-vegetation-atmosphere-transfer scheme) model) of AVIM is nested into the IAP/LASG L9R15 GCM. The ocean part of the GCM is prescribed and its monthly sea surface temperature (SST) is the climatic mean value. With respect to the low resolution of the GCM, i.e., each grid cell having longitude 7.5° and latitude 4.5°, the vegetation is given a high resolution of 1.5° by 1.5° to nest and couple the fine grid cells of land with the coarse grid cells of atmosphere. The coupled model has been integrated for 15 years and its last ten-year mean of outputs was chosen for analysis. Compared with observed data and NCEP reanalysis, the coupled model simulates the main characteristics of global atmospheric circulation and the fields of temperature and moisture. In particular, the simulated precipitation and surface air temperature are well reproduced. This work lays a solid foundation for coupling climate models with the biosphere.
Evaluation of the UnTRIM model for 3-D tidal circulation
Cheng, R.T.; Casulli, V.; ,
2001-01-01
A family of numerical models, known as the TRIM models, shares the same modeling philosophy for solving the shallow water equations. A characteristic analysis of the shallow water equations points out that the numerical instability is controlled by the gravity wave terms in the momentum equations and by the transport terms in the continuity equation. A semi-implicit finite-difference scheme has been formulated so that these terms and the vertical diffusion terms are treated implicitly and the remaining terms explicitly to control the numerical stability, and the computations are carried out over a uniform finite-difference computational mesh without invoking horizontal or vertical coordinate transformations. An unstructured-grid version of the TRIM model, UnTRIM (pronounced "you trim"), is introduced; it preserves these basic numerical properties and the modeling philosophy, except that the computations are carried out over an unstructured orthogonal grid. The unstructured grid offers flexibility in representing complex study areas so that fine grid resolution can be placed in regions of interest, and coarse grids are used to cover the remaining domain. Thus, the computational efforts are concentrated in areas of importance, and an overall computational saving can be achieved because the total number of grid-points is dramatically reduced. To use this modeling approach, an unstructured grid mesh must be generated to properly reflect the properties of the domain of the investigation. The new modeling flexibility in grid structure is accompanied by new challenges associated with issues of grid generation. To take full advantage of this new model flexibility, the model grid generation should be guided by insights into the physics of the problems; and the insights needed may require a higher degree of modeling skill.
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1994-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth-order central differences through fast Fourier transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large scale features, such as the total circulation around the roll-up region, are adequately resolved.
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1992-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth order central differences through Fast Fourier Transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large-scale features, such as the total circulation around the roll-up region, are adequately resolved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.
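A heavily simplified 1D caricature of the restricted-smoothing construction is sketched below; it omits the support-region bookkeeping, flux reconstruction and adaptivity of the actual MsRSB method and only shows how indicator basis functions, relaxed with damped Jacobi and renormalized to remain a partition of unity, yield an upscaled operator:

    import numpy as np

    n, nb = 40, 4                                  # fine cells, coarse blocks
    k = np.exp(np.random.default_rng(4).normal(0, 2, n + 1))   # heterogeneous face transmissibilities

    A = np.zeros((n, n))                           # 1D two-point flux operator (Dirichlet ends)
    for i in range(n):
        A[i, i] = k[i] + k[i + 1]
        if i > 0:
            A[i, i - 1] = -k[i]
        if i < n - 1:
            A[i, i + 1] = -k[i + 1]

    block = np.repeat(np.arange(nb), n // nb)
    P = np.zeros((n, nb))
    P[np.arange(n), block] = 1.0                   # initial indicator (piecewise-constant) basis

    d_inv = 1.0 / np.diag(A)
    for _ in range(50):                            # smoothing iterations applied to the basis functions
        P -= 0.7 * (d_inv[:, None] * (A @ P))
        P /= P.sum(axis=1, keepdims=True)          # keep the partition of unity

    A_c = P.T @ A @ P                              # upscaled (coarse) operator
    q = np.sin(np.linspace(0, np.pi, n))           # smooth source term
    u_fine = np.linalg.solve(A, q)
    u_ms = P @ np.linalg.solve(A_c, P.T @ q)       # two-level multiscale approximation
    print(np.linalg.norm(u_fine - u_ms) / np.linalg.norm(u_fine))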
Wave ensemble forecast system for tropical cyclones in the Australian region
NASA Astrophysics Data System (ADS)
Zieger, Stefan; Greenslade, Diana; Kepert, Jeffrey D.
2018-05-01
Forecasting of waves under extreme conditions such as tropical cyclones is vitally important for many offshore industries, but there remain many challenges. For Northwest Western Australia (NW WA), wave forecasts issued by the Australian Bureau of Meteorology have previously been limited to products from deterministic operational wave models forced by deterministic atmospheric models. The wave models are run over global (resolution 1/4°) and regional (resolution 1/10°) domains with forecast ranges of +7 and +3 days, respectively. Because of this relatively coarse resolution (both in the wave models and in the forcing fields), the accuracy of these products is limited under tropical cyclone conditions. Given this limited accuracy, a new ensemble-based wave forecasting system for the NW WA region has been developed. To achieve this, a new dedicated 8-km resolution grid was nested in the global wave model. Over this grid, the wave model is forced with winds from a bias-corrected European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric ensemble that comprises 51 ensemble members to take into account the uncertainties in location, intensity and structure of a tropical cyclone system. A unique technique is used to select restart files for each wave ensemble member. The system is designed to operate in real time during the cyclone season providing +10-day forecasts. This paper will describe the wave forecast components of this system and present the verification metrics and skill for specific events.
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM DEM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves the computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher-resolution DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period of the hazard map being produced (e.g. 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches, which are split into smaller parts. Here we show results for selected river basins worldwide.
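The volume-to-stage relationship at the heart of the up-scaling technique can be sketched as follows (synthetic topography, a single square coarse cell, and an illustrative pixel size; this is not the authors' implementation):

    import numpy as np

    rng = np.random.default_rng(5)
    fine_dem = rng.normal(10.0, 1.5, size=(8, 8))     # one coarse cell made of 8x8 fine DEM pixels
    pix_area = 30.0 * 30.0                            # e.g. 30 m pixels, m^2

    # Build the per-cell stage-volume curve from the full-resolution elevations.
    stages = np.linspace(fine_dem.min(), fine_dem.max() + 2.0, 50)
    volumes = np.array([np.clip(s - fine_dem, 0.0, None).sum() * pix_area for s in stages])

    def stage_from_volume(v):
        """Invert the stored volume back to a free-surface elevation (monotone by construction)."""
        return np.interp(v, volumes, stages)

    v = 0.4 * volumes.max()                           # some stored volume from the coarse solver
    print(f"free surface elevation {stage_from_volume(v):.2f} m for volume {v:.0f} m^3")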
NASA Astrophysics Data System (ADS)
Mitchell, M. F.; Goodrich, D. C.; Gochis, D. J.; Lahmers, T. M.
2017-12-01
In semi-arid environments with complex terrain, redistribution of moisture occurs through runoff, stream infiltration, and regional groundwater flow. In semi-arid regions, stream infiltration has been shown to account for 10-40% of total recharge in high runoff years. These processes can potentially significantly alter land-atmosphere interactions through changes in sensible and latent heat release. However, currently, their overall impact is still unclear, as historical model simulations generally made use of a coarse grid resolution, where these smaller-scale processes were either parameterized or not accounted for. To improve our understanding of the importance of stream infiltration and our ability to represent it in a coupled land-atmosphere model, this study focuses on the Walnut Gulch Experimental Watershed (WGEW) and Long-Term Agro-ecosystem Research (LTAR) site, surrounding the city of Tombstone, AZ. High-resolution surface precipitation, meteorological forcing and distributed runoff measurements have been obtained in WGEW since the 1960s. These data will be used as input for WRF-Hydro, a spatially distributed hydrological model that uses the NOAH-MP land surface model. Recently, we have implemented an infiltration loss scheme in WRF-Hydro. We will present the performance of WRF-Hydro in accounting for stream infiltration by comparing model simulations with in-situ observations. More specifically, as model performance has been shown to depend on the grid resolution used, we will present WRF-Hydro simulations obtained at different grid resolutions (10-1000 m).
Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations
NASA Technical Reports Server (NTRS)
Moon, Young J.; Liou, Meng-Sing
1989-01-01
Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10^14. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1: 4, 1: 10, and 1: 28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1: 28.6), DNS results (aspect ratio of 1: 4), and an empirical formula (aspect ratio of 1: 10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1: 10. For higher aspect ratios, the number of grid cells required for achieving prescribed accuracy grows significantly.
Examinations of Linkages Between the Northwest Mexican Monsoon and Great Plains Precipitation
NASA Astrophysics Data System (ADS)
Saleeby, S. M.; Cotton, W. R.
2001-12-01
The Regional Atmospheric Modeling System (RAMS) is being used to examine linkages between the Mexican monsoon and precipitation in the Great Plains region of the United States. Currently, available datasets have allowed for seasonal runs for July and August of the 1993 flood year in the midwest US and the 1997 El Nino year. There is also a plan to perform a full monsoon season simulation of the drought summer of 1988 once precipitation data becomes available. Preliminary results of this ongoing study are presented here. The model configuration consists of a 120km resolution coarse grid that covers a region from west of Hawaii to Bermuda and from south of the equator up into Canada. Two 40km resolution nested grids exist, with one covering the western two-thirds of the United States and Mexico and the other covering the Pacific ITCZ. A 10km fine grid and 2.5km cloud resolving grid are spawned over the region of monsoon surges to explicitly resolve convection. The model is initialized with NCEP reanalysis data, surface obs, rawinsonde data, variable soil moisture, and weekly averaged SST's. RAMS is running with two-stream Harrington radiation, one moment microphysics, and Kuo cumulus parameterization. The completed 1993 and 1997 seasonal simulations are now being examined and verified against NCEP reanalysis data and high resolution precipitation data. Initial model results look promising when verified against the NCEP upper level fields, such that the model is able to capture the large scale dynamics. For the duration of both seasonal runs, RAMS successfully simulates the mid and upper level geopotential heights, the temperature, and winds. The large scale 700mb and 500mb anti-cyclone over the US and Mexico is resolved, as well as the easterly flow over Mexico. Model fields are also being examined to isolate monsoon surge events which are characterized by increased precipitation over the Sierra Madres and a northward moisture surge into the northern extent of the Gulf of California and southern Arizona. Within the coarse grids, the RAMS model has successfully resolved the low-level jet that persists in the Gulf of California and the local maximum in mixing ratio that persists over the gulf. It has also captured the upslope flow over the Sierra Madres that forces the moist air into the higher elevation to the east. This provides the necessary lifting and moisture for the development of intense convection and resulting large amounts of precipitation that occur along the Sierra Madre mountain range. Examination of model-predicted low-level moisture transport reveals that moisture advected from the Gulf of California is the primary monsoon moisture source, rather than the Gulf of Mexico. Time averages of moisture transport, mixing ratio, winds, and precipitation for July 1993 reveal the prominent diurnal cycle variations that exist due to radiative effects and land-sea interactions; the maximum in convection, precipitation rate, and moisture transport occurs around 00Z. Seasonal accumulated precipitation amounts in the model are successful in predicting the placement of precipitation and relative amounts for most of the 40km continental grid, but there is an overestimation of precipitation along the northern Sierra Madre Occidental and an underestimation in the US mid-west.
During the 1993 flood summer, much of the mid-west US precipitation fell in association with mesoscale convective systems; it is suspected that other cumulus parameterizations may provide better prediction of sub-grid scale convective precipitation. http://hugo.atmos.colostate.edu/www/monsoon/monsoon.html
Upscaling of Hydraulic Conductivity using the Double Constraint Method
NASA Astrophysics Data System (ADS)
El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke
2013-04-01
The mathematics and modeling of flow through porous media play an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from the field using slug and packer tests are measured in the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. Therefore, there is a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity to a coarser description. In this work we investigate a relatively simple but instructive approach, the Double Constraint Method (DCM), to identify the coarse-scale conductivities and decrease the number of grid blocks. Its main advantages are robustness and easy implementation, enabling computations to be based on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. An example will be presented for the Kleine Nete catchment, Belgium. As a result we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run costs less computation time and requires less memory. In addition, ranking of models was investigated.
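A one-dimensional toy example of the double-constraint idea is given below; the column of fine cells, the prescribed flux and the "measured" heads are all synthetic and constructed to be mutually consistent, so the two forward runs agree and the resulting coarse-block conductivity equals the harmonic mean expected for flow in series:

    import numpy as np

    k_fine = np.exp(np.random.default_rng(6).normal(0, 1, 20))   # fine-scale conductivities (m/day)
    dz, L = 1.0, 20.0                                            # fine-cell size and block length (m)

    # Forward run 1: known flux on the boundary -> head loss across the block
    q = 0.05                                                     # m/day
    head_drop = q * np.sum(dz / k_fine)

    # Forward run 2: "measured" heads at the block faces -> implied flux
    h_top, h_bot = 10.0, 10.0 - head_drop
    q_run2 = (h_top - h_bot) / np.sum(dz / k_fine)

    # Coarse-block conductivity honoring both the flux and the head data
    k_coarse = q / ((h_top - h_bot) / L)
    print(f"flux from head-constrained run: {q_run2:.3f} (prescribed flux {q})")
    print(f"coarse K = {k_coarse:.3f}, harmonic mean of fine K = {len(k_fine) / np.sum(1.0 / k_fine):.3f}")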
Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening
NASA Technical Reports Server (NTRS)
Diskin, Boris
1999-01-01
This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semi-coarsening and well balanced explicit correction terms added to coarse-grid operators to maintain on the coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all the levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0, 2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing the cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multi-grid algorithm (FMG) with two V(0, 2) cycles on the target grid and just one V(0, 2) cycle on all the coarse grids always provides an approximate solution with the algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA) using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process. The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy; the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).
A new approach to flow simulation in highly heterogeneous porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rame, M.; Killough, J.E.
In this paper, applications are presented for a new numerical method - operator splittings on multiple grids (OSMG) - devised for simulations in heterogeneous porous media. A coarse-grid, finite-element pressure solver is interfaced with a fine-grid timestepping scheme. The CPU time for the pressure solver is greatly reduced and concentration fronts have minimal numerical dispersion.
Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset
NASA Astrophysics Data System (ADS)
Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.
2017-12-01
Here we present the first national scale flood risk analyses, using high resolution Facebook Connectivity Lab population data and data from a hyper resolution flood hazard model. In recent years the field of large scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken both hazard and exposure data should sufficiently resolve local scale features. Global flood frameworks are enabling flood hazard data to be produced at 90m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5m resolution, representing a resolution increase over previous countrywide data sets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
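The hazard-exposure integration step, including the resolution mismatch between the two grids, can be sketched with synthetic arrays (grid sizes, depth threshold and population statistics below are invented):

    import numpy as np

    rng = np.random.default_rng(7)
    depth = np.clip(rng.normal(0.0, 0.6, size=(90, 90)), 0, None)    # hazard grid, e.g. 90 m cells
    pop = rng.poisson(0.02, size=(90 * 18, 90 * 18)).astype(float)   # finer population grid, e.g. 5 m

    # Aggregate the population raster onto the hazard grid (18 x 18 population cells per hazard cell)
    pop_on_hazard = pop.reshape(90, 18, 90, 18).sum(axis=(1, 3))

    exposed = pop_on_hazard[depth > 0.1].sum()   # people in cells flooded deeper than 10 cm
    print(f"exposed population: {exposed:.0f} of {pop.sum():.0f}")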
Tropical Cyclone Intensity in Global Models
NASA Astrophysics Data System (ADS)
Davis, C. A.; Wang, W.; Ahijevych, D.
2017-12-01
In recent years, global prediction and climate models have begun to depict intense tropical cyclones, even up to Category 5 on the Saffir-Simpson scale. In light of the limitation of horizontal resolution in such models, we examine how well these models treat tropical cyclone intensity, measured from several different perspectives. The models evaluated include the operational Global Forecast System, with a grid spacing of about 13 km, and the Model for Prediction Across Scales, with a variable resolution of 15 km over the Northwest Pacific transitioning to 60 km elsewhere. We focus on the Northwest Pacific for the period July-October, 2016. Results indicate that discrimination of tropical cyclone intensity is reasonably good up to roughly category 3 storms. The models are able to capture storms of category 4 intensity, but still exhibit a negative intensity bias of 20-30 knots at lead times beyond 5 days. This is partly indicative of the large number of super-typhoons that occurred in 2016. The question arises of how well global models should represent intensity, given that it is unreasonable for them to depict the inner core of many intense tropical cyclones with a grid increment of 13-15 km. We compute an expected "best-case" prediction of intensity based on filtering the observed wind profiles of Atlantic tropical cyclones according to different hypothetical model resolutions. The Atlantic is used because of the significant number of reconnaissance missions and more reliable estimate of wind radii. Results indicate that, even under the most optimistic assumptions, models with horizontal grid spacing of 1/4 degree or coarser should not produce a realistic number of category 4 and 5 storms unless there are errors in spatial attributes of the wind field. Furthermore, models with a grid spacing of 1/4 degree or greater are unlikely to systematically discriminate hurricanes with differing intensity. Finally, for simple wind profiles, it is shown how an accurate representation of maximum wind on a coarse grid will lead to an overestimate of horizontally integrated kinetic energy by a factor of two or more.
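The "best-case" resolution argument can be illustrated with a simple filtering exercise; the parametric vortex below (a modified-Rankine-style profile) and the cell-averaging filter are assumptions for illustration, not the profiles or filter used in the study:

    import numpy as np

    def wind_profile(r_km, vmax=70.0, rmax=30.0, alpha=0.6):
        """Tangential wind (m/s): linear increase inside rmax, power-law decay outside."""
        r = np.asarray(r_km, float)
        return np.where(r <= rmax, vmax * r / rmax, vmax * (rmax / r) ** alpha)

    r = np.arange(0.5, 500.0, 1.0)        # radial samples every 1 km
    v = wind_profile(r)

    for dx in (13, 25, 50, 100):          # hypothetical effective grid spacings (km)
        m = len(v) // dx * dx
        v_cells = v[:m].reshape(-1, dx).mean(axis=1)   # crude cell averages along the radial
        print(f"dx = {dx:3d} km: resolved max wind {v_cells.max():.1f} m/s (true {v.max():.1f} m/s)")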
Sherba, Jason T.; Sleeter, Benjamin M.; Davis, Adam W.; Parker, Owen P.
2015-01-01
Global land-use/land-cover (LULC) change projections and historical datasets are typically available at coarse grid resolutions and are often incompatible with modeling applications at local to regional scales. The difficulty of downscaling and reapportioning global gridded LULC change projections to regional boundaries is a barrier to the use of these datasets in a state-and-transition simulation model (STSM) framework. Here we compare three downscaling techniques to transform gridded LULC transitions into spatial scales and thematic LULC classes appropriate for use in a regional STSM. For each downscaling approach, Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathway (RCP) LULC projections, at the 0.5 × 0.5 degree cell resolution, were downscaled to seven Level III ecoregions in the Pacific Northwest, United States. RCP transition values at each cell were downscaled based on the proportional distribution between ecoregions of (1) cell area, (2) land-cover composition derived from remotely-sensed imagery, and (3) historic LULC transition values from a LULC history database. Resulting downscaled LULC transition values were aggregated according to their bounding ecoregion and “cross-walked” to relevant LULC classes. Ecoregion-level LULC transition values were applied in a STSM projecting LULC change between 2005 and 2100. While each downscaling method had advantages and disadvantages, downscaling using the historical land-use history dataset consistently apportioned RCP LULC transitions in agreement with historical observations. Regardless of the downscaling method, some LULC projections remain improbable and require further investigation.
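The apportionment step itself is simple bookkeeping; a sketch with invented numbers (the ecoregion names and weights below are hypothetical) shows how one coarse-cell transition value is distributed under the three weighting schemes compared in the study:

    import numpy as np

    transition_km2 = 120.0                                          # one coarse cell's projected transition
    ecoregions = ["Coast Range", "Willamette Valley", "Cascades"]   # hypothetical overlapping ecoregions

    weights = {
        "cell_area":  np.array([0.20, 0.50, 0.30]),   # fraction of the cell falling in each ecoregion
        "land_cover": np.array([0.45, 0.35, 0.20]),   # share of the relevant cover class (remote sensing)
        "history":    np.array([0.60, 0.30, 0.10]),   # share of historical transitions (LULC history data)
    }

    for method, w in weights.items():
        share = transition_km2 * w / w.sum()
        print(method, dict(zip(ecoregions, np.round(share, 1))))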
The optimization of high resolution topographic data for 1D hydrodynamic models
NASA Astrophysics Data System (ADS)
Ales, Ronovsky; Michal, Podhoranyi
2016-06-01
The main focus of our research presented in this paper is to optimize and use high resolution topographical data (HRTD) for hydrological modelling. Optimization of HRTD is done by generating an adaptive mesh: the distance between the coarse mesh and the surface of the dataset is measured, and the mesh is adapted so as to keep the geometry as close to the initial resolution as possible. The technique described in this paper enables computation of very accurate 1-D hydrodynamic models. In the paper, we use the HEC-RAS software as a solver. For comparison, we consider the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling is strongly reliant on domain size and domain resolution. The topographical dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.
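A minimal 1D sketch of the mesh-optimization idea (synthetic cross-section profile and a hypothetical tolerance; the actual work operates on the full LiDAR surface) is to keep inserting the point of largest deviation between the coarse mesh and the high-resolution surface until the geometry is within tolerance:

    import numpy as np

    x = np.linspace(0.0, 200.0, 2001)                  # 0.1 m spacing, like a LiDAR cross-section
    z = 0.002 * (x - 100.0) ** 2 + np.sin(x / 7.0)     # synthetic channel/floodplain shape

    keep = [0, len(x) - 1]                             # start from the two end points
    tol = 0.05                                         # maximum allowed vertical deviation (m)
    while True:
        zi = np.interp(x, x[sorted(keep)], z[sorted(keep)])   # current coarse-mesh approximation
        err = np.abs(zi - z)
        worst = int(err.argmax())
        if err[worst] <= tol:
            break
        keep.append(worst)                             # refine where the geometry is worst represented

    print(f"kept {len(keep)} of {len(x)} points, max deviation {err.max():.3f} m")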
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high performance computation allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To restore the physics on all smaller scales, inexpensive subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These provide, for each individual coarse mesh cell, a volume force vector and a volume porosity. Moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet, CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
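The correlation/lookup idea behind such a subgrid closure can be sketched as follows: a volumetric force is tabulated offline from well-resolved reference runs as a function of a local flow parameter and interpolated at run time in each coarse cell. All names and values below are illustrative placeholders, not the authors' actual correlations.

```python
# Hedged sketch of a tabulated subgrid volume-force closure for a coarse cell.
import numpy as np

# Hypothetical lookup table built offline from resolved reference simulations:
re_samples = np.array([1e3, 5e3, 1e4, 5e4])        # local (cell) Reynolds numbers
fx_samples = np.array([-0.2, -0.9, -1.6, -6.5])    # streamwise volume force [N/m^3], illustrative

def subgrid_volume_force(cell_reynolds):
    """Interpolate the tabulated volumetric force for a coarse mesh cell."""
    return np.interp(cell_reynolds, re_samples, fx_samples)

print(subgrid_volume_force(2e4))
```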
Scalability and Performance of Data-Parallel Pressure-Based Multigrid Methods for Viscous Flows
NASA Astrophysics Data System (ADS)
Blosch, Edwin L.; Shyy, Wei
1996-05-01
A full-approximation storage multigrid method for solving the steady-state 2-D incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 × 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak of 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 × 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature.
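The restriction, coarse-grid correction, and prolongation steps referred to above can be illustrated with a textbook two-grid cycle for a 1-D Poisson problem. This is a generic sketch of the multigrid idea, not the paper's pressure-correction solver or its FAS implementation.

```python
# Generic two-grid correction cycle for u'' = f with Dirichlet boundaries.
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Damped Jacobi smoothing sweeps."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
    return u

def coarse_solve(rc, H):
    """Direct solve of the coarse 1-D Poisson error equation e'' = r."""
    m = rc.size
    A = (np.diag(-2.0 * np.ones(m - 2)) + np.diag(np.ones(m - 3), 1)
         + np.diag(np.ones(m - 3), -1)) / (H * H)
    ec = np.zeros(m)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h, 3)                                            # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)      # fine-grid residual
    rc = r[::2].copy()                                                # restriction by injection
    ec = coarse_solve(rc, 2 * h)                                      # coarse-grid correction
    e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)      # linear prolongation
    return jacobi(u + e, f, h, 3)                                     # post-smoothing

n, h = 129, 1.0 / 128
x = np.linspace(0.0, 1.0, n)
f = -np.pi ** 2 * np.sin(np.pi * x)           # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(15):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error at the discretization level
```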
NASA Technical Reports Server (NTRS)
Reznick, Steve
1988-01-01
Transonic Euler/Navier-Stokes computations are accomplished for wing-body flow fields using a computer program called Transonic Navier-Stokes (TNS). The wing-body grids are generated using a program called ZONER, which subdivides a coarse grid about a fighter-like aircraft configuration into smaller zones, which are tailored to local grid requirements. These zones can be either finely clustered for capture of viscous effects, or coarsely clustered for inviscid portions of the flow field. Different equation sets may be solved in the different zone types. This modular approach also affords the opportunity to modify a local region of the grid without recomputing the global grid. This capability speeds up the design optimization process when quick modifications to the geometry definition are desired. The solution algorithm embodied in TNS is implicit, and is capable of capturing pressure gradients associated with shocks. The algebraic turbulence model employed has proven adequate for viscous interactions with moderate separation. Results confirm that the TNS program can successfully be used to simulate transonic viscous flows about complicated 3-D geometries.
Historical U.S. cropland areas and the potential for bioenergy production on abandoned croplands.
Zumkehr, A; Campbell, J E
2013-04-16
Agriculture is historically a dominant form of global environmental degradation, and the potential for increased future degradation may be driven by growing demand for food and biofuels. While these impacts have been explored using global gridded maps of croplands, such maps are based on relatively coarse spatial data. Here, we apply high-resolution cropland inventories for the conterminous U.S. with a land-use model to develop historical gridded cropland areas for the years 1850-2000 and year 2000 abandoned cropland maps. While the historical cropland maps are consistent with generally accepted land-use trends, our U.S. abandoned cropland estimates of 68 Mha are as much as 70% larger than previous gridded estimates due to a reduction in aggregation effects. Renewed cultivation on the subset of abandoned croplands that have not become forests or urban lands represents one approach to mitigating the future expansion of agriculture. Potential bioenergy production from these abandoned lands using a wide range of biomass yields and conversion efficiencies has an upper-limit of 5-30% of the current U.S. primary energy demand or 4-30% of the current U.S. liquid fuel demand.
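The upper-limit estimate quoted above amounts to a simple product of area, biomass yield, energy content, and conversion efficiency, divided by national demand. A back-of-envelope sketch follows; all numerical inputs other than the 68 Mha area are illustrative placeholders, not the paper's assumptions.

```python
# Back-of-envelope bioenergy share from abandoned cropland (illustrative inputs).

def bioenergy_share(area_ha, yield_t_per_ha, energy_MJ_per_t, efficiency, demand_EJ):
    """Fraction of an energy demand met by biomass grown on a given area."""
    energy_EJ = area_ha * yield_t_per_ha * energy_MJ_per_t * efficiency / 1e12
    return energy_EJ / demand_EJ

# 68 Mha of abandoned cropland, hypothetical 10 t/ha/yr dry biomass, 18 GJ/t,
# 50% conversion efficiency, ~100 EJ/yr U.S. primary energy demand:
print(f"{bioenergy_share(68e6, 10, 18_000, 0.5, 100):.0%}")
```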
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, Aaron R.
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
NASA Astrophysics Data System (ADS)
Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René
2017-11-01
Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, their spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data are freely available on a 1-degree grid (about 100 km), while those of the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation and/or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10 km) within a forecast framework (6 h). For 30 6-h rainfall events occurring within a 40,000-km2 area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated at 100 km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.
NASA Astrophysics Data System (ADS)
Yu, Karen; Keller, Christoph A.; Jacob, Daniel J.; Molod, Andrea M.; Eastham, Sebastian D.; Long, Michael S.
2018-01-01
Global simulations of atmospheric chemistry are commonly conducted with off-line chemical transport models (CTMs) driven by archived meteorological data from general circulation models (GCMs). The off-line approach has the advantages of simplicity and expediency, but it incurs errors due to temporal averaging in the meteorological archive and the inability to reproduce the GCM transport algorithms exactly. The CTM simulation is also often conducted at coarser grid resolution than the parent GCM. Here we investigate this cascade of CTM errors by using 222Rn-210Pb-7Be chemical tracer simulations off-line in the GEOS-Chem CTM at rectilinear 0.25° × 0.3125° (≈ 25 km) and 2° × 2.5° (≈ 200 km) resolutions and online in the parent GEOS-5 GCM at cubed-sphere c360 (≈ 25 km) and c48 (≈ 200 km) horizontal resolutions. The c360 GEOS-5 GCM meteorological archive, updated every 3 h and remapped to 0.25° × 0.3125°, is the standard operational product generated by the NASA Global Modeling and Assimilation Office (GMAO) and used as input by GEOS-Chem. We find that the GEOS-Chem 222Rn simulation at native 0.25° × 0.3125° resolution is affected by vertical transport errors of up to 20 % relative to the GEOS-5 c360 online simulation, in part due to loss of transient organized vertical motions in the GCM (resolved convection) that are temporally averaged out in the 3 h meteorological archive. There is also significant error caused by operational remapping of the meteorological archive from a cubed-sphere to a rectilinear grid. Decreasing the GEOS-Chem resolution from 0.25° × 0.3125° to 2° × 2.5° induces further weakening of vertical transport as transient vertical motions are averaged out spatially and temporally. The resulting 222Rn concentrations simulated by the coarse-resolution GEOS-Chem are overestimated by up to 40 % in surface air relative to the online c360 simulations and underestimated by up to 40 % in the upper troposphere, while the tropospheric lifetimes of 210Pb and 7Be against aerosol deposition are affected by 5-10 %. The lost vertical transport in the coarse-resolution GEOS-Chem simulation can be partly restored by recomputing the convective mass fluxes at the appropriate resolution to replace the archived convective mass fluxes and by correcting for bias in the spatial averaging of boundary layer mixing depths.
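The coarse-graining that underlies the resolution comparison above is, at its simplest, a box-car (block) average of a fine rectilinear field onto a coarser grid; it is exactly this averaging, in space and in time, that smooths out the transient vertical motions discussed in the abstract. The sketch below shows only the spatial block average and is illustrative, not the GEOS-Chem regridding code.

```python
# Box-car averaging of a fine rectilinear field to a coarser grid.
import numpy as np

def block_average(field, factor):
    """Average a 2-D field over non-overlapping factor x factor blocks."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.random.rand(80, 96)       # stand-in for a 0.25 x 0.3125 degree regional field
coarse = block_average(fine, 8)     # factor 8 in each direction -> 2 x 2.5 degree equivalent
print(coarse.shape)
```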
NASA Astrophysics Data System (ADS)
Khouider, B.; Majda, A.; Deng, Q.; Ravindran, A. M.
2015-12-01
Global climate models (GCMs) are large computer codes based on the discretization of the equations of atmospheric and oceanic motions coupled to various processes of transfer of heat, moisture and other constituents between land, atmosphere, and oceans. Because of computing power limitations, typical GCM grid resolution is on the order of 100 km, and the effects on the climate system of many physical processes occurring on smaller scales are represented through various closure recipes known as parameterizations. The parameterization of convective motions and of many processes associated with cumulus clouds, such as the exchange of latent heat and cloud radiative forcing, is believed to be behind much of the uncertainty in GCMs. Based on a lattice particle interacting system, the stochastic multicloud model (SMCM) provides a novel and efficient representation of the unresolved variability in GCMs due to organized tropical convection and cloud cover. It is widely recognized that stratiform heating contributes significantly to tropical rainfall and to the dynamics of tropical convective systems by inducing a front-to-rear tilt in the heating profile. Stratiform anvils forming in the wake of deep convection play a central role in the dynamics of tropical mesoscale convective systems. Here, aquaplanet simulations with a warm-pool-like surface forcing, based on a coarse-resolution GCM of ~170 km grid mesh coupled with the SMCM, are used to demonstrate the importance of stratiform heating for the organization of convection on planetary and intraseasonal scales. When some key model parameters are set to produce higher stratiform heating fractions, the model produces low-frequency and planetary-scale Madden-Julian oscillation (MJO)-like wave disturbances, while lower to moderate stratiform heating fractions yield mainly synoptic-scale convectively coupled Kelvin-like waves. Rooted in the stratiform instability, it is conjectured here that the strength and extent of stratiform downdrafts are key contributors to the scale selection of convective organization, perhaps with mechanisms that are in essence similar to those of mesoscale convective systems.
Domain-averaged snow depth over complex terrain from flat field measurements
NASA Astrophysics Data System (ADS)
Helbig, Nora; van Herwijnen, Alec
2017-04-01
Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at single stations provides an opportunity for snow depth data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativeness of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated domain-averaged snow depth derived with both parameterizations from nearby flat field measurements against the domain-averaged highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that the parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
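A hedged sketch of the structural form of the second parameterization is given below: a power-law elevation term scales a flat-field measurement, further modulated by the subgrid sky view factor. The coefficients a, b, c and the exact functional combination are hypothetical placeholders, not the published fit.

```python
# Illustrative structure of a domain-averaged snow depth parameterization.

def domain_mean_snow_depth(hs_flat, z_domain, z_station, svf_subgrid,
                           a=1.0, b=0.8, c=0.5):
    """Estimate domain-averaged snow depth from a flat-field measurement.

    hs_flat     : measured snow depth at the flat field station (m)
    z_domain    : mean elevation of the coarse grid cell (m)
    z_station   : elevation of the flat field station (m)
    svf_subgrid : subgrid-averaged sky view factor (0-1)
    """
    elevation_term = a * (z_domain / z_station) ** b        # power-law elevation trend
    svf_term = 1.0 - c * (1.0 - svf_subgrid)                # reduce depth in sheltered/steep terrain
    return hs_flat * elevation_term * svf_term

print(domain_mean_snow_depth(hs_flat=1.2, z_domain=2300.0,
                             z_station=1800.0, svf_subgrid=0.9))
```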
Simulations and Evaluation of Mesoscale Convective Systems in a Multi-scale Modeling Framework (MMF)
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.
2017-12-01
It is well known that mesoscale convective systems (MCSs) produce more than 50% of rainfall in most tropical regions and play important roles in regional and global water cycles. Simulation of MCSs in global and climate models is a very challenging problem. Typical MCSs have a horizontal scale of a few hundred kilometers. Models with a domain of several hundred kilometers and fine enough resolution to properly simulate individual clouds are required to realistically simulate MCSs. The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has shown some capability of simulating organized MCS-like storm signals and propagation. However, its embedded CRMs typically have a small domain (less than 128 km) and coarse resolution (~4 km) that cannot realistically simulate MCSs and individual clouds. In this study, a series of simulations were performed using the Goddard MMF. The impacts of the domain size and model grid resolution of the embedded CRMs on simulating MCSs are examined. The changes in cloud structure, occurrence, and properties such as cloud types, updraft and downdraft, latent heating profile, and cold pool strength in the embedded CRMs are examined in detail. The simulated MCS characteristics are evaluated against satellite measurements using the Goddard Satellite Data Simulator Unit. The results indicate that embedded CRMs with a large domain and fine resolution tend to produce better simulations compared to those with the typical MMF configuration (128 km domain size and 4 km model grid spacing).
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is addressed. Leading eigenvalues of large matrices that arise from discretization are calculated, and an efficient multigrid method for solving these problems is presented. The resulting grid functions are used as initial approximations for appropriate eigenvalue problems. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a nonstandard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved on the coarse grid. This leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem are presented which demonstrate the effectiveness of the method.
Uncertainties in estimates of mortality attributable to ambient PM2.5 in Europe
NASA Astrophysics Data System (ADS)
Kushta, Jonilda; Pozzer, Andrea; Lelieveld, Jos
2018-06-01
The assessment of health impacts associated with airborne particulate matter smaller than 2.5 μm in diameter (PM2.5) relies on aerosol concentrations derived either from monitoring networks, satellite observations, numerical models, or a combination thereof. When global chemistry-transport models are used for estimating PM2.5, their relatively coarse resolution has been implied to lead to underestimation of health impacts in densely populated and industrialized areas. In this study the role of spatial resolution and of vertical layering of a regional air quality model, used to compute PM2.5 impacts on public health and mortality, is investigated. We utilize grid spacings of 100 km and 20 km to calculate annual mean PM2.5 concentrations over Europe, which are in turn applied to the estimation of premature mortality by cardiovascular and respiratory diseases. Using model results at a 100 km grid resolution yields about 535 000 annual premature deaths over the extended European domain (242 000 within the EU-28), while numbers approximately 2.4% higher are derived by using the 20 km resolution. Using the surface (i.e. lowest) layer of the model for PM2.5 yields about 0.6% higher mortality rates compared with PM2.5 averaged over the first 200 m above ground. Further, the calculation of relative risks (RR) from PM2.5, using 0.1 μg m‑3 size resolution bins compared to the commonly used 1 μg m‑3, is associated with ±0.8% uncertainty in estimated deaths. We conclude that model uncertainties contribute a small part of the overall uncertainty expressed by the 95% confidence intervals, which are of the order of ±30%, mostly related to the RR calculations based on epidemiological data.
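The mortality calculation described above follows a common pattern: a concentration-response function turns gridded PM2.5 into a relative risk, and the attributable fraction (1 - 1/RR) scales baseline deaths per grid cell. The sketch below uses a generic log-linear RR and an illustrative coefficient; it is not the exposure-response model or the binning scheme used in the study.

```python
# Generic attributable-mortality calculation from gridded PM2.5 (illustrative).
import numpy as np

def attributable_deaths(pm25, population, baseline_rate, beta=0.006, bin_width=0.1):
    """Attributable premature deaths per grid cell."""
    pm_binned = np.round(pm25 / bin_width) * bin_width     # concentration bins (e.g. 0.1 ug/m3)
    rr = np.exp(beta * pm_binned)                          # assumed log-linear relative risk
    af = 1.0 - 1.0 / rr                                    # attributable fraction
    return population * baseline_rate * af

pm25 = np.array([8.0, 15.0, 25.0])     # annual mean PM2.5 per cell (ug/m3)
pop = np.array([2e5, 5e5, 1e6])        # persons per cell
print(attributable_deaths(pm25, pop, baseline_rate=0.009).round(0))
```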
SoilInfo App: global soil information on your palm
NASA Astrophysics Data System (ADS)
Hengl, Tomislav; Mendes de Jesus, Jorge
2015-04-01
ISRIC - World Soil Information released in 2014 an app for mobile devices called 'SoilInfo' (http://soilinfo-app.org), which aims at providing free access to global soil data. The SoilInfo App (available for Android v.4.0 Ice Cream Sandwich or higher, and Apple iOS v.6.x and v.7.x) currently serves the SoilGrids1km data: a stack of soil property and class maps at six standard depths at a resolution of 1 km (30 arc seconds), predicted using automated geostatistical mapping and global soil data models. The list of served soil data includes: soil organic carbon (g/kg), soil pH, sand, silt and clay fractions (%), bulk density (kg/m3), cation exchange capacity of the fine earth fraction (cmol+/kg), coarse fragments (%), World Reference Base soil groups, and USDA Soil Taxonomy suborders (DOI: 10.1371/journal.pone.0105992). New soil properties and classes will be continuously added to the system. SoilGrids1km are available for download under a Creative Commons non-commercial license via http://soilgrids.org. They are also accessible via a Representational State Transfer (REST) API (http://rest.soilgrids.org) service. The SoilInfo App mimics common weather apps, but is also largely inspired by crowdsourcing systems such as OpenStreetMap, Geo-wiki and similar. Two development aspects of the SoilInfo App and SoilGrids are constantly being worked on: data quality, in terms of accuracy of spatial predictions and derived information, and data usability, in terms of ease of access and ease of use (i.e. flexibility of the cyberinfrastructure and functionalities such as the REST SoilGrids API, the SoilInfo App, etc.). The development focus in 2015 is on improving the thematic and spatial accuracy of SoilGrids predictions, primarily by using finer resolution covariates (250 m) and machine learning algorithms (such as random forests) to improve spatial predictions.
NASA Astrophysics Data System (ADS)
Kim, Y.; Du, J.; Kimball, J. S.
2017-12-01
The landscape freeze-thaw (FT) status derived from satellite microwave remote sensing is closely linked to vegetation phenology and productivity, surface energy exchange, evapotranspiration, snow/ice melt dynamics, and trace gas fluxes over land areas affected by seasonally frozen temperatures. A long-term global satellite microwave Earth System Data Record of daily landscape freeze-thaw status (FT-ESDR) was developed using similarly calibrated 37 GHz, vertically polarized (V-pol) brightness temperatures (Tb) from the SMMR, SSM/I, and SSMIS sensors. The FT-ESDR shows mean annual spatial classification accuracies of 90.3 and 84.3% for PM and AM overpass retrievals relative to surface air temperature (SAT) measurement-based FT estimates from global weather stations. However, the coarse FT-ESDR gridding (25 km) is insufficient to distinguish finer scale FT heterogeneity. In this study, we tested alternative finer scale FT estimates derived from two enhanced polar-grid (3.125-km and 6-km resolution), 36.5 GHz V-pol Tb records derived from calibrated AMSR-E and AMSR2 sensor observations. The daily FT estimates are derived using a modified seasonal threshold algorithm that classifies daily Tb variations in relation to grid-cell-wise FT thresholds calibrated using ERA-Interim reanalysis based SAT, downscaled using a digital terrain map and estimated temperature lapse rates. The resulting polar-grid FT records for a selected study year (2004) show mean annual spatial classification accuracies of 90.1% (84.2%) and 93.1% (85.8%) for the respective PM (AM) 3.125-km and 6-km Tb retrievals relative to in situ SAT measurement-based FT estimates from regional weather stations. Areas with enhanced FT accuracy include water-land boundaries and mountainous terrain. Differences in FT patterns and relative accuracy obtained from the enhanced grid Tb records were attributed to several factors, including different noise contributions from the underlying Tb processing and spatial mismatches between Tb retrievals and SAT-calibrated FT thresholds.
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, others, e.g. obstruction and flow deviation caused by spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and on the parametrization of the volumetric forces.
A machine learning approach for efficient uncertainty quantification using multiscale methods
NASA Astrophysics Data System (ADS)
Chan, Shing; Elsheikh, Ahmed H.
2018-02-01
Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
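The data-driven idea can be sketched as a regression from local permeability patches to the corresponding coarse-scale basis functions, so that new realizations skip the local solves. The network architecture, patch sizes, and placeholder training targets below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: learn coarse-scale basis functions from permeability patches.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_samples, patch, basis = 500, 15 * 15, 15 * 15            # flattened local patches
rng = np.random.default_rng(0)
K_patches = rng.lognormal(size=(n_samples, patch))          # permeability realizations (placeholder)
Phi = rng.random((n_samples, basis))                        # basis functions from local solves (placeholder)

model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=300)
model.fit(K_patches[:400], Phi[:400])                       # train on samples where local problems were solved
Phi_pred = model.predict(K_patches[400:])                   # cheap prediction for new realizations
print(Phi_pred.shape)
```

For uncertainty quantification, the trained predictor would be evaluated on each new permeability realization in place of the local problems, which is where the computational saving cited above comes from.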
NASA Astrophysics Data System (ADS)
Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.
2017-12-01
The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable-resolution grids. Here, a series of idealized moist simulations is presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding the resolution sensitivity of global atmospheric models.
Interior Fluid Dynamics of Liquid-Filled Projectiles
1989-12-01
the Sandia code. The previous codes are primarily based on finite-difference approximations with relatively coarse grids and were designed without...exploits Chorin's method of artificial compressibility. The steady solution at 11 X 24 X 21 grid points in the r, θ, z-directions is obtained by integrating...differences in the radial and axial directions and pseudospectral differencing in the azimuthal direction. Nonuniform grids are introduced for increased
Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil-moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better or comparable to the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and any time series generated from the downscaled fields do not suffer from discontinuities due to switching between the consecutive forecasts.
Using High Resolution Model Data to Improve Lightning Forecasts across Southern California
NASA Astrophysics Data System (ADS)
Capps, S. B.; Rolinski, T.
2014-12-01
Dry lightning often results in a significant number of fire starts in areas where the vegetation is dry and continuous. Meteorologists from the USDA Forest Service Predictive Services program in Riverside, California are tasked with providing southern and central California's fire agencies with fire potential outlooks. Logistic regression equations were developed by these meteorologists several years ago, which forecast probabilities of lightning as well as lightning amounts, out to seven days across southern California. These regression equations were developed using ten years of historical gridded data from the Global Forecast System (GFS) model on a coarse scale (0.5 degree resolution), correlated with historical lightning strike data. These equations do a reasonably good job of capturing a lightning episode (3-5 consecutive days or greater of lightning), but perform poorly regarding more detailed information such as exact location and amounts. It is postulated that the inadequacies in resolving the finer details of episodic lightning events are due to the coarse resolution of the GFS data, along with limited predictors. Stability parameters, such as the Lifted Index (LI), the Total Totals index (TT), and Convective Available Potential Energy (CAPE), along with Precipitable Water (PW), are the only parameters being considered as predictors. It is hypothesized that the statistical forecasts will benefit from higher resolution data both in training and in implementing the statistical model. We have dynamically downscaled NCEP FNL (Final) reanalysis data using the Weather Research and Forecasting (WRF) model to 3 km spatial and hourly temporal resolution across a decade. This dataset will be used to evaluate the contribution of additional predictors, at higher vertical, spatial and temporal resolution, to the success of the statistical model. If successful, we will implement an operational dynamically downscaled GFS forecast product to generate predictors for the resulting statistical lightning model. This data will help fire agencies be better prepared to pre-deploy resources in advance of these events. Specific information regarding duration, amount, and location will be especially valuable.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Sleeter, Benjamin M.; Sohl, Terry L.; Bouchard, Michelle A.; Reker, Ryan R.; Soulard, Christopher E.; Acevedo, William; Griffith, Glenn E.; Sleeter, Rachel R.; Auch, Roger F.; Sayler, Kristi L.; Prisley, Stephen; Zhu, Zhi-Liang
2012-01-01
Global environmental change scenarios have typically provided projections of land use and land cover for a relatively small number of regions or using a relatively coarse resolution spatial grid, and for only a few major sectors. The coarseness of global projections, in both spatial and thematic dimensions, often limits their direct utility at scales useful for environmental management. This paper describes methods to downscale projections of land-use and land-cover change from the Intergovernmental Panel on Climate Change's Special Report on Emission Scenarios to ecological regions of the conterminous United States, using an integrated assessment model, land-use histories, and expert knowledge. Downscaled projections span a wide range of future potential conditions across sixteen land use/land cover sectors and 84 ecological regions, and are logically consistent with both historical measurements and SRES characteristics. Results appear to provide a credible solution for connecting regionalized projections of land use and land cover with existing downscaled climate scenarios, under a common set of scenario-based socioeconomic assumptions.
Coarse climate change projections for species living in a fine-scaled world.
Nadeau, Christopher P; Urban, Mark C; Bridle, Jon R
2017-01-01
Accurately predicting biological impacts of climate change is necessary to guide policy. However, the resolution of climate data could be affecting the accuracy of climate change impact assessments. Here, we review the spatial and temporal resolution of climate data used in impact assessments and demonstrate that these resolutions are often too coarse relative to biologically relevant scales. We then develop a framework that partitions climate into three important components: trend, variance, and autocorrelation. We apply this framework to map different global climate regimes and identify where coarse climate data is most and least likely to reduce the accuracy of impact assessments. We show that impact assessments for many large mammals and birds use climate data with a spatial resolution similar to the biologically relevant area encompassing population dynamics. Conversely, impact assessments for many small mammals, herpetofauna, and plants use climate data with a spatial resolution that is orders of magnitude larger than the area encompassing population dynamics. Most impact assessments also use climate data with a coarse temporal resolution. We suggest that climate data with a coarse spatial resolution is likely to reduce the accuracy of impact assessments the most in climates with high spatial trend and variance (e.g., much of western North and South America) and the least in climates with low spatial trend and variance (e.g., the Great Plains of the USA). Climate data with a coarse temporal resolution is likely to reduce the accuracy of impact assessments the most in the northern half of the northern hemisphere where temporal climatic variance is high. Our framework provides one way to identify where improving the resolution of climate data will have the largest impact on the accuracy of biological predictions under climate change. © 2016 John Wiley & Sons Ltd.
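The three components used in the framework above (trend, variance, and autocorrelation) can each be estimated from a climate time series with standard statistics. The sketch below shows one simple way to do this for a single series; it is illustrative and not the paper's analysis code.

```python
# Partition a climate time series into linear trend, residual variance,
# and lag-1 autocorrelation (illustrative).
import numpy as np

def climate_components(t, temperature):
    slope, intercept = np.polyfit(t, temperature, 1)        # trend (e.g. deg per year)
    resid = temperature - (slope * t + intercept)
    variance = resid.var()
    autocorr = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation of residuals
    return slope, variance, autocorr

years = np.arange(1980, 2020)
temps = 0.03 * (years - 1980) + np.random.default_rng(2).normal(0, 0.4, years.size)
print(climate_components(years, temps))
```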
Gulf of Mexico region - Highlighting low-lying areas derived from USGS Digital Elevation Data
Kosovich, John J.
2008-01-01
In support of U.S. Geological Survey (USGS) disaster preparedness efforts, this map depicts a color shaded relief representation of the area surrounding the Gulf of Mexico. The first 30 feet of relief above mean sea level are displayed as brightly colored 5-foot elevation bands, which highlight low-elevation areas at a coarse spatial resolution. Standard USGS National Elevation Dataset (NED) 1 arc-second (nominally 30-meter) digital elevation model (DEM) data are the basis for the map, which is designed to be used at a broad scale and for informational purposes only. The NED data were derived from the original 1:24,000-scale USGS topographic map bare-earth contours, which were converted into gridded quadrangle-based DEM tiles at a constant post spacing (grid cell size) of either 30 meters (data before the mid-1990s) or 10 meters (mid-1990s and later data). These individual-quadrangle DEMs were then converted to spherical coordinates (latitude/longitude decimal degrees) and edge-matched to ensure seamlessness. Approximately one-half of the area shown on this map has DEM source data at a 30-meter resolution, with the remaining half consisting of 10-meter contour-derived DEM data or higher-resolution LIDAR data. Areas below sea level typically are surrounded by levees or some other type of flood-control structures. State and county boundary, hydrography, city, and road layers were modified from USGS National Atlas data downloaded in 2003. The NED data were downloaded in 2005.
NASA Astrophysics Data System (ADS)
Ko, A.; Mascaro, G.; Vivoni, E. R.
2017-12-01
Hyper-resolution (< 1 km) hydrological modeling is expected to support a range of studies related to the terrestrial water cycle. A critical need for increasing the utility of hyper-resolution modeling is the availability of meteorological forcings and land surface characteristics at high spatial resolution. Unfortunately, in many areas these datasets are only available at coarse (> 10 km) scales. In this study, we address some of the challenges by applying a parallel version of the Triangulated Irregular Network (TIN)-based Real Time Integrated Basin Simulator (tRIBS) to the Rio Sonora Basin (RSB) in northwest Mexico. The RSB is a large, semiarid watershed (~21,000 km2) characterized by complex topography and a strong seasonality in vegetation conditions, due to the North American monsoon. We conducted simulations at an average spatial resolution of 88 m over a decadal (2004-2013) period using spatially-distributed forcings from remotely-sensed and reanalysis products. Meteorological forcings were derived from the North American Land Data Assimilation System (NLDAS) at the original resolution of 12 km and were downscaled at 1 km with techniques accounting for terrain effects. Two grids of soil properties were created from different sources, including: (i) CONABIO (Comisión Nacional para el Conocimiento y Uso de la Biodiversidad) at 6 km resolution; and (ii) ISRIC (International Soil Reference Information Centre) at 250 m. Time-varying vegetation parameters were derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) composite products. The model was first calibrated and validated through distributed soil moisture data from a network of 20 soil moisture stations during the monsoon season. Next, hydrologic simulations were conducted with five different combinations of coarse and downscaled forcings and soil properties. Outputs in the different configurations were then compared with independent observations of soil moisture, and with estimates of land surface temperature (1 km, daily) and evapotranspiration (1 km, monthly) from MODIS. This study is expected to support the community involved in hyper-resolution hydrologic modeling by identifying the crucial factors that, if available at higher resolution, lead to the largest improvement of the simulation prognostic capability.
NASA Astrophysics Data System (ADS)
Ramsdale, Jason; Balme, Matthew; Conway, Susan
2015-04-01
An International Space Science Institute (ISSI) team project has been convened to study the northern plains of Mars. The northern plains are younger and at lower elevation than the majority of the martian surface and are thought to be the remnants of an ancient ocean. Understanding the surface geology and geomorphology of the northern plains is complex, because the surface has been subtly modified many times, making traditional unit boundaries hard to define. Our ISSI team project aims to answer the following questions: 1) "What is the distribution of ice-related landforms in the northern plains, and can it be related to distinct latitude bands or different geological or geomorphological units?" 2) "What is the relationship between the latitude dependent mantle (LDM; a draping unit believed to comprise ice and dust, thought to be deposited during periods of high axial obliquity) and (i) landforms indicative of ground ice, and (ii) other geological units in the northern plains?" 3) "What are the distributions and associations of recent landforms indicative of thaw of ice or snow?" With increasing coverage of high-resolution images of the surface of Mars, we are able to identify increasing numbers and varieties of small-scale landforms. Many such landforms are too small to represent on regional maps, yet determining their presence or absence across large areas can form the observational basis for developing hypotheses on the nature and history of an area. The combination of improved spatial resolution with near-continuous coverage increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre-scale landforms. Here, we describe an approach to mapping small features across large areas. Rather than traditional mapping with points, lines and polygons, we used a grid "tick box" approach to locate specific landforms. The mapping strips were divided into a 15×150 grid of squares, each approximately 20×20 km, for each study area. Orbital images at 6-15 m/pix were then viewed systematically for each grid square and the presence or absence of each of the basic suite of landforms recorded. The landforms were recorded as being either "present", "dominant", "possible", or "absent" in each grid square. The result is a series of coarse-resolution "rasters" showing the distribution of the different types of landforms across the strip. We have found this approach to be efficient, scalable and appropriate for teams of people mapping remotely. It is easily scalable because carrying the "absent" values forward from the larger grids to finer grids means that only areas with positive values for that landform need to be examined to increase the resolution for the whole strip. As each sub-grid square only requires ascertaining the presence or absence of a landform, the method removes an individual's decision as to where to draw boundaries, making it efficient and repeatable.
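The grid "tick box" approach described above reduces to filling a small categorical raster per landform. A minimal sketch follows; the landform names and status codes are illustrative placeholders, not the project's classification scheme.

```python
# Coarse presence/absence rasters for a 15 x 150 grid of ~20 x 20 km squares.
import numpy as np

STATUS = {"absent": 0, "possible": 1, "present": 2, "dominant": 3}
landforms = ["polygons", "scalloped_terrain", "gullies"]      # illustrative landform list

# one coarse raster per landform, initialised to "absent"
rasters = {lf: np.zeros((15, 150), dtype=np.int8) for lf in landforms}

def record(landform, row, col, status):
    """Tick a grid square for a given landform during systematic image review."""
    rasters[landform][row, col] = STATUS[status]

record("polygons", 3, 42, "present")
record("gullies", 3, 42, "possible")
print(np.count_nonzero(rasters["polygons"]), "squares flagged for polygons")
```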
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is treated. Leading eigenvalues (i.e., those having maximal real part) of large matrices that arise from discretization are to be calculated. An efficient multigrid method for solving these problems is presented. The method begins by obtaining an initial approximation for the dominant subspace on a coarse level using a damped Jacobi relaxation. This proceeds until enough accuracy for the dominant subspace has been obtained. The resulting grid functions are then used as an initial approximation for appropriate eigenvalue problems. These problems are solved first on coarse levels, followed by refinement until a desired accuracy for the eigenvalues has been achieved. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a non-standard way in which the right hand side of the coarse grid equations involves unknown parameters to be solved for on the coarse grid. This in particular leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem demonstrate the effectiveness of the proposed method. Using an FMG algorithm, a solution to the level of discretization errors is obtained in just a few work units (less than 10), where a work unit is the work involved in one Jacobi relaxation on the finest level.
Xia, Kelin
2017-12-20
In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence of both the mass distributions and the distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from the GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well, with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from the ANM even with very low resolutions and a coarse grid. Finally, the great advantage of the MVP-ANM model for large-sized biomolecules has been demonstrated using two poliovirus structures. The paper ends with a conclusion.
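A hedged sketch of a GNM-style connectivity (Kirchhoff) matrix for coarse-grained virtual particles is given below, with spring weights depending on both particle masses and pair distance, in the spirit of the harmonic potential described above. The kernel form, cutoff, and parameters are illustrative assumptions, not the MVP-ENM potential itself.

```python
# Mass- and distance-weighted Kirchhoff matrix for coarse-grained particles.
import numpy as np

def kirchhoff(coords, masses, cutoff=12.0, eta=3.0):
    n = len(coords)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d < cutoff:
                w = masses[i] * masses[j] * np.exp(-(d / eta) ** 2)  # assumed spring weight
                K[i, j] = K[j, i] = -w
    K[np.diag_indices(n)] = -K.sum(axis=1)      # graph-Laplacian diagonal
    return K

coords = np.random.rand(50, 3) * 30.0           # virtual particle centres (arbitrary units)
masses = np.random.rand(50) + 0.5
evals = np.linalg.eigvalsh(kirchhoff(coords, masses))
print(evals[:5])                                # low-frequency spectrum (first mode near zero if connected)
```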
NASA Astrophysics Data System (ADS)
Roth, Aurora; Hock, Regine; Schuler, Thomas V.; Bieniek, Peter A.; Pelto, Mauri; Aschwanden, Andy
2018-03-01
Assessing and modeling precipitation in mountainous areas remains a major challenge in glacier mass balance modeling. Observations are typically scarce and reanalysis data and similar climate products are too coarse to accurately capture orographic effects. Here we use the linear theory of orographic precipitation model (LT model) to downscale winter precipitation from a regional climate model over the Juneau Icefield, one of the largest ice masses in North America (>4000 km2), for the period 1979-2013. The LT model is physically-based yet computationally efficient, combining airflow dynamics and simple cloud microphysics. The resulting 1 km resolution precipitation fields show substantially reduced precipitation on the northeastern portion of the icefield compared to the southwestern side, a pattern that is not well captured in the coarse resolution (20 km) WRF data. Net snow accumulation derived from the LT model precipitation agrees well with point observations across the icefield. To investigate the robustness of the LT model results, we perform a series of sensitivity experiments varying hydrometeor fall speeds, the horizontal resolution of the underlying grid, and the source of the meteorological forcing data. The resulting normalized spatial precipitation pattern is similar for all sensitivity experiments, but local precipitation amounts vary strongly, with greatest sensitivity to variations in snow fall speed. Results indicate that the LT model has great potential to provide improved spatial patterns of winter precipitation for glacier mass balance modeling purposes in complex terrain, but ground observations are necessary to constrain model parameters to match total amounts.
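At the core of linear orographic precipitation models is a terrain-forced uplift term; the strongly simplified sketch below shows only that upslope source, w = U · grad(h). The full LT model adds airflow dynamics, cloud time delays, and downstream advection solved in Fourier space, none of which is reproduced here, and the coefficient and topography are placeholders.

```python
# Simplified upslope condensation source from terrain-forced uplift (illustrative).
import numpy as np

def upslope_source(terrain, dx, u, v, Cw=0.004):
    """Condensation source term (kg m^-2 s^-1) from terrain-forced vertical velocity."""
    dhdy, dhdx = np.gradient(terrain, dx)        # terrain slopes (axis 0 = y, axis 1 = x)
    w = u * dhdx + v * dhdy                      # vertical velocity forced by the terrain
    return np.clip(Cw * w, 0.0, None)            # descending (negative w) regions produce no precip

terrain = np.random.rand(100, 100) * 500.0       # placeholder topography (m) on a 1 km grid
print(upslope_source(terrain, 1000.0, u=10.0, v=2.0).mean())
```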
A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.
Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J
2009-11-28
In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
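The recursive subcycled time stepping described above can be sketched schematically: a coarse level takes one step, each finer level takes enough smaller steps to catch up, and the levels are synchronized afterwards. The integrator and synchronization below are stand-in stubs, not the paper's algorithm.

```python
# Schematic recursive subcycling over an AMR level hierarchy.
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    refinement_ratio: int = 2     # time refinement relative to the next coarser level
    time: float = 0.0

def advance_level(lvl, dt):
    lvl.time += dt                                   # stand-in for the single-level integrator
    print(f"advance {lvl.name} by {dt:g} -> t = {lvl.time:g}")

def synchronize(coarse, fine):
    print(f"sync {fine.name} -> {coarse.name}")      # stand-in for flux correction / averaging down

def advance_hierarchy(levels, level, dt):
    advance_level(levels[level], dt)                 # one step on this level
    if level + 1 < len(levels):
        r = levels[level + 1].refinement_ratio
        for _ in range(r):
            advance_hierarchy(levels, level + 1, dt / r)   # fine level subcycles
        synchronize(levels[level], levels[level + 1])

advance_hierarchy([Level("L0"), Level("L1"), Level("L2", 4)], 0, dt=1.0)
```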
Sub-grid drag model for immersed vertical cylinders in fluidized beds
Verma, Vikrant; Li, Tingwen; Dietiker, Jean-Francois; ...
2017-01-03
Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large scale systems with bundles of cylinders. Therefore, sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag. A correction to gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. Sarkar et al. previously developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations. However, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model which can then be implemented in coarse-grid simulations. Gas-solid drag was filtered for different solids fractions, and a significant reduction in drag was identified when compared with simulations without cylinders and simulations with horizontal cylinders. Slip velocities significantly increase when vertical cylinders are present. Lastly, vertical suspension drag due to the vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the finding for horizontal cylinders.
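The filtering step that turns the resolved data into a sub-grid drag correction amounts to binning coarse-grained (box-filtered) samples by the filtered solids fraction and tabulating the mean drag reduction per bin. A minimal sketch is shown below; the array names and the pre-computed box filtering are hypothetical, and a real closure would also depend on filter size and slip velocity.

```python
import numpy as np

def filtered_drag_correction(eps_s, beta_filtered, beta_microscopic, nbins=20):
    """Tabulate the drag correction factor H = <beta_filtered>/<beta_microscopic>
    as a function of filtered solids fraction.

    eps_s, beta_filtered, beta_microscopic: 1-D arrays of samples that were
    already box-filtered from the fine grid to the coarse-grid scale.
    """
    edges = np.linspace(0.0, eps_s.max(), nbins + 1)
    idx = np.digitize(eps_s, edges) - 1
    H = np.full(nbins, np.nan)
    for b in range(nbins):
        sel = idx == b
        if sel.any():
            H[b] = beta_filtered[sel].mean() / beta_microscopic[sel].mean()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, H      # correction factor versus solids fraction
```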
Isotropic stochastic rotation dynamics
NASA Astrophysics Data System (ADS)
Mühlbauer, Sebastian; Strobl, Severin; Pöschel, Thorsten
2017-12-01
Stochastic rotation dynamics (SRD) is a widely used method for the mesoscopic modeling of complex fluids, such as colloidal suspensions or multiphase flows. In this method, however, the underlying Cartesian grid defining the coarse-grained interaction volumes induces anisotropy. We propose an isotropic, lattice-free variant of stochastic rotation dynamics, termed iSRD. Instead of Cartesian grid cells, we employ randomly distributed spherical interaction volumes. This eliminates the requirement of a grid shift, which is essential in standard SRD to maintain Galilean invariance. We derive analytical expressions for the viscosity and the diffusion coefficient in relation to the model parameters, which show excellent agreement with the results obtained in iSRD simulations. The proposed algorithm is particularly suitable to model systems bound by walls of complex shape, where the domain cannot be meshed uniformly. The presented approach is not limited to SRD but is applicable to any other mesoscopic method, where particles interact within certain coarse-grained volumes.
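The core operation replaces grid cells with randomly placed spheres inside which velocity fluctuations are rotated about a random axis, conserving momentum per sphere. The sketch below is a minimal lattice-free collision step in that spirit; the sphere density, rotation angle, and sequential handling of overlapping spheres are illustrative choices, not the published algorithm's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

def isrd_collision_step(pos, vel, box, radius, n_spheres, alpha=np.deg2rad(130)):
    """Rotate velocity fluctuations inside randomly placed spherical volumes.

    pos, vel: (N, 3) arrays; box: (3,) periodic box lengths.
    """
    centers = rng.uniform(0.0, box, size=(n_spheres, 3))
    for c in centers:
        d = pos - c
        d -= box * np.round(d / box)                      # minimum-image convention
        members = np.where(np.einsum('ij,ij->i', d, d) < radius**2)[0]
        if len(members) < 2:
            continue
        v_mean = vel[members].mean(axis=0)
        u = vel[members] - v_mean                         # fluctuations
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        # Rodrigues rotation of fluctuations about the random axis by alpha
        u_rot = (u * np.cos(alpha)
                 + np.cross(axis, u) * np.sin(alpha)
                 + np.outer(u @ axis, axis) * (1.0 - np.cos(alpha)))
        vel[members] = v_mean + u_rot                     # momentum conserved per sphere
    return vel
```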
SoilGrids1km — Global Soil Information Based on Automated Mapping
Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A.; Batjes, Niels H.; Heuvelink, Gerard B. M.; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Gonzalez, Maria Ruiperez
2014-01-01
Background: Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. Methodology/Principal Findings: We present SoilGrids1km, a global 3D soil information system at 1 km resolution, containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg-1), soil pH, sand, silt and clay fractions (%), bulk density (kg m-3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha-1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles) and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5-fold cross-validation were between 23% and 51%. Conclusions/Significance: SoilGrids1km provides an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty in obtaining covariates that capture soil forming factors, and (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids system is highly automated and flexible, increasingly accurate predictions can be generated as new input data become available. SoilGrids1km is available for download via http://soilgrids.org under a Creative Commons Non Commercial license. PMID:25171179
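The per-property modeling workflow, fit a regression of a soil property on environmental covariates and report cross-validated accuracy, can be sketched in a few lines. The abstract does not name the regression family, so the random forest below is a generic stand-in, and the column names and 5-fold setup are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def fit_soil_property_model(covariates, soil_property):
    """Fit one per-property prediction model and report 5-fold cross-validated R^2.

    covariates: (n_profiles, n_covariates) array of environmental covariates
    sampled at profile locations; soil_property: (n_profiles,) observed values.
    """
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    cv_r2 = cross_val_score(model, covariates, soil_property, cv=5, scoring="r2")
    model.fit(covariates, soil_property)          # final fit on all profiles
    return model, cv_r2.mean()
```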
NASCAP simulation of PIX 2 experiments
NASA Technical Reports Server (NTRS)
Roche, J. C.; Mandell, M. J.
1985-01-01
The latest version of the NASCAP/LEO digital computer code used to simulate the PIX 2 experiment is discussed. NASCAP is a finite-element code and previous versions were restricted to a single fixed mesh size. As a consequence the resolution was dictated by the largest physical dimension to be modeled. The latest version of NASCAP/LEO can subdivide selected regions. This permitted the modeling of the overall Delta launch vehicle in the primary computational grid at a coarse resolution, with subdivided regions at finer resolution being used to pick up the details of the experiment module configuration. Langmuir probe data from the flight were used to estimate the space plasma density and temperature and the Delta ground potential relative to the space plasma. This information is needed for input to NASCAP. Because of the uncertainty or variability in the values of these parameters, it was necessary to explore a range around the nominal value in order to determine the variation in current collection. The flight data from PIX 2 were also compared with the results of the NASCAP simulation.
Grid computing in large pharmaceutical molecular modeling.
Claus, Brian L; Johnson, Stephen R
2008-07-01
Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.
Highly Coarse-Grained Representations of Transmembrane Proteins
2017-01-01
Numerous biomolecules and biomolecular complexes, including transmembrane proteins (TMPs), are symmetric or at least have approximate symmetries. Highly coarse-grained models of such biomolecules, aiming at capturing the essential structural and dynamical properties on resolution levels coarser than the residue scale, must preserve the underlying symmetry. However, making these models obey the correct physics is in general not straightforward, especially at the highly coarse-grained resolution where multiple (∼3–30 in the current study) amino acid residues are represented by a single coarse-grained site. In this paper, we propose a simple and fast method of coarse-graining TMPs obeying this condition. The procedure involves partitioning transmembrane domains into contiguous segments of equal length along the primary sequence. For the coarsest (lowest-resolution) mappings, it turns out to be most important to satisfy the symmetry in a coarse-grained model. As the resolution is increased to capture more detail, however, it becomes gradually more important to match modular repeats in the secondary structure (such as helix-loop repeats) instead. A set of eight TMPs of various complexity, functionality, structural topology, and internal symmetry, representing different classes of TMPs (ion channels, transporters, receptors, adhesion, and invasion proteins), has been examined. The present approach can be generalized to other systems possessing exact or approximate symmetry, allowing for reliable and fast creation of multiscale, highly coarse-grained mappings of large biomolecular assemblies. PMID:28043122
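The partitioning rule itself, split a transmembrane domain into contiguous, (nearly) equal-length segments along the primary sequence and place one coarse-grained site per segment, is straightforward to express; the sketch below uses a center-of-mass placement, which is an assumption since the abstract does not state how each site is positioned within its segment.

```python
import numpy as np

def coarse_grain_equal_segments(coords, masses, n_sites):
    """Partition residues into n_sites contiguous segments of near-equal length
    and place one CG site at each segment's center of mass.

    coords: (n_res, 3) residue coordinates; masses: (n_res,) residue masses.
    Returns the CG site coordinates and the residue indices of each segment.
    """
    segments = np.array_split(np.arange(len(coords)), n_sites)
    sites = np.empty((n_sites, 3))
    for i, seg in enumerate(segments):
        w = masses[seg] / masses[seg].sum()
        sites[i] = (coords[seg] * w[:, None]).sum(axis=0)
    return sites, segments
```

Symmetry preservation then follows by applying the same segment length to every symmetry-related protomer of the complex.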
Test problems for inviscid transonic flow
NASA Technical Reports Server (NTRS)
Carlson, L. A.
1979-01-01
The solution of test problems with the TRANDES program is discussed. This method utilizes the full, inviscid, perturbation potential flow equation in a Cartesian grid system that is stretched to infinity. This equation is represented by a nonconservative system of finite difference equations that includes, at supersonic points, a rotated difference scheme, and is solved by column relaxation. The solution usually starts from a zero perturbation potential on a very coarse grid (typically 13 by 7), followed by several grid halvings until a final solution is obtained on a fine grid (97 by 49).
Application of a multi-level grid method to transonic flow calculations
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Brandt, A.
1976-01-01
A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse to fine. The coarser grids are used to diminish the magnitude of the smooth part of the residuals. The method was applied to the solution of the transonic small disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic arc airfoil is studied with meshes of both constant and variable step size.
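The coarse-grid correction idea, relax on the fine grid, restrict the residual, solve (approximately) on the coarse grid, then prolong and add the correction, is easiest to see on a model problem. The sketch below is a minimal two-grid cycle for a 1-D Poisson equation with damped Jacobi smoothing, not the transonic small disturbance solver of the paper; it assumes an odd number of grid points and homogeneous Dirichlet boundaries.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=0.8):
    """Damped Jacobi relaxation for -u'' = f with u = 0 at both ends."""
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """One coarse-grid correction cycle on a fine grid with an odd point count."""
    u = jacobi(u, f, h, 3)                                    # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2  # residual f - Au
    rc = r[::2].copy()                                         # restriction (injection)
    ec = jacobi(np.zeros_like(rc), rc, 2 * h, 50)              # approximate coarse solve
    e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # linear prolongation
    u += e                                                     # coarse-grid correction
    return jacobi(u, f, h, 3)                                  # post-smoothing
```

The coarser grid removes the smooth part of the error cheaply, which is exactly why the residual magnitude drops much faster than with fine-grid relaxation alone.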
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline-interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
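A one-dimensional toy version of the three-step disaggregation is sketched below: interpolate the coarse field, add a "deterministic" anomaly predicted from a high-resolution surface field via an assumed linear rule, then add AR(1) noise. The linear rule, its coefficients, and the noise parameters are illustrative placeholders; the paper uses bi-quadratic splines in 2-D and rules found by an automatic search.

```python
import numpy as np

rng = np.random.default_rng(1)

def disaggregate(coarse, factor, hires_surface, a=0.0, b=0.5, rho=0.8, sigma=0.1):
    """Disaggregate a 1-D coarse field onto a grid `factor` times finer."""
    n_fine = len(coarse) * factor
    x_coarse = (np.arange(len(coarse)) + 0.5) * factor
    x_fine = np.arange(n_fine) + 0.5
    field = np.interp(x_fine, x_coarse, coarse)                    # step 1: interpolation
    surf_mean = np.repeat(hires_surface.reshape(-1, factor).mean(axis=1), factor)
    field += a + b * (hires_surface - surf_mean)                   # step 2: deterministic rule
    noise = np.zeros(n_fine)                                       # step 3: AR(1) noise
    for i in range(1, n_fine):
        noise[i] = rho * noise[i - 1] + rng.normal(scale=sigma)
    return field + noise
```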
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclaurin, Galen; Sengupta, Manajit; Xie, Yu
A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus they require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the Northern Hemisphere for the temporal extent of the NSRDB (1998-2015). We provide a review of validation studies conducted on these two products and describe the methodology developed by NREL to remap the data products to the NSRDB grid and integrate them into a seamless daily data set.
NASA Technical Reports Server (NTRS)
Ott, L.; Putman, B.; Collatz, J.; Gregg, W.
2012-01-01
Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale of models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers, and grid cells often cover areas that include combinations of land, ocean, and coastal areas and areas of significant topographic, land cover, and population density variations. To improve understanding of scales of atmospheric CO2 variability and representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order of magnitude increase in resolution over typical global simulations of atmospheric composition, allowing new insight into small scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half degree resolution that have been down-scaled to 10-km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse resolution models to represent these small scale features. Additionally, model output is sampled using averaging kernels characteristic of OCO-2 and ASCENDS measurement concepts to create realistic pseudo-datasets. Pseudo-data are averaged over coarse model grid cell areas to better understand the ability of measurements to characterize CO2 distributions and spatial gradients on both short (daily to weekly) and long (monthly to seasonal) time scales.
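The subgrid-variability diagnostic amounts to block-aggregating the 10-km field into coarse cells and computing statistics of the fine values inside each cell. A minimal sketch of that aggregation, assuming a 2-D column-CO2 array and a coarse cell an integer number of fine cells wide, is shown below.

```python
import numpy as np

def subgrid_variability(co2_fine, block):
    """Coarse-cell mean and subgrid standard deviation of a high-resolution field.

    co2_fine: 2-D array on the fine grid; block: number of fine cells per
    coarse cell in each direction. Trailing rows/columns that do not fill a
    complete coarse cell are dropped.
    """
    ny, nx = co2_fine.shape
    ny_c, nx_c = ny // block, nx // block
    tiles = (co2_fine[:ny_c * block, :nx_c * block]
             .reshape(ny_c, block, nx_c, block)
             .swapaxes(1, 2)                    # (ny_c, nx_c, block, block)
             .reshape(ny_c, nx_c, -1))
    return tiles.mean(axis=-1), tiles.std(axis=-1)
```

Histograms of the per-cell fine values give the probability distribution functions described in the abstract.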
High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.
Haxton, Thomas K
2015-03-10
We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.
Entropic multirelaxation lattice Boltzmann models for turbulent flows
NASA Astrophysics Data System (ADS)
Bösch, Fabian; Chikatamarla, Shyam S.; Karlin, Ilya V.
2015-10-01
We present three-dimensional realizations of a class of lattice Boltzmann models introduced recently by the authors [I. V. Karlin, F. Bösch, and S. S. Chikatamarla, Phys. Rev. E 90, 031302(R) (2014), 10.1103/PhysRevE.90.031302] and review the role of the entropic stabilizer. Both coarse- and fine-grid simulations are addressed for the Kida vortex flow benchmark. We show that the outstanding numerical stability and performance is independent of a particular choice of the moment representation for high-Reynolds-number flows. We report accurate results for low-order moments for homogeneous isotropic decaying turbulence and second-order grid convergence for most assessed statistical quantities. It is demonstrated that all the three-dimensional lattice Boltzmann realizations considered herein converge to the familiar lattice Bhatnagar-Gross-Krook model when the resolution is increased. Moreover, thanks to the dynamic nature of the entropic stabilizer, the present model features less compressibility effects and maintains correct energy and enstrophy dissipation. The explicit and efficient nature of the present lattice Boltzmann method renders it a promising candidate for both engineering and scientific purposes for highly turbulent flows.
NASA Astrophysics Data System (ADS)
Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.
2014-12-01
Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities drive economies, such as agriculture in the NHP. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple to identify what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmosphere Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite-volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography that reduces errors in gradient computation and anomalous vertical dispersion. Our hypothesis is that wet initial conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during a maximum of 15 days of refined-mesh simulations, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th degree resolution dataset obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and improving predictability of extreme hydrometeorological events.
State of Texas - Highlighting low-lying areas derived from USGS Digital Elevation Data
Kosovich, John J.
2008-01-01
In support of U.S. Geological Survey (USGS) disaster preparedness efforts, this map depicts a color shaded relief representation of Texas and a grayscale relief of the surrounding areas. The first 30 feet of relief above mean sea level are displayed as brightly colored 5-foot elevation bands, which highlight low-elevation areas at a coarse spatial resolution. Standard USGS National Elevation Dataset (NED) 1 arc-second (nominally 30-meter) digital elevation model (DEM) data are the basis for the map, which is designed to be used at a broad scale and for informational purposes only. The NED data were derived from the original 1:24,000-scale USGS topographic map bare-earth contours, which were converted into gridded quadrangle-based DEM tiles at a constant post spacing (grid cell size) of either 30 meters (data before the mid-1990s) or 10 meters (mid-1990s and later data). These individual-quadrangle DEMs were then converted to spherical coordinates (latitude/longitude decimal degrees) and edge-matched to ensure seamlessness. The NED source data for this map consists of a mixture of 30-meter- and 10-meter-resolution DEMs. State and county boundary, hydrography, city, and road layers were modified from USGS National Atlas data downloaded in 2003. The NED data were downloaded in 2002. Shaded relief over Mexico was obtained from the USGS National Atlas.
NASA Technical Reports Server (NTRS)
1981-01-01
Developments in numerical solution of certain types of partial differential equations by rapidly converging sequences of operations on supporting grids that range from very fine to very coarse are presented.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dawson, Andrew
2017-03-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
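The backup-grid idea can be illustrated with a toy 1-D prognostic field: keep a box-averaged coarse copy, periodically compare the current field (averaged onto the backup grid) against that copy, and restore any coarse cell that has jumped implausibly far, as a bit flip would cause. The tolerance, the averaging operator, and the crude piecewise-constant restore below are illustrative simplifications of the scheme described in the abstract.

```python
import numpy as np

def backup(field, factor):
    """Coarse-resolution (box-averaged) copy of a prognostic field."""
    n = (len(field) // factor) * factor
    return field[:n].reshape(-1, factor).mean(axis=1)

def check_and_restore(field, coarse_backup, factor, tol):
    """Detect severe faults by comparison with the backup grid and restore.

    Returns the (possibly repaired) field and a flag for whether a fault was found.
    """
    current_coarse = backup(field, factor)
    bad = np.abs(current_coarse - coarse_backup) > tol
    for i in np.where(bad)[0]:
        # crude restore: overwrite the affected fine cells with the backup value
        field[i * factor:(i + 1) * factor] = coarse_backup[i]
    return field, bool(bad.any())
```

Because only the coarse copies are stored and checked, the extra memory and runtime stay small, consistent with the roughly 13 % overhead reported for the shallow water model.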
Self-similarity and flow characteristics of vertical-axis wind turbine wakes: an LES study
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Dabiri, John O.
2017-04-01
Large eddy simulation (LES) is coupled with a turbine model to study the structure of the wake behind a vertical-axis wind turbine (VAWT). In the simulations, a tuning-free anisotropic minimum dissipation model is used to parameterise the subfilter stress tensor, while the turbine-induced forces are modelled with an actuator line technique. The LES framework is first validated in the simulation of the wake behind a model straight-bladed VAWT placed in the water channel and then used to study the wake structure downwind of a full-scale VAWT sited in the atmospheric boundary layer. In particular, the self-similarity of the wake is examined, and it is found that the wake velocity deficit can be well characterised by a two-dimensional multivariate Gaussian distribution. By assuming a self-similar Gaussian distribution of the velocity deficit, and applying mass and momentum conservation, an analytical model is developed and tested to predict the maximum velocity deficit downwind of the turbine. Also, a simple parameterisation of VAWTs for LES with very coarse grid resolutions is proposed, in which the turbine is modelled as a rectangular porous plate with the same thrust coefficient. The simulation results show that, after some downwind distance (x/D ≈ 6), both actuator line and rectangular porous plate models have similar predictions for the mean velocity deficit. These results are of particular importance in simulations of large wind farms where, due to the coarse spatial resolution, the flow around individual VAWTs is not resolved.
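A self-similar Gaussian wake model of the kind described reduces to a closed-form expression for the velocity deficit once the wake widths are prescribed. The sketch below uses the standard mass- and momentum-conserving Gaussian form with linear wake growth; the growth rates, initial widths, and the rectangular frontal area D*H are illustrative assumptions, not the fitted values of the paper.

```python
import numpy as np

def gaussian_wake_deficit(x, y, z, U_inf, Ct, D, H, k_y=0.03, k_z=0.03):
    """Velocity deficit at downwind distance x (scalar) and offsets y, z (arrays OK).

    D, H: rotor diameter and height of the VAWT; Ct: thrust coefficient.
    """
    sigma_y = k_y * x + 0.25 * D          # assumed linear lateral wake growth
    sigma_z = k_z * x + 0.25 * H          # assumed linear vertical wake growth
    arg = 1.0 - Ct * D * H / (8.0 * sigma_y * sigma_z)
    C = 1.0 - np.sqrt(np.clip(arg, 0.0, None))     # maximum (centerline) deficit
    return U_inf * C * np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-z**2 / (2 * sigma_z**2))
```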
Exploration of scaling effects on coarse resolution land surface phenology
USDA-ARS?s Scientific Manuscript database
A great number of land surface phenology (LSP) datasets have been produced from various coarse resolution satellite datasets and detection algorithms across regional and global scales. Unlike field-measured phenological events, which are quantitatively defined with clear biophysical meaning, current LSP ...
NASA Astrophysics Data System (ADS)
Molon, Michelle; Boyce, Joseph I.; Arain, M. Altaf
2017-01-01
Coarse root biomass was estimated in a temperate pine forest using high-resolution (1 GHz) 3-D ground-penetrating radar (GPR). GPR survey grids were acquired across a 400 m2 area with varying line spacing (12.5 and 25 cm). Root volume and biomass were estimated directly from the 3-D radar volume by using isometric surfaces calculated with the marching cubes algorithm. Empirical relations between GPR reflection amplitude and root diameter were determined for 14 root segments (0.1-10 cm diameter) reburied in a 6 m2 experimental test plot and surveyed at 5-25 cm line spacing under dry and wet soil conditions. Reburied roots >1.4 cm diameter were detectable as continuous root structures with 5 cm line separation. Reflection amplitudes were strongly controlled by soil moisture and decreased by 40% with a twofold increase in soil moisture. GPR line intervals of 12.5 and 25 cm produced discontinuous mapping of roots, and GPR coarse root biomass estimates (0.92 kgC m-2) were lower than those obtained previously with a site-specific allometric equation due to nondetection of vertical roots and roots <1.5 cm diameter. The results show that coarse root volume and biomass can be estimated directly from interpolated 3-D GPR volumes by using a marching cubes approach, but mapping of roots as continuous structures requires high inline sampling and line density (<5 cm). The results demonstrate that 3-D GPR is a viable approach for estimating belowground carbon and for mapping tree root architecture. This methodology can be applied more broadly in other disciplines (e.g., archaeology and civil engineering) for imaging buried structures.
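The volume estimate from an amplitude isosurface can be sketched with scikit-image's marching cubes followed by a divergence-theorem sum over the triangulated surface. This assumes a recent scikit-image (where measure.marching_cubes is available), an amplitude threshold chosen per site, and a closed isosurface; it is an illustration of the general approach, not the authors' processing chain.

```python
import numpy as np
from skimage import measure

def root_volume_from_gpr(amplitude, level, voxel=(0.05, 0.05, 0.05)):
    """Estimate the volume enclosed by an amplitude isosurface of a 3-D GPR cube.

    amplitude: 3-D array of migrated reflection amplitudes; level: isosurface
    threshold; voxel: physical voxel size in metres along each axis.
    """
    verts, faces, _, _ = measure.marching_cubes(amplitude, level=level, spacing=voxel)
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # signed tetrahedron volumes relative to the origin, summed over all triangles
    volume = np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
    return volume
```

A site-specific amplitude-to-diameter relation, as calibrated in the reburial experiment, would then convert such volumes to biomass.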
NASA Astrophysics Data System (ADS)
Rimac, Antonija; von Storch, Jin-Song; Eden, Carsten
2013-04-01
The estimated power required to sustain the global general circulation in the ocean is about 2 TW. This power is supplied by wind stress and tides. The energy spectrum shows a pronounced maximum at the near-inertial frequency. Near-inertial waves excited by high-frequency winds represent an important source for deep ocean mixing since they can propagate into the deep ocean and dissipate far away from the generation sites. The energy input by winds to near-inertial waves has been studied mostly using slab ocean models and wind stress forcing with coarse temporal resolution (e.g. 6-hourly). Slab ocean models lack the ability to reproduce fundamental aspects of the kinetic energy balance and systematically overestimate the wind work. Also, slab ocean models do not account for the energy used for mixed layer deepening or the energy radiating downward into the deep ocean. Coarse temporal resolution of the wind forcing strongly underestimates the near-inertial energy. To overcome this difficulty we use an eddy-permitting ocean model with high-frequency wind forcing. We establish the following model setup: We use the Max Planck Institute Ocean Model (MPIOM) on a tripolar grid with 45 km horizontal resolution and 40 vertical levels. We run the model with wind forcings that vary in horizontal and temporal resolution. We use high-resolution (1-hourly with 35 km horizontal resolution) and low-resolution winds (6-hourly with 250 km horizontal resolution). We address the following questions: Is the kinetic energy of near-inertial waves enhanced when high-resolution wind forcings are used? If so, is this due to a higher level of overall wind variability or higher spatial or temporal resolution of the wind forcing? How large is the power of near-inertial waves generated by winds? Our results show that near-inertial waves are enhanced and the near-inertial kinetic energy is two times higher (in the storm track regions 3.5 times higher) when high-resolution winds are used. Filtering high-resolution winds in space and time, the near-inertial kinetic energy reduces. The reduction is faster when a temporal filter is used, suggesting that the high-frequency wind forcing is more efficient in generating near-inertial wave energy than the small-scale wind forcing. Using low-resolution wind forcing, the wind-generated power to near-inertial waves is 0.55 TW. When we use high-resolution wind forcing the result is 1.6 TW, nearly a factor of three larger.
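The slab-model baseline that the paper criticizes is itself a one-line ODE for the complex mixed-layer velocity. The sketch below integrates dZ/dt = T/(rho*H) - (r + i f) Z for Z = u + i v with an exact rotation/decay step and accumulates the wind work tau·u; the mixed-layer depth, density, and damping time are illustrative values.

```python
import numpy as np

def slab_wind_work(taux, tauy, dt, f, H=50.0, rho=1025.0, r=1.0 / (5 * 86400)):
    """Accumulated wind work (J m^-2) on slab mixed-layer currents.

    taux, tauy: wind stress time series (N m^-2); dt: time step (s);
    f: Coriolis parameter (s^-1); H: mixed-layer depth (m); r: damping rate.
    """
    T = np.asarray(taux) + 1j * np.asarray(tauy)
    decay = np.exp(-(r + 1j * f) * dt)
    Z = 0.0 + 0.0j
    work = 0.0
    for n in range(len(T)):
        Z = (Z + dt * T[n] / (rho * H)) * decay      # forcing, then exact rotation/decay
        work += dt * np.real(np.conj(Z) * T[n])      # tau_x*u + tau_y*v
    return work, Z
```

Comparing this estimate with the 3-D model's near-inertial energy is one way to quantify how much the slab approximation overestimates the wind work.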
Adaptive resolution simulation of an atomistic protein in MARTINI water.
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J; Praprotnik, Matej
2014-02-07
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
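Constrained least squares unmixing solves, pixel by pixel, reflectance ≈ endmembers @ fractions subject to non-negative fractions that sum to one. The sketch below enforces the sum-to-one constraint approximately by appending a heavily weighted row to the non-negative least squares problem; the endmember spectra and the weight are placeholders, and this is a generic illustration rather than the authors' exact solver.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(reflectance, endmembers, weight=100.0):
    """Fraction estimates for one pixel from a linear mixing model.

    reflectance: (n_bands,) observed reflectances;
    endmembers: (n_bands, n_classes) pure-class spectra (e.g. vegetation,
    soil, shade); returns non-negative fractions that approximately sum to 1.
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(reflectance, weight)          # weighted sum-to-one constraint
    fractions, _ = nnls(A, b)
    return fractions
```

Applying this to every pixel of the three-channel AVHRR stack yields the fraction images compared with the Landsat classification.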
Spatially enhanced passive microwave derived soil moisture: capabilities and opportunities
USDA-ARS?s Scientific Manuscript database
Low frequency passive microwave remote sensing is a proven technique for soil moisture retrieval, but its coarse resolution restricts the range of applications. Downscaling, otherwise known as disaggregation, has been proposed as the solution to spatially enhance these coarse resolution soil moistur...
Convergence acceleration of viscous flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1982-01-01
A multiple-grid convergence acceleration technique introduced for application to the solution of the Euler equations by means of Lax-Wendroff algorithms is extended to treat compressible viscous flow. Computational results are presented for the solution of the thin-layer version of the Navier-Stokes equations using the explicit MacCormack algorithm, accelerated by a convective coarse-grid scheme. Extensions and generalizations are mentioned.
Evaluation of coarse scale land surface remote sensing albedo product over rugged terrain
NASA Astrophysics Data System (ADS)
Wen, J.; Xinwen, L.; You, D.; Dou, B.
2017-12-01
Satellite-derived land surface albedo is an essential climate variable that controls the Earth's energy budget and is used in applications such as climate change, hydrology, and numerical weather prediction. The accuracy and uncertainty of surface albedo products should be evaluated against reliable reference truth data prior to such applications. Most published validation work has addressed albedo over flat or homogeneous surfaces; the performance of albedo products over rugged terrain remains largely unknown because suitable validation methods are lacking. A multi-step validation strategy is implemented here to provide a comprehensive assessment, involving high-resolution albedo retrieval, validation of the high-resolution albedo against in situ measurements, and upscaling of the high-resolution albedo to the coarse product scale. The high-resolution albedo generation and the upscaling method are the core steps of the coarse-scale validation. In this paper, the high-resolution albedo is generated with the Angular Bin algorithm, and an albedo upscaling method over rugged terrain is developed to obtain the coarse-scale albedo truth. In situ albedo measurements from 40 mountain sites selected globally are used to validate the high-resolution albedo, which is then upscaled to the coarse scale with the upscaling method. Taking the MODIS and GLASS albedo products as examples, preliminary results show RMSEs over rugged terrain of 0.047 and 0.057, respectively, compared with an RMSE of 0.036 for the high-resolution albedo.
Multiple-grid convergence acceleration of viscous and inviscid flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1983-01-01
A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of a simple, explicit fine-grid solution procedure is accelerated on a sequence of successively coarser grids by a coarse-grid information propagation method which rapidly eliminates transients from the computational domain. This use of multiple-gridding to increase the convergence rate results in substantially reduced work requirements for the numerical solution of a wide range of flow problems. Computational results are presented for subsonic and transonic inviscid flows and for laminar and turbulent, attached and separated, subsonic viscous flows. Work reduction factors as large as eight, in comparison to the basic fine-grid algorithm, were obtained. Possibilities for further performance improvement are discussed.
NASA Astrophysics Data System (ADS)
Ramage, J. M.; Brodzik, M. J.; Hardman, M.
2016-12-01
Passive microwave (PM) 18 GHz and 36 GHz horizontally- and vertically-polarized brightness temperature (Tb) channels from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) have been important sources of information about snow melt status in glacial environments, particularly at high latitudes. PM data are sensitive to the changes in near-surface liquid water that accompany melt onset, melt intensification, and refreezing. Overpasses are frequent enough that in most areas multiple (2-8) observations per day are possible, yielding the potential for determining the dynamic state of the snow pack during transition seasons. AMSR-E Tb data have been used effectively to determine melt onset and melt intensification using daily Tb and diurnal amplitude variation (DAV) thresholds. Due to mixed pixels in historically coarse spatial resolution Tb data, melt analysis has been impractical in ice-marginal zones where pixels may be only fractionally snow/ice covered, and in areas where the glacier is near large bodies of water: even small regions of open water in a pixel severely impact the microwave signal. We use the new enhanced-resolution Calibrated Passive Microwave Daily EASE-Grid 2.0 Brightness Temperature (CETB) Earth System Data Record product's twice-daily observations to test and update existing snow melt algorithms by determining appropriate melt thresholds for both Tb and DAV for the CETB 18 and 36 GHz channels. We use the enhanced resolution data to evaluate melt characteristics along glacier margins and melt transition zones during the melt seasons in locations spanning a wide range of melt scenarios, including the Patagonian Andes, the Alaskan Coast Range, and the Russian High Arctic icecaps. We quantify how improvement of spatial resolution from the original 12.5 - 25 km-scale pixels to the enhanced resolution of 3.125 - 6.25 km improves the ability to evaluate melt timing across boundaries and transition zones in diverse glacial environments.
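The Tb/DAV melt flagging reduces to a simple per-pixel test on the two daily overpasses: a melt day is flagged when a brightness temperature threshold and a diurnal amplitude variation threshold are both exceeded. The threshold values in the sketch below are illustrative only; the point of the study is precisely to re-derive them for the enhanced-resolution CETB channels.

```python
import numpy as np

def flag_melt(tb_am, tb_pm, tb_thresh=245.0, dav_thresh=10.0):
    """Flag melt days from twice-daily brightness temperatures (K).

    tb_am, tb_pm: arrays of morning and afternoon Tb for one pixel (or grids);
    returns a boolean array, True where melt is indicated.
    """
    dav = np.abs(tb_pm - tb_am)                       # diurnal amplitude variation
    warm = np.maximum(tb_am, tb_pm) > tb_thresh       # near-surface liquid water present
    return warm & (dav > dav_thresh)
```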
A Modeling Approach to Global Land Surface Monitoring with Low Resolution Satellite Imaging
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Dungan, Jennifer; Livingston, Gerry P.; Gore, Warren J. (Technical Monitor)
1998-01-01
The effects of changing land use/land cover on global climate and ecosystems due to greenhouse gas emissions and changing energy and nutrient exchange rates are being addressed by federal programs such as NASA's Mission to Planet Earth (MTPE) and by international efforts such as the International Geosphere-Biosphere Program (IGBP). The quantification of these effects depends on accurate estimates of the global extent of critical land cover types such as fire scars in tropical savannas and ponds in Arctic tundra. To address the requirement for accurate areal estimates, methods for producing regional to global maps with satellite imagery are being developed. The only practical way to produce maps over large regions of the globe is with data of coarse spatial resolution, such as Advanced Very High Resolution Radiometer (AVHRR) weather satellite imagery at 1.1 km resolution or European Remote-Sensing Satellite (ERS) radar imagery at 100 m resolution. The accuracy of pixel counts as areal estimates is in doubt, especially for highly fragmented cover types such as fire scars and ponds. Efforts to improve areal estimates from coarse resolution maps have involved regression of apparent area from coarse data versus that from fine resolution in sample areas, but it has proven difficult to acquire sufficient fine scale data to develop the regression. A method for computing accurate estimates from coarse resolution maps using little or no fine data is therefore needed.
Identifying grain-size dependent errors on global forest area estimates and carbon studies
Daolan Zheng; Linda S. Heath; Mark J. Ducey
2008-01-01
Satellite-derived coarse-resolution data are typically used for conducting global analyses. However, the forest areas estimated from coarse-resolution maps (e.g., 1 km) inevitably differ from those of a corresponding fine-resolution map (such as a 30-m map) that would be closer to ground truth. A better understanding of the effect of grain size on area estimation will improve our...
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2004-01-01
During the research project, sounding datasets were generated for the region surrounding 9 major airports, including Dallas, TX, Boston, MA, New York, NY, Chicago, IL, St. Louis, MO, Atlanta, GA, Miami, FL, San Francisco, CA, and Los Angeles, CA. The numerical simulation of winter and summer environments during which no instrument flight rule impact was occurring at these 9 terminals was performed using the most contemporary version of the Terminal Area PBL Prediction System (TAPPS) model nested from 36 km to 6 km to 1 km horizontal resolution, with very detailed vertical resolution in the planetary boundary layer. The soundings from the 1 km model were archived at 30 minute time intervals for a 24 hour period, and the vertically dependent variables as well as derived quantities (i.e., 3-dimensional wind components, temperatures, pressures, mixing ratios, turbulence kinetic energy, and eddy dissipation rates) were then interpolated to 5 m vertical resolution up to 1000 m elevation above ground level. After partial validation against field experiment datasets for Dallas as well as larger scale and much coarser resolution observations at the other 8 airports, these sounding datasets were sent to NASA for use in the Virtual Air Space and Modeling program. These datasets are used to determine representative airport weather environments for diagnosing the response of simulated wake vortices to realistic atmospheric environments. The virtual datasets are based on large scale observed atmospheric initial conditions that are dynamically interpolated in space and time. The 1 km nested-grid simulated datasets provide a very coarse and highly smoothed representation of airport environment meteorological conditions. Details concerning the airport surface forcing are virtually absent from these simulated datasets, although the observed background atmospheric processes have been compared to the simulated fields and the fields were found to accurately replicate the flows surrounding the airport where coarse verification data were available as well as where airport scale datasets were available.
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guba, O.; Taylor, M. A.; Ullrich, P. A.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
NASA Astrophysics Data System (ADS)
Prein, A. F.; Ikeda, K.; Liu, C.; Bullock, R.; Rasmussen, R.
2016-12-01
Convective storms cause extremes such as flooding, landslides, and wind gusts and are related to the development of tornadoes and hail. Convective storms are also the dominant source of summer precipitation in most regions of the Contiguous United States. So far little is known about how convective storms might change due to global warming. This is mainly because of the coarse grid spacing of state-of-the-art climate models, which are not able to resolve deep convection explicitly. Instead, coarse resolution models rely on convective parameterization schemes that are a major source of errors and uncertainties in climate change projections. Convection-permitting climate simulations, with grid spacings smaller than 4 km, show significant improvements in the simulation of convective storms by representing deep convection explicitly. Here we use a pair of 13-year long current and future convection-permitting climate simulations that cover large parts of North America. We use the Method for Object-Based Diagnostic Evaluation (MODE) that incorporates the time dimension (MODE-TD) to analyze the model performance in reproducing storm features in the current climate and to investigate their potential future changes. We show that the model is able to accurately reproduce the main characteristics of convective storms in the present climate. The comparison with the future climate simulation shows that convective storms significantly increase in frequency, intensity, and size. Furthermore, they are projected to move more slowly, which could result in a substantial increase in convective storm-related hazards such as flash floods, debris flows, and landslides. Some regions, such as the North Atlantic, might experience a regime shift that leads to significantly stronger storms that are unrepresented in the current climate.
NASA Astrophysics Data System (ADS)
Skamarock, W. C.
2015-12-01
One of the major problems in atmospheric model applications is the representation of deep convection within the models; explicit simulation of deep convection on fine meshes performs much better than sub-grid parameterized deep convection on coarse meshes. Unfortunately, the high cost of explicit convective simulation has meant it has only been used to down-scale global simulations in weather prediction and regional climate applications, typically using traditional one-way interactive nesting technology. We have been performing real-time weather forecast tests using a global non-hydrostatic atmospheric model (the Model for Prediction Across Scales, MPAS) that employs a variable-resolution unstructured Voronoi horizontal mesh (nominally hexagons) to span hydrostatic to nonhydrostatic scales. The smoothly varying Voronoi mesh eliminates many downscaling problems encountered using traditional one- or two-way grid nesting. Our test weather forecasts cover two periods - the 2015 Spring Forecast Experiment conducted at the NOAA Storm Prediction Center during the month of May in which we used a 50-3 km mesh, and the PECAN field program examining nocturnal convection over the US during the months of June and July in which we used a 15-3 km mesh. An important aspect of this modeling system is that the model physics be scale-aware, particularly the deep convection parameterization. These MPAS simulations employ the Grell-Freitas scale-aware convection scheme. Our test forecasts show that the scheme produces a gradual transition in the deep convection, from the deep unstable convection being handled entirely by the convection scheme on the coarse mesh regions (dx > 15 km), to the deep convection being almost entirely explicit on the 3 km NA region of the meshes. We will present results illustrating the performance of critical aspects of the MPAS model in these tests.
Linear mixing model applied to coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques when applied to coarse resolution data for global studies.
Subranging technique using superconducting technology
Gupta, Deepnarayan
2003-01-01
Subranging techniques using "digital SQUIDs" are used to design systems with large dynamic range, high resolution and large bandwidth. Analog-to-digital converters (ADCs) embodying the invention include a first SQUID based "coarse" resolution circuit and a second SQUID based "fine" resolution circuit to convert an analog input signal into "coarse" and "fine" digital signals for subsequent processing. In one embodiment, an ADC includes circuitry for supplying an analog input signal to an input coil having at least a first inductive section and a second inductive section. A first superconducting quantum interference device (SQUID) is coupled to the first inductive section and a second SQUID is coupled to the second inductive section. The first SQUID is designed to produce "coarse" (large amplitude, low resolution) output signals and the second SQUID is designed to produce "fine" (low amplitude, high resolution) output signals in response to the analog input signals.
Dynamic subfilter-scale stress model for large-eddy simulations
NASA Astrophysics Data System (ADS)
Rouhi, A.; Piomelli, U.; Geurts, B. J.
2016-08-01
We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2014-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2013-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using NCEP/NCAR re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
NASA Astrophysics Data System (ADS)
Lynch, Peng; Reid, Jeffrey S.; Westphal, Douglas L.; Zhang, Jianglong; Hogan, Timothy F.; Hyer, Edward J.; Curtis, Cynthia A.; Hegg, Dean A.; Shi, Yingxi; Campbell, James R.; Rubin, Juli I.; Sessions, Walter R.; Turk, F. Joseph; Walker, Annette L.
2016-04-01
While stand-alone satellite and model aerosol products see wide utilization, there is a significant need in numerous atmospheric and climate applications for a fused product on a regular grid. Aerosol data assimilation is an operational reality at numerous centers, and like meteorological reanalyses, aerosol reanalyses will see significant use in the near future. Here we present a standardized 2003-2013 global 1° × 1° and 6-hourly modal aerosol optical thickness (AOT) reanalysis product. This data set can be applied to basic and applied Earth system science studies of significant aerosol events, aerosol impacts on numerical weather prediction, and electro-optical propagation and sensor performance, among other uses. This paper describes the science of how to develop and score an aerosol reanalysis product. This reanalysis utilizes a modified Navy Aerosol Analysis and Prediction System (NAAPS) at its core and assimilates quality controlled retrievals of AOT from the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua and the Multi-angle Imaging SpectroRadiometer (MISR) on Terra. The aerosol source functions, including dust and smoke, were regionally tuned to obtain the best match between the model fine- and coarse-mode AOTs and the Aerosol Robotic Network (AERONET) AOTs. Other model processes, including deposition, were tuned to minimize the AOT difference between the model and satellite AOT. Aerosol wet deposition in the tropics is driven with satellite-retrieved precipitation, rather than the model field. The final reanalyzed fine- and coarse-mode AOT at 550 nm is shown to have good agreement with AERONET observations, with global mean root mean square error around 0.1 for both fine- and coarse-mode AOTs. This paper includes a discussion of issues particular to aerosol reanalyses that make them distinct from standard meteorological reanalyses, considerations for extending such a reanalysis outside of the NASA A-Train era, and examples of how the aerosol reanalysis can be applied or fused with other model or remote sensing products. Finally, the reanalysis is evaluated in comparison with other available studies of aerosol trends, and the implications of this comparison are discussed.
NASA Astrophysics Data System (ADS)
Lynch, P.; Reid, J. S.; Westphal, D. L.; Zhang, J.; Hogan, T. F.; Hyer, E. J.; Curtis, C. A.; Hegg, D. A.; Shi, Y.; Campbell, J. R.; Rubin, J. I.; Sessions, W. R.; Turk, F. J.; Walker, A. L.
2015-12-01
While standalone satellite and model aerosol products see wide utilization, there is a significant need in numerous climate and applied applications for a fused product on a regular grid. Aerosol data assimilation is an operational reality at numerous centers, and like meteorological reanalyses, aerosol reanalyses will see significant use in the near future. Here we present a standardized 2003-2013 global 1° × 1° and 6 hourly modal aerosol optical thickness (AOT) reanalysis product. This dataset can be applied to basic and applied earth system science studies of significant aerosol events, aerosol impacts on numerical weather prediction, and electro-optical propagation and sensor performance, among other uses. This paper describes the science of how to develop and score an aerosol reanalysis product. This reanalysis utilizes a modified Navy Aerosol Analysis and Prediction System (NAAPS) at its core and assimilates quality controlled retrievals of AOT from the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua and the Multi-angle Imaging SpectroRadiometer (MISR) on Terra. The aerosol source functions, including dust and smoke, were regionally tuned to obtain the best match between the model fine and coarse mode AOTs and the Aerosol Robotic Network (AERONET) AOTs. Other model processes, including deposition, were tuned to minimize the AOT difference between the model and satellite AOT. Aerosol wet deposition in the tropics is driven with satellite retrieved precipitation, rather than the model field. The final reanalyzed fine and coarse mode AOT at 550 nm is shown to have good agreement with AERONET observations, with global mean root mean square error around 0.1 for both fine and coarse mode AOTs. This paper includes a discussion of issues particular to aerosol reanalyses that make them distinct from standard meteorological reanalyses, considerations for extending such a reanalysis outside of the NASA A-Train era, and examples of how the aerosol reanalysis can be applied or fused with other model or remote sensing products. Finally, the reanalysis is evaluated in comparison with other available studies of aerosol trends, and the implications of this comparison are discussed.
NASA Astrophysics Data System (ADS)
Kumar, R.; Samaniego, L. E.; Livneh, B.
2013-12-01
Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km2) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States or STATSGO2 (1:250 000) and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, the 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the time period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from both simulations showed marked differences, particularly at shorter time scales (hours to days) in regions with coarse-textured sandy soils. Furthermore, the partitioning of total runoff into near-surface interflows and baseflow components was also significantly different between the two simulations. Simulations with the coarser soil map produced comparatively higher baseflows. At longer time scales (months to seasons), where climatic factors play a major role, the integrated fluxes and states from both sets of model simulations match fairly closely, despite the apparent discrepancy in the partitioning of total runoff.
Coarse-to-fine construction for high-resolution representation in visual working memory.
Gao, Zaifeng; Ding, Xiaowei; Yang, Tong; Liang, Junying; Shui, Rende
2013-01-01
This study explored whether the high-resolution representations created by visual working memory (VWM) are constructed in a coarse-to-fine or all-or-none manner. The coarse-to-fine hypothesis suggests that coarse information precedes detailed information in entering VWM and that its resolution increases along with the processing time of the memory array, whereas the all-or-none hypothesis claims that either both enter into VWM simultaneously, or neither does. We tested the two hypotheses by asking participants to remember two or four complex objects. An ERP component, contralateral delay activity (CDA), was used as the neural marker. CDA is higher for four objects than for two objects when coarse information is primarily extracted; yet, this CDA difference vanishes when detailed information is encoded. Experiment 1 manipulated the comparison difficulty of the task under a 500-ms exposure time to determine a condition in which the detailed information was maintained. No CDA difference was found between two and four objects, even in an easy-comparison condition. Thus, Experiment 2 manipulated the memory array's exposure time under the easy-comparison condition and found a significant CDA difference at 100 ms while replicating Experiment 1's results at 500 ms. In Experiment 3, the 500-ms memory array was blurred to block the detailed information; this manipulation reestablished a significant CDA difference. These findings suggest that the creation of high-resolution representations in VWM is a coarse-to-fine process.
Elliptic generation of composite three-dimensional grids about realistic aircraft
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1986-01-01
An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
Navier-Stokes simulation of rotor-body flowfield in hover using overset grids
NASA Technical Reports Server (NTRS)
Srinivasan, G. R.; Ahmad, J. U.
1993-01-01
A free-wake Navier-Stokes numerical scheme and multiple Chimera overset grids have been utilized for calculating the quasi-steady hovering flowfield of a Boeing-360 rotor mounted on an axisymmetric whirl-tower. The entire geometry of this rotor-body configuration is gridded-up with eleven different overset grids. The composite grid has 1.3 million grid points for the entire flow domain. The numerical results, obtained using coarse grids and a rigid rotor assumption, show a thrust value that is within 5% of the experimental value at a flow condition of M_tip = 0.63, Θ_c = 8°, and Re = 2.5 × 10^6. The numerical method thus demonstrates the feasibility of using a multi-block scheme for calculating the flowfields of complex configurations consisting of rotating and non-rotating components.
Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.
NASA Astrophysics Data System (ADS)
Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.
2014-12-01
High-resolution ocean models, which are eddy-permitting and eddy-resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model, FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on mean climate state and variability in an AOGCM.
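As a rough illustration of the update rule quoted above, the sketch below applies the additive perturbation T(t) = T(t-1) + (ΔT(t) + ξ(t)) to a single grid-cell tracer, with either white noise or red (AR(1)) noise. The deterministic tendency, noise amplitude, and decorrelation time are placeholders; in the study these statistics come from coarse-grained HiGEM data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 1000
dt = 1.0                  # model time step (arbitrary units)
sigma = 0.02              # noise standard deviation (placeholder)
tau = 20.0                # red-noise decorrelation time, in time steps (placeholder)
phi = np.exp(-dt / tau)   # AR(1) autocorrelation coefficient

T = np.zeros(n_steps)     # temperature (or salinity) in one grid cell
T[0] = 10.0
xi_red = 0.0

for t in range(1, n_steps):
    dT = 0.001 * (10.0 - T[t - 1])   # stand-in deterministic increment
    # White-noise option: xi ~ N(0, sigma^2), uncorrelated in time.
    xi_white = sigma * rng.standard_normal()
    # Red-noise option: AR(1) process with the same stationary variance.
    xi_red = phi * xi_red + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
    # Perturbed update T(t) = T(t-1) + (dT(t) + xi(t)); the red-noise case is used here.
    T[t] = T[t - 1] + (dT + xi_red)

print(T[-5:])
```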
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
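A minimal sketch of the box-counting and grid-placement optimization idea is given below, assuming a synthetic 3-D point cloud in place of a digitized root system and a very simple coordinate-wise pattern search over grid offsets (translations only; the study also considered rotations). The local slopes printed at the end illustrate the kind of check used to judge statistical self-similarity.

```python
import numpy as np

def box_count(points, box_size, offset):
    """Number of occupied boxes for a given box size and grid-origin offset."""
    idx = np.floor((points - offset) / box_size).astype(int)
    return len({tuple(i) for i in idx})

def min_count(points, box_size, n_iter=20):
    """Crude pattern search over grid offsets to reduce quantization error."""
    best_off = np.zeros(3)
    best = box_count(points, box_size, best_off)
    step = box_size / 2.0
    for _ in range(n_iter):
        improved = False
        for dim in range(3):
            for sign in (-1.0, 1.0):
                trial = best_off.copy()
                trial[dim] += sign * step
                c = box_count(points, box_size, trial)
                if c < best:
                    best, best_off, improved = c, trial, True
        if not improved:
            step /= 2.0          # shrink the search step, as in pattern search
    return best

# Synthetic "root-like" point cloud (a noisy 3-D curve), for illustration only.
rng = np.random.default_rng(1)
s = np.linspace(0.0, 1.0, 4000)
pts = np.c_[s, np.sin(6 * s), 0.2 * s**2] + 0.01 * rng.standard_normal((4000, 3))

sizes = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
counts = np.array([min_count(pts, h) for h in sizes])

# FD estimate = negative slope of log(count) vs log(box size); local slopes that
# drift with scale indicate the data are not statistically self-similar.
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
print("estimated FD:", -slope)
print("local slopes:", -np.diff(np.log(counts)) / np.diff(np.log(sizes)))
```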
An Eulerian/Lagrangian method for computing blade/vortex impingement
NASA Technical Reports Server (NTRS)
Steinhoff, John; Senge, Heinrich; Yonghu, Wenren
1991-01-01
A combined Eulerian/Lagrangian approach to calculating helicopter rotor flows with concentrated vortices is described. The method computes a general evolving vorticity distribution without any significant numerical diffusion. Concentrated vortices can be accurately propagated over long distances on relatively coarse grids with cores only several grid cells wide. The method is demonstrated for a blade/vortex impingement case in 2D and 3D where a vortex is cut by a rotor blade, and the results are compared to previous 2D calculations involving a fifth-order Navier-Stokes solver on a finer grid.
NASA Astrophysics Data System (ADS)
Huang, Y.; Engdahl, N.
2017-12-01
Proactive management to improve water resource sustainability is often limited by a lack of understanding about the hydrological consequences of human activities and climate induced land use and land cover (LULC) change. Changes in LULC can alter runoff, soil moisture, and evapotranspiration, but these effects are complex and traditional modeling techniques have had limited successes in realistically simulating the relevant feedbacks. Recent studies have investigated the coupled interactions but typically do so at coarse resolutions with simple topographic settings, so it is unclear if the previous conclusions remain valid in the steep, complex terrains that dominate the western USA. This knowledge gap was explored with a series of integrated hydrologic simulations based on the Dry Creek Experimental Watershed (DCEW) in southwestern Idaho, USA, using the ParFlow.CLM model. The DCEW has extensive monitoring data that allowed for a direct calibration and validation of the base-case simulation, which is not commonly done with integrated models. The effects of LULC change on the hydrologic and water budgets were then assessed at two grid resolutions (20m and 40m) under four LULC scenarios: 1) current LULC; 2) LULC change from a small but gradual decrease in potential recharge (PR); 3) LULC change from a large but rapid decrease in PR; and 4) LULC change from a large but gradual decrease in PR. The results show that the methods used for terrain processing and the grid resolution can both heavily impact the simulation results and that LULC change can significantly alter the relative amounts of groundwater storage and runoff.
State of Louisiana - Highlighting low-lying areas derived from USGS Digital Elevation Data
Kosovich, John J.
2008-01-01
In support of U.S. Geological Survey (USGS) disaster preparedness efforts, this map depicts a color shaded-relief representation highlighting the State of Louisiana, with the surrounding areas shown using muted elevation colors. The first 30 feet of relief above mean sea level are displayed as brightly colored 5-foot elevation bands, which highlight low-elevation areas at a coarse spatial resolution. Areas below sea level typically are surrounded by levees or some other type of flood-control structures. Standard USGS National Elevation Dataset (NED) 1 arc-second (nominally 30-meter) digital elevation model (DEM) data are the basis for the map, which is designed to be used at a broad scale and for informational purposes only. The NED data are a mixture of source data and were derived from the original 1:24,000-scale USGS topographic map bare-earth contours, which were converted into gridded quadrangle-based DEM tiles at a constant post spacing (grid cell size) of either 30 meters (data before the mid-1990s) or 10 meters (mid-1990s and later data). These individual-quadrangle DEMs were then converted to spherical coordinates (latitude/longitude decimal degrees) and edge-matched to ensure seamlessness. Approximately one-half of the area shown on this map has DEM source data at a 30-meter resolution, with the remaining half consisting of mostly 10-meter contour-derived DEM data and some small areas of higher-resolution Light Detection and Ranging (LIDAR) data along parts of the coastline. State and parish boundary, hydrography, city, and road layers were modified from USGS National Atlas data downloaded in 2003. The NED data were downloaded in 2007.
NASA Astrophysics Data System (ADS)
Xu, Y.; Fan, M.; Huang, Z.; Zheng, J.; Chen, L.
2017-12-01
Open biomass burning, which has adverse effects on air quality and human health, is an important source of gases and particulate matter (PM) in China. Current emission estimates of open biomass burning are generally based on a single source (either statistical data or satellite-derived data) and thus contain large uncertainty due to data limitations. In this study, to quantify the amount of open biomass burning in 2015, we established a new estimation method for open biomass burning activity levels by combining bottom-up statistical data and top-down MODIS observations, and three sub-category sources using different activity data were considered. For open crop residue burning, the "best estimate" of activity data was obtained by averaging the statistical data from China statistical yearbooks and satellite observations from the MODIS burned area product MCD64A1, weighted by their uncertainties. For forest and grassland fires, activity levels were represented by the combination of statistical data and the MODIS active fire product MCD14ML. Using the fire radiative power (FRP), which is considered a better indicator of active fire level, as the spatial allocation surrogate, coarse gridded emissions were reallocated onto 3 km × 3 km grids to obtain a high-resolution emission inventory. Our results showed that emissions of CO, NOx, SO2, NH3, VOCs, PM2.5, PM10, BC and OC in mainland China were 6607, 427, 84, 79, 1262, 1198, 1222, 159 and 686 Gg/yr, respectively. Among all provinces of China, Henan, Shandong and Heilongjiang were the top three contributors to the total emissions. The developed high-resolution open biomass burning emission inventory could support air quality modeling and policy-making for pollution control.
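The abstract states that the crop-residue "best estimate" was obtained by averaging the statistical and MCD64A1-based activity data weighted by their uncertainties; the exact weighting formula is not given, so the sketch below assumes a standard inverse-variance weighting, with all numbers (activity levels, uncertainties, emission factor) as placeholders.

```python
import numpy as np

# Placeholder provincial crop-residue burning activity (Gg of biomass) and the
# assumed one-sigma uncertainties of the two independent sources.
stat_est  = np.array([120.0,  80.0, 200.0])   # from statistical yearbooks
stat_unc  = np.array([ 30.0,  25.0,  60.0])
modis_est = np.array([150.0,  60.0, 180.0])   # from the MCD64A1 burned-area product
modis_unc = np.array([ 20.0,  30.0,  40.0])

# Inverse-variance weights: the more certain source contributes more.
w_stat  = 1.0 / stat_unc**2
w_modis = 1.0 / modis_unc**2

best_est = (w_stat * stat_est + w_modis * modis_est) / (w_stat + w_modis)
best_unc = np.sqrt(1.0 / (w_stat + w_modis))

# Emission = activity x emission factor (g of pollutant per kg of biomass burned).
ef_pm25 = 8.3                                 # placeholder emission factor
pm25_emission = best_est * ef_pm25            # Gg biomass * g/kg = Mg of PM2.5
print(best_est, best_unc, pm25_emission)
```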
NASA Technical Reports Server (NTRS)
Jasperson, W. H.; Holdeman, J. D.
1984-01-01
Tabulations are given of GASP ambient ozone mean, standard deviation, median, 84th percentile, and 98th percentile values, by month, flight level, and geographical region. These data are tabulated to conform to the temporal and spatial resolution required by FAA Advisory Circular 120-38 (monthly by 2000 ft in altitude by 5 deg in latitude) for climatological data used to show compliance with cabin ozone regulations. In addition, seasonal by 10 deg latitude tabulations are included which are directly comparable to and supersede the interim GASP ambient ozone tabulations given in appendix B of FAA-EE-80-43 (NASA TM-81528). Selected probability variations are highlighted to illustrate the spatial and temporal variability of ambient ozone and to compare results from the coarse and fine grid analyses.
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
STOCK: Structure mapper and online coarse-graining kit for molecular simulations
Bevc, Staš; Junghans, Christoph; Praprotnik, Matej
2015-03-15
We present a web toolkit, STructure mapper and Online Coarse-graining Kit, for setting up coarse-grained molecular simulations. The kit consists of two tools: structure mapping and Boltzmann inversion tools. The aim of the first tool is to define a molecular mapping from high, e.g. all-atom, to low, i.e. coarse-grained, resolution. Using a graphical user interface it generates input files, which are compatible with standard coarse-graining packages, e.g. VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations preserving the structural properties, e.g. radial distribution functions, of the underlying higher resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.
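The Boltzmann inversion step behind the second tool can be sketched as follows: given a target radial distribution function g(r) from a higher-resolution simulation, a first-guess effective pair potential is U(r) = -kB T ln g(r). The g(r) used here is a synthetic placeholder, and a production workflow would add smoothing, extrapolation, and (for iterative Boltzmann inversion) repeated refinement against CG simulations.

```python
import numpy as np

kB = 0.0083145   # Boltzmann constant, kJ/(mol K)
T = 300.0        # temperature, K

# Placeholder radial distribution function g(r) from a finer-resolution run
# (any simulation package can supply it); a crude single-peak shape is used here.
r = np.linspace(0.2, 1.5, 131)                                    # nm
g = 1.8 * np.exp(-((r - 0.47) / 0.08) ** 2) + 1.0 - np.exp(-(r / 0.3) ** 4)

g = np.clip(g, 1e-8, None)       # avoid log(0) where g(r) vanishes
U = -kB * T * np.log(g)          # Boltzmann inversion, kJ/mol

U -= U[-1]                       # shift so the potential is zero at the cutoff
print(np.c_[r[:5], U[:5]])
```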
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast-Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
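The pseudo-spectral solver mentioned above can be illustrated with a minimal operator-splitting step for the modified diffusion equation ∂q/∂s = ∇²q − w(r) q on a uniform periodic grid (prefactors on the Laplacian absorbed into the units). The field w, box size, and contour step below are placeholders, and the sketch does not represent the Chombo/PETSc AMR machinery discussed in the abstract.

```python
import numpy as np

n = 64          # uniform periodic grid, n^3 collocation points
L = 10.0        # box edge length (placeholder units)
ds = 0.01       # chain-contour step

x = np.arange(n) * (L / n)
# Placeholder chemical-potential field w(r); in a real SCFT run it comes from the
# self-consistent iteration. Here it varies along x only.
w = 0.5 * np.cos(2 * np.pi * x / L)[:, None, None]

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
k2 = k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2

exp_w = np.exp(-0.5 * ds * w)    # half-step of the -w q term
exp_k2 = np.exp(-ds * k2)        # full step of the Laplacian term, exact in Fourier space

def step(q):
    """One pseudo-spectral split step of dq/ds = lap(q) - w q."""
    q = exp_w * q
    q = np.fft.ifftn(exp_k2 * np.fft.fftn(q)).real
    return exp_w * q

q = np.ones((n, n, n))           # chain propagator q(r, s = 0) = 1
for _ in range(100):             # propagate to s = 1
    q = step(q)

print(q.mean())                  # single-chain partition function Q
```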
DNS/LES Simulations of Separated Flows at High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Balakumar, P.
2015-01-01
Direct numerical simulations (DNS) and large-eddy simulations (LES) of flow through a periodic channel with a constriction are performed using the dynamic Smagorinsky model at two Reynolds numbers, 2800 and 10595. The LES equations are solved using higher order compact schemes. DNS are performed for the lower Reynolds number case using a fine grid and the data are used to validate the LES results obtained with a coarse and a medium size grid. LES are also performed for the higher Reynolds number case using a coarse and a medium size grid. The results are compared with an existing reference data set. The DNS and LES results agreed well with the reference data. Reynolds stresses, sub-grid eddy viscosity, and the budgets for the turbulent kinetic energy are also presented. It is found that the turbulent fluctuations in the normal and spanwise directions have the same magnitude. The turbulent kinetic energy budget shows that the production peaks near the separation point region and the production-to-dissipation ratio is very high, on the order of five, in this region. It is also observed that the production is balanced by the advection, diffusion, and dissipation in the shear layer region. The dominant term is the turbulent diffusion, which is about two times the molecular dissipation.
Modelling tidal current energy extraction in large area using a three-dimensional estuary model
NASA Astrophysics Data System (ADS)
Chen, Yaling; Lin, Binliang; Lin, Jie
2014-11-01
This paper presents a three-dimensional modelling study for simulating tidal current energy extraction in large areas, with a momentum sink term added to the momentum equations. Due to the limits of computational capacity, the grid size of the numerical model is generally much larger than the turbine rotor diameter. Two models, i.e. a local grid refinement model and a coarse grid model, are employed, and an idealized estuary is set up. The local grid refinement model is constructed to simulate the power generation of an isolated turbine and its impacts on hydrodynamics. The model is then used to determine the deployment of the turbine farm and to quantify a combined thrust coefficient for multiple turbines located in a grid element of the coarse grid model. The model results indicate that the performance of power extraction is affected by array deployment, with more power generation from outer rows than inner rows due to the velocity deficit caused by upstream turbines. Model results also demonstrate that the large-scale turbine farm has significant effects on the hydrodynamics. The tidal currents are attenuated within the turbine swept area, both upstream and downstream of the array, while the currents are accelerated above and below the turbines, which contributes to speeding up the wake mixing process behind the arrays. The water levels are raised at both low and high water as the turbine array spans the full width of the estuary. The magnitude of water level change is found to increase as the array expands, especially at low water.
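The momentum sink used in such studies is typically an actuator-disc style drag; the exact form adopted in this paper is not spelled out in the abstract, so the sketch below uses the common closure F = ½ ρ C_T A |U| U with a combined thrust coefficient lumping several turbines into one coarse grid element. All parameter values are placeholders.

```python
import numpy as np

rho = 1025.0        # seawater density, kg/m^3
Ct = 0.8            # single-turbine thrust coefficient (placeholder)
D = 20.0            # rotor diameter, m
A = np.pi * (D / 2.0) ** 2
n_turbines = 4      # turbines lumped into one coarse grid element

dx, dy, dz = 200.0, 200.0, 40.0   # coarse grid element dimensions, m
V_cell = dx * dy * dz

def momentum_sink(u, v):
    """Sink (m/s^2) added to the u- and v-momentum equations of the turbine cell.

    Uses a combined thrust coefficient for the n_turbines in the cell and
    distributes the resulting drag force over the cell volume.
    """
    Ct_comb = n_turbines * Ct
    speed = np.hypot(u, v)
    thrust = 0.5 * rho * Ct_comb * A * speed * np.array([u, v])   # N
    return -thrust / (rho * V_cell)                               # per unit mass

print(momentum_sink(2.0, 0.5))    # e.g. a 2 m/s flood current with weak cross-flow
```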
NASA Astrophysics Data System (ADS)
Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.
2018-04-01
We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (˜ 140 km) and a finer (˜ 50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (˜ 10 and ˜ 6 %, respectively). In contrast, simulated O3 concentrations at the coarse resolution are lower in summer and autumn (˜ -1 and ˜ -4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse compared to the finer resolution (˜ -8 and ˜ -6 %, respectively) but higher in summer and autumn (˜ 29 and ˜ 8 %, respectively). The PM2.5 differences are also mostly related to differences in convective rainfall between the two resolutions in all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country-to-country variations in estimated AF for the two resolutions. Warm season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe when simulated at the coarse resolution compared to the finer resolution. Annual-average PM2.5 concentrations are higher across most of northern and eastern Europe but lower over parts of southwest Europe at the coarse compared to the finer resolution. Across Europe, differences in the AF associated with long-term exposure to population-weighted MDA8 O3 range between -0.9 and +2.6 % (largest positive differences in southern Europe), while differences in the AF associated with long-term exposure to population-weighted annual mean PM2.5 range from -4.7 to +2.8 % (largest positive differences in eastern Europe) of the total mortality. Therefore, this study, with its unique focus on Europe, demonstrates that health impact assessments calculated using modelled pollutant concentrations are sensitive to a change in model resolution by up to ˜ ±5 % of the total mortality across Europe.
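The attributable-fraction step can be illustrated with the log-linear concentration-response form that is commonly used in this type of assessment, AF = 1 − exp(−β ΔC); the abstract does not state the risk coefficients or counterfactual levels actually used, so every number below is a placeholder rather than a value from the study.

```python
import numpy as np

# Log-linear concentration-response: relative risk RR = exp(beta * dC), where dC is
# the population-weighted exposure above a counterfactual level.
beta_o3   = np.log(1.014) / 10.0   # RR of 1.014 per 10 ppb MDA8 O3 (placeholder)
beta_pm25 = np.log(1.06) / 10.0    # RR of 1.06 per 10 ug/m3 PM2.5 (placeholder)

def attributable_fraction(conc, counterfactual, beta):
    """AF = 1 - 1/RR = 1 - exp(-beta * dC), with exposure floored at zero."""
    dC = np.maximum(conc - counterfactual, 0.0)
    return 1.0 - np.exp(-beta * dC)

# Population-weighted concentrations for one country at the two model resolutions.
o3_coarse, o3_fine = 58.0, 54.0    # warm-season mean MDA8 O3, ppb (placeholders)
pm_coarse, pm_fine = 14.0, 12.5    # annual mean PM2.5, ug/m3 (placeholders)

af_o3_diff = (attributable_fraction(o3_coarse, 35.0, beta_o3)
              - attributable_fraction(o3_fine, 35.0, beta_o3))
af_pm_diff = (attributable_fraction(pm_coarse, 0.0, beta_pm25)
              - attributable_fraction(pm_fine, 0.0, beta_pm25))

# Differences in AF between resolutions, as a percentage of total mortality.
print(af_o3_diff * 100.0, af_pm_diff * 100.0)
```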
Southern Ocean eddy compensation in a forced eddy-resolving GCM
NASA Astrophysics Data System (ADS)
Bruun Poulsen, Mads; Jochum, Markus; Eden, Carsten; Nuterman, Roman
2017-04-01
Contemporary eddy-resolving model studies have demonstrated that the common parameterisation of isopycnal mixing in the ocean is subject to limitations in the Southern Ocean where the mesoscale eddies are of leading order importance to the dynamics. We here present forced simulations from the Community Earth System Model on a global 1/10° and 1° horizontal grid, the latter employing an eddy parameterisation, where the strength of the zonal wind stress south of 25°S has been varied. With a 50% zonally symmetric increase of the wind stress, we show that the two models arrive at two radically different solutions in terms of the large-scale circulation, with an increase of the deep inflow of water to the Southern Ocean at 40°S by 50% in the high resolution model against 20% at coarse resolution. Together with a weaker vertical displacement of the pycnocline in the 1° model, these results suggest that the parameterised eddies have an overly strong compensating effect on the water mass transformation compared to the explicit eddies. Implications for eddy mixing parameterisations will be discussed.
Time-reversal transcranial ultrasound beam focusing using a k-space method
Jing, Yun; Meral, F. Can; Clement, Greg. T.
2012-01-01
This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The received data outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, by assuming quasi-plane transmission where shear waves are not present or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it will be shown that the k-space method is significantly more accurate even for a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as only 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
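The phase-conjugation (time-reversal) step itself is simple to sketch: the signals recorded outside the skull are reversed in time, which is equivalent to conjugating their spectra, and then re-emitted. The records below are synthetic placeholders; in the paper they would come from the k-space propagation of the numerical point source through the CT-derived skull model, which is not reproduced here.

```python
import numpy as np

fs = 10e6                           # sampling rate, Hz
t = np.arange(0, 80e-6, 1 / fs)     # 80-microsecond records

f0 = 0.5e6                          # 0.5 MHz tone-burst centre frequency
delays = np.array([21e-6, 23.5e-6, 22.2e-6, 24.1e-6])   # placeholder skull delays
amps   = np.array([1.0, 0.7, 0.85, 0.6])                # placeholder amplitudes

def burst(t0):
    """Gaussian-windowed tone burst arriving at time t0."""
    env = np.exp(-((t - t0) / 3e-6) ** 2)
    return env * np.sin(2 * np.pi * f0 * (t - t0))

# "Received" signals at four array elements from the synthetic point source.
received = amps[:, None] * np.vstack([burst(d) for d in delays])

# Time reversal: flip each record in time. Equivalently, phase-conjugate its
# spectrum: ifft(conj(fft(x))) gives the flipped signal (up to a one-sample roll).
transmit_td = received[:, ::-1]
transmit_fd = np.fft.ifft(np.conj(np.fft.fft(received, axis=1)), axis=1).real

print(np.max(np.abs(transmit_td - np.roll(transmit_fd, -1, axis=1))))   # ~ 0
```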
Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics
NASA Astrophysics Data System (ADS)
Schäfer, Benjamin; Beck, Christian; Aihara, Kazuyuki; Witthaut, Dirk; Timme, Marc
2018-02-01
Multiple types of fluctuations impact the collective dynamics of power grids and thus challenge their robust operation. Fluctuations result from processes as different as dynamically changing demands, energy trading and an increasing share of renewable power feed-in. Here we analyse principles underlying the dynamics and statistics of power grid frequency fluctuations. Considering frequency time series for a range of power grids, including grids in North America, Japan and Europe, we find a strong deviation from Gaussianity best described as Lévy-stable and q-Gaussian distributions. We present a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach that systematically interprets heavy tails and skewed distributions. We identify energy trading as a substantial contribution to today's frequency fluctuations and effective damping of the grid as a controlling factor enabling reduction of fluctuation risks, with enhanced effects for small power grids.
NASA Technical Reports Server (NTRS)
Ruffert, Maximilian; Arnett, David
1994-01-01
We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing spheres of varying sizes (from 10 down to 0.01 accretion radii) move at Mach 3 relative to a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (γ = 5/3). To accommodate the long-range gravitational forces, the extent of the computational volume is 32^3 accretion radii. We examine the influence of numerical procedure on physical behavior. The hydrodynamics is modeled by the 'piecewise parabolic method.' No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (5-10) grids around the sphere, each finer grid being a factor of 2 smaller in zone dimension than the next coarser grid. The largest dynamic range (ratio of size of the largest grid to size of the finest zone) is 16,384. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid, while at the same time evolving the gas on the coarser grids. Initially (at time t = 0-10), a shock front is set up, a Mach cone develops, and the accretion column is observable. Eventually the flow becomes unstable, destroying axisymmetry. This happens approximately when the mass accretion rate reaches the values (±10%) predicted by the Bondi-Hoyle accretion formula (factor of 2 included). However, our three-dimensional models do not show the highly dynamic flip-flop flow so prominent in two-dimensional calculations performed by other authors. The flow, and thus the accretion rate of all quantities, shows quasi-periodic (P ≈ 5) cycles between quiescent and active states. The interpolation formula proposed in an accompanying paper is found to follow the collected numerical data to within approximately 30%. The specific angular momentum accreted is of the same order of magnitude as the values previously found for two-dimensional flows.
A multi-resolution approach to electromagnetic modeling.
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-04-01
We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which results in similar accuracy to the staggered grid solution, while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
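A minimal sketch of the kind of data layout implied by "a vertical stack of sub-grids" is given below: each block is a standard Cartesian grid whose horizontal spacing coarsens with depth by a fixed refinement factor. The class names, layer counts, and spacings are illustrative only, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SubGrid:
    nx: int           # horizontal cells in x
    ny: int           # horizontal cells in y
    nz: int           # vertical layers in this sub-grid
    dx: float         # horizontal spacing, m (same in x and y here)
    dz: List[float]   # layer thicknesses, m

def build_stack(nx_surface, ny_surface, dx_surface, layers_per_block, n_blocks,
                factor=2):
    """Vertical stack of sub-grids; each deeper block is 'factor' times coarser."""
    stack, nx, ny, dx, dz0 = [], nx_surface, ny_surface, dx_surface, 50.0
    for b in range(n_blocks):
        dz = [dz0 * 1.2 ** (b * layers_per_block + k) for k in range(layers_per_block)]
        stack.append(SubGrid(nx, ny, layers_per_block, dx, dz))
        nx, ny, dx = nx // factor, ny // factor, dx * factor
    return stack

stack = build_stack(nx_surface=256, ny_surface=256, dx_surface=100.0,
                    layers_per_block=10, n_blocks=3)
multires_cells = sum(g.nx * g.ny * g.nz for g in stack)
uniform_cells = 256 * 256 * 30     # fully refined conventional staggered grid
print(multires_cells, uniform_cells)   # the stack needs far fewer cells
```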
Grid adaption using Chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
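The inter-grid communication step named above (trilinear interpolation at the boundary interfaces between the overset grids) can be sketched as follows, assuming a uniform donor grid; the donor field here is deliberately trilinear so that the interpolated value can be checked against the exact one.

```python
import numpy as np

def trilinear(field, origin, spacing, point):
    """Interpolate a donor-grid value to an arbitrary receptor point.

    field   : 3-D array of donor-grid values
    origin  : coordinates of field[0, 0, 0]
    spacing : (dx, dy, dz) of the uniform donor grid
    point   : (x, y, z) receptor location inside the donor grid
    """
    rel = (np.asarray(point) - np.asarray(origin)) / np.asarray(spacing)
    i, j, k = np.floor(rel).astype(int)
    tx, ty, tz = rel - np.array([i, j, k])        # local coordinates in [0, 1)

    c = field[i:i + 2, j:j + 2, k:k + 2]          # the 8 surrounding donor values
    wx = np.array([1.0 - tx, tx])
    wy = np.array([1.0 - ty, ty])
    wz = np.array([1.0 - tz, tz])
    return np.einsum("i,j,k,ijk->", wx, wy, wz, c)

# Donor grid sampling f(x, y, z) = x + 2y + 3z, for which trilinear interpolation is exact.
x = np.linspace(0.0, 1.0, 11)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
f = X + 2.0 * Y + 3.0 * Z

val = trilinear(f, origin=(0.0, 0.0, 0.0), spacing=(0.1, 0.1, 0.1), point=(0.33, 0.47, 0.81))
print(val, 0.33 + 2.0 * 0.47 + 3.0 * 0.81)        # the two values agree
```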
Grid adaptation using chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
The relative entropy is fundamental to adaptive resolution simulations
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Potestio, Raffaello
2016-07-01
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Stefanova, Lydia B.; Chan, Steven C.; Schubert, Siegfried D.; O'Brien, James J.
2010-01-01
This study assesses the regional-scale summer precipitation produced by the dynamical downscaling of analyzed large-scale fields. The main goal of this study is to investigate how much the regional model adds smaller scale precipitation information that the large-scale fields do not resolve. The modeling region for this study covers the southeastern United States (Florida, Georgia, Alabama, South Carolina, and North Carolina) where the summer climate is subtropical in nature, with a heavy influence of regional-scale convection. The coarse resolution (2.5° latitude/longitude) large-scale atmospheric variables from the National Center for Environmental Prediction (NCEP)/DOE reanalysis (R2) are downscaled using the NCEP Environmental Climate Prediction Center regional spectral model (RSM) to produce precipitation at 20 km resolution for 16 summer seasons (1990-2005). The RSM produces realistic details in the regional summer precipitation at 20 km resolution. Compared to R2, the RSM-produced monthly precipitation shows better agreement with observations. There is a reduced wet bias and a more realistic spatial pattern of the precipitation climatology compared with the interpolated R2 values. The root mean square errors of the monthly R2 precipitation are reduced over 93% (1,697) of all the grid points in the five states (1,821). The temporal correlation also improves over 92% (1,675) of all grid points such that the domain-averaged correlation increases from 0.38 (R2) to 0.55 (RSM). The RSM accurately reproduces the first two observed eigenmodes, compared with the R2 product for which the second mode is not properly reproduced. The spatial patterns for wet versus dry summer years are also successfully simulated in RSM. For shorter time scales, the RSM resolves heavy rainfall events and their frequency better than R2. Correlation and categorical classification (above/near/below average) for the monthly frequency of heavy precipitation days is also significantly improved by the RSM.
NASA Astrophysics Data System (ADS)
Petersson, Anders; Rodgers, Arthur
2010-05-01
The finite difference method on a uniform Cartesian grid is a highly efficient and easy to implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wave length L and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wave lengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wave lengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface, to keep the local grid size in approximate parity with the local wave lengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy conserving, coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user-provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer over half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion DOF) and 41,000 time steps. WPP is an open source code that is available under the GNU General Public License.
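A back-of-the-envelope sketch of the h ≤ L/P budget behind the composite grid argument above, with assumed (not quoted) wave speeds and frequency:

```python
def spacing_and_density(v_min_km_s, f_max_hz, ppw):
    """Grid spacing from h <= L/P with L = v_min / f_max, plus points per cubic km."""
    wavelength_km = v_min_km_s / f_max_hz
    h_km = wavelength_km / ppw
    return h_km, 1.0 / h_km ** 3

# Assumed example values: 0.5 km/s shear speed in a shallow basin vs 4.5 km/s below the Moho,
# 2 Hz maximum frequency, 8 points per wavelength.
h_shallow, dens_shallow = spacing_and_density(0.5, 2.0, 8)
h_deep, dens_deep = spacing_and_density(4.5, 2.0, 8)
# A uniform grid must use h_shallow everywhere, so dens_shallow / dens_deep (= 9**3 = 729 here)
# bounds the point-count saving that a depth-coarsened composite grid can recover.
```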
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
FV-MHMM: A Discussion on Weighting Schemes.
NASA Astrophysics Data System (ADS)
Franc, J.; Gerald, D.; Jeannin, L.; Egermann, P.; Masson, R.
2016-12-01
Upscaling or homogenization techniques consist in finding block-equivalent or equivalent upscaled properties on a coarse grid from heterogeneous properties defined on an underlying fine grid. However, this can become costly and resource consuming. Harder et al. (2013) developed a Multiscale Hybrid-Mixed Method (MHMM) of upscaling to treat Darcy-type equations on heterogeneous fields formulated using a finite element method. Recently, Franc et al. (2016) extended this method of upscaling to a finite volume formulation (FV-MHMM). Although convergence under refinement of the Lagrange multiplier space has been observed, numerical artefacts can occur in which the flow is numerically trapped in regions of low permeability. This work will present the development of the method along with the results obtained from its classical formulation. Then, two weighting schemes and their benefits for the FV-MHMM method will be presented in some simple random permeability cases. The next example will involve a larger heterogeneous 2D permeability field extracted from the 10th SPE test case. Eventually, multiphase flow will be addressed as an extension of this single-phase flow method. An elliptic pressure equation solved on the coarse grid via FV-MHMM will be sequentially coupled with a hyperbolic saturation equation on the fine grid. The improved accuracy thanks to the weighting scheme will be measured against a finite volume fine-grid solution. References: Harder, C., Paredes, D. and Valentin, F., A family of multiscale hybrid-mixed finite element methods for the Darcy equation with rough coefficients, Journal of Computational Physics, 2013. Franc, J., Debenest, G., Jeannin, L., Egermann, P. and Masson, R., FV-MHMM for reservoir modelling, ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery, 2015.
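For contrast with the FV-MHMM approach, a minimal sketch of the classical single-block averaging rules often used as upscaling baselines (arithmetic and harmonic means bound the equivalent permeability; the geometric mean is a common compromise). This is illustrative only and is not the FV-MHMM algorithm:

```python
import numpy as np

def upscale_permeability(k_fine, block):
    """Block-wise averaging of a 2-D fine-grid permeability field.
    Requires both dimensions of k_fine to be divisible by the coarsening factor `block`."""
    ny, nx = k_fine.shape
    kb = k_fine.reshape(ny // block, block, nx // block, block)
    arithmetic = kb.mean(axis=(1, 3))                 # upper bound (flow along layers)
    harmonic = 1.0 / (1.0 / kb).mean(axis=(1, 3))     # lower bound (flow across layers)
    geometric = np.exp(np.log(kb).mean(axis=(1, 3)))  # common choice for log-normal fields
    return arithmetic, harmonic, geometric
```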
Simulation of the Atmospheric Boundary Layer for Wind Energy Applications
NASA Astrophysics Data System (ADS)
Marjanovic, Nikola
Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale while local forcing manifests itself on a much smaller scale and could result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components, respectively. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally-driven weather events. The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. 
Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk resulting from the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m) as it does not distribute the forces from the blades over annular elements, but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization from two field campaigns that measured the inflow and near-wake regions of a single turbine. The data-sets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near wake physics (such as vorticity shedding and wake expansion) for high fidelity applications. (Abstract shortened by ProQuest.).
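A toy stand-in for the annular-ring force distribution idea above, using uniform loading from a prescribed thrust coefficient. The actual GAD/GAL schemes compute blade-element lift and drag along each blade; the values and simplifications below are assumptions for illustration only:

```python
import numpy as np

def actuator_disk_ring_forces(u_inf, rotor_radius, ct=0.77, rho=1.225, n_rings=20):
    """Total rotor thrust from a thrust coefficient, spread over annular rings in
    proportion to ring area (uniform loading). Returns the axial force per ring in N."""
    thrust = 0.5 * rho * ct * np.pi * rotor_radius ** 2 * u_inf ** 2
    r_edges = np.linspace(0.0, rotor_radius, n_rings + 1)
    ring_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    return thrust * ring_area / ring_area.sum()
```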
Pilot-in-the-Loop CFD Method Development
2015-02-01
expensive alternatives [1]. ALM represents the blades as a set of segments along each blade axis and the ADM represents the entire rotor as... fine grid, Δx = 1.00 m. Figure 4 – Time-averaged vertical velocity distributions on downwash and rotor disk plane for hybrid and loose coupling... cases with fine and coarse grid refinement levels. Figure 4 shows the time-averaged distributions of vertical velocities on both downwash and rotor disk
NASA Astrophysics Data System (ADS)
Gailler, Audrey; Hébert, Hélène; Loevenbruck, Anne
2013-04-01
Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle for the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for the precise modeling of the coastline response on the scale of an individual harbor. In fact, when facing the problem of the interaction of the tsunami wavefield with a shoreline, any numerical simulation must be performed over an increasingly fine grid, which in turn mandates a reduced time step, and the use of a fully non-linear code. Such calculations then become prohibitively time-consuming, which is clearly unacceptable in the framework of real-time warning. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami wave heights in high seas, and tsunami warning maps at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these deep-water wave height simulations. The method involves an empirical correction relation derived from Green's law, expressing conservation of wave energy flux, to extend the gridded wave field into the harbor with respect to the nearby deep-water grid node. The main limitation of this method is that its application to a given coastal area would require a large database of previous observations, in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, a set of synthetic mareograms is calculated for both hypothetical and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling by using several nested bathymetric grids characterized by a coarse resolution over deep water regions and an increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). This synthetic dataset is then used to approximate the empirical parameters of the correction equation. Results of inundation estimates in several French Mediterranean harbors obtained with the fast "Green's law-derived" method are presented and compared with values given by time-consuming nested grid simulations.
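A minimal sketch of the Green's-law shoaling estimate that underlies the correction, with an empirical factor `alpha` standing in, as a hypothetical placeholder, for the site-specific calibration described above:

```python
def green_law_amplitude(a_deep, h_deep, h_coast, alpha=1.0):
    """Shoaling estimate from Green's law (conservation of wave energy flux):
    A_coast = alpha * A_deep * (h_deep / h_coast) ** 0.25,
    where alpha is a site-specific empirical correction factor (placeholder here)."""
    return alpha * a_deep * (h_deep / h_coast) ** 0.25

# Example: a 0.2 m offshore amplitude over 2500 m depth mapped to a 10 m deep harbor
# entrance gives roughly 0.2 * (2500 / 10) ** 0.25, about 0.8 m, before empirical correction.
```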
A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.
2000-01-01
The development of, and results obtained with, a variable-resolution stretched-grid GCM for the regional climate simulation mode are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns, and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides an efficient downscaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation when extended to one year. The stretched-grid GCM, developed and tested in a simulation mode, is a viable tool for regional and subregional climate studies and applications.
Shen, Lin; Yang, Weitao
2016-04-12
We developed a new multiresolution method that spans three levels of resolution with quantum mechanical, atomistic molecular mechanical, and coarse-grained models. The resolution-adapted all-atom and coarse-grained water model, in which an all-atom structural description of the entire system is maintained during the simulations, is combined with the ab initio quantum mechanics and molecular mechanics method. We apply this model to calculate the redox potentials of the aqueous ruthenium and iron complexes by using the fractional number of electrons approach and thermodynamic integration simulations. The redox potentials are recovered in excellent agreement with the experimental data. The speed-up of the hybrid all-atom and coarse-grained water model renders it computationally more attractive. The accuracy depends on the hybrid all-atom and coarse-grained water model used in the combined quantum mechanical and molecular mechanical method. We have used another multiresolution model, in which an atomic-level layer of water molecules around the redox center is solvated in supramolecular coarse-grained waters, for the redox potential calculations. Compared with the experimental data, this alternative multilayer model leads to less accurate results when used with the coarse-grained polarizable MARTINI water or big multipole water model for the coarse-grained layer.
Unstructured grid research and use at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Potapczuk, Mark G.
1993-01-01
Computational fluid dynamics applications of grid research at LRC include inlets, nozzles, and ducts; turbomachinery; propellers - ducted and unducted; and aircraft icing. Some issues related to internal flow grid generation are resolution requirements on several boundaries, shock resolution vs. grid periodicity, grid spacing at blade/shroud gap, grid generation in turbine blade passages, and grid generation for inlet/nozzle geometries. Aircraft icing grid generation issues include (1) small structures relative to airfoil chord must be resolved; (2) excessive number of grid points in far-field using structured grid; and (3) grid must be recreated as ice shape grows.
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
Large-Eddy Simulation of Aeroacoustic Applications
NASA Technical Reports Server (NTRS)
Pruett, C. David; Sochacki, James S.
1999-01-01
This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no cost until October 6, 1999. This grant and its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES approaches incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.
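A minimal sketch of a causal exponential time-domain filter of the kind alluded to above; the actual temporally filtered LES formulation differs in detail, and `delta` (the filter width) is an assumed parameter:

```python
import numpy as np

def exponential_time_filter(phi, dt, delta):
    """Causal exponential time filter d(phi_bar)/dt = (phi - phi_bar) / delta,
    discretized with forward Euler; phi is a time series sampled every dt seconds."""
    phi_bar = np.empty_like(phi, dtype=float)
    phi_bar[0] = phi[0]
    a = dt / delta
    for n in range(1, len(phi)):
        phi_bar[n] = phi_bar[n - 1] + a * (phi[n] - phi_bar[n - 1])
    return phi_bar
```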
AWIPS grid 212: Regional CONUS Double Resolution (Lambert Conformal, 40 km), NEMS Non-hydrostatic Multiscale Model on the B grid; AWIPS grid 132: Double Resolution (Lambert Conformal, 16 km), NEMS Non-hydrostatic Multiscale Model on the B grid.
Scalability and performance of data-parallel pressure-based multigrid methods for viscous flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blosch, E.L.; Shyy, W.
1996-05-01
A full-approximation storage multigrid method for solving the steady-state 2-d incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 x 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 x 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature. 62 refs., 13 figs.
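To make the V-cycle mechanics concrete, here is a self-contained sketch for a 1-D Poisson problem with weighted-Jacobi smoothing; it illustrates the generic restriction/prolongation/coarse-correction pattern only, not the pressure-correction smoother used in the study:

```python
import numpy as np

def smooth(u, f, h, iters=3, omega=2.0 / 3.0):
    # Weighted-Jacobi sweeps for -u'' = f with homogeneous Dirichlet boundaries.
    for _ in range(iters):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full weighting onto a grid with half as many intervals.
    return np.concatenate(([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

def prolong(ec):
    # Linear interpolation back to the fine grid.
    ef = np.zeros(2 * (len(ec) - 1) + 1)
    ef[::2] = ec
    ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return ef

def v_cycle(u, f, h, levels):
    if levels == 1:
        return smooth(u, f, h, iters=50)   # "solve" the coarsest level by heavy smoothing
    u = smooth(u, f, h)                    # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros((len(u) - 1) // 2 + 1), rc, 2 * h, levels - 1)
    u += prolong(ec)                       # coarse-grid correction
    return smooth(u, f, h)                 # post-smoothing

# Example: -u'' = pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x).
n = 128
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n, levels=5)
```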
NASA Astrophysics Data System (ADS)
Putman, W. M.; Suarez, M.
2009-12-01
The Goddard Earth Observing System Model (GEOS-5), an earth system model developed in the NASA Global Modeling and Assimilation Office (GMAO), has integrated the non-hydrostatic finite-volume dynamical core on the cubed-sphere grid. The extension to a non-hydrostatic dynamical framework and the quasi-uniform cubed-sphere geometry permit the efficient exploration of global weather and climate modeling at cloud-permitting resolutions of 10- to 4-km on today's high performance computing platforms. We have explored a series of incremental increases in global resolution with GEOS-5 from its standard 72-level 27-km resolution (~5.5 million cells covering the globe from the surface to 0.1 hPa) down to 3.5-km (~3.6 billion cells). We will present results from a series of forecast experiments exploring the impact of the non-hydrostatic dynamics at transition resolutions of 14- to 7-km, and the influence of increased horizontal/vertical resolution on convection and physical parameterizations within GEOS-5. Regional and mesoscale features of 5- to 10-day weather forecasts will be presented and compared with satellite observations. Our results will highlight the impact of resolution on the structure of cloud features including tropical convection and tropical cyclone predictability, cloud streets, von Karman vortices, and the marine stratocumulus cloud layer. We will also present experiment design and early results from climate impact experiments for global non-hydrostatic models using GEOS-5. Our climate experiments will focus on support for the Year of Tropical Convection (YOTC). We will also discuss a seasonal climate time-slice experiment design for downscaling coarse-resolution, century-scale climate simulations to global non-hydrostatic resolutions of 14- to 7-km with GEOS-5.
A CPT for Improving Turbulence and Cloud Processes in the NCEP Global Models
NASA Astrophysics Data System (ADS)
Krueger, S. K.; Moorthi, S.; Randall, D. A.; Pincus, R.; Bogenschutz, P.; Belochitski, A.; Chikira, M.; Dazlich, D. A.; Swales, D. J.; Thakur, P. K.; Yang, F.; Cheng, A.
2016-12-01
Our Climate Process Team (CPT) is based on the premise that the NCEP (National Centers for Environmental Prediction) global models can be improved by installing an integrated, self-consistent description of turbulence, clouds, deep convection, and the interactions between clouds and radiative and microphysical processes. The goal of our CPT is to unify the representation of turbulence and subgrid-scale (SGS) cloud processes and to unify the representation of SGS deep convective precipitation and grid-scale precipitation as the horizontal resolution decreases. We aim to improve the representation of small-scale phenomena by implementing a PDF-based SGS turbulence and cloudiness scheme that replaces the boundary layer turbulence scheme, the shallow convection scheme, and the cloud fraction schemes in the GFS (Global Forecast System) and CFS (Climate Forecast System) global models. We intend to improve the treatment of deep convection by introducing a unified parameterization that scales continuously between the simulation of individual clouds when and where the grid spacing is sufficiently fine and the behavior of a conventional parameterization of deep convection when and where the grid spacing is coarse. We will endeavor to improve the representation of the interactions of clouds, radiation, and microphysics in the GFS/CFS by using the additional information provided by the PDF-based SGS cloud scheme. The team is evaluating the impacts of the model upgrades with metrics used by the NCEP short-range and seasonal forecast operations.
Large-eddy simulations with wall models
NASA Technical Reports Server (NTRS)
Cabot, W.
1995-01-01
The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
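A minimal sketch of the equilibrium log-law wall model mentioned above: given one velocity sample in the logarithmic layer, iterate for the friction velocity and return the wall stress. The von Karman constant and log-law intercept are the usual assumed values, not quantities from this report:

```python
import numpy as np

def log_law_wall_stress(u_sample, y, nu, rho, kappa=0.41, b_const=5.2, iters=30):
    """Friction velocity and wall shear stress from one velocity sample in the log layer,
    by fixed-point iteration of u_tau = U / (ln(y * u_tau / nu) / kappa + B)."""
    u_tau = 0.05 * u_sample                 # rough initial guess
    for _ in range(iters):
        u_tau = u_sample / (np.log(y * u_tau / nu) / kappa + b_const)
    return u_tau, rho * u_tau ** 2          # (friction velocity, wall stress)
```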
Running GCM physics and dynamics on different grids: Algorithm and tests
NASA Astrophysics Data System (ADS)
Molod, A.
2006-12-01
The major drawback in the use of sigma coordinates in atmospheric GCMs, namely the error in the pressure gradient term near sloping terrain, leaves the use of eta coordinates an important alternative. A central disadvantage of an eta coordinate, the inability to retain fine resolution in the vertical as the surface rises above sea level, is addressed here. An `alternate grid' technique is presented which allows the tendencies of state variables due to the physical parameterizations to be computed on a vertical grid (the `physics grid') which retains fine resolution near the surface, while the remaining terms in the equations of motion are computed using an eta coordinate (the `dynamics grid') with coarser vertical resolution. As a simple test of the technique a set of perpetual equinox experiments using a simplified lower boundary condition with no land and no topography were performed. The results show that for both low and high resolution alternate grid experiments, much of the benefit of increased vertical resolution for the near surface meridional wind (and mass streamfield) can be realized by enhancing the vertical resolution of the `physics grid' in the manner described here. In addition, approximately half of the increase in zonal jet strength seen with increased vertical resolution can be realized using the `alternate grid' technique. A pair of full GCM experiments with realistic lower boundary conditions and topography were also performed. It is concluded that the use of the `alternate grid' approach offers a promising way forward to alleviate a central problem associated with the use of the eta coordinate in atmospheric GCMs.
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths from both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that the conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
NASA Astrophysics Data System (ADS)
He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry
2008-04-01
The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
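A compact sketch of the coarse-to-fine median idea (register, upsample, take the pixel-wise median); scipy's cubic-spline resampling stands in for bicubic interpolation, and the subpixel shifts are assumed to be known from a separate registration step:

```python
import numpy as np
from scipy import ndimage

def coarse_to_fine_sr(frames, shifts, scale=2):
    """frames: list of low-resolution 2-D arrays; shifts: per-frame (dy, dx) offsets
    relative to the reference frame (assumed known); scale: integer upsampling factor."""
    upsampled = []
    for frame, (dy, dx) in zip(frames, shifts):
        aligned = ndimage.shift(frame, (dy, dx), order=3)        # register to the reference
        upsampled.append(ndimage.zoom(aligned, scale, order=3))  # cubic-spline upsampling
    return np.median(np.stack(upsampled), axis=0)                # pixel-wise median
```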
Influence of topographic heterogeneity on the abundance of larch forest in eastern Siberia
NASA Astrophysics Data System (ADS)
Sato, H.; Kobayashi, H.
2016-12-01
In eastern Siberia, larches (Larix spp.) often exist in pure stands, forming the world's largest coniferous forest, changes in which can significantly affect the earth's albedo and the global carbon balance. We have conducted simulation studies of this vegetation, aiming to forecast its structure and function under changing climate (1, 2). In previous studies simulating vegetation at large geographical scales, the examined area is divided into coarse grid cells, such as 0.5 * 0.5 degree resolution, and topographical heterogeneities within each grid cell are simply ignored. However, in the Siberian larch area, which is located on the environmental edge of forest existence, the abundance of larch trees largely depends on topographic conditions at the scale of tens to hundreds of meters. We therefore analyzed patterns of within-grid-scale heterogeneity of larch LAI as a function of topographic condition, and examined its underlying cause. For this analysis, larch LAI was estimated at 1/112 degree resolution from the SPOT-VEGETATION data, and topographic properties such as angularity and aspect direction were estimated from the ASTER-GDEM data. Through this analysis, we found, for example, that the sign of the correlation between angularity and larch LAI depends on the hydrological condition of the grid cell. We then refined the hydrological sub-model of our vegetation model SEIB-DGVM, validated whether the modified model can reconstruct these patterns, and examined its impact on the estimation of biomass and vegetation productivity of the entire larch region. References: 1. Sato, H., et al. (2010). "Simulation study of the vegetation structure and function in eastern Siberian larch forests using the individual-based vegetation model SEIB-DGVM." Forest Ecology and Management 259(3): 301-311. 2. Sato, H., et al. (2016). "Endurance of larch forest ecosystems in eastern Siberia under warming trends." Ecology and Evolution.
Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities
NASA Astrophysics Data System (ADS)
Doster, F.; Celia, M. A.; Nordbotten, J. M.
2012-12-01
Many applications of multiphase flow in porous media, including CO2-storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities is dependent on the vertical length scale. By studying the transition from the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents. Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
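A minimal sketch of the hydrostatic (vertical-equilibrium) saturation reconstruction described above, assuming a Brooks-Corey capillary pressure curve with hypothetical parameters rather than anything calibrated in the study:

```python
import numpy as np

def reconstruct_saturation(z, z_interface, delta_rho, g=9.81,
                           p_entry=5.0e3, lam=2.0, s_wr=0.2):
    """Wetting-phase saturation profile under hydrostatic (vertical) equilibrium,
    using a Brooks-Corey capillary pressure curve; z is elevation within the cell."""
    pc = np.maximum(delta_rho * g * (z - z_interface), 0.0)  # capillary pressure above interface
    se = np.where(pc > p_entry, (p_entry / np.maximum(pc, p_entry)) ** lam, 1.0)
    return s_wr + (1.0 - s_wr) * se
```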
A Note on Multigrid Theory for Non-nested Grids and/or Quadrature
NASA Technical Reports Server (NTRS)
Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.
1996-01-01
We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.
Operator induced multigrid algorithms using semirefinement
NASA Technical Reports Server (NTRS)
Decker, Naomi; Vanrosendale, John
1989-01-01
A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.
Operational forecasting with the subgrid technique on the Elbe Estuary
NASA Astrophysics Data System (ADS)
Sehili, Aissa
2017-04-01
Modern remote sensing technologies can deliver very detailed land surface height data that should be considered for more accurate simulations. In that case, and even if some compromise is made with regard to grid resolution of an unstructured grid, simulations will still require large grids, which can be computationally very demanding. The subgrid technique, first published by Casulli (2009), is based on the idea of making use of the available detailed subgrid bathymetric information while performing computations on relatively coarse grids permitting large time steps. Consequently, accuracy and efficiency are drastically enhanced if compared to the classical linear method, where the underlying bathymetry is solely discretized by the computational grid. The algorithm guarantees rigorous mass conservation and nonnegative water depths for any time step size. Computational grid-cells are permitted to be wet, partially wet or dry and no drying threshold is needed. The subgrid technique is used in an operational forecast model for water level, current velocity, salinity and temperature of the Elbe estuary in Germany. Comparison is performed with the comparatively highly resolved classical unstructured grid model UnTRIM. The daily meteorological forcing data are delivered by the German Weather Service (DWD) using the ICON-EU model. Open boundary data are delivered by the coastal model BSHcmod of the German Federal Maritime and Hydrographic Agency (BSH). Comparison of predicted water levels between the classical and subgrid models shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out within less than 10 minutes on standard PC-like hardware. The model is capable of permanently delivering highly resolved temporal and spatial information on water level, current velocity, salinity and temperature for the whole estuary. The model also offers the possibility to recalculate any previous situation. This can be helpful, for instance, to reconstruct the context in which a certain event, such as an accident, occurred. In addition to measurement, the model can be used to improve navigability by adjusting the tidal transit-schedule for container vessels that depend on the tide to approach or leave the port of Hamburg.
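The core of the subgrid idea is the nonlinear relation between a coarse cell's water level and its wet volume, accumulated from the fine-scale bathymetry; a minimal sketch (not the UnTRIM or Casulli implementation) is:

```python
import numpy as np

def coarse_cell_wet_volume(eta, zb_fine, da_fine):
    """Wet volume of one coarse cell at water level eta, accumulated over the subgrid
    pixels with bed elevations zb_fine and areas da_fine; partially wet cells fall out
    naturally and no drying threshold is needed."""
    depth = np.maximum(eta - zb_fine, 0.0)
    return np.sum(depth * da_fine)
```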
A multi-resolution approach to electromagnetic modelling
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-07-01
We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in similar accuracy as the staggered grid solution, while retaining the symmetry of coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that multi-resolution approach improves on computational efficiency without compromising the accuracy of the solution.
A fast dynamic grid adaption scheme for meteorological flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, B.H.; Trapp, R.J.
1993-10-01
The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the gridpoint velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for gridpoint coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 compared with those of a companion model with a fixed, uniform Cartesian grid. 8 refs., 8 figs.
A Semi-Structured MODFLOW-USG Model to Evaluate Local Water Sources to Wells for Decision Support.
Feinstein, Daniel T; Fienen, Michael N; Reeves, Howard W; Langevin, Christian D
2016-07-01
In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A "semi-structured" approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area, see Fienen et al. (2015a). Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Development and Application of Agglomerated Multigrid Methods for Complex Geometries
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2010-01-01
We report progress in the development of agglomerated multigrid techniques for fully unstructured grids in three dimensions, building upon two previous studies focused on efficiently solving a model diffusion equation. We demonstrate a robust fully-coarsened agglomerated multigrid technique for 3D complex geometries, incorporating the following key developments: consistent and stable coarse-grid discretizations, a hierarchical agglomeration scheme, and line-agglomeration/relaxation using prismatic-cell discretizations in the highly-stretched grid regions. A significant speed-up in computer time is demonstrated for a model diffusion problem, the Euler equations, and the Reynolds-averaged Navier-Stokes equations for 3D realistic complex geometries.
Fast and accurate grid representations for atom-based docking with partner flexibility.
de Vries, Sjoerd J; Zacharias, Martin
2017-06-30
Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
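A minimal sketch of the precalculated-grid idea: evaluate a potential stored on a regular grid at each atom position by trilinear interpolation and accumulate the energy. This is illustrative only (no neighbor lists, no bounds checking) and is not ATTRACT's implementation:

```python
import numpy as np

def grid_energy(coords, charges, potential, origin, spacing):
    """Sum of q_i * phi(r_i) with phi stored on a regular grid of uniform spacing,
    evaluated at each atom position by trilinear interpolation."""
    e = 0.0
    for r, q in zip(coords, charges):
        f = (np.asarray(r) - origin) / spacing      # fractional grid coordinates
        i0 = np.floor(f).astype(int)
        t = f - i0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((1 - t[0]) if dx == 0 else t[0]) * \
                        ((1 - t[1]) if dy == 0 else t[1]) * \
                        ((1 - t[2]) if dz == 0 else t[2])
                    e += q * w * potential[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return e
```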
NASA Technical Reports Server (NTRS)
Myneni, Ranga
2003-01-01
The problem of how the scale, or spatial resolution, of reflectance data impacts retrievals of vegetation leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) has been investigated. We define the goal of scaling as the process by which it is established that LAI and FPAR values derived from coarse resolution sensor data equal the arithmetic average of values derived independently from fine resolution sensor data. The increasing probability of land cover mixtures with decreasing resolution is defined as heterogeneity, which is a key concept in scaling studies. The effect of pixel heterogeneity on spectral reflectances and LAI/FPAR retrievals is investigated with 1 km Advanced Very High Resolution Radiometer (AVHRR) data aggregated to different coarse spatial resolutions. It is shown that LAI retrieval errors at coarse resolution are inversely related to the proportion of the dominant land cover in such pixels. Further, larger errors in LAI retrievals are incurred when forests are a minority biome in non-forest pixels than when forest biomes are mixed with one another, and vice versa. A physically based technique for scaling with an explicit spatial-resolution-dependent radiative transfer formulation is developed. The successful application of this theory to scaling LAI retrievals from AVHRR data of different resolutions is demonstrated.
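A minimal numerical sketch of this scaling definition (not the physically based radiative-transfer retrieval described above) follows: fine-resolution LAI is box-car averaged to a coarse grid, a naive coarse retrieval that assumes a pure dominant cover is compared against that benchmark, and the retrieval error is related to the dominant-cover proportion; the cover fractions and LAI values are invented.

```python
import numpy as np

def aggregate(field, factor):
    """Box-car average a fine-resolution field to a coarser resolution."""
    n = field.shape[0]
    return field.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

# Hypothetical 1 km LAI retrievals over a 64 x 64 km scene with two cover types.
rng = np.random.default_rng(2)
cover = rng.random((64, 64)) < 0.7                    # True = forest, False = non-forest
lai_fine = np.where(cover, 4.0, 1.0) + rng.normal(0.0, 0.2, (64, 64))

factor = 8                                            # aggregate 1 km -> 8 km pixels
lai_benchmark = aggregate(lai_fine, factor)           # average of fine-resolution values
forest_frac = aggregate(cover.astype(float), factor)
dominant_fraction = np.maximum(forest_frac, 1.0 - forest_frac)

# A naive coarse retrieval that treats each 8 km pixel as purely its dominant cover.
lai_coarse = np.where(forest_frac >= 0.5, 4.0, 1.0)
error = np.abs(lai_coarse - lai_benchmark)

# Errors shrink as the dominant-cover proportion grows (strongly negative correlation).
print(np.corrcoef(dominant_fraction.ravel(), error.ravel())[0, 1])
```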
NASA Astrophysics Data System (ADS)
Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng
2018-02-01
Because the signature of islanding is easily confused with grid disturbances, an islanding detection device may make misjudgments that take photovoltaic systems out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing step after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the intrinsic features that differ between islanding and grid disturbance can be extracted. With the features extracted, deep learning is used to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that photovoltaic systems can avoid being mistakenly disconnected from the grid.
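As a hedged sketch of the singular spectrum entropy feature (the wavelet stage is omitted here; in the paper the entropy is computed per wavelet sub-band), the snippet below builds a trajectory matrix from a waveform segment and takes the Shannon entropy of its normalized singular values; the sampling rate, embedding dimension, and test waveforms are invented.

```python
import numpy as np

def singular_spectrum_entropy(signal, embed_dim=20):
    """Shannon entropy of the normalized singular spectrum of the signal's
    trajectory (Hankel) matrix; a compact measure of signal complexity."""
    x = np.asarray(signal, dtype=float)
    traj = np.lib.stride_tricks.sliding_window_view(x, embed_dim)
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy comparison: a clean 50 Hz waveform (grid-like) versus one with an abrupt
# frequency shift (islanding-like); the entropy of the latter is higher.
fs = 3200
t = np.arange(0.0, 0.2, 1.0 / fs)
normal = np.sin(2 * np.pi * 50 * t)
island = np.where(t < 0.1, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 48 * t + 0.3))
print(singular_spectrum_entropy(normal), singular_spectrum_entropy(island))
```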
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used, a coarse one of 55 by 60 grid points with 30 km spacing and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding parameterization scheme performance, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region or for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.
Simulation of Anomalous Regional Climate Events with a Variable Resolution Stretched Grid GCM
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.
1999-01-01
The stretched-grid approach provides efficient down-scaling and consistent interactions between global and regional scales by using one variable-resolution model for the integrations. It is a workable alternative to the widely used nested-grid approach, introduced over a decade ago as a pioneering step in regional climate modeling. A variable-resolution General Circulation Model (GCM) employing a stretched grid, with enhanced resolution over the US as the area of interest, is used for simulating two anomalous regional climate events, the US summer drought of 1988 and flood of 1993. A special mode of integration using the stretched-grid GCM and data assimilation system is developed that allows the nested-grid framework to be imitated. The mode is useful for inter-comparison purposes and for highlighting the differences between the two approaches. The 1988 and 1993 integrations are performed for the two-month period starting from mid May. The regional resolution used in most of the experiments is 60 km. The major goal and result of the study is efficient down-scaling over the area of interest. The monthly mean prognostic regional fields for the stretched-grid integrations are remarkably close to those of the verifying analyses. Simulated precipitation patterns are successfully verified against gauge precipitation observations. The impact of a finer 40 km regional resolution is investigated for the 1993 integration, and an example of recovering subregional precipitation is presented. The results show that the global variable-resolution stretched-grid approach is a viable candidate for regional and subregional climate studies and applications.
NASA Astrophysics Data System (ADS)
Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.
2017-10-01
A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor of √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of Nv^3 points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of Nθ^3 points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192^3 (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384^3 (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096^3 shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes. With the grid ratio Nθ/Nv = 8, the disparity in the computational requirements for the velocity and scalar problems is addressed by splitting the global communicator MPI_COMM_WORLD into disjoint communicators for the velocity and scalar fields, respectively. Inter-communicator transfer of the velocity field from the velocity communicator to the scalar communicator is handled with discrete send and non-blocking receive calls, which are overlapped with other operations on the scalar communicator. For production simulations at Nθ = 8192 and Nv = 1024 on 262,144 cores for the scalar field, the DNS code achieves 94% strong scaling relative to 65,536 cores and 92% weak scaling relative to Nθ = 1024 and Nv = 128 on 512 cores.
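A minimal sketch of the communicator-splitting step, written with mpi4py rather than the production Fortran/C code, is shown below; the fraction of ranks assigned to the velocity group and the solver bodies are hypothetical placeholders.

```python
# Minimal sketch (mpi4py): split MPI_COMM_WORLD into disjoint velocity and
# scalar communicators, assuming a hypothetical 1:3 split of the available ranks.
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

n_vel = max(1, size // 4)                  # hypothetical: 1/4 of ranks solve the velocity field
color = 0 if rank < n_vel else 1           # 0 = velocity group, 1 = scalar group
sub = world.Split(color=color, key=rank)   # disjoint communicator for this rank's group

if color == 0:
    pass  # ... pseudo-spectral velocity solve on the coarse Nv^3 grid ...
else:
    pass  # ... compact-finite-difference scalar solve on the fine Nθ^3 grid ...

# Inter-communicator transfer of the velocity field would pair point-to-point sends
# from velocity ranks with non-blocking receives on scalar ranks, overlapped with
# other work on the scalar communicator.
sub.Free()
```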
Triangle Geometry Processing for Surface Modeling and Cartesian Grid Generation
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J. (Inventor); Melton, John E. (Inventor); Berger, Marsha J. (Inventor)
2002-01-01
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
A new multigrid formulation for high order finite difference methods on summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Weinerfelt, Per; Nordström, Jan
2018-04-01
Multigrid schemes for high order finite difference methods on summation-by-parts form are studied by comparing the effect of different interpolation operators. By using the standard linear prolongation and restriction operators, the Galerkin condition leads to inaccurate coarse grid discretizations. In this paper, an alternative class of interpolation operators that bypass this issue and preserve the summation-by-parts property on each grid level is considered. Clear improvements of the convergence rate for relevant model problems are achieved.
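To make the role of the Galerkin condition concrete, here is a minimal 1D illustration (using standard linear prolongation and full-weighting restriction, not the SBP-preserving operators proposed in the paper) showing how the coarse-grid operator is formed as A_2h = R A_h P; the grid sizes are arbitrary.

```python
import numpy as np

def laplacian_1d(n, h):
    """Standard second-order discretization of -u'' on n interior points."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def linear_prolongation(nc):
    """Linear interpolation from nc coarse interior points to 2*nc+1 fine interior points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j, j] += 0.5          # fine point to the left of coarse point j
        P[2 * j + 1, j] = 1.0       # fine point coinciding with coarse point j
        P[2 * j + 2, j] += 0.5      # fine point to the right of coarse point j
    return P

nc = 7
nf = 2 * nc + 1
h = 1.0 / (nf + 1)
A_f = laplacian_1d(nf, h)
P = linear_prolongation(nc)
R = 0.5 * P.T                        # full-weighting restriction (scaled transpose)
A_c = R @ A_f @ P                    # Galerkin coarse-grid operator
print(np.round(A_c * (2 * h)**2, 3)) # recovers the standard 3-point stencil on the 2h grid
```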
Triangle geometry processing for surface modeling and cartesian grid generation
Aftosmis, Michael J. [San Mateo, CA]; Melton, John E. [Hollister, CA]; Berger, Marsha J. [New York, NY]
2002-09-03
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
Summary of the Fourth AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Rider, Ben; Zickuhr, Tom; Levy, David W.; Brodersen, Olaf P.; Eisfeld, Bernhard; Crippa, Simone; Wahls, Richard A.;
2010-01-01
Results from the Fourth AIAA Drag Prediction Workshop (DPW-IV) are summarized. The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-body-horizontal-tail configurations that are representative of transonic transport aircraft. Numerical calculations are performed using industry-relevant test cases that include lift-specific flight conditions, trimmed drag polars, downwash variations, drag rises and Reynolds-number effects. Drag, lift and pitching moment predictions from numerous Reynolds-Averaged Navier-Stokes computational fluid dynamics methods are presented. Solutions are performed on structured, unstructured and hybrid grid systems. The structured-grid sets include point-matched multi-block meshes and overset grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, prismatic, and hexahedral elements. Effort is made to provide a high-quality and parametrically consistent family of grids for each grid type about each configuration under study. The wing-body-horizontal families are comprised of a coarse, medium and fine grid; an optional extra-fine grid augments several of the grid families. These mesh sequences are utilized to determine asymptotic grid-convergence characteristics of the solution sets, and to estimate grid-converged absolute drag levels of the wing-body-horizontal configuration using Richardson extrapolation.
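For readers unfamiliar with the extrapolation step, the sketch below applies generalized Richardson extrapolation to a hypothetical coarse/medium/fine drag sequence to estimate the grid-converged value and the observed order of accuracy; the drag counts and refinement ratio are invented, not workshop results.

```python
import numpy as np

def richardson_extrapolate(d_coarse, d_medium, d_fine, r):
    """Estimate the grid-converged value and observed order of accuracy from
    three solutions on grids with a constant refinement ratio r."""
    p = np.log((d_coarse - d_medium) / (d_medium - d_fine)) / np.log(r)
    d_exact = d_fine + (d_fine - d_medium) / (r**p - 1.0)
    return d_exact, p

# Hypothetical drag coefficients (in counts, 1 count = 0.0001) on a parametrically
# consistent coarse/medium/fine family with refinement ratio ~1.5 in linear dimension.
d_c, d_m, d_f = 289.0, 281.0, 277.5
d_inf, p_obs = richardson_extrapolate(d_c, d_m, d_f, r=1.5)
print(f"grid-converged drag ≈ {d_inf:.1f} counts, observed order ≈ {p_obs:.2f}")
```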
NASA Astrophysics Data System (ADS)
Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing
2017-09-01
The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth. This can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods. Thus, the SE method has accuracy as high as that of the spectral method and the ability to model complex geology inherited from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids. This helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity to Maxwell's equations and described the electric field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and we took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model and verified the high accuracy of the SE method. Moreover, we conducted AEM modeling for different anisotropic 3D anomalous bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic anomalous bodies. Finally, we studied the identification of anisotropy for single anomalous bodies, anisotropic surrounding rock, and a single anisotropic anomalous body embedded in an anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic geology.
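As a small, hedged illustration of the GLL machinery behind SE basis functions (not the authors' 3D implementation), the snippet below computes the Gauss-Lobatto-Legendre nodes for a fourth-order element and evaluates the associated Lagrange basis; the element order matches the fourth-order calculations mentioned above.

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(order):
    """Gauss-Lobatto-Legendre nodes on [-1, 1]: the endpoints plus the roots of P_N'."""
    c = np.zeros(order + 1)
    c[-1] = 1.0                               # coefficients of the Legendre polynomial P_N
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def lagrange_basis(nodes, x):
    """Evaluate the Lagrange interpolation polynomials through the nodes at points x."""
    L = np.ones((len(x), len(nodes)))
    for j, xj in enumerate(nodes):
        for m, xm in enumerate(nodes):
            if m != j:
                L[:, j] *= (x - xm) / (xj - xm)
    return L

nodes = gll_nodes(4)                          # 5 nodes for a fourth-order spectral element
B = lagrange_basis(nodes, np.linspace(-1.0, 1.0, 9))
print(np.round(nodes, 4))                     # [-1, -0.6547, 0, 0.6547, 1]
print(np.allclose(B.sum(axis=1), 1.0))        # the basis forms a partition of unity
```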
NASA Astrophysics Data System (ADS)
Ludwig, V. S.; Istomina, L.; Spreen, G.
2017-12-01
Arctic sea ice concentration (SIC), the fraction of a grid cell that is covered by sea ice, is relevant for a multitude of branches: physics (heat/momentum exchange), chemistry (gas exchange), biology (photosynthesis), navigation (location of pack ice) and others. It has been observed from passive microwave (PMW) radiometers on satellites continuously since 1979, providing an almost 40-year time series. However, the resolution is limited to typically 25 km which is good enough for climate studies but too coarse to properly resolve the ice edge or to show leads. The highest resolution from PMW sensors today is 5 km of the AMSR2 89 GHz channels. Thermal infrared (TIR) and visible (VIS) measurements provide much higher resolutions between 1 km (TIR) and 30 m (VIS, regional daily coverage). The higher resolutions come at the cost of depending on cloud-free fields of view (TIR and VIS) and daylight (VIS). We present a merged product of ASI-AMSR2 SIC (PMW) and MODIS SIC (TIR) at a nominal resolution of 1 km. This product benefits from both the independence of PMW towards cloud coverage and the high resolution of TIR data. An independent validation data set has been produced from manually selected, cloud-free Landsat VIS data at 30 m resolution. This dataset is used to evaluate the performance of the merged SIC dataset. Our results show that the merged product resolves features which are smeared out by the PMW data while benefitting from the PMW data in cloudy cases and is thus indeed more than the sum of its parts.
A high-order staggered finite-element vertical discretization for non-hydrostatic atmospheric models
Guerra, Jorge E.; Ullrich, Paul A.
2016-06-01
Atmospheric modeling systems require economical methods to solve the non-hydrostatic Euler equations. Two major differences between hydrostatic models and a full non-hydrostatic description lie in the vertical velocity tendency and the numerical stiffness associated with sound waves. In this work we introduce a new arbitrary-order vertical discretization entitled the staggered nodal finite-element method (SNFEM). Our method uses a generalized discrete derivative that consistently combines the discontinuous Galerkin and spectral element methods on a staggered grid. Our combined method leverages the accurate wave propagation and conservation properties of spectral elements with staggered methods that eliminate stationary (2Δx) modes. Furthermore, high-order accuracy also eliminates the need for a reference state to maintain hydrostatic balance. In this work we demonstrate the use of high vertical order as a means of improving simulation quality at relatively coarse resolution. We choose a test case suite that spans the range of atmospheric flows from predominantly hydrostatic to nonlinear in the large-eddy regime. Lastly, our results show that there is a distinct benefit in using the high-order vertical coordinate at low resolutions with the same robust properties as the low-order alternative.
A high-order staggered finite-element vertical discretization for non-hydrostatic atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerra, Jorge E.; Ullrich, Paul A.
Atmospheric modeling systems require economical methods to solve the non-hydrostatic Euler equations. Two major differences between hydrostatic models and a full non-hydrostatic description lie in the vertical velocity tendency and the numerical stiffness associated with sound waves. In this work we introduce a new arbitrary-order vertical discretization entitled the staggered nodal finite-element method (SNFEM). Our method uses a generalized discrete derivative that consistently combines the discontinuous Galerkin and spectral element methods on a staggered grid. Our combined method leverages the accurate wave propagation and conservation properties of spectral elements with staggered methods that eliminate stationary (2Δx) modes. Furthermore, high-order accuracy also eliminates the need for a reference state to maintain hydrostatic balance. In this work we demonstrate the use of high vertical order as a means of improving simulation quality at relatively coarse resolution. We choose a test case suite that spans the range of atmospheric flows from predominantly hydrostatic to nonlinear in the large-eddy regime. Lastly, our results show that there is a distinct benefit in using the high-order vertical coordinate at low resolutions with the same robust properties as the low-order alternative.
Ground Boundary Conditions for Thermal Convection Over Horizontal Surfaces at High Rayleigh Numbers
NASA Astrophysics Data System (ADS)
Hanjalić, K.; Hrebtov, M.
2016-07-01
We present "wall functions" for treating the ground boundary conditions in the computation of thermal convection over horizontal surfaces at high Rayleigh numbers using coarse numerical grids. The functions are formulated for an algebraic-flux model closed by transport equations for the turbulence kinetic energy, its dissipation rate and scalar variance, but could also be applied to other turbulence models. The three-equation algebraic-flux model, solved in a T-RANS mode ("Transient" Reynolds-averaged Navier-Stokes, based on triple decomposition), was shown earlier to reproduce well a number of generic buoyancy-driven flows over heated surfaces, albeit by integrating equations up to the wall. Here we show that by using a set of wall functions satisfactory results are found for the ensemble-averaged properties even on a very coarse computational grid. This is illustrated by the computations of the time evolution of a penetrative mixed layer and Rayleigh-Bénard (open-ended, 4:4:1 domain) convection, using 10 × 10 × 100 and 10 × 10 × 20 grids, compared also with finer grids (e.g. 60 × 60 × 100), as well as with one-dimensional treatment using 1 × 1 × 100 and 1 × 1 × 20 nodes. The approach is deemed functional for simulations of a convective boundary layer and mesoscale atmospheric flows, and pollutant transport over realistic complex hilly terrain with heat islands, urban and natural canopies, for diurnal cycles, or subjected to other time and space variations in ground conditions and stratification.
A variable resolution right TIN approach for gridded oceanographic data
NASA Astrophysics Data System (ADS)
Marks, David; Elmore, Paul; Blain, Cheryl Ann; Bourgeois, Brian; Petry, Frederick; Ferrini, Vicki
2017-12-01
Many oceanographic applications require multi-resolution representation of gridded data, such as bathymetric data. Although triangular irregular networks (TINs) allow for variable resolution, they do not provide a gridded structure. Right TINs (RTINs) are compatible with a gridded structure. We explored the use of two approaches for RTINs, termed top-down and bottom-up implementations. We illustrate why the latter is most appropriate for gridded data and describe for this technique how the data can be thinned. While both the top-down and bottom-up approaches accurately preserve the surface morphology of any given region, the top-down method of vertex placement can fail to match the actual vertex locations of the underlying grid in many instances, resulting in obscured topology/bathymetry. Finally, we describe the use of the bottom-up approach and data thinning in two applications. The first is to provide thinned, variable resolution bathymetry data for tests of storm surge and inundation modeling, in particular for Hurricane Katrina. The second is the application of the approach to an oceanographic data grid of 3-D ocean temperature.
Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1991-01-01
A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.
Modal density of rectangular structures in a wide frequency range
NASA Astrophysics Data System (ADS)
Parrinello, A.; Ghiringhelli, G. L.
2018-04-01
A novel approach to investigate the modal density of a rectangular structure in a wide frequency range is presented. First, the modal density is derived, in the whole frequency range of interest, on the basis of sound transmission through the infinite counterpart of the structure; then, it is corrected by means of the low-frequency modal behavior of the structure, taking into account actual size and boundary conditions. A statistical analysis reveals the connection between the modal density of the structure and the transmission of sound through its thickness. A transfer matrix approach is used to compute the required acoustic parameters, making it possible to deal with structures having arbitrary stratifications of different layers. A finite element method is applied on coarse grids to derive the first few eigenfrequencies required to correct the modal density. Both the transfer matrix approach and the coarse grids involved in the finite element analysis grant high efficiency. Comparison with alternative formulations demonstrates the effectiveness of the proposed methodology.
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. the autoconversion rate, essentially interpreting the cloud microphysics quantities as random variables in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, of the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternative approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
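A hedged toy version of this comparison is sketched below: the grid-box mean of a Kessler-type autoconversion rate is computed with an 8-point Gauss-Hermite quadrature under an assumed lognormal sub-grid distribution of cloud water, and checked against a large Monte Carlo sample (standing in for the Latin Hypercube estimate); the rate constant, threshold, and distribution parameters are illustrative, not the paper's values.

```python
import numpy as np

def kessler_autoconversion(qc, k=1.0e-3, qc0=5.0e-4):
    """Kessler-type autoconversion rate (kg/kg/s): k*(qc - qc0) above the threshold qc0."""
    return k * np.maximum(qc - qc0, 0.0)

# Assume sub-grid cloud water qc is lognormal: ln(qc) ~ N(mu, sigma^2).
mu, sigma = np.log(4.0e-4), 0.6

# Gauss-Hermite quadrature: E[f(exp(mu + sigma*Z))] with Z standard normal.
nodes, weights = np.polynomial.hermite.hermgauss(8)          # 8 deterministic points
qc_nodes = np.exp(mu + sigma * np.sqrt(2.0) * nodes)
quad_mean = np.sum(weights * kessler_autoconversion(qc_nodes)) / np.sqrt(np.pi)

# Monte Carlo reference with many random samples.
rng = np.random.default_rng(3)
qc_mc = np.exp(rng.normal(mu, sigma, size=1_000_000))
mc_mean = kessler_autoconversion(qc_mc).mean()

print(f"8-point quadrature: {quad_mean:.3e}   Monte Carlo (1e6): {mc_mean:.3e}")
```

Because the autoconversion formula has a kink at the threshold, the accuracy of a low-order quadrature depends on where that threshold falls in the assumed distribution; the deterministic points nonetheless replace a large random sample in this sketch.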
Statistical Downscaling of WRF-Chem Model: An Air Quality Analysis over Bogota, Colombia
NASA Astrophysics Data System (ADS)
Kumar, Anikender; Rojas, Nestor
2015-04-01
Statistical downscaling is a technique that is used to extract high-resolution information from regional scale variables produced by coarse resolution models such as Chemical Transport Models (CTMs). The fully coupled WRF-Chem (Weather Research and Forecasting with Chemistry) model is used to simulate air quality over Bogota. Bogota is a tropical Andean megacity located on a high-altitude plateau in the middle of very complex terrain. The WRF-Chem model was adopted for simulating the hourly ozone concentrations. The computational domains consist of 120x120x32, 121x121x32 and 121x121x32 grid points with horizontal resolutions of 27, 9 and 3 km, respectively. The model was initialized with real boundary conditions using NCAR-NCEP's Final Analysis (FNL) at 1°x1° (~111 km x 111 km) resolution. Boundary conditions were updated every 6 hours using reanalysis data. The emission rates were obtained from global inventories, namely the REanalysis of the TROpospheric (RETRO) chemical composition and the Emission Database for Global Atmospheric Research (EDGAR). Multiple linear regression and artificial neural network techniques are used to downscale the model output at each monitoring station. The results confirm that the statistically downscaled outputs reduce simulated errors by up to 25%. This study provides a general overview of statistical downscaling of chemical transport models and can constitute a reference for future air quality modeling exercises over Bogota and other Colombian cities.
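A minimal sketch of the multiple-linear-regression variant of this downscaling (the neural-network variant is not shown) appears below: station-observed ozone is regressed on coarse-model predictors over a training period and the fitted relation is then applied to correct the model output; all predictor choices, coefficients, and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_hours = 500

# Hypothetical coarse-model predictors at the grid cell containing a station:
# simulated ozone (ppb), temperature (°C), and wind speed (m/s).
o3_model = 30.0 + 15.0 * rng.random(n_hours)
temp = 10.0 + 10.0 * rng.random(n_hours)
wind = 1.0 + 4.0 * rng.random(n_hours)

# Synthetic "observed" ozone with a systematic model bias plus noise.
o3_obs = 0.8 * o3_model + 0.9 * temp - 2.0 * wind + 5.0 + rng.normal(0.0, 3.0, n_hours)

# Multiple linear regression: fit coefficients on a training period, apply to the rest.
X = np.column_stack([o3_model, temp, wind, np.ones(n_hours)])
train = slice(0, 350)
coef, *_ = np.linalg.lstsq(X[train], o3_obs[train], rcond=None)
o3_downscaled = X[350:] @ coef

rmse_raw = np.sqrt(np.mean((o3_model[350:] - o3_obs[350:])**2))
rmse_mlr = np.sqrt(np.mean((o3_downscaled - o3_obs[350:])**2))
print(f"raw model RMSE: {rmse_raw:.2f} ppb   downscaled RMSE: {rmse_mlr:.2f} ppb")
```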
NASA Astrophysics Data System (ADS)
Pauliquevis, T.; Gomes, H. B.; Barbosa, H. M.
2014-12-01
In this study we evaluate the skill of the WRF model in simulating the actual diurnal cycle of convection in the Amazon basin. Models typically are not capable of simulating the well documented cycle of 1) shallow cumulus in the morning; 2) a towering process around noon; and 3) shallow-to-deep convection and rain around 14h (LT). This failure is explained by the typical size of shallow cumulus (~0.5 - 2.0 km) and the coarse resolution of models using convection parameterization (> 20 km). In this study we employed high spatial resolution (Dx = 0.625 km) to reach the shallow cumulus scale. The simulations correspond to a dynamical downscaling of ERA-Interim from 25 to 28 February 2013 with 40 vertical levels, 30-minute output, and three nested grids (10 km, 2.5 km, 0.625 km). Improved vegetation (USGS + PROVEG), albedo and greenfrac (computed from MODIS-NDVI + LEAF-2 land surface parameterization), as well as pseudo analysis of soil moisture, were used as input data sets, resulting in more realistic precipitation fields when compared to observations in sensitivity tests. Convective parameterization was switched off for the 2.5/0.625 km grids, where cloud formation was solely resolved by the microphysics module (WSM6 scheme, which provided better results). Results showed a significantly improved capability of the model to simulate the diurnal cycle. Shallow cumulus begin to appear in the first hours of the morning. They were followed by a towering process that culminates with precipitation in the early afternoon, which is a behavior well described by observations but rarely obtained in models. Rain volumes were also realistic (~20 mm for single events) when compared to typical events during the period, which is in the core of the wet season. The evolution of cloud fields also differed with respect to the Amazon River bank, which is clear evidence of the interaction between the river breeze and the large-scale circulation.
VizieR Online Data Catalog: 3D correction in 5 photometric systems (Bonifacio+, 2018)
NASA Astrophysics Data System (ADS)
Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kucinskas, A.; Prakapavicius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.
2018-01-01
We have used the CIFIST grid of CO5BOLD models to investigate the effects of granulation on fluxes and colours of stars of spectral type F, G, and K. We publish tables with 3D corrections that can be applied to colours computed from any 1D model atmosphere. For Teff>=5000K, the corrections are smooth enough, as a function of atmospheric parameters, that it is possible to interpolate the corrections between grid points; thus the coarseness of the CIFIST grid should not be a major limitation. However at the cool end there are still far too few models to allow a reliable interpolation. (20 data files).
Bayless, E. Randall; Arihood, Leslie D.; Reeves, Howard W.; Sperl, Benjamin J.S.; Qi, Sharon L.; Stipe, Valerie E.; Bunch, Aubrey R.
2017-01-18
As part of the National Water Availability and Use Program established by the U.S. Geological Survey (USGS) in 2005, this study took advantage of about 14 million records from State-managed collections of water-well drillers’ records and created a database of hydrogeologic properties for the glaciated United States. The water-well drillers’ records were standardized to be relatively complete and error-free and to provide consistent variables and naming conventions that span all State boundaries. Maps and geospatial grids were developed for (1) total thickness of glacial deposits, (2) total thickness of coarse-grained deposits, (3) specific-capacity-based transmissivity and hydraulic conductivity, and (4) texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity. The information included in these maps and grids is required for most assessments of groundwater availability, in addition to having applications to studies of groundwater flow and transport. The texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity were based on an assumed range of hydraulic conductivity values for coarse- and fine-grained deposits and should only be used with complete awareness of the methods used to create them. However, the maps and grids of texture-based estimated equivalent hydraulic conductivity and transmissivity may be useful for application to areas where a range of measured values is available for re-scaling. Maps of hydrogeologic information for some States are presented as examples in this report, but maps and grids for all States are available electronically at the project Web site (USGS Glacial Aquifer System Groundwater Availability Study, http://mi.water.usgs.gov/projects/WaterSmart/Map-SIR2015-5105.html) and the Science Base Web site, https://www.sciencebase.gov/catalog/item/58756c7ee4b0a829a3276352.
Hyper-Resolution Groundwater Modeling using MODFLOW 6
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Langevin, C.
2017-12-01
MODFLOW 6 is the latest version of the U.S. Geological Survey's modular hydrologic model. MODFLOW 6 was developed to synthesize many of the recent versions of MODFLOW into a single program, improve the way different process models are coupled, and provide an object-oriented framework for adding new types of models and packages. The object-oriented framework and underlying numerical solver make it possible to tightly couple any number of hyper-resolution models within coarser regional models. The hyper-resolution models can be used to evaluate local-scale groundwater issues that may be affected by regional-scale forcings. In MODFLOW 6, hyper-resolution meshes can be maintained as separate model datasets, similar to MODFLOW-LGR, which simplifies the development of embedded hyper-resolution models from a coarse regional model. For example, the South Atlantic Coastal Plain regional water availability model was converted from a MODFLOW-2000 model to a MODFLOW 6 model. The horizontal discretization of the original model is approximately 3,218 m x 3,218 m. Hyper-resolution models of the Aiken and Sumter County water budget areas in South Carolina, with a horizontal discretization of approximately 322 m x 322 m, were developed and tightly coupled to a modified version of the original coarse regional model that excluded these areas. Hydraulic property and aquifer geometry data from the coarse model were mapped to the hyper-resolution models. The discretization of the hyper-resolution models is fine enough to allow detailed analyses of the effect that changes in groundwater withdrawals in the production aquifers have on the water table and surface-water/groundwater interactions. The approach used in this analysis could be applied to other regional water availability models that have been developed by the U.S. Geological Survey to evaluate local-scale groundwater issues.
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional controlled-source electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
NASA Astrophysics Data System (ADS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-11-01
In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and mitigate the ill-posed nature of the AT inverse problem. By taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT-based BP neural network algorithm is developed for predicting the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA.
Konrad, Christopher P.
2015-01-01
Ecological functions and flood-related risks were assessed for floodplains along the 17 major rivers flowing into Puget Sound Basin, Washington. The assessment addresses five ecological functions and five components of flood-related risk at two spatial resolutions, fine and coarse. The fine-resolution assessment compiled spatial attributes of floodplains from existing, publicly available sources and integrated the attributes into 10-meter rasters for each function, hazard, or exposure. The raster values generally represent different types of floodplains with regard to each function, hazard, or exposure rather than the degree of function, hazard, or exposure. The coarse-resolution assessment tabulates attributes from the fine-resolution assessment for larger floodplain units, which are floodplains associated with 0.1- to 21-kilometer-long segments of major rivers. The coarse-resolution assessment also derives indices that can be used to compare function or risk among different floodplain units and to develop normative (based on observed distributions) standards. The products of the assessment are available online as geospatial datasets (Konrad, 2015; http://dx.doi.org/10.5066/F7DR2SJC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
Purpose: Demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids when used with a high resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5cm × 6.5cm) to image a simulated artery block phantom (Nuclear Associates, Stenosis/Aneurysm Artery Block 76–705) combined with a frontal head phantom used as the scattering source. The x-ray parameters were 98kVp, 200mA, and 16ms for all grids. With all three grids, two images were acquired: the first for a scatter-less flat field including the grid, and the second of the object with the grid, which may still have some scatter transmission. Because scatter has a low spatial frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with grid before dividing by an average frame of the grid flat-field with no scatter. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact minimization process was used for all three grids. Results: Anti-scatter grid-line artifacts were successfully eliminated in all three final images taken with the three different grids. The image contrast and CNR were also compared before and after the correction, and also compared with those from the image of the object when no grid was used. The corrected images showed an increase in CNR of approximately 28%, 33% and 25% for the three grids, as compared to the images when no grid at all was used. Conclusion: Anti-scatter grid-artifact minimization works effectively irrespective of the specifications of the grid when it is used with a high spatial resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
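A hedged toy reconstruction of the described procedure (not the authors' code; the phantom geometry, count levels, and grid-line metric are invented) is sketched below: a trial scatter constant is subtracted from the image with grid, the result is divided by the scatter-free grid flat field, and the constant is swept to minimize the residual grid-line strength.

```python
import numpy as np

def grid_line_metric(img):
    """Residual grid-line strength: variation between adjacent column means,
    which is dominated by the (vertical) grid septa in this toy geometry."""
    cols = img.mean(axis=0)
    return np.var(np.diff(cols))

# --- Synthetic data; all values are illustrative, not the authors' setup ----
rng = np.random.default_rng(5)
ny, nx = 256, 256
septa = 1.0 - 0.15 * (np.arange(nx) % 4 == 0)                    # stationary vertical grid lines
flat_no_scatter = 1000.0 * np.tile(septa, (ny, 1))               # grid flat field, no scatter
artery = 1.0 - 0.3 * np.exp(-((np.arange(ny) - 128.0)**2) / 800.0)  # object contrast along rows
true_scatter = 180.0
img_with_grid = (flat_no_scatter * artery[:, None] + true_scatter
                 + rng.normal(0.0, 1.0, (ny, nx)))

# --- Sweep the scatter constant; grid lines vanish when the estimate is right ---
candidates = np.arange(0.0, 400.0, 2.0)
scores = [grid_line_metric((img_with_grid - s) / flat_no_scatter) for s in candidates]
best = candidates[int(np.argmin(scores))]
print(f"estimated scatter constant: {best:.0f}  (true value {true_scatter:.0f})")
corrected = (img_with_grid - best) / flat_no_scatter              # grid-line-free image
```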
Statistical evaluation of the simulated convective activity over Central Greece
NASA Astrophysics Data System (ADS)
Kartsios, Stergios; Kotsopoulos, Stylianos; Karacostas, Theodore S.; Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitrios
2015-04-01
In the framework of the project DAPHNE (www.daphne-meteo.gr), the non-hydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW, version 3.5.1) is used to produce very high spatiotemporal resolution simulations of the convective activity over the Thessaly plain, enhancing our knowledge of the impact of high resolution elevation and land use data on moist convection. The expected results act as a precursor for the potential applicability of a planned precipitation enhancement program. The three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and the Thessaly region-central Greece (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km respectively. ECMWF operational analyses at 6-hourly intervals (0.25°x0.25° lat.-long.) are imported as initial and boundary conditions of the coarse domain, while in the vertical 39 sigma levels (up to 50 hPa) are used, with increased resolution in the boundary layer. Microphysical processes are represented by the WSM6 scheme, sub-grid scale convection by the Kain-Fritsch scheme, longwave and shortwave radiation by the RRTMG scheme, the surface layer by Monin-Obukhov (MM5), the boundary layer by Yonsei University and soil physics by the NOAH Unified model. Six representative days with different upper-air synoptic circulation types are selected, while high resolution (3'') elevation data from the Shuttle Radar Topography Mission (SRTM - version 4) are inserted in the innermost domain (d03), along with the Corine Land Cover 2000 raster data (3''x3''). The aforementioned data sets are used in different configurations, in order to evaluate the impact of each one on the simulated convective activity in the vicinity of the Thessaly region, using a grid of available meteorological stations in the area. For each selected day, four (4) sensitivity simulations are performed, for a total of 24 runs. Finally, the best configuration provides the necessary forcing fields to a 3D cloud model, representing a potential cloud seeding process. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
Regional Data Assimilation Using a Stretched-Grid Approach and Ensemble Calculations
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, M. S.; Takacs, L. L.; Govindaraju, R. C.; Atlas, Robert (Technical Monitor)
2002-01-01
The global variable-resolution stretched grid (SG) version of the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS), incorporating the GEOS SG-GCM (Fox-Rabinovitz 2000, Fox-Rabinovitz et al. 2001a,b), has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The major area of interest with enhanced regional resolution used in the different SG-DAS experiments is a rectangle over the U.S. with 50 or 60 km horizontal resolution. The analyses and diagnostics are produced for all mandatory levels from the surface to 0.2 hPa. The assimilated regional mesoscale products are consistent with global-scale circulation characteristics owing to the SG approach. Both the stretched-grid and basic uniform-grid DASs use the same number of global grid points and are compared in terms of regional product quality.
Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walko, Robert
2016-11-07
The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.
Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution
NASA Astrophysics Data System (ADS)
Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.
2017-12-01
We present research that employs single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. We performed numerical experiments on x15 upscaling of the GEBCO grid over three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry versus bicubic or Spline-In-Tension algorithms through upscaling under these conditions: 1) rough topography is present in both training and testing areas, and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when Student's hypothesis testing showed significant improvement of the root-mean-squared error (RMSE) between the upscaled bathymetry and the 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
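A hedged sketch of that statistical test is given below: per-tile RMSEs of an SR-enhanced grid are compared against bicubic interpolation with a paired t-test at p < 0.05; the RMSE values are synthetic stand-ins, not results from the GEBCO experiments.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-tile RMSEs (m) against 100 m ground-truth bathymetry for two
# upscaling methods evaluated on the same set of test tiles.
rng = np.random.default_rng(6)
n_tiles = 40
rmse_bicubic = rng.normal(65.0, 12.0, n_tiles)            # bicubic interpolation
rmse_sisr = rmse_bicubic - rng.normal(8.0, 6.0, n_tiles)  # SISR, paired tile by tile

t_stat, p_value = ttest_rel(rmse_bicubic, rmse_sisr)
improved = (p_value < 0.05) and (rmse_sisr.mean() < rmse_bicubic.mean())
print(f"bicubic {rmse_bicubic.mean():.1f} m, SISR {rmse_sisr.mean():.1f} m, "
      f"t = {t_stat:.2f}, p = {p_value:.3g}, significant improvement: {improved}")
```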
Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.
2013-01-01
Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel-recursive super-resolution model to reconstruct high resolution climate data from low resolution climate data. Based on our preliminary work, we have presented two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel-recursive super-resolution model increases the resolution of the input to the localization CNNs. We present the best network using the pixel-recursive super-resolution model, which synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process used to increase the resolution of the data.
Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang
2017-05-30
In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
NASA Astrophysics Data System (ADS)
Li, Y.; Epifanio, C.
2017-12-01
In numerical prediction models, the interaction between the Earth's surface and the atmosphere is typically accounted for in terms of surface layer parameterizations, whose main job is to specify turbulent fluxes of heat, moisture and momentum across the lower boundary of the model domain. In the case of a domain with complex geometry, implementing the flux conditions (particularly the tensor stress condition) at the boundary can be somewhat subtle, and there has been a notable history of confusion in the CFD community over how to formulate and impose such conditions generally. In the atmospheric case, modelers have largely been able to avoid these complications, at least until recently, by assuming that the terrain resolved at typical model resolutions is fairly gentle, in the sense of having relatively shallow slopes. This in turn allows the flux conditions to be imposed as if the lower boundary were essentially flat. Unfortunately, while this flat-boundary assumption is acceptable for coarse resolutions, as grids become more refined and the geometry of the resolved terrain becomes more complex, the approach is less justified. With this in mind, the goal of our present study is to explore the implementation and usage of the full, unapproximated version of the turbulent flux/stress conditions in atmospheric models, thus taking full account of the complex geometry of the resolved terrain. We propose to implement the conditions using a semi-idealized model developed by Epifanio (2007), in which the discretized boundary conditions are reduced to a large, sparse-matrix problem. The emphasis will be on fluxes of momentum, as the tensor nature of this flux makes the associated stress condition more difficult to impose, although the flux conditions for heat and moisture will be considered as well. At a resolution of 90 meters, the results show that the typical differences between flat-boundary cases and full-stress cases are on the order of 10%, with extreme cases reaching as high as 30% based on typical disturbance wind speeds, and the difference drops by a factor of six between grid spacings of 90 meters and 240 meters. It would thus appear that the need to apply the full stress condition is limited to relatively high-resolution modeling, with grid spacings on the order of 250 meters or less.
Kosovich, John J.
2008-01-01
In support of U.S. Geological Survey (USGS) disaster preparedness efforts, this map depicts 1:24,000- and 1:100,000-scale quadrangle footprints over a color shaded relief representation of the State of Florida. The first 30 feet of relief above mean sea level are displayed as brightly colored 5-foot elevation bands, which highlight low-elevation areas at a coarse spatial resolution. Standard USGS National Elevation Dataset (NED) 1 arc-second (nominally 30-meter) digital elevation model (DEM) data are the basis for the map, which is designed to be used at a broad scale and for informational purposes only. The NED source data for this map consists of a mixture of 30-meter- and 10-meter-resolution DEMs. The NED data were derived from the original 1:24,000-scale USGS topographic map bare-earth contours, which were converted into gridded quadrangle-based DEM tiles at a constant post spacing (grid cell size) of either 30 meters (data before the mid-1990s) or 10 meters (mid-1990s and later data). These individual-quadrangle DEMs were then converted to spherical coordinates (latitude/longitude decimal degrees) and edge-matched to ensure seamlessness. Figure 1 shows a similar representation for the entire U.S. Gulf Coast, using coarsened 30-meter NED data. Areas below sea level typically are surrounded by levees or some other type of flood-control structures. State and county boundary, hydrography, city, and road layers were modified from USGS National Atlas data downloaded in 2003. Quadrangle names, dated April, 2006, were obtained from the Federal Geographic Names Information System. The NED data were downloaded in 2004.
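The 5-foot banding used for the low-elevation display can be sketched as a simple classification of DEM cells; the synthetic elevations below stand in for an actual NED tile, and the band edges follow the map's description (0 to 30 feet in 5-foot steps, with everything higher in one class).

```python
import numpy as np

# Synthetic DEM tile with elevations in feet (placeholder for a real NED tile).
rng = np.random.default_rng(4)
dem_ft = rng.uniform(-5.0, 120.0, size=(500, 500))

# 5-foot bands from sea level to 30 feet; class 0 is below sea level, class 7 is above 30 ft.
band_edges = np.arange(0.0, 35.0, 5.0)          # 0, 5, 10, 15, 20, 25, 30 ft
bands = np.digitize(dem_ft, band_edges)

for b in range(bands.max() + 1):
    share = (bands == b).mean()
    print(f"elevation class {b}: {share:5.1%} of cells")
```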
NASA Astrophysics Data System (ADS)
Motyka, R.; Fahnestock, M.; Howat, I.; Truffer, M.; Brecher, H.; Luethi, M.
2008-12-01
Jakobshavn Isbrae drains about 7 % of the Greenland Ice Sheet and is the ice sheet's largest outlet glacier. Two sets of high elevation (~13,500 m), high resolution (2 m) aerial photographs of Jakobshavn Isbrae were obtained about two weeks apart during July 1985 (Fastook et al., 1995). These historic photo sets have become increasingly important for documenting and understanding the dynamic state of this outlet stream prior to the rapid retreat and massive ice loss that began in 1998 and continues today. The original photogrammetric analysis of this imagery is summarized in Fastook et al. (1995). They derived a coarse DEM (3 km grid spacing) covering an area of approximately 100 km x 100 km by interpolating several hundred positions determined manually from block-aerial triangulation. We have re-analyzed these photo sets using digital photogrammetry (BAE Socet Set©) and significantly improved DEM quality and resolution (20, 50, and 100 m grids). The DEMs were in turn used to produce high quality orthophoto mosaics. A comparison of our 1985 DEM with a DEM we derived from May 2006 NASA ATM measurements showed a total ice volume loss of ~ 105 km3 over the lower drainage area; almost all of this loss has occurred since 1997. Ice stream surface velocities derived from the 1985 orthomosaics showed speeds of 20 m/d on the floating tongue, diminishing to 5 m/d 50 km further upstream. Velocities have since nearly doubled along the ice stream during its current retreat. Fastook, J.L., H.H. Brecher, and T.J. Hughes, 1995. J. of Glaciol. 11 (137), 161-173.
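The volume-loss estimate reduces to differencing co-registered DEMs and integrating the elevation change over the cell area. The sketch below applies that step to a synthetic DEM pair; the 100 m cell size, surface heights, and thinning pattern are invented for illustration and are unrelated to the Jakobshavn values quoted above.

```python
import numpy as np

cell = 100.0                                       # assumed grid spacing in metres
rng = np.random.default_rng(3)

# Synthetic "old" surface and a "new" surface thinned by a smooth drawdown bowl.
ii, jj = np.mgrid[:400, :400]
dem_old = 800.0 + rng.normal(0.0, 5.0, size=(400, 400))
dem_new = dem_old - 60.0 * np.exp(-((ii - 200) ** 2 + (jj - 200) ** 2) / (2.0 * 80.0 ** 2))

dz = dem_new - dem_old                             # elevation change per cell (m)
volume_change = np.nansum(dz) * cell ** 2          # m^3; negative values indicate ice loss
print(f"volume change: {volume_change / 1e9:.1f} km^3")
```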
Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset
NASA Astrophysics Data System (ADS)
Lange, Stefan
2018-05-01
Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.
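A minimal empirical sketch of the quantile-mapping step, with a simple clip standing in for the physically based upper radiation limit: the gamma-distributed daily values below are synthetic stand-ins for E2OBS and SRB radiation, and the production EWEMBI correction is parametric and grid-aware rather than this bare empirical mapping.

```python
import numpy as np

def quantile_map(model, obs, upper_limit=None):
    """Empirical quantile mapping of `model` onto the distribution of `obs`."""
    ranks = np.argsort(np.argsort(model)) / (len(model) - 1)   # empirical CDF values of model
    corrected = np.quantile(obs, ranks)                        # map onto observed quantiles
    if upper_limit is not None:
        corrected = np.minimum(corrected, upper_limit)         # crude physical upper bound
    return corrected

rng = np.random.default_rng(2)
rsds_model = rng.gamma(4.0, 45.0, size=1000)       # synthetic daily shortwave, biased high
rsds_obs   = rng.gamma(4.0, 40.0, size=1000)       # synthetic "observational" reference
clear_sky_limit = 420.0                            # illustrative upper limit (W/m^2)

rsds_corrected = quantile_map(rsds_model, rsds_obs, upper_limit=clear_sky_limit)
print(f"model mean {rsds_model.mean():6.1f}  obs mean {rsds_obs.mean():6.1f}  "
      f"corrected mean {rsds_corrected.mean():6.1f} W/m^2")
```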
NASA Astrophysics Data System (ADS)
Brewer, M.; Mass, C.
2014-12-01
Though western Oregon and Washington summers are typically mild due to the influence of the nearby Pacific Ocean, this region occasionally experiences heat waves with temperatures in excess of 35°C. These heat waves can have a substantial impact on this highly populated region, particularly since the population is unaccustomed to and generally unprepared for such conditions. A comprehensive evaluation is needed of past and future heat wave trends in frequency, intensity, and duration. Furthermore, it is important to understand the physical mechanisms of Northwest heat waves and how such mechanisms might change under anthropogenic global warming. Lower-tropospheric heat waves over the west coast of North America are the result of both synoptic and mesoscale factors, the latter requiring high-resolution models (roughly 12-15 km grid spacing) to simulate. Synoptic factors include large-scale warming due to horizontal advection and subsidence, as well as reductions in large-scale cloudiness. An important mesoscale factor is the occurrence of offshore (easterly) flow, resulting in an adiabatically warmed continental air mass spreading over the western lowlands rather than the more usual cool, marine air influence. To fully understand how heat waves will change under AGW, it is necessary to determine the combined impacts of both synoptic and mesoscale effects in a warming world. General circulation models (GCMs) are generally too coarse to simulate mesoscale effects realistically and thus may provide unreliable estimates of the frequency and magnitudes of West Coast heat waves. Therefore, to determine the regional implications of global warming, this work made use of long-term, high-resolution WRF simulations, at 36- and 12-km resolution, produced by dynamically downscaling GCM grids. This talk will examine the predicted trends in Pacific Northwest heat wave intensity, duration, and frequency during the 21st century (through 2100). The spatial distribution of the trends in heat waves, and the variability of these trends at different resolutions and among different models, will also be described. Finally, changes in the synoptic and mesoscale configurations that drive Pacific Northwest heat waves and the modulating effects of local terrain and land/water contrast will be discussed.
Generic Wing-Body Aerodynamics Data Base
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Olsen, Thomas H.; Kwak, Dochan (Technical Monitor)
2001-01-01
The wing-body aerodynamics data base consists of a series of CFD (Computational Fluid Dynamics) simulations about a generic wing-body configuration consisting of an ogive-circular-cylinder fuselage and a simple symmetric wing mid-mounted on the fuselage. Solutions have been obtained for Nonlinear Potential (P), Euler (E) and Navier-Stokes (N) solvers over a range of subsonic and transonic Mach numbers and angles of attack. In addition, each solution has been computed on a series of grids (coarse, medium, and fine) to permit an assessment of grid refinement errors.
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
Subsonic Analysis of 0.04-Scale F-16XL Models Using an Unstructured Euler Code
NASA Technical Reports Server (NTRS)
Lessard, Wendy B.
1996-01-01
The subsonic flow field about an F-16XL airplane model configuration was investigated with an inviscid unstructured grid technique. The computed surface pressures were compared to wind-tunnel test results at Mach 0.148 for a range of angles of attack from 0 deg to 20 deg. To evaluate the effect of grid dependency on the solution, a grid study was performed in which fine, medium, and coarse grid meshes were generated. The off-surface vortical flow field was locally adapted and showed improved correlation to the wind-tunnel data when compared to the nonadapted flow field. Computational results are also compared to experimental five-hole pressure probe data. A detailed analysis of the off-body computed pressure contours, velocity vectors, and particle traces is presented and discussed.
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence; Govindaraju, Ravi C.; Atlas, Robert (Technical Monitor)
2002-01-01
The new stretched-grid design with multiple (four) areas of interest, one at each global quadrant, is implemented into both a stretched-grid GCM (general circulation model) and a stretched-grid data assimilation system (DAS). The four areas of interest include: the U.S./Northern Mexico, the El Nino area/Central South America, India/China, and the Eastern Indian Ocean/Australia. Both the stretched-grid GCM and DAS annual (November 1997 through December 1998) integrations are performed with 50 km regional resolution. The efficient regional down-scaling to mesoscales is obtained for each of the four areas of interest while the consistent interactions between regional and global scales and the high quality of the global circulation are preserved. This is the advantage of the stretched-grid approach. The global variable resolution DAS incorporating the stretched-grid GCM has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The anomalous regional climate events of 1998 that occurred over the U.S., Mexico, South America, China, India, African Sahel, and Australia are investigated in both simulation and data assimilation modes. The assimilated products are also used, along with gauge precipitation data, for validating the simulation results. The obtained results show that the stretched-grid GCM and DAS are capable of producing realistic high quality simulated and assimilated products at mesoscale resolution for regional climate studies and applications.
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
NASA Astrophysics Data System (ADS)
Dore, A. J.; Kryza, M.; Hall, J. R.; Hallsworth, S.; Keller, V. J. D.; Vieno, M.; Sutton, M. A.
2011-12-01
The Fine Resolution Atmospheric Multi-pollutant Exchange model (FRAME) has been applied to model the spatial distribution of nitrogen deposition and air concentration over the UK at a 1 km spatial resolution. The modelled deposition and concentration data were gridded at resolutions of 1 km, 5 km and 50 km to test the sensitivity of calculations of the exceedance of critical loads for nitrogen deposition to the deposition data resolution. The modelled concentrations of NO2 were validated by comparison with measurements from the rural sites in the national monitoring network and were found to achieve better agreement with the high resolution 1 km data. High resolution plots were found to represent a more physically realistic distribution of nitrogen air concentrations and deposition resulting from use of 1 km resolution precipitation and emissions data as compared to 5 km resolution data. Summary statistics for national scale exceedance of the critical load for nitrogen deposition were not highly sensitive to the grid resolution of the deposition data but did show greater area exceedance with coarser grid resolution due to spatial averaging of high nitrogen deposition hot spots. Local scale deposition at individual Sites of Special Scientific Interest and high precipitation upland sites was sensitive to choice of grid resolution of deposition data. Use of high resolution data tended to generate lower deposition values in sink areas for nitrogen dry deposition (Sites of Special Scientific Interest) and higher values in high precipitation upland areas. In areas with generally low exceedance (Scotland) and for certain vegetation types (montane), the exceedance statistics were more sensitive to model data resolution.
NASA Astrophysics Data System (ADS)
Dore, A. J.; Kryza, M.; Hall, J. R.; Hallsworth, S.; Keller, V. J. D.; Vieno, M.; Sutton, M. A.
2012-05-01
The Fine Resolution Atmospheric Multi-pollutant Exchange model (FRAME) was applied to model the spatial distribution of reactive nitrogen deposition and air concentration over the United Kingdom at a 1 km spatial resolution. The modelled deposition and concentration data were gridded at resolutions of 1 km, 5 km and 50 km to test the sensitivity of calculations of the exceedance of critical loads for nitrogen deposition to the deposition data resolution. The modelled concentrations of NO2 were validated by comparison with measurements from the rural sites in the national monitoring network and were found to achieve better agreement with the high resolution 1 km data. High resolution plots were found to represent a more physically realistic distribution of reactive nitrogen air concentrations and deposition resulting from use of 1 km resolution precipitation and emissions data as compared to 5 km resolution data. Summary statistics for national scale exceedance of the critical load for nitrogen deposition were not highly sensitive to the grid resolution of the deposition data but did show greater area exceedance with coarser grid resolution due to spatial averaging of high nitrogen deposition hot spots. Local scale deposition at individual Sites of Special Scientific Interest and high precipitation upland sites was sensitive to choice of grid resolution of deposition data. Use of high resolution data tended to generate lower deposition values in sink areas for nitrogen dry deposition (Sites of Special Scientific Interest) and higher values in high precipitation upland areas. In areas with generally low exceedance (Scotland) and for certain vegetation types (montane), the exceedance statistics were more sensitive to model data resolution.
The influence of model resolution on ozone in industrial volatile organic compound plumes.
Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G
2010-09-01
Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating high O3 productions requires superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-km), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) in both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration 3% but only decreases the maximum Ox concentration 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivity to oxides of nitrogen (NOx) and VOC emissions suggests resolution-specific sensitivity to NOx and VOC emissions. Different sensitivity to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind). Sensitivity of O3 to emission changes also results in different simulated O3 responses to the same control strategies. Sensitivity of O3 to NOx and VOC emission changes is attributed to finer resolved Eulerian grid and finer resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with Plume-in-Grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. On the basis of these results, the authors suggest a need for quantitative metrics for horizontal grid resolution in future model guidance.
Snow and Ice Products from the Moderate Resolution Imaging Spectroradiometer
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Salomonson, Vincent V.; Riggs, George A.; Klein, Andrew G.
2003-01-01
Snow and sea ice products, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument, flown on the Terra and Aqua satellites, are or will be available through the National Snow and Ice Data Center Distributed Active Archive Center (DAAC). The algorithms that produce the products are automated, thus providing a consistent global data set that is suitable for climate studies. The suite of MODIS snow products begins with a 500-m resolution, 2330-km swath snow-cover map that is then projected onto a sinusoidal grid to produce daily and 8-day composite tile products. The sequence proceeds to daily and 8-day composite climate-modeling grid (CMG) products at 0.05° resolution. A daily snow albedo product will be available in early 2003 as a beta test product. The sequence of sea ice products begins with a swath product at 1-km resolution that provides sea ice extent and ice-surface temperature (IST). The sea ice swath products are then mapped onto the Lambert azimuthal equal area or EASE-Grid projection to create a daily and 8-day composite sea ice tile product, also at 1-km resolution. Climate-Modeling Grid (CMG) sea ice products in the EASE-Grid projection at 4-km resolution are planned for early 2003.
Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
NASA Astrophysics Data System (ADS)
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
The effect of the collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to Δ+ <= 2 required in DNS. At coarser grid resolutions, SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
Vertical resolution of baroclinic modes in global ocean models
NASA Astrophysics Data System (ADS)
Stewart, K. D.; Hogg, A. McC.; Griffies, S. M.; Heerdegen, A. P.; Ward, M. L.; Spence, P.; England, M. H.
2017-05-01
Improvements in the horizontal resolution of global ocean models, motivated by the horizontal resolution requirements for specific flow features, have advanced modelling capabilities into the dynamical regime dominated by mesoscale variability. In contrast, the choice of vertical grid remains subjective, and it is not clear that efforts to improve vertical resolution adequately support their horizontal counterparts. Indeed, considering that the bulk of the vertical ocean dynamics (including convection) are parameterized, it is not immediately obvious what the vertical grid is supposed to resolve. Here, we propose that the primary purpose of the vertical grid in a hydrostatic ocean model is to resolve the vertical structure of horizontal flows, rather than to resolve vertical motion. With this principle we construct vertical grids based on their abilities to represent baroclinic modal structures commensurate with the theoretical capabilities of a given horizontal grid. This approach is designed to ensure that the vertical grids of global ocean models complement (and, importantly, do not undermine) the resolution capabilities of the horizontal grid. We find that for z-coordinate global ocean models, at least 50 well-positioned vertical levels are required to resolve the first baroclinic mode, with an additional 25 levels per subsequent mode. High-resolution ocean-sea ice simulations are used to illustrate some of the dynamical enhancements gained by improving the vertical resolution of a 1/10° global ocean model. These enhancements include substantial increases in the sea surface height variance (∼30% increase south of 40°S), the barotropic and baroclinic eddy kinetic energies (up to 200% increase on and surrounding the Antarctic continental shelf and slopes), and the overturning streamfunction in potential density space (near-tripling of the Antarctic Bottom Water cell at 65°S).
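The vertical-resolution argument can be made concrete by solving the flat-bottom, rigid-lid vertical mode problem, d²w/dz² + (N²/c²)w = 0 with w = 0 at the surface and bottom, and asking how well a given set of levels reproduces the leading mode speeds. The sketch below uses uniform levels and a constant buoyancy frequency purely so the numerical speeds can be checked against the analytic values c_n = NH/(nπ); the paper's analysis uses realistic stratification and non-uniform level placement.

```python
import numpy as np
from scipy.linalg import eigh

H, nz = 4000.0, 200                       # ocean depth (m) and number of interior levels
dz = H / (nz + 1)
N2 = np.full(nz, 1.0e-5)                  # constant buoyancy frequency squared (s^-2)

# Second-derivative operator with w = 0 at top and bottom (rigid lid, flat bottom).
D2 = (np.diag(np.ones(nz - 1), -1) - 2.0 * np.eye(nz) + np.diag(np.ones(nz - 1), 1)) / dz**2

# d^2 w / dz^2 = -(N^2 / c^2) w  ->  generalized eigenproblem  D2 w = lam * diag(N^2) w
lam, modes = eigh(D2, np.diag(N2))
c = np.sort(np.sqrt(-1.0 / lam))[::-1]    # gravity-wave speed of each baroclinic mode

analytic = np.sqrt(N2[0]) * H / (np.arange(1, 4) * np.pi)
print("numerical c1..c3:", np.round(c[:3], 3), "m/s")
print("analytic  c1..c3:", np.round(analytic, 3), "m/s")
```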
Downscaling modelling system for multi-scale air quality forecasting
NASA Astrophysics Data System (ADS)
Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.
2010-09-01
Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside a modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales with nesting of higher resolution models into larger scale lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and they consider a detailed geometry of the buildings and the urban canopy. The developed system consists of the meso-, urban- and street-scale models. First, it is the Numerical Weather Prediction (HIgh Resolution Limited Area Model) model combined with Atmospheric Chemistry Transport (the Comprehensive Air quality Model with extensions) model. Several levels of urban parameterisation are considered. They are chosen depending on selected scales and resolutions. For regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for urban scale - building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within urban canopy in a presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scales nesting the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds averaged Navier-Stokes approach and several turbulent closures, i.e. k-ε linear eddy-viscosity model, k-ε non-linear eddy-viscosity model and Reynolds stress model. Boundary and initial conditions for the micro-scale model are used from the up-scaled models with corresponding interpolation conserving the mass. For the boundaries a kind of Dirichlet condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the effects of main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbations (micro-scale) features, obtained from the nested fine resolution model.
High performance computing (HPC) requirements for the new generation variable grid resolution (VGR) global climate models differ from that of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Zhen; Voth, Gregory A., E-mail: gavoth@uchicago.edu
It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.
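The difference between a conventional mass-based mapping and a charge-based mapping can be sketched in a few lines. The coordinates, charges, and the |q| weighting used below are illustrative assumptions, not the paper's exact definition of its center-of-charge operator.

```python
import numpy as np

# Toy all-atom fragment mapped to a single CG site (assumed coordinates, charges, masses).
positions = np.array([[0.00, 0.00, 0.0],
                      [0.96, 0.00, 0.0],
                      [1.40, 0.90, 0.0]])
charges = np.array([-0.8, 0.4, 0.4])
masses  = np.array([16.0, 1.0, 1.0])

def center_of_mass(pos, m):
    return (m[:, None] * pos).sum(axis=0) / m.sum()

def center_of_charge(pos, q):
    # Weight by |q| so that positive and negative charges cannot cancel the denominator.
    w = np.abs(q)
    return (w[:, None] * pos).sum(axis=0) / w.sum()

print("CG site from center-of-mass mapping:  ", np.round(center_of_mass(positions, masses), 3))
print("CG site from center-of-charge mapping:", np.round(center_of_charge(positions, charges), 3))
```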
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marjanovic, Nikola; Mirocha, Jeffrey D.; Kosović, Branko
A generalized actuator line (GAL) wind turbine parameterization is implemented within the Weather Research and Forecasting model to enable high-fidelity large-eddy simulations of wind turbine interactions with boundary layer flows under realistic atmospheric forcing conditions. Numerical simulations using the GAL parameterization are evaluated against both an already implemented generalized actuator disk (GAD) wind turbine parameterization and two field campaigns that measured the inflow and near-wake regions of a single turbine. The representation of wake wind speed, variance, and vorticity distributions is examined by comparing fine-resolution GAL and GAD simulations and GAD simulations at both fine and coarse resolutions. The higher-resolution simulations show slightly larger and more persistent velocity deficits in the wake and substantially increased variance and vorticity when compared to the coarse-resolution GAD. The GAL generates distinct tip and root vortices that maintain coherence as helical tubes for approximately one rotor diameter downstream. Coarse-resolution simulations using the GAD produce similar aggregated wake characteristics to both fine-scale GAD and GAL simulations at a fraction of the computational cost. The GAL parameterization provides the capability to resolve near wake physics, including vorticity shedding and wake expansion.
NASA Technical Reports Server (NTRS)
Obrien, S. O. (Principal Investigator)
1980-01-01
The program, LACVIN, calculates vegetative index numbers from limited area coverage/high resolution picture transmission data for selected IJ grid sections. The IJ grid sections were previously extracted from the full resolution data tapes and stored on disk files.
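The abstract does not give the exact index formula LACVIN uses; as a hedged illustration, a common normalized-difference form computed over one IJ grid section would look like the sketch below, with synthetic channel values standing in for the satellite data.

```python
import numpy as np

# Synthetic reflectance-like values for a visible and a near-infrared channel (placeholders).
rng = np.random.default_rng(5)
ch_vis = rng.uniform(0.05, 0.25, size=(64, 64))
ch_nir = rng.uniform(0.10, 0.45, size=(64, 64))

# Normalized-difference vegetative index number for the grid section.
vin = (ch_nir - ch_vis) / (ch_nir + ch_vis)
print(f"section mean vegetative index number: {vin.mean():.3f}")
```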
NASA Astrophysics Data System (ADS)
Afeyan, Bedros; Casas, Fernando; Crouseilles, Nicolas; Dodhy, Adila; Faou, Erwan; Mehrenberger, Michel; Sonnendrücker, Eric
2014-10-01
KEEN waves are non-stationary, nonlinear, self-organized asymptotic states in Vlasov plasmas. They lie outside the precepts of linear theory or perturbative analysis, unlike electron plasma waves or ion acoustic waves. Steady state, nonlinear constructs such as BGK modes also do not apply. The range in velocity that is strongly perturbed by KEEN waves depends on the amplitude and duration of the ponderomotive force generated by two crossing laser beams, for instance, used to drive them. Smaller amplitude drives manage to devolve into multiple highly-localized vorticlets, after the drive is turned off, and may eventually succeed in coalescing into KEEN waves. Fragmentation once the drive stops, and potential eventual remerger, is a hallmark of the weakly driven cases. A fully formed (more strongly driven) KEEN wave has one dominant vortical core. But it also involves fine scale complex dynamics due to shedding and merging of smaller vortical structures with the main one. Shedding and merging of vorticlets are involved in either case, but at different rates and with different relative importance. The narrow velocity range in which one must maintain sufficient resolution in the weakly driven cases challenges fixed velocity grid numerical schemes. What is needed is the capability of resolving locally in velocity while maintaining a coarse grid outside the highly perturbed region of phase space. We here report on a new Semi-Lagrangian Vlasov-Poisson solver based on conservative non-uniform cubic splines in velocity that tackles this problem head on. An additional feature of our approach is the use of a new high-order time-splitting scheme which allows much longer simulations per computational effort. This is needed for low amplitude runs. There, global coherent structures take a long time to set up, such as KEEN waves, if they do so at all. The new code's performance is compared to uniform grid simulations and the advantages are quantified. The birth pains associated with weakly driven KEEN waves are captured in these simulations. Canonical KEEN waves with ample drive are also treated using these advanced techniques. They will allow the efficient simulation of KEEN waves in multiple dimensions, which will be tackled next, as well as generalizations to Vlasov-Maxwell codes. These are essential for pursuing the impact of KEEN waves in high energy density plasmas and in inertial confinement fusion applications. More generally, one needs a fully-adaptive grid-in-phase-space method which could handle all small vorticlet dynamics whether peeling off or remerging. Such fully adaptive grids would have to be computed sparsely in order to be viable. This two-velocity grid method is a concrete and fruitful step in that direction. Contribution to the Topical Issue "Theory and Applications of the Vlasov Equation", edited by Francesco Pegoraro, Francesco Califano, Giovanni Manfredi and Philip J. Morrison.
Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core
NASA Astrophysics Data System (ADS)
Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey
2017-05-01
SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on the unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and the reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using a reduced lat-lon grid and the variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of the orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to the 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.
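A reduced latitude-longitude grid simply thins the number of longitude points per latitude row toward the poles. The toy construction below uses cosine-proportional thinning with an arbitrary minimum row length to show the kind of point-count saving involved; it is not the SL-AV20 reduction rule.

```python
import numpy as np

nlat, nlon_eq = 90, 360                      # latitude rows and longitudes on the equatorial row
lats = np.linspace(-89.0, 89.0, nlat)

# Reduced grid: points per row roughly proportional to cos(latitude), never fewer than 16.
nlon = np.maximum(16, (nlon_eq * np.cos(np.deg2rad(lats))).astype(int))

regular = nlat * nlon_eq
reduced = int(nlon.sum())
print(f"regular grid: {regular} points, reduced grid: {reduced} points "
      f"({1.0 - reduced / regular:.0%} fewer)")
```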
NASA Astrophysics Data System (ADS)
King, C. H.; Wagenbrenner, J.; Fedora, M.; Watkins, D.; Watkins, M. K.; Huckins, C.
2017-12-01
The Great Lakes Region of North America has experienced more frequent extreme precipitation events in recent decades, resulting in a large number of stream crossing failures. While there are accepted methods for designing stream crossings to accommodate peak storm discharges, less attention has been paid to assessing the risk of failure. To evaluate failure risk and potential impacts, coarse-resolution stream crossing surveys were completed on 51 stream crossings and dams in the North Branch Paint River watershed in Michigan's Upper Peninsula. These inventories determined stream crossing dimensions along with stream and watershed characteristics. Eleven culverts were selected from the coarse surveys for high resolution hydraulic analysis to estimate discharge conditions expected at crossing failure. Watershed attributes upstream of the crossing, including area, slope, and storage, were acquired. Sediment discharge and the economic impact associated with a failure event were also estimated for each stream crossing. Impacts to stream connectivity and fish passability were assessed from the coarse-level surveys. Using information from both the coarse and high-resolution surveys, we also developed indicators to predict failure risk without the need for complex hydraulic modeling. These passability scores and failure risk indicators will help to prioritize infrastructure replacement and improve the overall connectivity of river systems throughout the upper Great Lakes Region.
Ruiz-Arias, Jose A; Gueymard, Christian A; Santos-Alamillos, Francisco J; Pozo-Vázquez, David
2016-08-10
Concentrating solar technologies, which are fuelled by the direct normal component of solar irradiance (DNI), are among the most promising solar technologies. Currently, the state-of-the-art methods for DNI evaluation use datasets of aerosol optical depth (AOD) with only coarse (typically monthly) temporal resolution. Using daily AOD data from both site-specific observations at ground stations as well as gridded model estimates, a methodology is developed to evaluate how the calculated long-term DNI resource is affected by using AOD data averaged over periods from 1 to 30 days. It is demonstrated here that the use of monthly representations of AOD leads to systematic underestimations of the predicted long-term DNI up to 10% in some areas with high solar resource, which may result in detrimental consequences for the bankability of concentrating solar power projects. Recommendations for the use of either daily or monthly AOD data are provided on a geographical basis.
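The direction of the bias follows from the nonlinearity of atmospheric extinction: DNI depends roughly exponentially on AOD, so computing DNI from a time-averaged AOD underestimates the time-averaged DNI (Jensen's inequality). The toy broadband calculation below, with an assumed lognormal daily AOD and a fixed air mass, illustrates the effect; it is not the radiative transfer setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily AOD for one month (lognormal variability is an assumption).
tau = rng.lognormal(mean=np.log(0.25), sigma=0.6, size=30)
m, E0 = 1.5, 1000.0                                # fixed air mass and top-of-atmosphere DNI (W/m^2)

dni_daily   = E0 * np.exp(-m * tau)                # DNI from each day's AOD
dni_monthly = E0 * np.exp(-m * tau.mean())         # DNI from the monthly-mean AOD

print(f"mean DNI from daily AOD   : {dni_daily.mean():7.1f} W/m^2")
print(f"DNI from monthly-mean AOD : {dni_monthly:7.1f} W/m^2")
print(f"relative underestimation  : {1.0 - dni_monthly / dni_daily.mean():.1%}")
```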
Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow
NASA Technical Reports Server (NTRS)
Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin
1996-01-01
An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.
Ruiz-Arias, Jose A.; Gueymard, Christian A.; Santos-Alamillos, Francisco J.; Pozo-Vázquez, David
2016-01-01
Concentrating solar technologies, which are fuelled by the direct normal component of solar irradiance (DNI), are among the most promising solar technologies. Currently, the state-of-the-art methods for DNI evaluation use datasets of aerosol optical depth (AOD) with only coarse (typically monthly) temporal resolution. Using daily AOD data from both site-specific observations at ground stations as well as gridded model estimates, a methodology is developed to evaluate how the calculated long-term DNI resource is affected by using AOD data averaged over periods from 1 to 30 days. It is demonstrated here that the use of monthly representations of AOD leads to systematic underestimations of the predicted long-term DNI up to 10% in some areas with high solar resource, which may result in detrimental consequences for the bankability of concentrating solar power projects. Recommendations for the use of either daily or monthly AOD data are provided on a geographical basis. PMID:27507711
Zarzycki, Colin M.; Reed, Kevin A.; Bacmeister, Julio T.; ...
2016-02-25
This article discusses the sensitivity of tropical cyclone climatology to surface coupling strategy in high-resolution configurations of the Community Earth System Model. Using two supported model setups, we demonstrate that the choice of grid on which the lowest model level wind stress and surface fluxes are computed may lead to differences in cyclone strength in multi-decadal climate simulations, particularly for the most intense cyclones. Using a deterministic framework, we show that when these surface quantities are calculated on an ocean grid that is coarser than the atmosphere, the computed frictional stress is misaligned with wind vectors in individual atmospheric grid cells. This reduces the effective surface drag, and results in more intense cyclones when compared to a model configuration where the ocean and atmosphere are of equivalent resolution. Our results demonstrate that the choice of computation grid for atmosphere–ocean interactions is non-negligible when considering climate extremes at high horizontal resolution, especially when model components are on highly disparate grids.
Nonuniform depth grids in parabolic equation solutions.
Sanders, William M; Collins, Michael D
2013-04-01
The parabolic wave equation is solved using a finite-difference solution in depth that involves a nonuniform grid. The depth operator is discretized using Galerkin's method with asymmetric hat functions. Examples are presented to illustrate that this approach can be used to improve efficiency for problems in ocean acoustics and seismo-acoustics. For shallow water problems, accuracy is sensitive to the precise placement of the ocean bottom interface. This issue is often addressed with the inefficient approach of using a fine grid spacing over all depth. Efficiency may be improved by using a relatively coarse grid with nonuniform sampling to precisely position the interface. Efficiency may also be improved by reducing the sampling in the sediment and in an absorbing layer that is used to truncate the computational domain. Nonuniform sampling may also be used to improve the implementation of a single-scattering approximation for sloping fluid-solid interfaces.
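The efficiency argument rests on being able to discretize the depth operator on an arbitrarily spaced grid, so that points can be clustered around the ocean-bottom interface and thinned in the sediment and absorbing layer. The sketch below builds a standard three-point second-derivative stencil on such a nonuniform depth grid; the paper's actual discretization uses Galerkin's method with asymmetric hat functions, so this is only an illustration of nonuniform sampling.

```python
import numpy as np

def d2_nonuniform(z):
    """Three-point second-derivative stencil on a nonuniform grid (interior points only)."""
    n = len(z)
    D2 = np.zeros((n, n))
    for i in range(1, n - 1):
        hm, hp = z[i] - z[i - 1], z[i + 1] - z[i]
        D2[i, i - 1] = 2.0 / (hm * (hm + hp))
        D2[i, i]     = -2.0 / (hm * hp)
        D2[i, i + 1] = 2.0 / (hp * (hm + hp))
    return D2

# Fine sampling only around a notional bottom interface at 200 m; coarse in water and sediment.
water     = np.linspace(0.0, 195.0, 40)
interface = np.linspace(196.0, 204.0, 17)
sediment  = np.linspace(210.0, 400.0, 20)
z = np.concatenate([water, interface, sediment])

D2 = d2_nonuniform(z)
f = np.sin(2.0 * np.pi * z / 100.0)                 # smooth test field
residual = D2[1:-1] @ f + (2.0 * np.pi / 100.0) ** 2 * f[1:-1]
print(f"{len(z)} depth points, max interior truncation error: {np.abs(residual).max():.2e}")
```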
NASA Astrophysics Data System (ADS)
Cervone, A.; Manservisi, S.; Scardovelli, R.
2010-09-01
A multilevel VOF approach has been coupled to an accurate finite element Navier-Stokes solver in axisymmetric geometry for the simulation of incompressible liquid jets with high density ratios. The representation of the color function over a fine grid has been introduced to reduce the discontinuity of the interface at the cell boundary. In the refined grid the automatic breakup and coalescence occur at a spatial scale much smaller than the coarse grid spacing. To reduce memory requirements, we have implemented on the fine grid a compact storage scheme which memorizes the color function data only in the mixed cells. The capillary force is computed by using the Laplace-Beltrami operator and a volumetric approach for the two principal curvatures. Several simulations of axisymmetric jets have been performed to show the accuracy and robustness of the proposed scheme.
DPW-VI Results Using FUN3D with Focus on k-kL-MEAH2015 (k-kL) Turbulence Model
NASA Technical Reports Server (NTRS)
Abdol-Hamid, K. S.; Carlson, Jan-Renee; Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Park, Michael A.
2017-01-01
The Common Research Model wing-body configuration is investigated with the k-kL-MEAH2015 turbulence model implemented in FUN3D. This includes results presented at the Sixth Drag Prediction Workshop and additional results generated after the workshop with a nonlinear Quadratic Constitutive Relation (QCR) variant of the same turbulence model. The workshop-provided grids are used, and a uniform grid refinement study is performed at the design condition. A large variation between results with and without a reconstruction limiter is exhibited on "medium" grid sizes, indicating that the medium grid size is too coarse for drawing conclusions in comparison with experiment. This variation is reduced with grid refinement. At a fixed angle of attack near design conditions, the QCR variant yielded decreased lift and drag compared with the linear eddy-viscosity model by an amount that was approximately constant with grid refinement. The k-kL-MEAH2015 turbulence model produced wing root junction flow behavior consistent with wind tunnel observations.
Is there potential added value in COSMO-CLM forced by ERA reanalysis data?
NASA Astrophysics Data System (ADS)
Lenz, Claus-Jürgen; Früh, Barbara; Adalatpanah, Fatemeh Davary
2017-12-01
The potential added value (PAV) concept suggested by Di Luca et al. (Clim Dyn 40:443-464, 2013a) is applied to ERA-Interim-driven runs of the regional climate model COSMO-CLM. The runs are performed for the time period 1979-2013 for the EURO-CORDEX domain at horizontal grid resolutions of 0.11°, 0.22°, and 0.44°, such that the higher resolved model grid fits into the next coarser grid. The PAV concept is applied to annual, seasonal, and monthly means of the 2 m air temperature. Results show the highest potential added value for the run with the finest grid and a general increase of PAV with increasing resolution. The potential added value strongly depends on the season as well as the region of consideration. The gain in PAV is larger when enhancing the resolution from 0.44° to 0.22° than from 0.22° to 0.11°. At grid aggregations to 0.88° and 1.76° the differences in PAV between the COSMO-CLM runs on the mentioned grid resolutions are maximal. They nearly vanish at aggregations to even coarser grids. In all cases the stationary part accounts for at least 80% of the PAV.
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model's horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model's terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
NASA Astrophysics Data System (ADS)
Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.
2016-12-01
The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high resolution modeling to produce high resolution data products for the climate community. Such a data product requires high resolution modeling over the area of the megasite. We present three variable resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, with a refinement in resolution down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. Grids vary based upon the selection of areas of refinement which capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable resolution model at the NSA is investigated.
FitEM2EM—Tools for Low Resolution Study of Macromolecular Assembly and Dynamics
Frankenstein, Ziv; Sperling, Joseph; Sperling, Ruth; Eisenstein, Miriam
2008-01-01
Studies of the structure and dynamics of macromolecular assemblies often involve comparison of low resolution models obtained using different techniques such as electron microscopy or atomic force microscopy. We present new computational tools for comparing (matching) and docking of low resolution structures, based on shape complementarity. The matched or docked objects are represented by three dimensional grids where the value of each grid point depends on its position with regard to the interior, surface or exterior of the object. The grids are correlated using fast Fourier transformations producing either matches of related objects or docking models depending on the details of the grid representations. The procedures incorporate thickening and smoothing of the surfaces of the objects which effectively compensates for differences in the resolution of the matched/docked objects, circumventing the need for resolution modification. The presented matching tool FitEM2EMin successfully fitted electron microscopy structures obtained at different resolutions, different conformers of the same structure and partial structures, ranking correct matches at the top in every case. The differences between the grid representations of the matched objects can be used to study conformation differences or to characterize the size and shape of substructures. The presented low-to-low docking tool FitEM2EMout ranked the expected models at the top. PMID:18974836
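The core of the matching step is a voxel-grid encoding of each object followed by an exhaustive translational search done with FFT correlations. The sketch below uses assumed surface and interior weights and recovers the relative shift between two toy objects; the actual FitEM2EM tools additionally handle rotations, surface thickening, and smoothing, none of which are shown here.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def shape_grid(mask, surface_weight=1.0, interior_weight=-0.2):
    """Grid representation: surface voxels weighted positively, interior voxels slightly negative."""
    interior = binary_erosion(mask)
    surface = mask & ~interior
    return surface_weight * surface + interior_weight * interior

def best_translation(grid_a, grid_b):
    """Score every relative translation at once with an FFT cross-correlation."""
    corr = np.real(np.fft.ifftn(np.fft.fftn(grid_a) * np.conj(np.fft.fftn(grid_b))))
    return np.unravel_index(np.argmax(corr), corr.shape), corr.max()

# Toy low-resolution objects: a sphere and the same sphere shifted by 3 voxels in y and 5 in x.
n = 32
zz, yy, xx = np.mgrid[:n, :n, :n]
sphere = (xx - 12) ** 2 + (yy - 14) ** 2 + (zz - 16) ** 2 < 6 ** 2
moved = np.roll(np.roll(sphere, 5, axis=2), 3, axis=1)

shift, score = best_translation(shape_grid(moved), shape_grid(sphere))
print("recovered shift (z, y, x):", shift, " correlation score:", round(float(score), 1))
```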
Tsunami Forecasting in the Atlantic Basin
NASA Astrophysics Data System (ADS)
Knight, W. R.; Whitmore, P.; Sterling, K.; Hale, D. A.; Bahng, B.
2012-12-01
The mission of the West Coast and Alaska Tsunami Warning Center (WCATWC) is to provide advance tsunami warning and guidance to coastal communities within its Area-of-Responsibility (AOR). Predictive tsunami models, based on the shallow water wave equations, are an important part of the Center's guidance support. An Atlantic-based counterpart to the long-standing forecasting ability in the Pacific known as the Alaska Tsunami Forecast Model (ATFM) is now developed. The Atlantic forecasting method is based on ATFM version 2 which contains advanced capabilities over the original model; including better handling of the dynamic interactions between grids, inundation over dry land, new forecast model products, an optional non-hydrostatic approach, and the ability to pre-compute larger and more finely gridded regions using parallel computational techniques. The wide and nearly continuous Atlantic shelf region presents a challenge for forecast models. Our solution to this problem has been to develop a single unbroken high resolution sub-mesh (currently 30 arc-seconds), trimmed to the shelf break. This allows for edge wave propagation and for kilometer scale bathymetric feature resolution. Terminating the fine mesh at the 2000m isobath keeps the number of grid points manageable while allowing for a coarse (4 minute) mesh to adequately resolve deep water tsunami dynamics. Higher resolution sub-meshes are then included around coastal forecast points of interest. The WCATWC Atlantic AOR includes eastern U.S. and Canada, the U.S. Gulf of Mexico, Puerto Rico, and the Virgin Islands. Puerto Rico and the Virgin Islands are in very close proximity to well-known tsunami sources. Because travel times are under an hour and response must be immediate, our focus is on pre-computing many tsunami source "scenarios" and compiling those results into a database accessible and calibrated with observations during an event. Seismic source evaluation determines the order of model pre-computation - starting with those sources that carry the highest risk. Model computation zones are confined to regions at risk to save computation time. For example, Atlantic sources have been shown to not propagate into the Gulf of Mexico. Therefore, fine grid computations are not performed in the Gulf for Atlantic sources. Outputs from the Atlantic model include forecast marigrams at selected sites, maximum amplitudes, drawdowns, and currents for all coastal points. The maximum amplitude maps will be supplemented with contoured energy flux maps which show more clearly the effects of bathymetric features on tsunami wave propagation. During an event, forecast marigrams will be compared to observations to adjust the model results. The modified forecasts will then be used to set alert levels between coastal breakpoints, and provided to emergency management.
NASA Astrophysics Data System (ADS)
Zolina, Olga; Simmer, Clemens; Kapala, Alice; Mächel, Hermann; Gulev, Sergey; Groisman, Pavel
2014-05-01
We present new high-resolution daily precipitation grids developed at the Meteorological Institute, University of Bonn and the German Weather Service (DWD) under the STAMMEX project (Spatial and Temporal Scales and Mechanisms of Extreme Precipitation Events over Central Europe). Daily precipitation grids have been developed from the daily-reporting precipitation network of DWD, which runs one of the world's densest rain gauge networks, comprising more than 7500 stations. Several quality-controlled daily gridded products with homogenized sampling were developed covering the periods 1931-onwards (with 0.5 degree resolution), 1951-onwards (0.25 degree and 0.5 degree), and 1971-2000 (0.1 degree). Different methods were tested to select the best gridding methodology that minimizes errors of integral grid estimates over hilly terrain. Besides daily precipitation values with uncertainty estimates (which include standard estimates of the kriging uncertainty as well as error estimates derived by a bootstrapping algorithm), the STAMMEX data sets include a variety of statistics that characterize temporal and spatial dynamics of the precipitation distribution (quantiles, extremes, wet/dry spells, etc.). Comparisons with existing continental-scale daily precipitation grids (e.g., CRU, ECA E-OBS, GCOS), which include considerably fewer observations than those used in STAMMEX, demonstrate the added value of high-resolution grids for extreme rainfall analyses. These data exhibit spatial variability patterns and trends in precipitation extremes which are missed or incorrectly reproduced over Central Europe by coarser resolution grids based on sparser networks. The STAMMEX dataset can be used for high-quality climate diagnostics of precipitation variability, as a reference for reanalyses and remotely-sensed precipitation products (including the upcoming Global Precipitation Mission products), and for input into regional climate and operational weather forecast models. We will present numerous applications of the STAMMEX grids, ranging from case studies of the major Central European floods to long-term changes in different precipitation statistics, including those accounting for the alternation of dry and wet periods and precipitation intensities associated with prolonged rainy episodes.
An algebraic multigrid method for Q2-Q1 mixed discretizations of the Navier-Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokopenko, Andrey; Tuminaro, Raymond S.
2016-07-01
Algebraic multigrid (AMG) preconditioners are considered for discretized systems of partial differential equations (PDEs) where unknowns associated with different physical quantities are not necessarily co-located at mesh points. Specifically, we investigate a Q2-Q1 mixed finite element discretization of the incompressible Navier-Stokes equations where the number of velocity nodes is much greater than the number of pressure nodes. Consequently, some velocity degrees-of-freedom (dofs) are defined at spatial locations where there are no corresponding pressure dofs. Thus, AMG approaches leveraging this co-located structure are not applicable. This paper instead proposes an automatic AMG coarsening that mimics certain pressure/velocity dof relationships of the Q2-Q1 discretization. The main idea is to first automatically define coarse pressures in a somewhat standard AMG fashion and then to carefully (but automatically) choose coarse velocity unknowns so that the spatial location relationship between pressure and velocity dofs resembles that on the finest grid. To define coefficients within the inter-grid transfers, an energy minimization AMG (EMIN-AMG) is utilized. EMIN-AMG is not tied to specific coarsening schemes and grid transfer sparsity patterns, and so it is applicable to the proposed coarsening. Numerical results highlighting solver performance are given on Stokes and incompressible Navier-Stokes problems.
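A minimal 1-D sketch of the coarsening selection logic summarized above, under simplifying assumptions made here for illustration: pressures live on a coarse node set, velocities on a finer set, coarse pressures are picked by simple every-other-point coarsening (a stand-in for a real AMG coarsening), and coarse velocities are then chosen as the velocity dofs nearest the coarse pressures so the fine-grid pressure/velocity relationship is mimicked. The EMIN-AMG transfer construction is not reproduced.

```python
import numpy as np

# 1-D Q2-Q1-like layout: pressure nodes at integer positions,
# velocity nodes at every half position (roughly twice as many velocity dofs).
p_coords = np.arange(0.0, 9.0, 1.0)        # 9 pressure nodes
v_coords = np.arange(0.0, 8.5, 0.5)        # 17 velocity nodes

def coarsen_pressure(coords, stride=2):
    """Stand-in for standard AMG coarsening: keep every 'stride'-th pressure node."""
    return coords[::stride]

def coarsen_velocity(v_coords, coarse_p_coords):
    """Choose coarse velocity dofs nearest to the coarse pressure locations,
    mimicking the fine-grid co-location of pressure and velocity dofs."""
    idx = [int(np.argmin(np.abs(v_coords - xp))) for xp in coarse_p_coords]
    return v_coords[sorted(set(idx))]

coarse_p = coarsen_pressure(p_coords)
coarse_v = coarsen_velocity(v_coords, coarse_p)
print("coarse pressures:", coarse_p)
print("coarse velocities:", coarse_v)
```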
NASA Astrophysics Data System (ADS)
Baker, Kirk R.; Hawkins, Andy; Kelly, James T.
2014-12-01
Near-source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increases. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.
Great Lakes modeling: Are the mathematics outpacing the data and our understanding of the system?
Mathematical modeling in the Great Lakes has come a long way from the pioneering work done by Manhattan College in the 1970s, when the models operated on coarse computational grids (often lake-wide) and used simple eutrophication formulations. Moving forward 40 years, we are now...
Using Forest Health Monitoring to assess aspen forest cover change in the southern Rockies ecoregion
Paul Rogers
2002-01-01
Long-term qualitative observations suggest a marked decline in quaking aspen (Populus tremuloides Michx.) primarily due to advancing succession and fire suppression. This study presents an ecoregional coarse-grid analysis of the current aspen situation using Forest Health Monitoring (FHM) data from Idaho, Wyoming, and Colorado. A...
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
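A minimal sketch of domain decomposition with a space-filling curve, under the assumption (for illustration only) of a 2-D Morton/Z-order curve rather than whatever curve the solver actually uses: cells are sorted by their interleaved-bit key and the sorted list is cut into equal contiguous chunks, one per processor.

```python
import numpy as np

def morton_key(i, j, bits=16):
    """Interleave the bits of integer cell indices (i, j) into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b)
        key |= ((j >> b) & 1) << (2 * b + 1)
    return key

def partition(cells, n_ranks):
    """Sort cells along the space-filling curve and split into contiguous chunks."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    return np.array_split(ordered, n_ranks)

# Example: an 8x8 block of cells distributed over 4 ranks
cells = [(i, j) for i in range(8) for j in range(8)]
for rank, chunk in enumerate(partition(cells, 4)):
    print(rank, len(chunk), chunk[0], chunk[-1])
```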
A Polar Initial Alignment Algorithm for Unmanned Underwater Vehicles
Yan, Zheping; Wang, Lu; Wang, Tongda; Zhang, Honghan; Zhang, Xun; Liu, Xiangling
2017-01-01
Due to its high autonomy, the strapdown inertial navigation system (SINS) is widely used in unmanned underwater vehicle (UUV) navigation. Initial alignment is crucial because the initial alignment results are used as the initial SINS values, which can affect the subsequent SINS results. Due to the rapid convergence of Earth meridians, there is a calculation overflow in conventional initial alignment algorithms, making them invalid for polar UUV navigation. To overcome these problems, a polar initial alignment algorithm for UUVs is proposed in this paper, which consists of coarse and fine alignment algorithms. Based on the principle of the conical slow drift of gravity, the coarse alignment algorithm is derived under the grid frame. By choosing velocity and attitude as the measurements, the fine alignment with a Kalman filter (KF) is derived under the grid frame. Simulations and experiments comparing polar, conventional, and transversal initial alignment algorithms for polar UUV navigation are carried out. Results demonstrate that the proposed polar initial alignment algorithm can complete the initial alignment of the UUV in the polar region rapidly and accurately. PMID:29168735
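A minimal linear Kalman filter sketch of the fine-alignment step, under strong simplifying assumptions not taken from the paper: a toy two-component error state (misalignment angle and velocity error) with random-walk process noise and a direct, noisy velocity-error measurement. The true grid-frame error dynamics and the attitude measurement model are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative error state: [misalignment angle (rad), velocity error (m/s)]
F = np.array([[1.0, 0.0],
              [0.1, 1.0]])            # toy dynamics: angle error feeds velocity error
H = np.array([[0.0, 1.0]])            # only the velocity error is measured
Q = np.diag([1e-8, 1e-4])             # process noise
R = np.array([[1e-2]])                # measurement noise

x_true = np.array([0.02, 0.0])        # true initial misalignment of 0.02 rad
x_est = np.zeros(2)
P = np.diag([1e-2, 1e-1])

for _ in range(200):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("estimated misalignment (rad):", x_est[0])
```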
Aerodynamic design optimization via reduced Hessian SQP with solution refining
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends this scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for making a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using quasi-one-dimensional Euler equations show that through solution refining the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme are significantly improved.
NASA Astrophysics Data System (ADS)
Abani, Neerav; Reitz, Rolf D.
2010-09-01
An advanced mixing model was applied to study engine emissions and combustion with different injection strategies ranging from multiple injections, early injection and grouped-hole nozzle injection in light and heavy duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model that uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early cycle injections and flame lift-off lengths for late cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.
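A minimal sketch of the sub-grid vapor-release idea described above, with an assumed linear jet-spreading law and an assumed constant release fraction per step standing in for the weighting factor; neither value is taken from the KIVA VPM implementation.

```python
import numpy as np

SPREAD = 0.1          # assumed jet spreading rate: half-width = SPREAD * axial distance
RELEASE_RATE = 0.5    # assumed fraction of remaining vapor released per step while unresolved

def release_profile(dx_cell, x_positions, m_vapor=1.0):
    """March a vapor parcel along the jet axis; release vapor gradually until the
    local jet half-width is resolved by the local mesh spacing dx_cell."""
    released = []
    remaining = m_vapor
    for x in x_positions:
        half_width = SPREAD * x
        if half_width >= dx_cell:
            # jet is resolved here: hand the rest of the vapor to the gas phase
            released.append(remaining)
            remaining = 0.0
            break
        dm = RELEASE_RATE * remaining      # weighted partial release in the unresolved region
        released.append(dm)
        remaining -= dm
    return np.array(released), remaining

x = np.linspace(1e-3, 0.05, 25)            # axial positions (m)
for dx in (0.5e-3, 2e-3, 4e-3):            # fine to coarse cells
    rel, left = release_profile(dx, x)
    print(f"dx={dx*1e3:.1f} mm: released over {len(rel)} steps, leftover={left:.3f}")
```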
Bayesian calibration of coarse-grained forces: Efficiently addressing transferability
NASA Astrophysics Data System (ADS)
Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.
2016-04-01
Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
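A minimal sketch of the correct-then-interpolate idea, under assumptions made here for illustration: the "simulation" is a cheap analytic density model, the functional-derivative information is replaced by a scalar finite-difference sensitivity of density with respect to a single force-scaling parameter, and the corrected parameter is interpolated across temperature state points with a cubic spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def cg_density(epsilon, T):
    """Stand-in for a CG simulation: density as a function of force scale epsilon and temperature T."""
    return 1.2 * epsilon / (1.0 + 0.002 * T)

def target_density(T):
    """Stand-in for the atomistic/reference density-temperature relationship."""
    return 1.0 / (1.0 + 0.0025 * T)

def correct_epsilon(eps0, T, h=1e-3):
    """One Newton-like correction using a finite-difference sensitivity (the scalar
    analogue of the functional-derivative information used in the workflow)."""
    resid = cg_density(eps0, T) - target_density(T)
    sens = (cg_density(eps0 + h, T) - cg_density(eps0 - h, T)) / (2 * h)
    return eps0 - resid / sens

anchor_T = np.array([260.0, 300.0, 340.0, 380.0])      # coarse grid of state points
eps0 = 0.85                                            # initial force-matching-style estimate
eps_corrected = np.array([correct_epsilon(eps0, T) for T in anchor_T])

fine_T = np.linspace(260.0, 380.0, 13)                 # fine grid of state points
eps_fine = CubicSpline(anchor_T, eps_corrected)(fine_T)
print(np.round(eps_fine, 4))
```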
Atmospheric Rivers in VR-CESM: Historical Comparison and Future Projections
NASA Astrophysics Data System (ADS)
McClenny, E. E.; Ullrich, P. A.
2016-12-01
Atmospheric rivers (ARs) are responsible for most of the horizontal vapor transport from the tropics, and bring upwards of half the annual precipitation to midlatitude west coasts. The difference between a drought year and a wet year can come down to 1-2 ARs. Such few events transform an otherwise arid region into one which supports remarkable biodiversity, productive agriculture, and booming human populations. It follows that such a sensitive hydroclimate feature would demand priority in evaluating end-of-century climate runs, and indeed, the AR subfield has grown significantly over the last decade. However, results tend to vary wildly from study to study, raising questions about how to best approach ARs in models. The disparity may result from any number of issues, including the ability for a model to properly resolve a precipitating AR, to the formulation and application of an AR detection algorithm. ARs pose a unique problem in global climate models (GCMs) computationally and physically, because the GCM horizontal grid must be fine enough to resolve coastal mountain range topography and force orographic precipitation. Thus far, most end-of-century projections on ARs have been performed on models whose grids are too coarse to resolve mountain ranges, causing authors to draw conclusions on AR intensity from water vapor content or transport alone. The use of localized grid refinement in the Variable Resolution version of NCAR's Community Earth System Model (VR-CESM) has succeeded in resolving AR landfall. This study applies an integrated water vapor AR detection algorithm to historical and future projections from VR-CESM, with historical ARs validated against NASA's Modern Era Retrospective-Analysis for Research and Applications. Results on end-of-century precipitating AR frequency, intensity, and landfall location will be discussed.
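A minimal sketch of an integrated-water-vapor AR detection step of the kind mentioned above, with everything illustrative rather than the study's actual algorithm: a fixed column-water-vapor threshold, connected-component labeling, and a crude elongation test standing in for length/narrowness criteria.

```python
import numpy as np
from scipy import ndimage

def detect_ar_candidates(iwv, threshold=20.0, min_cells=50):
    """Label contiguous regions where column water vapor exceeds a threshold and
    keep only elongated regions (crude stand-ins for AR length/narrowness criteria)."""
    mask = iwv > threshold
    labels, n = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for k in range(1, n + 1):
        region = labels == k
        if region.sum() < min_cells:
            continue
        rows, cols = np.where(region)
        extent = max(np.ptp(rows), np.ptp(cols)) + 1
        width = region.sum() / extent
        if extent >= 3 * width:          # elongation test (illustrative ratio)
            keep |= region
    return keep

# Synthetic IWV field (kg m^-2) with a long, narrow moist filament
ny, nx = 90, 180
iwv = 12.0 + 3.0 * np.random.default_rng(0).random((ny, nx))
rows = np.arange(20, 70)
cols = (60 + 1.2 * (rows - 20)).astype(int)
for r, c in zip(rows, cols):
    iwv[r, c - 2:c + 3] += 15.0

ar_mask = detect_ar_candidates(iwv)
print("AR grid cells detected:", int(ar_mask.sum()))
```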
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
NASA Astrophysics Data System (ADS)
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when complex land-ocean distribution is present.
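A minimal numerical illustration of why conformally mapped grids remain orthogonal, using a simple analytic conformal map (w = z²) as a stand-in for a Schwarz-Christoffel map: images of constant-ξ and constant-η coordinate lines are checked to intersect at right angles.

```python
import numpy as np

def conformal_map(z):
    """Stand-in conformal map (w = z^2); an SC map would be used in practice."""
    return z ** 2

# Rectangular computational grid in the z-plane (offset from the origin,
# where the derivative of z^2 vanishes and conformality is lost)
xi = np.linspace(1.0, 2.0, 21)
eta = np.linspace(0.5, 1.5, 21)
XI, ETA = np.meshgrid(xi, eta)
W = conformal_map(XI + 1j * ETA)

# Tangent vectors along the two families of grid lines in the physical plane
t_xi = np.diff(W, axis=1)[:-1, :]
t_eta = np.diff(W, axis=0)[:, :-1]

# Angle between the two tangents at each interior node (degrees)
dot = t_xi.real * t_eta.real + t_xi.imag * t_eta.imag
norm = np.abs(t_xi) * np.abs(t_eta)
angles = np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))
print("max deviation from 90 degrees:", float(np.max(np.abs(angles - 90.0))))
```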
Wide-angle display-type retarding field analyzer with high energy and angular resolutions
NASA Astrophysics Data System (ADS)
Muro, Takayuki; Ohkochi, Takuo; Kato, Yukako; Izumi, Yudai; Fukami, Shun; Fujiwara, Hidenori; Matsushita, Tomohiro
2017-12-01
Deployments of spherical grids to obtain high energy and angular resolutions for retarding field analyzers (RFAs) having acceptance angles as large as or larger than ±45° were explored under the condition of using commercially available microchannel plates with effective diameters of approximately 100 mm. As a result of electron trajectory simulations, a deployment of three spherical grids with significantly different grid separations instead of conventional equidistant separations showed an energy resolving power (E/ΔE) of 3200 and an angular resolution of 0.6°. The mesh number of the wire mesh retarding grid used for the simulation was 250. An RFA constructed with the simulated design experimentally showed an E/ΔE of 1100 and an angular resolution of 1°. Using the RFA and synchrotron radiation of 900 eV, photoelectron diffraction (PED) measurements were performed for single-crystal graphite. A clear C 1s PED pattern was observed even when the differential energy of the RFA was set at 0.5 eV. Further improvement of the energy resolution was theoretically examined under the assumption of utilizing a retarding grid fabricated by making a large number of radially directed cylindrical holes through a partial spherical shell instead of using a wire mesh retarding grid. An E/ΔE of 14 500 was predicted for a hole design with a diameter of 60 μm and a depth of 100 μm. A retarding grid with this hole design and a holed area corresponding to an acceptance angle of ±7° was fabricated. An RFA constructed with this retarding grid experimentally showed an E/ΔE of 1800. Possible reasons for the experimental E/ΔE lower than the theoretical values are discussed.
NASA Astrophysics Data System (ADS)
Schiemann, Reinhard; Roberts, Charles J.; Bush, Stephanie; Demory, Marie-Estelle; Strachan, Jane; Vidale, Pier Luigi; Mizielinski, Matthew S.; Roberts, Malcolm J.
2015-04-01
Precipitation over land exhibits a high degree of variability due to the complex interaction of the precipitation generating atmospheric processes with coastlines, the heterogeneous land surface, and orography. Global general circulation models (GCMs) have traditionally had very limited ability to capture this variability on the mesoscale (here ~50-500 km) due to their low resolution. This has changed with recent investments in resolution and ensembles of multidecadal climate simulations of atmospheric GCMs (AGCMs) with ~25 km grid spacing are becoming increasingly available. Here, we evaluate the mesoscale precipitation distribution in one such set of simulations obtained in the UPSCALE (UK on PrACE - weather-resolving Simulations of Climate for globAL Environmental risk) modelling campaign with the HadGEM-GA3 AGCM. Increased model resolution also poses new challenges to the observational datasets used to evaluate models. Global gridded data products such as those provided by the Global Precipitation Climatology Project (GPCP) are invaluable for assessing large-scale features of the precipitation distribution but may not sufficiently resolve mesoscale structures. In the absence of independent estimates, the intercomparison of different observational datasets may be the only way to get some insight into the uncertainties associated with these observations. Here, we focus on mid-latitude continental regions where observations based on higher-density gauge networks are available in addition to the global data sets: Europe/the Alps, South and East Asia, and the continental US. The ability of GCMs to represent mesoscale variability is of interest in its own right, as climate information on this scale is required by impact studies. An additional motivation for the research proposed here arises from continuing efforts to quantify the components of the global radiation budget and water cycle. Recent estimates based on radiation measurements suggest that the global mean precipitation/evaporation may be up to 10 Wm-2 (about 0.35 mm day-1) larger than the estimate obtained from GPCP. While the main part of this discrepancy is thought to be due to the underestimation of remotely-sensed ocean precipitation, there is also considerable uncertainty about 'unobserved' precipitation over land, in particular in the form of snow in regions of high latitude/altitude. We aim to contribute to this discussion, at least at a qualitative level, by considering case studies of how area-averaged mountain precipitation is represented in different observational datasets and by HadGEM3-GA3 at different resolutions. Our results show that the AGCM simulates considerably more orographic precipitation at higher resolution. We find this at the global scale both for the winter and summer hemispheres, as well as in several case studies in mid-latitude regions. Gridded observations based on gauge measurements generally capture the mesoscale spatial variability of precipitation, but differ strongly from one another in the magnitude of area-averaged precipitation, so that they are of very limited use for evaluating this aspect of the modelled climate. We are currently conducting a sensitivity experiment (coarse-grained orography in high-resolution HadGEM3) to further investigate the resolution sensitivity seen in the model.
NASA Astrophysics Data System (ADS)
Lange, Heiner; Craig, George
2014-05-01
This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale Data Assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for analysis weights, a larger localization radius and larger observation error that allow a broadening of the Gaussian error statistics. Three hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be advantageous to those of the coarse scheme during the first 1-2 hours, with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and verification methods for rain-fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhance the predictability of storm occurrences, analyses from a coarser scheme might suffice.
Recommended aquifer grid resolution for E-Area PA revision transport simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
This memorandum addresses portions of Section 3.5.2 of SRNL (2016) by recommending horizontal and vertical grid resolution for aquifer transport, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data into the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
Filter and Grid Resolution in DG-LES
NASA Astrophysics Data System (ADS)
Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman
2017-11-01
The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is usually set to be proportional to the grid spacing. In this work, the DG method is combined with a subgrid scale (SGS) closure which is equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of spectral approximation increases. Different cases for LES of a three-dimensional temporally developing mixing layer are appraised and a systematic parametric study is conducted to investigate the effects of grid resolution, the filter width size, and the order of spectral discretization. Comparative assessments are also made via the use of high resolution direct numerical simulation (DNS) data.
Frontiers in Atmospheric Chemistry Modelling
NASA Astrophysics Data System (ADS)
Colette, Augustin; Bessagnet, Bertrand; Meleux, Frederik; Rouïl, Laurence
2013-04-01
The first pan-European kilometre-scale atmospheric chemistry simulation is introduced. The continental-scale air pollution episode of January 2009 is modelled with the CHIMERE offline chemistry-transport model with a massive grid of 2 million horizontal points, performed on 2,000 CPUs of a high performance computing system hosted by the Research and Technology Computing Center at the French Alternative Energies and Atomic Energy Commission (CCRT/CEA). Besides the technical challenge, which demonstrated the robustness of the selected air quality model, we discuss the added value in terms of air pollution modelling and decision support. The comparison with in-situ observations shows that model biases are significantly improved despite some spurious added spatial variability attributed to shortcomings in the emission downscaling process and the coarse resolution of the meteorological fields. The increased spatial resolution is clearly beneficial for the detection of exceedances and exposure modelling. We reveal small-scale air pollution patterns that highlight the contribution of city plumes to background air pollution levels. Up to a factor of 5 underestimation of the fraction of the population exposed to detrimental levels of pollution can be obtained with a coarse simulation if subgrid-scale corrections such as urban increments are ignored. This experiment opens new perspectives for environmental decision making. After two decades of efforts to reduce air pollutant emissions across Europe, the challenge is now to find the optimal trade-off between national and local air quality management strategies. While the first approach is based on sectoral strategies and energy policies, the latter builds upon new alternatives such as urban development. The strategies, the decision pathways and the involvement of individual citizens differ, and a compromise based on cost and efficiency must be found. We illustrate how high performance computing in atmospheric science can contribute to this aim. Although further developments are still needed to secure the results for routine policy use, the door is now open...
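A minimal sketch of why coarse grids can underestimate exposure exceedances, with entirely synthetic numbers: population and concentration are generated on a fine grid containing an urban hotspot, exposure above a threshold is counted, and the count is repeated after block-averaging the concentration to a coarse grid, where the coarse cell dilutes the urban plume below the threshold.

```python
import numpy as np

rng = np.random.default_rng(2)
N, BLOCK, LIMIT = 40, 8, 50.0            # fine grid size, coarsening factor, threshold

# Synthetic fine-scale fields: an urban hotspot in concentration and population
conc = 30.0 + 2.0 * rng.random((N, N))
pop = np.ones((N, N))
conc[10:14, 10:14] += 40.0               # city plume exceeds the threshold locally
pop[10:14, 10:14] = 50.0                 # most people live where the plume is

def block_average(field, b):
    n = field.shape[0] // b
    return field.reshape(n, b, n, b).mean(axis=(1, 3))

def exposed_population(conc_field, pop_field, limit):
    return float(pop_field[conc_field > limit].sum())

fine_exposed = exposed_population(conc, pop, LIMIT)
coarse_conc = block_average(conc, BLOCK)
coarse_pop_density = block_average(pop, BLOCK)                    # mean density per coarse cell
coarse_exposed = exposed_population(coarse_conc, coarse_pop_density * BLOCK**2, LIMIT)
print(f"exposed population, fine grid:   {fine_exposed:.0f}")
print(f"exposed population, coarse grid: {coarse_exposed:.0f}")
```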
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.
1999-01-01
The results obtained with the variable resolution stretched grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with the emphasis on the regional down-scaling effects and their dependence on the stretched grid design and parameters. A variable resolution SG-GCM and SG-DAS using a global stretched grid with fine resolution over an area of interest is a viable new approach to regional and subregional climate studies and applications. The stretched grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and the 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested grid approach is performed using the developed SG-DAS (Data Assimilation System) that incorporates the SG-GCM. The SG-DAS is run with data withheld over the area of interest. The design imitates the nested grid framework with boundary conditions provided from analyses. No boundary condition buffer is needed in this case due to the global domain of integration used for the SG-GCM and SG-DAS. The experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5 degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined. The key objectives of the study are: 1) obtaining efficient down-scaling over the area of interest with fine and very fine resolution; 2) providing consistent interactions between regional and global scales, including the consistent representation of regional energy and water balances; and 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using parallel codes.
Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains
NASA Astrophysics Data System (ADS)
Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.
2017-12-01
In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented with a low resolution grid. This issue can be overcome by using high resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest. It is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source codes of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressures and velocity fields through its APIs (Application Programming Interface) between the parent and the child components. The developed two-way nesting framework has been validated with an idealized test case where the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.
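A minimal sketch of a two-way nesting exchange loop, with everything simplified for illustration: 1-D parent and child grids carrying a single barotropic field, the parent forcing the child boundary, and the child feeding its block-averaged interior back to the parent each coupling step. The actual HYCOM/OASIS3-MCT exchange of barotropic pressures and velocities through coupler APIs is not reproduced here.

```python
import numpy as np

REFINE = 3                                  # child grid is 3x finer than the parent
parent = np.sin(np.linspace(0, np.pi, 30))  # parent-grid field (e.g. barotropic pressure anomaly)
child_slice = slice(10, 20)                 # parent cells covered by the child domain
child = np.repeat(parent[child_slice], REFINE)  # child initialized from the parent

def parent_to_child(parent, child):
    """Downscale step: impose parent values on the child's lateral boundary cells."""
    child[0] = parent[child_slice.start - 1]
    child[-1] = parent[child_slice.stop]
    return child

def child_to_parent(parent, child):
    """Upscale (feedback) step: replace parent cells by the block-mean of the child."""
    parent[child_slice] = child.reshape(-1, REFINE).mean(axis=1)
    return parent

for step in range(5):                       # coupling loop; model physics updates omitted
    child = parent_to_child(parent, child)
    child += 0.01                           # stand-in for the child model's own time stepping
    parent = child_to_parent(parent, child)

print(np.round(parent[child_slice], 3))
```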
Zhang, Lina; Zhang, Haoxu; Zhou, Ruifeng; Chen, Zhuo; Li, Qunqing; Fan, Shoushan; Ge, Guanglu; Liu, Renxiao; Jiang, Kaili
2011-09-23
A novel grid for use in transmission electron microscopy is developed. The supporting film of the grid is composed of thin graphene oxide films overlying a super-aligned carbon nanotube network. The composite film combines the advantages of graphene oxide and carbon nanotube networks and has the following properties: it is ultra-thin, it has a large flat and smooth effective supporting area with a homogeneous amorphous appearance, high stability, and good conductivity. The graphene oxide-carbon nanotube grid has a distinct advantage when characterizing the fine structure of a mass of nanomaterials over conventional amorphous carbon grids. Clear high-resolution transmission electron microscopy images of various nanomaterials are obtained easily using the new grids.
NASA Astrophysics Data System (ADS)
Gailler, A.; Loevenbruck, A.; Hebert, H.
2013-12-01
Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warning at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea forecasting tsunami simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' laws). The main limitation is that its application to a given coastal area would require a large database of previous observations, in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated for both fake events and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). Nonlinear shallow water tsunami modeling performed on a single 2' coarse bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and observations when available), in order to check to which extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).
Rapid inundation estimates using coastal amplification laws in the western Mediterranean basin
NASA Astrophysics Data System (ADS)
Gailler, Audrey; Loevenbruck, Anne; Hébert, Hélène
2014-05-01
Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warning at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea forecasting tsunami simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' laws). The main limitation is that its application to a given coastal area would require a large database of previous observations, in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated for both fake events and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). Nonlinear shallow water tsunami modeling performed on a single 2' coarse bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and observations when available), in order to check to which extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).
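A minimal sketch of the Green's-law extrapolation mentioned above: an offshore tsunami amplitude computed on the coarse grid at depth h0 is extrapolated shoreward to a target depth h1 by the factor (h0/h1)^(1/4). The depths and amplitude below are illustrative values, not CENALT parameters.

```python
def greens_law_amplitude(a_offshore_m, depth_offshore_m, depth_target_m):
    """Green's law: amplitude scales with water depth to the -1/4 power along a shoaling ray."""
    return a_offshore_m * (depth_offshore_m / depth_target_m) ** 0.25

# Example: 0.12 m offshore amplitude at 2500 m depth, extrapolated to a 5 m coastal depth
a_coast = greens_law_amplitude(0.12, 2500.0, 5.0)
print(f"estimated coastal amplitude: {a_coast:.2f} m")   # ~0.57 m
```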
Spectral multigrid methods for elliptic equations 2
NASA Technical Reports Server (NTRS)
Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.
1983-01-01
A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.
NASA Astrophysics Data System (ADS)
Adams, P. J.; Marks, M.
2015-12-01
The aerosol indirect effect is the largest source of forcing uncertainty in current climate models. This effect arises from the influence of aerosols on the reflective properties and lifetimes of clouds, and its magnitude depends on how many particles can serve as cloud droplet formation sites. Assessing levels of this subset of particles (cloud condensation nuclei, or CCN) requires knowledge of aerosol levels and their global distribution, size distributions, and composition. A key tool necessary to advance our understanding of CCN is the use of global aerosol microphysical models, which simulate the processes that control aerosol size distributions: nucleation, condensation/evaporation, and coagulation. Previous studies have found important differences in CO (Chen, D. et al., 2009) and ozone (Jang, J., 1995) modeled at different spatial resolutions, and it is reasonable to believe that short-lived, spatially-variable aerosol species will be similarly - or more - susceptible to model resolution effects. The goal of this study is to determine how CCN levels and spatial distributions change as simulations are run at higher spatial resolution - specifically, to evaluate how sensitive the model is to grid size, and how this affects comparisons against observations. Higher-resolution simulations are a necessary support for model/measurement synergy. Simulations were performed using the global chemical transport model GEOS-Chem (v9-02). The years 2008 and 2009 were simulated at 4°x5° and 2°x2.5° globally and at 0.5°x0.667° over Europe and North America. Results were evaluated against surface-based particle size distribution measurements from the European Supersites for Atmospheric Aerosol Research project. The fine-resolution model simulates more spatial and temporal variability in ultrafine levels, and better resolves topography. Results suggest that the coarse model predicts systematically lower ultrafine levels than does the fine-resolution model. Significant differences are also evident with respect to model-measurement comparisons, and will be discussed.
INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL
The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at cloud base, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds of importance for the ensemble statistics? We also test for the minimal information given as input to the stochastic model that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
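A minimal sketch of the compound random process described above, simplified to the Craig and Cohen (2006) limit: the number of clouds in a grid box is Poisson-distributed, each cloud's mass flux is drawn from an exponential distribution, and the grid-box total is their sum. The per-cloud mass flux, the mean flux density, and the scaling of cloud number with grid-box area are illustrative values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

MEAN_FLUX_PER_CLOUD = 3.0e5        # <m> per cloud (kg/s), illustrative
FLUX_DENSITY = 3.0e-3              # ensemble-mean mass flux per unit area (kg/s/m^2), illustrative

def subgrid_mass_flux(grid_size_m, n_samples=10000):
    """Sample the compound Poisson-exponential distribution of grid-box mass flux."""
    area = grid_size_m ** 2
    mean_n = FLUX_DENSITY * area / MEAN_FLUX_PER_CLOUD      # expected number of clouds
    n_clouds = rng.poisson(mean_n, size=n_samples)
    totals = np.array([rng.exponential(MEAN_FLUX_PER_CLOUD, n).sum() for n in n_clouds])
    return totals

for dx in (50e3, 25e3, 10e3):                                # coarse to fine grid boxes
    m = subgrid_mass_flux(dx)
    rel_spread = m.std() / m.mean()
    print(f"dx = {dx/1e3:>4.0f} km: relative spread of sub-grid mass flux = {rel_spread:.2f}")
```

Consistent with the abstract, the relative spread of the sampled distribution grows as the grid box shrinks, because fewer clouds contribute to each grid-box total.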
Development of coarse-scale spatial data for wildland fire and fuel management
Kirsten M. Schmidt; James P. Menakis; Colin C. Hardy; Wendall J. Hann; David L. Bunnell
2002-01-01
We produced seven coarse-scale, 1-km² resolution, spatial data layers for the conterminous United States to support national-level fire planning and risk assessments. Four of these layers were developed to evaluate ecological conditions and risk to ecosystem components: Potential Natural Vegetation Groups, a layer of climax vegetation types representing site...
Downscaling scheme to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, Annika; Venema, Victor; Lindau, Ralf; Ament, Felix; Simmer, Clemens
2010-05-01
The earth's surface is characterized by heterogeneity at a broad range of scales. Weather forecast models and climate models are not able to resolve this heterogeneity at the smaller scales. Many processes in the soil or at the surface, however, are highly nonlinear. This holds, for example, for evaporation processes, where stomatal or aerodynamic resistances are nonlinear functions of the local micro-climate. Other examples are threshold-dependent processes, e.g., the generation of runoff or the melting of snow. It has been shown that using averaged parameters in the computation of these processes leads to errors and especially biases, due to the involved nonlinearities. Thus it is necessary to account for the sub-grid scale surface heterogeneities in atmospheric modeling. One approach to take the variability of the earth's surface into account is the mosaic approach. Here the soil-vegetation-atmosphere transfer (SVAT) model is run at an explicitly higher resolution than the atmospheric part of a coupled model, which is feasible due to the generally lower computational cost of a SVAT model compared to the atmospheric part. The question arises of how to deal with the scale differences at the interface between the two resolutions. Usually the assumption of a homogeneous forcing for all sub-pixels is made. However, over a heterogeneous surface the boundary layer is usually also heterogeneous. Thus, assuming a constant atmospheric forcing again introduces biases in the turbulent heat fluxes, because the variability of the atmospheric forcing is neglected. Therefore we have developed and tested a downscaling scheme to disaggregate the atmospheric variables of the lower atmosphere that are used as input to force a SVAT model. Our downscaling scheme consists of three steps: 1) a bi-quadratic spline interpolation of the coarse-resolution field; 2) a "deterministic" part, in which relationships between surface and near-surface variables are exploited; and 3) a noise-generation step, in which the still missing, unexplained variance is added as noise. The scheme has been developed and tested based on high-resolution (400 m) model output of the weather forecast (and regional climate) COSMO model. Downscaling steps 1 and 2 reduce the error made by the homogeneity assumption considerably, whereas the third step leads to close agreement of the sub-grid scale variance with the reference. This is, however, achieved at the cost of higher root mean square errors. Thus, before applying the downscaling scheme to atmospheric data, a decision should be made whether the lowest possible errors (applying only downscaling steps 1 and 2) or the most realistic sub-grid scale variability (applying also step 3) is desired. This downscaling scheme is currently being implemented into the COSMO model, where it will be used in combination with the mosaic approach. However, the downscaling scheme can also be applied to drive stand-alone SVAT models or hydrological models, which usually also need high-resolution atmospheric forcing data.
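A minimal 1-D sketch of the three-step disaggregation described above, with toy stand-ins chosen here for illustration: quadratic spline interpolation of a coarse field, a "deterministic" correction from an assumed lapse-rate relationship to surface elevation, and Gaussian noise restoring the still-unexplained variance. The real scheme operates on 2-D COSMO fields with empirically derived relationships.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

rng = np.random.default_rng(3)

# Synthetic "truth" on the fine grid: temperature tied to surface elevation plus noise
n_fine, block = 64, 8
elevation = 500.0 + 300.0 * np.sin(np.linspace(0, 3 * np.pi, n_fine))
t_true = 288.0 - 0.0065 * elevation + rng.normal(0.0, 0.3, n_fine)
t_coarse = t_true.reshape(-1, block).mean(axis=1)                  # coarse-grid forcing

# Step 1: spline interpolation of the coarse field back to the fine grid
x_coarse = (np.arange(t_coarse.size) + 0.5) * block
x_fine = np.arange(n_fine)
t_ds = make_interp_spline(x_coarse, t_coarse, k=2)(x_fine)

# Step 2: deterministic correction using the elevation anomaly within each coarse cell
elev_anom = elevation - np.repeat(elevation.reshape(-1, block).mean(axis=1), block)
t_ds += -0.0065 * elev_anom                                        # assumed lapse-rate relationship

# Step 3: add noise with the variance still unexplained by steps 1 and 2
resid_var = max(float(np.var(t_true - t_ds)), 0.0)
t_ds_noisy = t_ds + rng.normal(0.0, np.sqrt(resid_var), n_fine)

print("RMSE steps 1+2 :", round(float(np.sqrt(np.mean((t_true - t_ds) ** 2))), 3))
print("RMSE steps 1-3 :", round(float(np.sqrt(np.mean((t_true - t_ds_noisy) ** 2))), 3))
print("sub-grid std   :", round(float(t_true.std()), 3), "vs", round(float(t_ds_noisy.std()), 3))
```

As in the abstract, the noise step recovers a realistic sub-grid standard deviation at the cost of a somewhat larger root mean square error.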
NASA Technical Reports Server (NTRS)
da Silva, Arlindo M.; Putman, William; Nattala, J.
2014-01-01
This document describes the gridded output files produced by a two-year global, non-hydrostatic mesoscale simulation for the period 2005-2006, performed with the non-hydrostatic version of the GEOS-5 Atmospheric Global Climate Model (AGCM). In addition to standard meteorological parameters (wind, temperature, moisture, surface pressure), this simulation includes 15 aerosol tracers (dust, sea salt, sulfate, black and organic carbon), O3, CO and CO2. This model simulation is driven by prescribed sea-surface temperature and sea ice, daily volcanic and biomass burning emissions, as well as high-resolution inventories of anthropogenic sources. A description of the GEOS-5 model configuration used for this simulation can be found in Putman et al. (2014). The simulation is performed at a horizontal resolution of 7 km using a cubed-sphere horizontal grid with 72 vertical levels, extending up to 0.01 hPa (approximately 80 km). For user convenience, all data products are generated on two logically rectangular longitude-latitude grids: a full-resolution 0.0625 deg grid that approximately matches the native cubed-sphere resolution, and another 0.5 deg reduced-resolution grid. The majority of the full-resolution data products are instantaneous, with some fields being time-averaged. The reduced-resolution datasets are mostly time-averaged, with some fields being instantaneous. Hourly data intervals are used for the reduced-resolution datasets, while 30-minute intervals are used for the full-resolution products. All full-resolution output is on the model's native 72-layer hybrid sigma-pressure vertical grid, while the reduced-resolution output is given on native vertical levels and on 48 pressure surfaces extending up to 0.02 hPa. Section 4 presents additional details on horizontal and vertical grids. Information on the model surface representation can be found in Appendix B. The GEOS-5 product is organized into file collections that are described in detail in Appendix C. Additional details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid "subtraction" technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid-line artifacts with high-resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
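A minimal sketch of the iterative search described above, on synthetic data: a phantom image containing grid lines plus a constant scatter level is generated, a range of candidate scatter values is subtracted before dividing by the flat-field-with-grid image, and the candidate minimizing the standard deviation of the result is selected. Array sizes and the grid/scatter magnitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, period = 256, 4                    # image size and grid-line period in pixels

# Flat-field with anti-scatter grid: periodic attenuation from the grid septa
grid_pattern = np.where(np.arange(n) % period == 0, 0.7, 1.0)[None, :]
flat_with_grid = 1000.0 * grid_pattern

# Phantom image: smooth object, attenuated by the grid, plus constant scatter and noise
obj = 800.0 * np.exp(-((np.arange(n) - n / 2) ** 2) / (2 * 60.0 ** 2))[None, :]
true_scatter = 150.0
phantom = obj * grid_pattern + true_scatter + rng.normal(0, 2.0, (n, n))

def residual_structure(scatter_guess):
    """Std. dev. of the scatter-subtracted, flat-field-normalized image (central ROI)."""
    corrected = (phantom - scatter_guess) / flat_with_grid
    roi = corrected[:, n // 2 - 32: n // 2 + 32]
    return roi.std()

candidates = np.arange(0.0, 300.0, 10.0)
best = candidates[int(np.argmin([residual_structure(s) for s in candidates]))]
print("scatter value minimizing grid-line structure:", best)
```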
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-02-01
In this article we propose two conformal mapping based grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues such as a smoothed scaling factor, or the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where complex land-ocean distribution is present.
Micro-Slit Collimators for X-Ray/Gamma-Ray Imaging
NASA Technical Reports Server (NTRS)
Appleby, Michael; Fraser, Iain; Klinger, Jill
2011-01-01
A hybrid photochemical-machining process is coupled with precision stack lamination to allow for the fabrication of multiple ultra-high-resolution grids on a single array substrate. In addition, special fixturing and etching techniques have been developed that allow higher-resolution multi-grid collimators to be fabricated. Building on past work on a manufacturing technique for fabricating multi-grid, high-resolution coating modulation collimators for arcsecond and subarcsecond x-ray and gamma-ray imaging, the current work reduces the grid pitch by almost a factor of two, down to 22 microns. Additionally, a process was developed for reducing thin, high-Z foils (tungsten or molybdenum) from the thinnest commercially available thickness (25 microns) down to approximately 10 microns using precisely controlled chemical etching.
Simulation of a Wall-Bounded Flow using a Hybrid LES/RAS Approach with Turbulence Recycling
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Mcdaniel, James; Baurle, Robert A.
2012-01-01
Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/ Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters the three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case, and these comparisons indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. The effect of turbulence recycling on the solution is illustrated by performing coarse grid simulations with and without inflow turbulence recycling. Two shock sensors, one of Ducros and one of Larsson, are assessed for use with the hybridized inviscid flux reconstruction scheme.
Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN
NASA Technical Reports Server (NTRS)
Quinlan, Jesse; McDaniel, James; Baurle, Robert A.
2013-01-01
Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Fuyu; Collins, William D.; Wehner, Michael F.
High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution on tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and reproduce storm structures better in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, and mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to exert more important control on these meteorological statistics than the horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariance.
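The last point, evaluating storm statistics on a common low-resolution grid, amounts to conservatively coarsening the high-resolution fields before the tracking statistics are computed. The sketch below shows that regridding step as simple box-car (block) averaging; the field name, grid sizes, and integer coarsening factor are illustrative assumptions, not the study's actual code.

import numpy as np

def block_average(field, factor):
    # Conservatively coarsen a 2D field by an integer factor using
    # box-car (block) averaging over factor x factor cells.
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0, "grid must divide evenly"
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

# Illustrative use: a 0.25-degree wind field analysed on a 1-degree grid.
wind_hi = np.random.rand(720, 1440) * 60.0   # stand-in for model output (m/s)
wind_lo = block_average(wind_hi, 4)          # 4 x 4 boxes -> 1-degree cells

# Storm statistics (e.g. counts of cells above hurricane force) are then
# computed from wind_lo so that models of different native resolution are
# compared at the same analysis scale.
print(wind_lo.shape, int((wind_lo > 33.0).sum()))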
Wind turbine wake interactions at field scale: An LES study of the SWiFT facility
NASA Astrophysics Data System (ADS)
Yang, Xiaolei; Boomsma, Aaron; Barone, Matthew; Sotiropoulos, Fotis
2014-06-01
The University of Minnesota Virtual Wind Simulator (VWiS) code is employed to simulate turbine/atmosphere interactions in the Scaled Wind Farm Technology (SWiFT) facility developed by Sandia National Laboratories in Lubbock, TX, USA. The facility presently consists of three turbines, and the simulations consider the case of wind blowing from the south such that two turbines are in the free stream and the third turbine is in the direct wake of one upstream turbine at a separation of 5 rotor diameters. Large-eddy simulation (LES) on two successively finer grids is carried out to examine the sensitivity of the computed solutions to grid refinement. It is found that the details of the break-up of the tip vortices into small-scale turbulence structures can only be resolved on the finer grid. It is also shown that the power coefficient CP of the downwind turbine predicted on the coarse grid is somewhat higher than that obtained on the fine mesh. On the other hand, the rms (root-mean-square) of the CP fluctuations is nearly the same on both grids, although more small-scale turbulence structures are resolved upwind of the downwind turbine on the finer grid.
High Resolution Modeling of Hurricanes in a Climate Context
NASA Astrophysics Data System (ADS)
Knutson, T. R.
2007-12-01
Modeling of tropical cyclone activity in a climate context initially focused on simulation of relatively weak tropical storm-like disturbances as resolved by coarse grid (200 km) global models. As computing power has increased, multi-year simulations with global models of 20-30 km grid spacing have become feasible. Increased resolution has also allowed for the simulation of storms of increasing intensity, and some global models generate storms of hurricane strength, depending on their resolution and other factors, although detailed hurricane structure is not simulated realistically. Results from some recent high resolution global model studies are reviewed. An alternative for hurricane simulation is regional downscaling. An early approach was to embed an operational (GFDL) hurricane prediction model within a global model solution, either for 5-day case studies of particular model storms, or for "idealized experiments" where an initial vortex is inserted into an idealized environment derived from global model statistics. Using this approach, hurricanes up to category five intensity can be simulated, owing to the model's relatively high resolution (9 km grid) and refined physics. Variants on this approach have been used to provide modeling support for theoretical predictions that greenhouse warming will increase the maximum intensities of hurricanes. These modeling studies also simulate increased hurricane rainfall rates in a warmer climate. The studies do not address hurricane frequency issues, and vertical shear is neglected in the idealized studies. A recent development is the use of regional model dynamical downscaling for extended (e.g., season-length) integrations of hurricane activity. In a study for the Atlantic basin, a non-hydrostatic model with grid spacing of 18 km is run without convective parameterization, but with internal spectral nudging toward observed large-scale (basin wavenumbers 0-2) atmospheric conditions from reanalyses. Using this approach, our model reproduces the observed increase in Atlantic hurricane activity (numbers, Accumulated Cyclone Energy (ACE), Power Dissipation Index (PDI), etc.) over the period 1980-2006 fairly realistically, and also simulates ENSO-related interannual variations in hurricane counts. Annual simulated hurricane counts from a two-member ensemble correlate with observed counts at r=0.86. However, the model does not simulate hurricanes as intense as those observed, with minimum central pressures of 937 hPa (category 4) and maximum surface winds of 47 m/s (category 2) being the most intense simulated so far in these experiments. To explore possible impacts of future climate warming on Atlantic hurricane activity, we are re-running the 1980-2006 seasons, keeping the interannual to multidecadal variations unchanged, but altering the August-October mean climate according to changes simulated by an 18-member ensemble of AR4 climate models (years 2080-2099, A1B emission scenario). The warmer climate state features higher Atlantic SSTs, and also increased vertical wind shear across the Caribbean (Vecchi and Soden, GRL 2007). A key assumption of this approach is that the 18-model ensemble-mean climate change is the best available projection of future climate change in the Atlantic. Some of the 18 global models show little increase in wind shear, or even a decrease, and thus there will be considerable uncertainty associated with the hurricane frequency results, which will require further exploration.
Results from our simulations will be presented at the meeting.
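The ACE and PDI metrics mentioned in the abstract above are simple accumulations over 6-hourly maximum sustained winds. A minimal sketch follows; the 34-kt threshold and the 10^-4 scaling for ACE follow the standard definitions, while the track data structure and the toy values are assumptions.

import numpy as np

KT_PER_MS = 1.943844   # m/s to knots
DT_SECONDS = 6 * 3600  # 6-hourly track interval

def ace_and_pdi(tracks):
    # tracks: list of 1-D arrays of maximum sustained wind (m/s), one per storm.
    # Returns (ACE in 10^-4 kt^2, PDI in m^3 s^-2).
    ace, pdi = 0.0, 0.0
    for vmax in tracks:
        vmax = np.asarray(vmax, dtype=float)
        vkt = vmax * KT_PER_MS
        vkt = vkt[vkt >= 34.0]                 # tropical-storm strength or higher
        ace += 1e-4 * np.sum(vkt ** 2)
        pdi += np.sum(vmax ** 3) * DT_SECONDS  # cube of wind integrated over time
    return ace, pdi

# Two toy storms with 6-hourly winds (m/s)
storms = [np.array([18.0, 25.0, 33.0, 40.0, 35.0, 22.0]),
          np.array([20.0, 28.0, 30.0, 26.0])]
print(ace_and_pdi(storms))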
Design and testing of a novel multi-stroke micropositioning system with variable resolutions.
Xu, Qingsong
2014-02-01
Multi-stroke stages are demanded in micro-/nanopositioning applications which require smaller and larger motion strokes with fine and coarse resolutions, respectively. This paper presents the conceptual design of a novel multi-stroke, multi-resolution micropositioning stage driven by a single actuator for each working axis. It eliminates the issue of the interference among different drives, which resides in conventional multi-actuation stages. The stage is devised based on a fully compliant variable stiffness mechanism, which exhibits unequal stiffnesses in different strokes. Resistive strain sensors are employed to offer variable position resolutions in the different strokes. To quantify the design of the motion strokes and coarse/fine resolution ratio, analytical models are established. These models are verified through finite-element analysis simulations. A proof-of-concept prototype XY stage is designed, fabricated, and tested to demonstrate the feasibility of the presented ideas. Experimental results of static and dynamic testing validate the effectiveness of the proposed design.
A multi-block adaptive solving technique based on lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao
2018-05-01
In this paper, a parallel adaptive CFD algorithm is developed by combining the multi-block Lattice Boltzmann Method (LBM) with Adaptive Mesh Refinement (AMR). The mesh refinement criterion of this algorithm is based on the density, velocity and vorticity of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples, backward-facing step flow separation and unsteady flow around a circular cylinder, demonstrate that the algorithm captures the vortex structure of the cold flow field accurately.
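A refinement criterion of the kind described above can be expressed as a cell-flagging pass over the coarse blocks. The sketch below flags cells on density gradient and vorticity magnitude; the thresholds, field layout, and toy shear-layer setup are illustrative assumptions, and the ghost-cell extension and block clustering steps of the paper are omitted.

import numpy as np

def flag_for_refinement(rho, u, v, dx, grad_rho_tol=0.05, vort_tol=1.0):
    # Flag coarse LBM cells where the density gradient magnitude or the
    # vorticity magnitude exceeds a threshold (x along axis 1, y along axis 0).
    d_rho = np.gradient(rho, dx)                         # [d/dy, d/dx]
    grad_rho = np.hypot(d_rho[0], d_rho[1])
    vorticity = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
    return (grad_rho > grad_rho_tol) | (np.abs(vorticity) > vort_tol)

# Toy coarse-grid fields: a shear layer across the middle of the domain
ny, nx, dx = 64, 128, 1.0
u = np.tanh(np.linspace(-3.0, 3.0, ny))[:, None] * np.ones((ny, nx))
v = np.zeros((ny, nx))
rho = np.ones((ny, nx))
print(int(flag_for_refinement(rho, u, v, dx).sum()), "cells flagged")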
NASA Astrophysics Data System (ADS)
Kennedy, A. M.; Lane, J.; Ebert, M. A.
2014-03-01
Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar. One notable point of variation between implementations is in the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (Normal Tissue Complication Probability (NTCP), min, mean and max dose) for a plan with small structures placed over areas of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one of the plan review systems (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in the dose grid resolution despite the extreme conditions. Differences became noticeable, however, when resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared to dose-grid-based resolutions. This suggests that if DVHs are being compared between systems that use a different basis for selecting sampling resolution, it may become important to confirm that a similar resolution was used during calculation.
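The sampling-resolution effect can be illustrated by computing a cumulative DVH from dose values sampled inside a structure at different voxel strides. The sketch below uses nearest-neighbour subsampling of a 3D dose grid; the array names, grid size, and the small structure placed in a gradient region are hypothetical stand-ins, not data from the study.

import numpy as np

def cumulative_dvh(dose_samples, bin_width=0.1):
    # Cumulative DVH: fraction of sampled volume receiving at least dose D.
    edges = np.arange(0.0, dose_samples.max() + bin_width, bin_width)
    frac = np.array([(dose_samples >= d).mean() for d in edges])
    return edges, frac

def sample_structure(dose, mask, step):
    # Sample the dose grid only at every `step`-th voxel inside the structure
    # mask, as a stand-in for a coarser DVH sampling frequency.
    d = dose[::step, ::step, ::step]
    m = mask[::step, ::step, ::step]
    return d[m]

dose = np.random.rand(60, 60, 40) * 70.0                 # toy dose grid (Gy)
mask = np.zeros_like(dose, dtype=bool)
mask[28:32, 28:32, 18:22] = True                          # small structure

for step in (1, 2):                                       # native vs halved sampling
    samples = sample_structure(dose, mask, step)
    edges, frac = cumulative_dvh(samples)
    d95 = edges[frac >= 0.95][-1]                         # dose covering 95% of volume
    print(step, samples.size, round(samples.mean(), 2), round(d95, 2))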
Overset grid applications on distributed memory MIMD computers
NASA Technical Reports Server (NTRS)
Chawla, Kalpana; Weeratunga, Sisira
1994-01-01
Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.
Numerical Model of Turbulence, Sediment Transport, and Sediment Cover in a Large Canyon-Bound River
NASA Astrophysics Data System (ADS)
Alvarez, L. V.; Schmeeckle, M. W.
2013-12-01
The Colorado River in Grand Canyon is confined by bedrock and coarse-grained sediments. Finer grain sizes are supply limited, and sandbars primarily occur in lateral separation eddies downstream of coarse-grained tributary debris fans. These sandbars are important resources for native fish and recreational boaters, and serve as a source of aeolian transport that prevents the erosion of archaeological resources by gully extension. Relatively accurate prediction of deposition and, especially, erosion of these sandbar beaches has proven difficult using two- and three-dimensional, time-averaged morphodynamic models. We present a parallelized, three-dimensional, turbulence-resolving model using the Detached-Eddy Simulation (DES) technique. DES is a hybrid of large eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS). RANS is applied to the near-bed grid cells, where grid resolution is not sufficient to fully resolve wall turbulence. LES is applied further from the bed and banks. We utilize the Spalart-Allmaras one-equation turbulence closure with a rough wall extension. The model resolves large-scale turbulence using DES and simultaneously integrates the suspended sediment advection-diffusion equation. The Smith and McLean suspended sediment boundary condition is used to calculate the upward and downward settling of sediment fluxes in the grid cells attached to the bed. The model calculates the entrainment of five grain sizes at every time step using a mixing layer model. Where the mixing layer depth becomes zero, the net entrainment is zero or negative. As such, the model is able to predict the exposure and burial of bedrock and coarse-grained surfaces by fine-grained sediments. A separate program was written to automatically construct the computational domain between the water surface and a triangulated surface of a digital elevation model of the given river reach. Model results compare favorably with ADCP measurements of flow taken on the Colorado River in Grand Canyon during the High Flow Experiment (HFE) of 2008. The model accurately reproduces the size and position of the major recirculation currents, and the error in velocity magnitude was found to be less than 17%, or 0.22 m/s in absolute terms. The mean deviation of the direction of velocity with respect to the measured velocity was found to be 20 degrees. Large-scale turbulence structures with vorticity predominantly in the vertical direction are produced at the shear layer between the main channel and the separation zone. However, these structures rapidly become three-dimensional with no preferred orientation of vorticity. Surprisingly, cross-stream velocities, into the main recirculation zone just upstream of the point of reattachment and out of the main recirculation region just downstream of the point of separation, are highest near the bed. Lateral separation eddies are more efficient at storing and exporting sediment than previously modeled. The input of sediment to the eddy recirculation zone occurs near the reattachment zone and is relatively continuous in time, while the export of sediment to the main channel by the return current occurs in pulses. Pulsation of the strength of the return current becomes a key factor in determining the rates of erosion and deposition in the main recirculation zone.
NASA Astrophysics Data System (ADS)
Bashir, F.; Zeng, X.; Gupta, H. V.; Hazenberg, P.
2017-12-01
Drought, as an extreme event, may have far-reaching socio-economic impacts on agriculture-based economies like Pakistan's. Effective assessment of drought requires high-resolution, spatiotemporally continuous hydrometeorological information. For this purpose, new gridded analyses of precipitation, maximum, minimum and mean temperature, and diurnal temperature range, based on in-situ daily observations, are developed that cover the whole of Pakistan on a 0.01° latitude-longitude grid for a 54-year period (1960-2013). The number of participating meteorological observatories used in these gridded analyses is 2 to 6 times greater than in any other similar product available. This data set is used to identify extreme wet and dry periods and their spatial patterns across Pakistan using the Palmer Drought Severity Index (PDSI) and the Standardized Precipitation Index (SPI). The periodicity of extreme events is estimated at seasonal to decadal scales. Spatiotemporal signatures of drought incidence, indicating its extent and longevity in different areas, may help water resource managers and policy makers mitigate the severity of drought and its impact on food security through suitable adaptive techniques. Moreover, these high-resolution gridded in-situ observations of precipitation and temperature are used to evaluate other, coarser-resolution gridded products.
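SPI values such as those used above are conventionally obtained by fitting a gamma distribution to accumulated precipitation and mapping the cumulative probabilities onto a standard normal variate. The sketch below shows that standard recipe for one grid cell and one accumulation window; it ignores the usual zero-precipitation correction for brevity and is not necessarily the authors' exact implementation.

import numpy as np
from scipy import stats

def spi(precip, window=3):
    # Standardized Precipitation Index for a monthly series.
    # precip: 1-D array of monthly totals (mm); window: accumulation months.
    p = np.convolve(precip, np.ones(window), mode="valid")   # rolling sums
    shape, loc, scale = stats.gamma.fit(p, floc=0)           # gamma fit, loc fixed at 0
    cdf = stats.gamma.cdf(p, shape, loc=loc, scale=scale)
    z = stats.norm.ppf(cdf)                                   # map to N(0, 1)
    return np.concatenate([np.full(window - 1, np.nan), z])  # pad to full length

monthly = np.random.gamma(2.0, 30.0, size=54 * 12)            # toy 54-year record (mm)
print(spi(monthly, window=3)[:12])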
USDA-ARS?s Scientific Manuscript database
The SMAP (Soil Moisture Active Passive) mission provides global surface soil moisture product at 36 km resolution from its L-band radiometer. While the coarse resolution is satisfactory to many applications there are also a lot of applications which would benefit from a higher resolution soil moistu...
High capacitance of coarse-grained carbide derived carbon electrodes
Dyatkin, Boris; Gogotsi, Oleksiy; Malinovskiy, Bohdan; ...
2016-01-01
Here, we report exceptional electrochemical properties of supercapacitor electrodes composed of large, granular carbide-derived carbon (CDC) particles. We synthesized 70–250 μm sized particles with high surface area and a narrow pore size distribution, using a titanium carbide (TiC) precursor. Electrochemical cycling of these coarse-grained powders defied conventional wisdom that a small particle size is strictly required for supercapacitor electrodes and allowed high charge storage densities, rapid transport, and good rate handling ability. Moreover, the material showcased capacitance above 100 F g⁻¹ at sweep rates as high as 250 mV s⁻¹ in organic electrolyte. 250–1000 micron thick dense CDC films with up to 80 mg cm⁻² loading showed superior areal capacitances. The material significantly outperformed its activated carbon counterpart in organic electrolytes and ionic liquids. Furthermore, large internal/external surface ratio of coarse-grained carbons allowed the resulting electrodes to maintain high electrochemical stability up to 3.1 V in ionic liquid electrolyte. In addition to presenting novel insights into the electrosorption process, these coarse-grained carbons offer a pathway to low-cost, high-performance implementation of supercapacitors in automotive and grid-storage applications.
High capacitance of coarse-grained carbide derived carbon electrodes
NASA Astrophysics Data System (ADS)
Dyatkin, Boris; Gogotsi, Oleksiy; Malinovskiy, Bohdan; Zozulya, Yuliya; Simon, Patrice; Gogotsi, Yury
2016-02-01
We report exceptional electrochemical properties of supercapacitor electrodes composed of large, granular carbide-derived carbon (CDC) particles. Using a titanium carbide (TiC) precursor, we synthesized 70-250 μm sized particles with high surface area and a narrow pore size distribution. Electrochemical cycling of these coarse-grained powders defied conventional wisdom that a small particle size is strictly required for supercapacitor electrodes and allowed high charge storage densities, rapid transport, and good rate handling ability. The material showcased capacitance above 100 F g⁻¹ at sweep rates as high as 250 mV s⁻¹ in organic electrolyte. 250-1000 micron thick dense CDC films with up to 80 mg cm⁻² loading showed superior areal capacitances. The material significantly outperformed its activated carbon counterpart in organic electrolytes and ionic liquids. Furthermore, large internal/external surface ratio of coarse-grained carbons allowed the resulting electrodes to maintain high electrochemical stability up to 3.1 V in ionic liquid electrolyte. In addition to presenting novel insights into the electrosorption process, these coarse-grained carbons offer a pathway to low-cost, high-performance implementation of supercapacitors in automotive and grid-storage applications.
Use of upscaled elevation and surface roughness data in two-dimensional surface water models
Hughes, J.D.; Decker, J.D.; Langevin, C.D.
2011-01-01
In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
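The mixed upscaling described above combines cell-block averages with cell-face representative values along the interfaces between coarse cells, so that channelized features narrower than the coarse cell can still connect neighbouring cells. A minimal sketch of the two operators on a regular grid follows; the coarsening factor and the use of a simple minimum along each eastern face are illustrative assumptions rather than the authors' exact scheme.

import numpy as np

def cell_block_mean(z, f):
    # Block (cell-centred) average of a fine grid onto f x f coarse cells.
    ny, nx = z.shape
    return z.reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))

def cell_face_min_east(z, f):
    # Representative elevation of each coarse cell's eastern face: the lowest
    # fine-grid value along that face, so a narrow channel crossing the face
    # still connects the two coarse cells it separates.
    ny, nx = z.shape
    east_cols = z[:, f - 1::f]                     # last fine column in each block
    return east_cols.reshape(ny // f, f, nx // f).min(axis=1)

z_fine = np.random.rand(120, 120) * 5.0            # hypothetical 10 m DEM (m)
z_block = cell_block_mean(z_fine, 10)              # 100 m cell elevations
z_face = cell_face_min_east(z_fine, 10)            # 100 m eastern-face elevations
print(z_block.shape, z_face.shape)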
NASA Astrophysics Data System (ADS)
Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier
2018-02-01
This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.
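One common way to implement such a downscaling while conserving the coarse-grid budget is to distribute sub-grid anomalies, here derived from a lapse-rate correction on high-resolution topography, so that the coarse-cell mean is unchanged. The sketch below illustrates this for temperature only; the constant lapse rate, block size, and array names are assumptions, and the actual iLOVECLIM scheme is more involved than this.

import numpy as np

LAPSE = -6.5e-3   # K per metre; an assumed constant lapse rate

def downscale_temperature(t_coarse, z_fine_block):
    # Distribute one coarse-cell temperature over its fine sub-grid using
    # elevation anomalies; because the anomalies have zero mean, the
    # coarse-cell mean (and hence the large-scale budget) is unchanged.
    dz = z_fine_block - z_fine_block.mean()
    return t_coarse + LAPSE * dz

t_cell = 271.3                                     # K, one coarse (T21) cell
z_block = np.random.rand(16, 16) * 1500.0          # hypothetical 40 km topography (m)
t_hi = downscale_temperature(t_cell, z_block)
print(bool(np.isclose(t_hi.mean(), t_cell)), t_hi.min(), t_hi.max())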
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Blair, A.; Yablonsky, R. M.
2017-12-01
Storm surge caused by Extratropical Cyclones (ETCs) has significantly impacted not only the life of private citizens but also the insurance and reinsurance industry in Great Britain. The storm surge risk assessment requires a larger dataset of storms than the limited recorded historical ETCs. Thus, historical ETCs were perturbed to generate a 10,000-year stochastic catalog that accounts for surge-generating ETCs in the study area with return periods from one year to 10,000 years. Delft3D-Flexible Mesh hydrodynamic model was used to numerically simulate the storm surge along the Great Britain coastline. A nested grid technique was used to increase the simulation grid resolution up to 200 m near the highly populated coastal areas. Coarse and fine mesh models were calibrated and validated using historical recorded water elevations. Then, numerical simulations were performed on a 10,000-year stochastic catalog. The 50-, 100-, and 500-year return period maps were generated for Great Britain coastal areas. The corresponding events with return periods of 50-, 100-, and 500-years in Humber Bay and Thames River coastal areas were identified, and simulated with the consideration of projected sea level rises to reveal the effect of rising sea levels on the inundation return period maps in two highly-populated coastal areas. Finally, the return period of Storm Xaver (2013) was determined with and without the effect of rising sea levels.
Towards the Irving-Kirkwood limit of the mechanical stress tensor
NASA Astrophysics Data System (ADS)
Smith, E. R.; Heyes, D. M.; Dini, D.
2017-06-01
The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.
Towards the Irving-Kirkwood limit of the mechanical stress tensor.
Smith, E R; Heyes, D M; Dini, D
2017-06-14
The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.
Sea-ice deformation in a coupled ocean-sea-ice model and in satellite remote sensing data
NASA Astrophysics Data System (ADS)
Spreen, Gunnar; Kwok, Ron; Menemenlis, Dimitris; Nguyen, An T.
2017-07-01
A realistic representation of sea-ice deformation in models is important for accurate simulation of the sea-ice mass balance. Simulated sea-ice deformation from numerical simulations with 4.5, 9, and 18 km horizontal grid spacing and a viscous-plastic (VP) sea-ice rheology are compared with synthetic aperture radar (SAR) satellite observations (RGPS, RADARSAT Geophysical Processor System) for the time period 1996-2008. All three simulations can reproduce the large-scale ice deformation patterns, but small-scale sea-ice deformations and linear kinematic features (LKFs) are not adequately reproduced. The mean sea-ice total deformation rate is about 40 % lower in all model solutions than in the satellite observations, especially in the seasonal sea-ice zone. A decrease in model grid spacing, however, produces a higher density and more localized ice deformation features. The 4.5 km simulation produces some linear kinematic features, but not with the right frequency. The dependence on length scale and probability density functions (PDFs) of absolute divergence and shear for all three model solutions show a power-law scaling behavior similar to RGPS observations, contrary to what was found in some previous studies. Overall, the 4.5 km simulation produces the most realistic divergence, vorticity, and shear when compared with RGPS data. This study provides an evaluation of high and coarse-resolution viscous-plastic sea-ice simulations based on spatial distribution, time series, and power-law scaling metrics.
Calcium waves in a grid of clustered channels with synchronous IP3 binding and unbinding.
Rückl, M; Rüdiger, S
2016-11-01
Calcium signals in cells occur at multiple spatial scales and variable temporal duration. However, a physical explanation for transitions between long-lasting global oscillations and localized short-term elevations (puffs) of cytoplasmic Ca²⁺ is still lacking. Here we introduce a phenomenological, coarse-grained model for the calcium variable, which is represented by ordinary differential equations. Due to its small number of parameters and its simplicity, this model allows us to numerically study the interplay of multi-scale calcium concentrations with stochastic ion channel gating dynamics even in larger systems. We apply this model to a single cluster of inositol trisphosphate (IP₃) receptor channels and find further evidence for the results presented in earlier work: a single cluster may be capable of producing different calcium release types, where long-lasting events are accompanied by unbinding of IP₃ from the receptor (Rückl et al., PLoS Comput. Biol. 11, e1003965 (2015)). We further show the practicability of the model in a grid of 64 clusters, which is computationally intractable with previous high-resolution models. Here long-lasting events can lead to synchronized oscillations and waves, while short events stay localized. The frequency of calcium releases, as well as their coherence, can thereby be regulated by the amplitude of IP₃ stimulation. Finally, the model allows for a new explanation of oscillating [IP₃], which is not based on metabolic production and degradation of IP₃.
Kress, Michael E.; Benimoff, Alan I.; Fritz, William J.; Thatcher, Cindy A.; Blanton, Brian O.; Dzedzits, Eugene
2016-01-01
Hurricane Sandy made landfall on October 29, 2012, near Brigantine, New Jersey, and had a transformative impact on Staten Island and the New York metropolitan area. Of the 43 New York City fatalities, 23 occurred on Staten Island. The borough, with a population of approximately 500,000, experienced some of the most devastating impacts of the storm. Since Hurricane Sandy, protective dunes have been constructed on the southeast shore of Staten Island. ADCIRC+SWAN model simulations run on The City University of New York's Cray XE6M, housed at the College of Staten Island, using updated topographic data show that the coast of Staten Island is still susceptible to tidal surges similar to those generated by Hurricane Sandy. Sandy hindcast simulations of storm surges focusing on Staten Island are in good agreement with observed storm tide measurements. Model results calculated from fine-scaled and coarse-scaled computational grids demonstrate that finer grids better resolve small differences in the topography of critical hydraulic control structures, which affect storm surge inundation levels. The storm surge simulations, based on post-storm topography obtained from high-resolution lidar, provide much-needed information to understand Staten Island's changing vulnerability to storm surge inundation. The results of fine-scale storm surge simulations can be used to inform efforts to improve resiliency to future storms. For example, protective barriers contain planned gaps in the dunes to provide for beach access that may inadvertently increase the vulnerability of the area.
Towards the Irving-Kirkwood limit of the mechanical stress tensor
Heyes, D. M.; Dini, D.
2017-01-01
The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ³. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems. PMID:29166053
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Saltzman, Jeff S.
1992-01-01
We describe the development of a structured adaptive mesh algorithm (AMR) for the Connection Machine-2 (CM-2). We develop a data layout scheme that preserves locality even for communication between fine and coarse grids. On 8K of a 32K machine we achieve performance slightly less than 1 CPU of the Cray Y-MP. We apply our algorithm to an inviscid compressible flow problem.
On Spurious Numerics in Solving Reactive Equations
NASA Technical Reports Server (NTRS)
Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.
2013-01-01
The objective of this study is to gain a deeper understanding of the behavior of high-order shock-capturing schemes for problems with stiff source terms and discontinuities, and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using Strang splitting (Strang 1968). It is common practice for developers in computational physics and engineering simulations to include a cut-off safeguard if densities fall outside the permissible range. Here we compare the spurious behavior of the same schemes when solving the fully coupled reactive system without Strang splitting vs. with Strang splitting. Comparison between the two procedures and the effects of a cut-off safeguard are the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front on coarse grids. Here "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows with a similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond the standard mesh density is still needed.
The functional micro-organization of grid cells revealed by cellular-resolution imaging
Heys, James G.; Rangarajan, Krsna V.; Dombeck, Daniel A.
2015-01-01
Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater micro-circuit level understanding of the brain’s representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to non-grid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a “Mexican Hat” shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart. PMID:25467986
Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts
Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.
2009-01-01
We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, which provides for a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.
A model of regional primary production for use with coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Prince, S. D.
1991-01-01
A model of crop primary production, which was originally developed to relate the amount of absorbed photosynthetically active radiation (APAR) to net production in field studies, is discussed in the context of coarse-resolution regional remote sensing of primary production. The model depends on an approximately linear relationship between APAR and the normalized difference vegetation index. A more comprehensive form of the conventional model is shown to be necessary when different physiological types of plants or heterogeneous vegetation types occur within the study area. The predicted variable in the new model is total assimilation (net production plus respiration) rather than net production alone or harvest yield.
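The model referred to above is essentially a light-use-efficiency formulation, with APAR estimated from a near-linear function of the vegetation index. The worked sketch below illustrates that structure; the efficiency value and the NDVI-to-fAPAR coefficients are illustrative assumptions, not the paper's calibrated parameters.

import numpy as np

def primary_production(ndvi, par, epsilon=1.5, a=1.25, b=-0.1):
    # Production (g C m-2) from a light-use-efficiency model.
    #   ndvi    : NDVI composites (unitless)
    #   par     : incident photosynthetically active radiation (MJ m-2 per step)
    #   epsilon : conversion efficiency (g C per MJ APAR), assumed constant
    # fAPAR is approximated by the linear relation a * NDVI + b.
    fapar = np.clip(a * np.asarray(ndvi) + b, 0.0, 1.0)
    apar = fapar * np.asarray(par)
    return epsilon * apar.sum()

ndvi = np.array([0.2, 0.35, 0.55, 0.6, 0.4])      # toy growing-season composites
par = np.array([150.0, 180.0, 200.0, 190.0, 160.0])  # MJ m-2 per composite period
print(round(primary_production(ndvi, par), 1))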
Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui
2018-01-01
Matching of large, high resolution (HR) satellite images is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale-restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem posed by large images. In geometric SIFT, area constraints help validate candidate matches and decrease search complexity. To further improve matching efficiency, the proposed method is parallelized using OpenMP. Finally, the sensed image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method, and the results show that it decreases matching time and increases the number of matching points while maintaining high registration accuracy. PMID:29702589
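The coarse-to-fine idea can be sketched in a few steps: match heavily downsampled images first, estimate a global geometric transform, and use it to constrain where fine-level (block-wise) matches are searched. The OpenCV-based sketch below shows only the coarse pass and the transform estimate; the block division, the scale-restrict and geometric-SIFT refinements, and the OpenMP parallelization of the paper are not reproduced, and the file paths and shrink factor are assumptions.

import cv2
import numpy as np

def coarse_match(ref_path, sen_path, shrink=8, ratio=0.75):
    # Coarse pass: SIFT on downsampled images plus a RANSAC homography that
    # can later constrain block-wise fine matching at full resolution.
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    sen = cv2.imread(sen_path, cv2.IMREAD_GRAYSCALE)
    ref_s = cv2.resize(ref, None, fx=1.0 / shrink, fy=1.0 / shrink)
    sen_s = cv2.resize(sen, None, fx=1.0 / shrink, fy=1.0 / shrink)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_s, None)
    kp2, des2 = sift.detectAndCompute(sen_s, None)

    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]        # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]) * shrink
    dst = np.float32([kp2[m.trainIdx].pt for m in good]) * shrink
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps full-resolution reference coordinates to sensed coordinates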
Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool
NASA Astrophysics Data System (ADS)
Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.
2015-03-01
Matching soil grid unit resolution with polygon unit map scale is important to minimize the uncertainty of regional soil organic carbon (SOC) pool simulation, given their strong influence on that uncertainty. A series of soil grid units at varying cell sizes was derived from soil polygon units at six map scales, 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), in the Tai lake region of China. Soil units in both formats were used for regional SOC pool simulation with the process-based DeNitrification-DeComposition (DNDC) model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices, soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from all these soil polygon and grid units. Relative to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess the dataset's accuracy and redundancy, which reflects uncertainty in the simulation of SOC. Optimal soil grid unit resolutions, matching the soil polygon unit map scales, were generated and suggested for DNDC simulation of the regional SOC pool. With the optimal raster resolution, the soil grid unit dataset can hold the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is taken as the assessment criterion. A quadratic regression model, y = -8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05), was obtained, which describes the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). This knowledge may serve grid partitioning of regions in the investigation and simulation of SOC pool dynamics at a given map scale.
Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.
2017-12-01
The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar Projections. GMRT consists of four independently curated elevation components: (1) quality controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and the GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both the south and north polar regions, grids can be exported from GMRT in a variety of formats including ASCII, GeoTIFF and NetCDF to support use in common mapping software applications such as ArcGIS, GMT, Matlab, and Python. New web services have also been developed to enable programmatic access to grids and images in north and south polar projections.
Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David
2013-05-21
We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
Meteorology, Emissions, and Grid Resolution: Effects on Discrete and Probabilistic Model Performance
In this study, we analyze the impacts of perturbations in meteorology and emissions and variations in grid resolution on air quality forecast simulations. The meteorological perturbations considered in this study introduce a typical variability of ~1°C, 250 - 500 m, 1 m/s, and 1...
NASA Astrophysics Data System (ADS)
Burke, Sophia; Mulligan, Mark
2017-04-01
WaterWorld is a widely used spatial hydrological policy support system. The last user census indicates regular use by 1029 institutions across 141 countries. A key feature of WaterWorld since 2001 is that it comes pre-loaded with all of the required data for simulation anywhere in the world at a 1km or 1 ha resolution. This means that it can be easily used, without specialist technical ability, to examine baseline hydrology and the impacts of scenarios for change or management interventions to support policy formulation, hence its labelling as a policy support system. WaterWorld is parameterised by an extensive global gridded database of more than 600 variables, developed from many sources, since 1998, the so-called simTerra database. All of these data are available globally at 1km resolution and some variables (terrain, land cover, urban areas, water bodies) are available globally at 1ha resolution. If users have access to better data than is pre-loaded, they can upload their own data. WaterWorld is generally applied at the national or basin scale at 1km resolution, or locally (for areas of <10,000km2) at 1ha resolution, though continental (1km resolution) and global (10km resolution) applications are possible so it is a model with local to global applications. WaterWorld requires some 140 maps to run including monthly climate data, land cover and use, terrain, population, water bodies and more. Whilst publically-available terrain and land cover data are now well developed for local scale application, climate and land use data remain a challenge, with most global products being available at 1km or 10km resolution or worse, which is rather coarse for local application. As part of the EartH2Observe project we have used WFDEI (WATCH Forcing Data methodology applied to ERA-Interim data) at 1km resolution to provide an alternative input to WaterWorld's preloaded climate data. Here we examine the impacts of that on key hydrological outputs: water balance, water quality and outline the remaining challenges of using datasets like these for local scale application.
NASA Astrophysics Data System (ADS)
Nolte, C. G.; Otte, T. L.; Bowden, J. H.; Otte, M. J.
2010-12-01
There is disagreement in the regional climate modeling community as to the appropriateness of the use of internal nudging. Some investigators argue that the regional model should be minimally constrained and allowed to respond to regional-scale forcing, while others have noted that in the absence of interior nudging, significant large-scale discrepancies develop between the regional model solution and the driving coarse-scale fields. These discrepancies lead to reduced confidence in the ability of regional climate models to dynamically downscale global climate model simulations under climate change scenarios, and detract from the usability of the regional simulations for impact assessments. The advantages and limitations of interior nudging schemes for regional climate modeling are investigated in this study. Multi-year simulations using the WRF model driven by reanalysis data over the continental United States at 36km resolution are conducted using spectral nudging, grid point nudging, and for a base case without interior nudging. The means, distributions, and inter-annual variability of temperature and precipitation will be evaluated in comparison to regional analyses.
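Grid-point (analysis) nudging relaxes the full field toward the driving data everywhere, whereas spectral nudging constrains only the largest scales. The difference can be illustrated with a single relaxation step on a 2D field, as in the sketch below; the relaxation timescale, cut-off wavenumber, and field names are illustrative assumptions, not the WRF configuration used in the study.

import numpy as np

def gridpoint_nudge(field, driving, dt, tau=6 * 3600.0):
    # Newtonian relaxation of the whole field toward the driving data.
    return field + dt * (driving - field) / tau

def spectral_nudge(field, driving, dt, tau=6 * 3600.0, n_keep=2):
    # Relax only the largest scales: low-pass the model-driving difference
    # in spectral space and nudge with that filtered increment.
    diff = np.fft.fft2(driving - field)
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny) * ny
    kx = np.fft.fftfreq(nx) * nx
    keep = (np.abs(ky)[:, None] <= n_keep) & (np.abs(kx)[None, :] <= n_keep)
    diff_large = np.real(np.fft.ifft2(np.where(keep, diff, 0.0)))
    return field + dt * diff_large / tau

t_model = 280.0 + np.random.rand(90, 180) * 10.0   # toy regional field (K)
t_drive = 280.0 + np.random.rand(90, 180) * 10.0   # toy driving analysis (K)
print(np.abs(gridpoint_nudge(t_model, t_drive, 600.0) - t_model).max(),
      np.abs(spectral_nudge(t_model, t_drive, 600.0) - t_model).max())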
NASA Astrophysics Data System (ADS)
Wohland, Jan; Reyers, Mark; Weber, Juliane; Witthaut, Dirk
2017-11-01
Limiting anthropogenic climate change requires the fast decarbonization of the electricity system. Renewable electricity generation is determined by the weather and is hence subject to climate change. We simulate the operation of a coarse-scale fully renewable European electricity system based on downscaled high-resolution climate data from EURO-CORDEX. Following a high-emission pathway (RCP8.5), we find a robust but modest increase (up to 7 %) of backup energy in Europe through the end of the 21st century. The absolute increase in the backup energy is almost independent of potential grid expansion, leading to the paradoxical effect that relative impacts of climate change increase in a highly interconnected European system. The increase is rooted in more homogeneous wind conditions over Europe resulting in intensified simultaneous generation shortfalls. Individual country contributions to European generation shortfall increase by up to 9 TWh yr-1, reflecting an increase of up to 4 %. Our results are strengthened by comparison with a large CMIP5 ensemble using an approach based on circulation weather types.
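Backup energy in studies like the one above is the generation shortfall integrated over time: whenever renewable generation falls short of the load, the deficit must be covered by dispatchable backup, and the climate change signal is the difference in this quantity between periods. A minimal sketch with hourly toy series and hypothetical names:

import numpy as np

def backup_energy(load, generation, dt_hours=1.0):
    # Energy that dispatchable backup must supply: the positive part of the
    # load-minus-generation mismatch, integrated over the period.
    shortfall = np.clip(np.asarray(load) - np.asarray(generation), 0.0, None)
    return shortfall.sum() * dt_hours

hours = 8760
load = np.full(hours, 1.0)                                 # normalised mean load
wind = (1.0 + 0.8 * np.sin(np.arange(hours) * 2 * np.pi / 24.0)
        + 0.3 * np.random.randn(hours))                    # toy generation series
print(round(backup_energy(load, np.clip(wind, 0.0, None)), 1))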
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction. The process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, knowledge about their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated due to the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. Therefore, the objective of this study was to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering the scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA).
Materials and Methods. Time series from the years 1982-2011 were spatially averaged from 1 km to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km.
Results and Conclusions. Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from those of the 1 km grids, and that MS parameters were biased by -29.1% (precipitation; width of MS) up to >4% (minimum temperature and radiation; asymmetry of MS). Also, the spatial variability of MS parameters was strongly affected at the highest aggregation (100 km). The obtained results confirm that spatial data aggregation may strongly affect temporal scaling properties. This should be taken into account when upscaling for large-scale studies.
Acknowledgements. The study was conducted within FACCE MACSUR. Please see Baranowski et al. (2015) for details on funding.
References. Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., Zhao, G., Van Bussel, L.G.J. et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei, E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.
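The core of MFDFA referred to above can be sketched in a few steps: build the cumulative profile, detrend it in non-overlapping segments of varying length, form the q-th order fluctuation functions, and read the generalized Hurst exponents from their log-log slopes. The minimal sketch below follows that generic recipe (forward segmentation only, polynomial detrending of fixed order); it is not the full procedure of Baranowski et al. (2015), and the series, scales and q values are toy choices.

import numpy as np

def mfdfa(x, scales, q_values, order=2):
    # Minimal MFDFA: returns the generalized Hurst exponents h(q).
    y = np.cumsum(x - np.mean(x))                  # profile
    fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        f2 = np.empty(n_seg)
        t = np.arange(s)
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2[v] = np.mean((seg - trend) ** 2)    # detrended variance per segment
        for i, q in enumerate(q_values):
            if q == 0:
                fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    # h(q) from the scaling of F_q with segment length s
    return np.array([np.polyfit(np.log(scales), np.log(fq[i]), 1)[0]
                     for i in range(len(q_values))])

series = np.random.randn(10957)                    # stand-in 30-year daily series
scales = np.array([16, 32, 64, 128, 256, 512])
print(mfdfa(series, scales, q_values=[-5, -2, 0, 2, 5]))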
Development of CO2 inversion system based on the adjoint of the global coupled transport model
NASA Astrophysics Data System (ADS)
Belikov, Dmitry; Maksyutov, Shamil; Chevallier, Frederic; Kaminski, Thomas; Ganshin, Alexander; Blessing, Simon
2014-05-01
We present the development of an inverse modeling system employing an adjoint of the global coupled transport model consisting of the National Institute for Environmental Studies (NIES) Eulerian transport model (TM) and the Lagrangian plume diffusion model (LPDM) FLEXPART. NIES TM is a three-dimensional atmospheric transport model, which solves the continuity equation for a number of atmospheric tracers on a grid spanning the entire globe. Spatial discretization is based on a reduced latitude-longitude grid and a hybrid sigma-isentropic coordinate in the vertical. NIES TM uses a horizontal resolution of 2.5°×2.5°. However, to resolve synoptic-scale tracer distributions and to have the ability to optimize fluxes at resolutions of 0.5° and higher, we coupled NIES TM with the Lagrangian model FLEXPART. The Lagrangian component of the forward and adjoint models uses precalculated responses of the observed concentration to the surface fluxes and 3-D concentration fields simulated with the FLEXPART model. NIES TM and FLEXPART are driven by the JRA-25/JCDAS reanalysis dataset. Construction of the adjoint of the Lagrangian part is less complicated, as LPDMs calculate the sensitivity of measurements to the surrounding emissions field by tracking a large number of "particles" backwards in time. Development of the adjoint of the Eulerian part was performed with the automatic differentiation tool Transformation of Algorithms in Fortran (TAF) (http://www.FastOpt.com). This method leads to the discrete adjoint of NIES TM. The main advantage of the discrete adjoint is that the resulting gradients of the numerical cost function are exact, even for nonlinear algorithms. The overall advantages of our method are that: 1. No code modification of the Lagrangian model is required, making the approach applicable to a combination of the global NIES TM and any Lagrangian model; 2. Once run, the Lagrangian output can be applied to any chemically neutral gas; 3. High-resolution results can be obtained over limited regions close to the monitoring sites (using the LPDM part), and at coarse resolution for the rest of the globe (using the Eulerian part), minimizing aggregation errors and computation cost. The adjoint of the coupled high-resolution Eulerian-Lagrangian model will be incorporated into the PYVAR CO2 variational inverse system (Chevallier et al., 2005). Chevallier, F., Fisher, M., Peylin, P., Serrar, S., Bousquet, P., Bréon, F.-M., Chédin, A., and Ciais, P.: Inferring CO2 sources and sinks from satellite observations: method and application to TOVS data, J. Geophys. Res., 110, D24309, doi:10.1029/2005JD006390, 2005.
Mehl, S.; Hill, M.C.
2004-01-01
This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size - A coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.
2016-02-01
The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy-to-navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well-documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolutional elevation data from a variety of sources including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically ~100 m resolution in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data. New data are added to GMRT on an ongoing basis, with two scheduled releases per year. GMRT is available as both gridded data and images that can be viewed and downloaded directly through the Java application GeoMapApp (www.geomapapp.org) and the web-based GMRT MapTool. In addition, the GMRT GridServer API provides programmatic access to grids, imagery, profiles, and single point elevation values.
Clouds Optically Gridded by Stereo COGS product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oktem, Rusen; Romps, David
The COGS product is a 4D grid of cloudiness covering a 6 km × 6 km × 6 km cube centered at the central facility of the SGP site, at a spatial resolution of 50 meters and a temporal resolution of 20 seconds. The dimensions are X, Y, Z, and time, where X, Y, and Z correspond to east-west, north-south, and altitude of the grid point, respectively. COGS takes on values 0, 1, and -1 denoting "cloud", "no cloud", and "not available".
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of the boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Sahra integrated modeling approach to address water resources management in semi-arid river basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, E. P.; Gupta, Hoshin V.; Brookshire, David S.
Water resources decisions in the 21st century that affect the allocation of water for economic and environmental uses will rely on simulations from integrated models of river basins. These models will not only couple natural systems such as surface and ground waters, but will include economic components that can assist in model assessments of river basins and bring the social dimension to the decision process. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated models to assess impacts of climate variability and land use change on water resources in semi-arid river basins. The objectives of this paper are to describe the SAHRA integrated modeling approach and the linkage between social and natural sciences in these models. Water resources issues that arise from climate variability or land use change may require different resolution models to answer different questions. For example, a question related to streamflow may not need a high-resolution model whereas a question concerning the source and nature of a pollutant will. SAHRA has taken a multiresolution approach to integrated model development because one cannot anticipate the questions in advance, and the computational and data resources may not always be available or needed for the issue to be addressed. The coarsest resolution model is based on dynamic simulation of subwatersheds or river reaches. This model resolution has the advantage of simplicity, and social factors are readily incorporated. Users can readily take this model (and they have) and examine the effects of various management strategies such as increased cost of water. The medium resolution model is grid based and uses variable grid cells of 1-12 km. Its surface hydrology is more physically based, using basic equations for energy and water balance terms, and modules are being incorporated that will simulate engineering components such as reservoirs or irrigation diversions and economic features such as variable demand. The fine resolution model is viewed as a tool to examine basin response using the best available process models. It operates on a grid cell size of 100 m or less, which is consistent with the scale at which our process knowledge has developed, and couples atmosphere, surface water and groundwater modules using high performance computing. Unlike the coarse resolution model, the medium and fine resolution models are not expected at this time to be operated by users. One of the objectives of the SAHRA integrated modeling task is to present results in a manner that can be used by those making decisions. The application of these models within SAHRA is driven by a scenario analysis and a place. The place is the Rio Grande from its headwaters in Colorado to the New Mexico-Texas border. This provides a focus for model development and an attempt to see how the results from the various models relate. The scenario selected by SAHRA is the impact of a 1950's style drought using 1990's population and land use on Rio Grande water resources including surface and groundwater. The same climate variables will be used to drive all three models so that comparison will be based on how the three resolutions partition and route water through the river basin. Aspects of this scenario will be discussed and initial model simulations will be presented.
The issue of linking economic modules into the modeling effort will be discussed and the importance of feedback from the social and economic modules to the natural science modules will be reviewed.
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Long, D. G.
2017-12-01
Since 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Until recently, the available global gridded passive microwave data sets had not been produced consistently: various projections (equal-area, polar stereographic) and a number of different gridding techniques were used, along with varying temporal sampling and a mix of Level 2 source data versions. In addition, not all data from all sensors have been processed completely and they have not been processed in any one consistent way. Furthermore, the original gridding techniques were relatively primitive, and the products were produced on 25 km grids using the original EASE-Grid definition, which is not easily accommodated in modern software packages. As part of NASA MEaSUREs, we have re-processed all data from the SMMR, SSM/I-SSMIS and AMSR-E instruments, using the most mature Level 2 data. The Calibrated, Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) gridded data are now available from the NSIDC DAAC. The data are distributed as netCDF files that comply with CF-1.6 and ACDD-1.3 conventions. The data have been produced on EASE-Grid 2.0 projections at a smoothed 25 km resolution and at spatially enhanced resolutions of up to 3.125 km, depending on channel frequency, using the radiometer version of the Scatterometer Image Reconstruction (rSIR) method. We expect this newly produced data set to enable scientists to better analyze trends in coastal regions, marginal ice zones and mountainous terrain that were not possible with the previous gridded passive microwave data. The use of the EASE-Grid 2.0 definition and netCDF-CF formatting allows users to extract compliant geotiff images and provides for easy importing and correct reprojection interoperability in many standard packages. As a consistently-processed, high-quality satellite passive microwave ESDR, we expect this data set to replace earlier gridded passive microwave data sets, and to pave the way for new insights from higher-resolution derived geophysical products.
Utilization of all Spectral Channels of IASI for the Retrieval of the Atmospheric State
NASA Astrophysics Data System (ADS)
Del Bianco, S.; Cortesi, U.; Carli, B.
2010-12-01
The retrieval of atmospheric state parameters from broadband measurements acquired by high spectral resolution sensors, such as the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Meteorological Operational (MetOp) platform, generally requires dealing with a prohibitively large number of spectral elements available from a single observation (8461 samples in the case of IASI, covering the 645-2760 cm-1 range with a resolution of 0.5 cm-1 and a spectral sampling of 0.25 cm-1). Most inversion algorithms developed for both operational and scientific analysis of IASI spectra perform a reduction of the data - typically based on channel selection, super-channel clustering or Principal Component Analysis (PCA) techniques - in order to handle the high dimensionality of the problem. Accordingly, simultaneous processing of all IASI channels has received relatively little attention. Here we prove the feasibility of a retrieval approach exploiting all spectral channels of IASI to extract information on water vapor, temperature and ozone profiles. This multi-target retrieval removes the systematic errors due to interfering parameters and makes channel selection no longer necessary. The challenging computation is made possible by the use of a coarse spectral grid for the forward model calculation and by the abatement of the associated modeling errors through the use of a variance-covariance matrix of the residuals that takes into account all the forward model errors.
Global Distribution and Density of Constructed Impervious Surfaces.
Elvidge, Christopher D; Tuttle, Benjamin T; Sutton, Paul C; Baugh, Kimberly E; Howard, Ara T; Milesi, Cristina; Bhaduri, Budhendra; Nemani, Ramakrishna
2007-09-21
We present the first global inventory of the spatial distribution and density of constructed impervious surface area (ISA). Examples of ISA include roads, parking lots, buildings, driveways, sidewalks and other manmade surfaces. While high spatial resolution is required to observe these features, the new product reports the estimated density of ISA on a one-km² grid based on two coarse resolution indicators of ISA - the brightness of satellite observed nighttime lights and population count. The model was calibrated using 30-meter resolution ISA of the USA from the U.S. Geological Survey. Nominally the product is for the years 2000-01 since both the nighttime lights and reference data are from those two years. We found that 1.05% of the United States land area is impervious surface (83,337 km²) and 0.43% of the world's land surface (579,703 km²) is constructed impervious surface. China has more ISA than any other country (87,182 km²), but has only 67 m² of ISA per person, compared to 297 m² per person in the USA. The distribution of ISA in the world's primary drainage basins indicates that watersheds damaged by ISA are primarily concentrated in the USA, Europe, Japan, China and India. The authors believe the next step for improving the product is to include reference ISA data from many more areas around the world.
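The calibration described above (ISA density regressed on nighttime-light brightness and population count against 30-meter reference ISA) can be sketched as a simple least-squares fit. The functional form, coefficients, and synthetic data below are assumptions for illustration only, not the published model.

```python
import numpy as np

# Hypothetical calibration of an ISA-density model of the form
#   ISA% ~ a * lights + b * log(1 + population) + c
# against reference ISA fractions aggregated to the coarse grid.
rng = np.random.default_rng(2)
lights = rng.uniform(0, 63, 5000)          # nighttime-light DN per cell (synthetic)
pop = rng.lognormal(5, 2, 5000)            # population count per cell (synthetic)
isa_ref = np.clip(0.8 * lights / 63 * 100 + 3 * np.log1p(pop) - 10
                  + rng.normal(0, 3, 5000), 0, 100)

X = np.column_stack([lights, np.log1p(pop), np.ones_like(lights)])
coef, *_ = np.linalg.lstsq(X, isa_ref, rcond=None)   # least-squares calibration
isa_pred = np.clip(X @ coef, 0, 100)                  # predicted ISA density (%)
print("coefficients:", coef)
```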
Nagler, Pamela L.; Pearlstein, Susanna; Glenn, Edward P.; Brown, Tim B.; Bateman, Heather L.; Bean, Dan W.; Hultine, Kevin R.
2013-01-01
We measured the rate of dispersal of saltcedar leaf beetles (Diorhabda carinulata), a defoliating insect released on western rivers to control saltcedar shrubs (Tamarix spp.), on a 63 km reach of the Virgin River, U.S. Dispersal was measured by satellite imagery, ground surveys and phenocams. Pixels from the Moderate Resolution Imaging Spectrometer (MODIS) sensors on the Terra satellite showed a sharp drop in NDVI in midsummer followed by recovery, correlated with defoliation events as revealed in networked digital camera images and ground surveys. Ground surveys and MODIS imagery showed that beetle damage progressed downstream at a rate of about 25 km yr−1 in 2010 and 2011, producing a 50% reduction in saltcedar leaf area index and evapotranspiration by 2012, as estimated by algorithms based on MODIS Enhanced Vegetation Index values and local meteorological data for Mesquite, Nevada. This reduction is the equivalent of 10.4% of mean annual river flows on this river reach. Our results confirm other observations that saltcedar beetles are dispersing much faster than originally predicted in pre-release biological assessments, presenting new challenges and opportunities for land, water and wildlife managers on western rivers. Despite relatively coarse resolution (250 m) and gridding artifacts, single MODIS pixels can be useful in tracking the effects of defoliating insects in riparian corridors.
This study applied a phenology-based land-cover classification approach across the Laurentian Great Lakes Basin (GLB) using time-series data consisting of 23 Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) composite images (250 ...
Atmospheric and Fundamental Parameters of Stars in Hubble's Next Generation Spectral Library
NASA Technical Reports Server (NTRS)
Heap, Sally
2010-01-01
Hubble's Next Generation Spectral Library (NGSL) consists of R approximately 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. We are presently working to determine the atmospheric and fundamental parameters of the stars from the NGSL spectra themselves via full-spectrum fitting of model spectra to the observed (extinction-corrected) spectrum over the full wavelength range, 0.2-1.0 micron. We use two grids of model spectra for this purpose: the very low-resolution spectral grid from Castelli-Kurucz (2004), and the grid from MARCS (2008). Both the observed spectrum and the MARCS spectra are first degraded in resolution to match the very low resolution of the Castelli-Kurucz models, so that our fitting technique is the same for both model grids. We will present our preliminary results with a comparison with those from the Sloan/Segue Stellar Parameter Pipeline, ELODIE, and MILES, etc.
Influence of Terraced area DEM Resolution on RUSLE LS Factor
NASA Astrophysics Data System (ADS)
Zhang, Hongming; Baartman, Jantiene E. M.; Yang, Xiaomei; Gai, Lingtong; Geissen, Viollette
2017-04-01
Topography has a large impact on the erosion of soil by water. Slope steepness and slope length are combined (the LS factor) in the universal soil-loss equation (USLE) and its revised version (RUSLE) for predicting soil erosion. The LS factor is usually extracted from a digital elevation model (DEM). The grid size of the DEM will thus influence the LS factor and the subsequent calculation of soil loss. Terracing is considered as a support practice factor (P) in the USLE/RUSLE equations, which is multiplied with the other USLE/RUSLE factors. However, as terraces change the slope length and steepness, they also affect the LS factor. The effect of DEM grid size on the LS factor has not been investigated for a terraced area. We obtained a high-resolution DEM by unmanned aerial vehicle (UAV) photogrammetry, from which the slope steepness, slope length, and LS factor were extracted. The changes in these parameters at various DEM resolutions were then analysed. The DEM produced detailed LS-factor maps, particularly for low LS factors. High (small valleys, gullies, and terrace ridges) and low (flats and terrace fields) spatial frequencies were both sensitive to changes in resolution, so the areas of higher and lower slope steepness both decreased with increasing grid size. Average slope steepness decreased and average slope length increased with grid size. Slope length, however, had a larger effect than slope steepness on the LS factor as the grid size varied. The LS factor increased when the grid size increased from 0.5 to 30 m and increased significantly at grid sizes >5 m. The LS factor was increasingly overestimated as grid size decreased. The LS factor decreased from grid sizes of 30 to 100 m, because the details of the terraced terrain were gradually lost, but the factor was still overestimated.
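For readers unfamiliar with how an LS factor is derived from a DEM, the sketch below uses one common raster approximation (a Moore and Burch style formula with slope from finite differences); it is not the specific extraction procedure used in the study, and the cell size and toy DEM are arbitrary.

```python
import numpy as np

def slope_radians(dem, cell):
    """Slope from central differences on a DEM (edge rows/cols approximated)."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.arctan(np.hypot(dzdx, dzdy))

def ls_factor(flow_acc, slope_rad, cell):
    """One common raster LS approximation (Moore & Burch style):
    LS = (A_s / 22.13)^0.4 * (sin(slope) / 0.0896)^1.3,
    with A_s the specific contributing area (flow accumulation * cell size)."""
    a_s = flow_acc * cell
    return (a_s / 22.13) ** 0.4 * (np.sin(slope_rad) / 0.0896) ** 1.3

# toy DEM: an inclined plane; flow accumulation set to one cell everywhere
cell = 5.0                                               # grid size in metres
dem = np.fromfunction(lambda i, j: 100.0 - 0.1 * cell * i, (50, 50))
ls = ls_factor(np.ones_like(dem), slope_radians(dem, cell), cell)
print(ls.mean())
```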
Aerodynamic design and optimization in one shot
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.
1992-01-01
This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.
Chervenkov, Hristo
2013-12-01
An appropriate method for evaluating the air quality of a certain area is to contrast the actual air pollution levels with the critical ones prescribed in the legislative standards. The application of numerical simulation models for assessing the real air quality status is allowed by the legislation of the European Community (EC). This approach is preferable especially when the area of interest is relatively big and/or the network of measurement stations is sparse and the available observational data are correspondingly scarce. Such a method is very efficient for assessment studies of this kind due to the continuous spatio-temporal coverage of the obtained results. In this study, surface-layer concentrations of the harmful substances sulphur dioxide (SO2), nitrogen dioxide (NO2), particulate matter - coarse (PM10) and fine (PM2.5) fractions - ozone (O3), carbon monoxide (CO) and ammonia (NH3), obtained from modelling simulations with a resolution of 10 km on an hourly basis, are used to calculate the statistical quantities needed for comparison with the corresponding critical levels prescribed in the EC directives. For some of them (PM2.5, CO and NH3) this is done for the first time at such a resolution. The computational grid covers Bulgaria entirely, along with some surrounding territories, and the calculations are made for every year in the period 1991-2000. The results averaged over the whole time slice can be treated as representative of the air quality situation in the last decade of the 20th century.
NASA Astrophysics Data System (ADS)
Prince, Alyssa; Trout, Joseph; di Mercurio, Alexis
2017-01-01
The Weather Research and Forecasting (WRF) Model is a nested-grid, mesoscale numerical weather prediction system maintained by the Developmental Testbed Center. The model simulates the atmosphere by integrating partial differential equations, which use the conservation of horizontal momentum, conservation of thermal energy, and conservation of mass along with the ideal gas law. This research investigated the possible use of WRF in investigating the effects of weather on wing tip wake turbulence. This poster shows the results of an investigation into the accuracy of WRF using different grid resolutions. Several atmospheric conditions were modeled using different grid resolutions. In general, the higher the grid resolution, the better the simulation, but the longer the model run time. This research was supported by Dr. Manuel A. Rios, Ph.D. (FAA) and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA'' (13-G-006).
2013-08-01
[Fragmentary snippet: reports adjoint-weighted error functionals ∑(e,ψ) for MFE and GFV discretizations computed from coarse and fine forward solutions on a sequence of adjoint grids (Tables 1 and 2, e.g. 20x20 : 32x32); the remainder of the text and table was garbled in extraction.]
Packing of sidechains in low-resolution models for proteins.
Keskin, O; Bahar, I
1998-01-01
Atomic level rotamer libraries for sidechains in proteins have been proposed by several groups. Conformations of side groups in coarse-grained models, on the other hand, have not yet been analyzed, although low resolution approaches are the only efficient way to explore global structural features. A residue-specific backbone-dependent library for sidechain isomers, compatible with a coarse-grained model, is proposed. The isomeric states are utilized in packing sidechains of known backbone structures. Sidechain positions are predicted with a root-mean-square deviation (r.m.s.d.) of 2.40 Å with respect to crystal structure for 50 test proteins. The r.m.s.d. for core residues is 1.60 Å and decreases to 1.35 Å when conformational correlations and directional effects in inter-residue couplings are considered. An automated method for assigning sidechain positions in coarse-grained model proteins is proposed and made available on the internet; the method accounts satisfactorily for sidechain packing, particularly in the core.
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
High Quality Data for Grid Integration Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifton, Andrew; Draxl, Caroline; Sengupta, Manajit
As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. The existing electric grid infrastructure in the US in particular poses significant limitations on wind power expansion. In this presentation we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets are presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution are now publicly available for more than 126,000 land-based and offshore wind power production sites. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. The need for high-resolution weather data pushes modeling towards finer scales and closer synchronization. We also present how we anticipate such datasets developing in the future, their benefits, and the challenges with using and disseminating such large amounts of data.
Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.
Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V
2017-10-23
Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of the "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to the numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔG_pol) and binding free energies (ΔΔG_pol). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔG_pol remains in the range of k_BT ∼ 0.6 kcal/mol. The estimated ΔΔG_pol values are well correlated (r² = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and an RMSE of 1.43 kcal/mol. The grid-based GBNSR6 model is available in the Amber (AmberTools) package of molecular simulation programs.
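To illustrate the pairwise GB energy expression that underlies models of this kind, the sketch below evaluates the Still-type generalized-Born polar solvation energy from precomputed effective Born radii; it is not the GBNSR6 grid-surface code, and the charges, positions, and radii are toy values.

```python
import numpy as np

def gb_polar_energy(q, pos, born_radii, eps_in=1.0, eps_out=78.5):
    """Pairwise generalized-Born polar solvation energy (Still et al. form),
    given effective Born radii assumed precomputed (e.g. by an R6 surface
    integral). Charges in e, distances in Angstrom, energy in kcal/mol."""
    ke = 332.06                                     # electrostatic constant
    pref = -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out)
    d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    rr = born_radii[:, None] * born_radii[None, :]
    f_gb = np.sqrt(d2 + rr * np.exp(-d2 / (4.0 * rr)))
    return pref * np.sum(np.outer(q, q) / f_gb)     # includes self (i = j) terms

# two opposite unit charges 3 Angstrom apart, Born radii of 1.5 Angstrom
q = np.array([1.0, -1.0])
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(gb_polar_energy(q, pos, np.array([1.5, 1.5])))
```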
Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1999-01-01
The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.
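The coarse-grid correction idea at the heart of multigrid can be illustrated with a linear two-grid cycle for a 1-D Poisson problem, as sketched below; this is the linear analogue of the nonlinear FAS agglomeration cycles discussed above, not the unstructured solver itself.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def coarse_solve(rc, hc):
    """Direct solve of the coarse problem -e'' = r, standing in for a recursion
    to still coarser levels."""
    m = len(rc) - 2
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve the coarse problem, prolongate,
    correct, smooth again."""
    u = jacobi(u, f, h, 3)                                        # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # fine residual
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * (r[1:-2:2] + r[3::2]) + 0.5 * r[2:-1:2]     # full weighting
    ec = coarse_solve(rc, 2 * h)
    e = np.zeros_like(u)
    e[::2] = ec                                                   # prolongation:
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])                            # linear interp
    return jacobi(u + e, f, h, 3)                                 # post-smoothing

n, h = 129, 1.0 / 128
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```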
The functional micro-organization of grid cells revealed by cellular-resolution imaging.
Heys, James G; Rangarajan, Krsna V; Dombeck, Daniel A
2014-12-03
Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater microcircuit-level understanding of the brain's representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to nongrid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a "Mexican hat"-shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart.
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps of the impacted area in the case of a nuclear material release. The proposed methodology involves integration of results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over the Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high temporal resolution, MODIS data using a time-series approach. In the case of a nuclear accident, high spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes - forest, urban, water and vegetation - using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The classifications were augmented using this fused approach, with a few supplementary advantages such as correction for cloud cover and independence from time of year. We concluded that this method would generate highly accurate land maps, using coarse spatial resolution time series satellite imagery and a single date, high spatial resolution, multi-spectral image.
A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT
Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...
Bottom currents and sediment transport in Long Island Sound: A modeling study
Signell, R.P.; List, J.H.; Farris, A.S.
2000-01-01
A high resolution (300-400 m grid spacing), process oriented modeling study was undertaken to elucidate the physical processes affecting the characteristics and distribution of sea-floor sedimentary environments in Long Island Sound. Simulations using idealized forcing and high-resolution bathymetry were performed using a three-dimensional circulation model ECOM (Blumberg and Mellor, 1987) and a stationary shallow water wave model HISWA (Holthuijsen et al., 1989). The relative contributions of tide-, density-, wind- and wave-driven bottom currents are assessed and related to observed characteristics of the sea-floor environments, and simple bedload sediment transport simulations are performed. The fine grid spacing allows features with scales of several kilometers to be resolved. The simulations clearly show physical processes that affect the observed sea-floor characteristics at both regional and local scales. Simulations of near-bottom tidal currents reveal a strong gradient in the funnel-shaped eastern part of the Sound, which parallels an observed gradient in sedimentary environments from erosion or nondeposition, through bedload transport and sediment sorting, to fine-grained deposition. A simulation of estuarine flow driven by the along-axis gradient in salinity shows generally westward bottom currents of 2-4 cm/s that are locally enhanced to 6-8 cm/s along the axial depression of the Sound. Bottom wind-driven currents flow downwind along the shallow margins of the basin, but flow against the wind in the deeper regions. These bottom flows (in opposition to the wind) are strongest in the axial depression and add to the estuarine flow when winds are from the west. The combination of enhanced bottom currents due to both estuarine circulation and the prevailing westerly winds provide an explanation for the relatively coarse sediments found along parts of the axial depression. Climatological simulations of wave-driven bottom currents show that frequent high-energy events occur along the shallow margins of the Sound, explaining the occurrence of relatively coarse sediments in these regions. Bedload sediment transport calculations show that the estuarine circulation coupled with the oscillatory tidal currents result in a net westward transport of sand in much of the eastern Sound. Local departures from this regional westward trend occur around topographic and shoreline irregularities, and there is strong predicted convergence of bedload transport over most of the large, linear sand ridges in the eastern Sound, providing a mechanism which prevents their decay. The strong correlation between the near-bottom current intensity based on the model results and the sediment response, as indicated by the distribution of sedimentary environments, provides a framework for predicting the long-term effects of anthropogenic activities.
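As a minimal illustration of threshold-driven bedload transport of the kind referred to above, the sketch below evaluates one classical excess-Shields-stress formula (Meyer-Peter and Mueller); the formula choice and input values are assumptions for illustration, not those used in the Long Island Sound study.

```python
import numpy as np

def mpm_bedload(tau_b, d50, rho=1025.0, rho_s=2650.0, g=9.81, theta_c=0.047):
    """Meyer-Peter & Mueller bedload transport rate (m^2/s per unit width)
    from bottom shear stress tau_b (Pa) and grain size d50 (m) -- one classical
    excess-Shields-stress formula illustrating threshold-driven transport."""
    s = rho_s / rho
    theta = tau_b / ((rho_s - rho) * g * d50)            # Shields parameter
    excess = np.maximum(theta - theta_c, 0.0)            # zero below threshold
    return 8.0 * excess ** 1.5 * np.sqrt((s - 1.0) * g * d50 ** 3)

# toy case: 1.5 Pa combined wave-current stress over 0.5 mm sand
print(mpm_bedload(tau_b=1.5, d50=0.5e-3))
```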
Hybrid finite difference/finite element immersed boundary method.
E Griffith, Boyce; Luo, Xiaoyu
2017-12-01
The immersed boundary method is an approach to fluid-structure interaction that uses a Lagrangian description of the structural deformations, stresses, and forces along with an Eulerian description of the momentum, viscosity, and incompressibility of the fluid-structure system. The original immersed boundary methods described immersed elastic structures using systems of flexible fibers, and even now, most immersed boundary methods still require Lagrangian meshes that are finer than the Eulerian grid. This work introduces a coupling scheme for the immersed boundary method to link the Lagrangian and Eulerian variables that facilitates independent spatial discretizations for the structure and background grid. This approach uses a finite element discretization of the structure while retaining a finite difference scheme for the Eulerian variables. We apply this method to benchmark problems involving elastic, rigid, and actively contracting structures, including an idealized model of the left ventricle of the heart. Our tests include cases in which, for a fixed Eulerian grid spacing, coarser Lagrangian structural meshes yield discretization errors that are as much as several orders of magnitude smaller than errors obtained using finer structural meshes. The Lagrangian-Eulerian coupling approach developed in this work enables the effective use of these coarse structural meshes with the immersed boundary method. This work also contrasts two different weak forms of the equations, one of which is demonstrated to be more effective for the coarse structural discretizations facilitated by our coupling approach.
NASA Astrophysics Data System (ADS)
Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.
2017-12-01
Large point sources in the Chungnam area have received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the summertime prevalent winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during KORUS-AQ 2016, including aircraft measurements. In general, the horizontal grid resolutions of Eulerian photochemical models have profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models; that is, emissions in a grid cell are assumed to be well mixed under the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.
NASA Astrophysics Data System (ADS)
Rasera, L. G.; Mariethoz, G.; Lane, S. N.
2017-12-01
Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.
Summary of the Third AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Brodersen, Olaf P.; Eisfeld, Bernhard; Wahls, Richard A.; Morrison, Joseph H.; Zickuhr, Tom; Laflin, Kelly R.; Mavriplis, DImitri J.
2007-01-01
The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-alone configurations that are representative of transonic transport aircraft. The baseline DLR-F6 wing-body geometry, previously utilized in DPW-II, is also augmented with a side-body fairing to help reduce the complexity of the flow physics in the wing-body juncture region. In addition, two new wing-alone geometries have been developed for DPW-III. Numerical calculations are performed using industry-relevant test cases that include lift-specific and fixed-alpha flight conditions, as well as full drag polars. Drag, lift, and pitching moment predictions from numerous Reynolds-averaged Navier-Stokes computational fluid dynamics methods are presented, focused on fully turbulent flows. Solutions are performed on structured, unstructured, and hybrid grid systems. The structured grid sets include point-matched multi-block meshes and overset grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, and prismatic elements. Effort was made to provide a high-quality and parametrically consistent family of grids for each grid type about each configuration under study. The wing-body families are comprised of a coarse, medium, and fine grid, while the wing-alone families also include an extra-fine mesh. These mesh sequences are utilized to help determine how the provided flow solutions fare with respect to asymptotic grid convergence, and are used to estimate an absolute drag of each configuration.
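The grid-convergence assessment mentioned above is commonly summarized through an observed order of accuracy and a Richardson-extrapolated value from a coarse/medium/fine grid family; the sketch below shows that calculation with hypothetical drag values and an assumed refinement ratio.

```python
import numpy as np

def grid_convergence(f_coarse, f_medium, f_fine, r=1.5):
    """Observed order of convergence and Richardson-extrapolated value from a
    consistent coarse/medium/fine grid family with constant refinement ratio r."""
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)   # extrapolated value
    return p, f_exact

# hypothetical drag coefficients on coarse, medium and fine grids
p, cd_inf = grid_convergence(0.02950, 0.02910, 0.02885)
print(f"observed order {p:.2f}, extrapolated drag {cd_inf:.5f}")
```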
A novel hybrid approach with multidimensional-like effects for compressible flow computations
NASA Astrophysics Data System (ADS)
Kalita, Paragmoni; Dass, Anoop K.
2017-07-01
A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, these schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely, AUSM and DRLLF and coupling them through a novel shock switch that operates - unlike existing switches - on the gradient of the Mach number across the cell-interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but it is not so good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks, but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired effect of crisp resolution of grid-aligned and non-grid-aligned discontinuities are not obtained. To circumvent this problem we design a shock switch based - for the first time - on the gradient of the cell-interface Mach number with very impressive results. Thus the strategy of hybridizing two carefully selected schemes together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme with a lower computational cost. It is further seen that hybridization of the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution for both shocks and boundary layers. Merits of the scheme are established through a carefully selected set of numerical experiments.
An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.
Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui
2016-09-22
The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotation invariant technique (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem when quantizing the angles of mmWave channel in the spatial domain where off-grid problem refers to the scenario that angles do not lie on the quantization grids with high probability, and it results in power leakage and severe reduction of the CE performance. A new model is first proposed to formulate the off-grid problem. The new model divides the continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the Turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS). It iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.
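The integral-grid/off-grid decomposition at the core of the model above can be illustrated with a few lines of code: an angle is split into its nearest quantization-grid value and a residual offset. The grid span and size below are assumptions for illustration.

```python
import numpy as np

def split_angle(theta, n_grid):
    """Decompose an angle of arrival into the nearest quantization-grid angle
    (integral grid part) and the residual offset (fractional off-grid part),
    the two quantities that the turbo-style iteration estimates alternately."""
    grid = np.linspace(-np.pi / 2, np.pi / 2, n_grid, endpoint=False)
    k = int(np.argmin(np.abs(grid - theta)))   # index of nearest grid angle
    return k, grid[k], theta - grid[k]         # index, grid angle, off-grid offset

k, theta_grid, offset = split_angle(0.37, n_grid=64)
print(k, theta_grid, offset)
```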
Science Enabling Applications of Gridded Radiances and Products
NASA Astrophysics Data System (ADS)
Goldberg, M.; Wolf, W.; Zhou, L.
2005-12-01
New generations of hyperspectral sounders and imagers are not only providing vastly improved information to monitor, assess and predict the Earth's environment, they also provide tremendous volumes of data to manage. Key management challenges must include data processing, distribution, archive and utilization. At the NOAA/NESDIS Office of Research and Applications, we have started to address the challenge of utilizing high volume satellite data by thinning observations and developing gridded datasets from the observations made from the NASA AIRS, AMSU and MODIS instruments. We have developed techniques for intelligent thinning of AIRS data for numerical weather prediction, by selecting the clearest AIRS 14 km field of view within a 3 x 3 array. The selection uses high spatial resolution 1 km MODIS data which are spatially convolved to the AIRS field of view. The MODIS cloud masks and AIRS cloud tests are used to select the clearest. During the real-time processing the data are thinned and gridded to support monitoring, validation and scientific studies. Products from AIRS, which include profiles of temperature, water vapor and ozone and cloud-corrected infrared radiances for more than 2000 channels, are derived from a single AIRS/AMSU field of regard, which is a 3 x 3 array of AIRS footprints (each with a 14 km spatial resolution) collocated with a single AMSU footprint (42 km). One of our key gridded datasets is a daily 3 x 3 latitude/longitude projection which contains the nearest AIRS/AMSU field of regard with respect to the center of the 3 x 3 lat/lon grid. This particular gridded dataset is 1/40 the size of the full resolution data. This gridded dataset is the type of product request that can be used to support algorithm validation and improvements. It also provides for a very economical approach for reprocessing, testing and improving algorithms for climate studies without having to reprocess the full resolution data stored at the DAAC. For example, on a single CPU workstation, all the AIRS derived products can be derived from a single year of gridded data in 5 days. This relatively short turnaround time, which can be reduced considerably to 3 hours by using a cluster of 40 pc G5 processors, allows for repeated reprocessing at the PI's home institution before substantial investments are made to reprocess the full resolution data sets archived at the DAAC. In other words, do not reprocess the full resolution data until the science community has tested and selected the optimal algorithm on the gridded data. Development and applications of gridded radiances and products will be discussed. The applications can be provided as part of a web-based service.
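A minimal sketch of the nearest-field-of-regard selection used to build such a gridded dataset is given below; the distance metric is a simple flat-earth approximation chosen for brevity, and the coordinates are toy values, so this should not be read as the operational gridding code.

```python
import numpy as np

def nearest_field_of_regard(lat_c, lon_c, for_lats, for_lons):
    """Index of the AIRS/AMSU field of regard closest to a grid-cell centre.
    Uses a flat-earth metric scaled by cos(latitude); a production gridder
    would use a great-circle distance."""
    dlat = for_lats - lat_c
    dlon = (for_lons - lon_c) * np.cos(np.radians(lat_c))
    return int(np.argmin(dlat ** 2 + dlon ** 2))

# toy usage: three candidate footprints around a cell centred at (40.0, -75.0)
print(nearest_field_of_regard(40.0, -75.0,
                              np.array([39.8, 40.1, 41.0]),
                              np.array([-74.9, -75.3, -75.0])))
```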
Although remote sensing technology has long been used in wetland inventory and monitoring, the accuracy and detail level of derived wetland maps were limited or often unsatisfactory largely due to the relatively coarse spatial resolution of conventional satellite imagery. This re...
Hydrologic downscaling of soil moisture using global data without site-specific calibration
USDA-ARS?s Scientific Manuscript database
Numerous applications require fine-resolution (10-30 m) soil moisture patterns, but most satellite remote sensing and land-surface models provide coarse-resolution (9-60 km) soil moisture estimates. The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales soil moistu...
Bajocco, Sofia; Dragoz, Eleni; Gitas, Ioannis; Smiraglia, Daniela; Salvati, Luca; Ricotta, Carlo
2015-01-01
Traditionally fuel maps are built in terms of ‘fuel types’, thus considering the structural characteristics of vegetation only. The aim of this work is to derive a phenological fuel map based on the functional attributes of coarse-scale vegetation phenology, such as seasonality and productivity. MODIS NDVI 250m images of Sardinia (Italy), a large Mediterranean island with high frequency of fire incidence, were acquired for the period 2000–2012 to construct a mean annual NDVI profile of the vegetation at the pixel-level. Next, the following procedure was used to develop the phenological fuel map: (i) image segmentation on the Fourier components of the NDVI profiles to identify phenologically homogeneous landscape units, (ii) cluster analysis of the phenological units and post-hoc analysis of the fire-proneness of the phenological fuel classes (PFCs) obtained, (iii) environmental characterization (in terms of land cover and climate) of the PFCs. Our results showed the ability of coarse-resolution satellite time-series to characterize the fire-proneness of Sardinia with an adequate level of accuracy. The remotely sensed phenological framework presented may represent a suitable basis for the development of fire distribution prediction models, coarse-scale fuel maps and for various biogeographic studies. PMID:25822505
High-resolution computer-aided moire
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1991-12-01
This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
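The Fourier-transform extraction of displacements from a grid image outlined above can be sketched as: isolate the carrier lobe in the spectrum, take the phase of the filtered signal, remove the carrier ramp, and scale by the grid pitch. The sketch below does this for a one-dimensional carrier on a synthetic image; the filter width and pitch are assumptions for illustration.

```python
import numpy as np

def displacement_from_grid(img, pitch_px):
    """Recover the in-plane displacement component encoded in a 1-D carrier
    (grid lines perpendicular to x) by the Fourier-transform method."""
    n = img.shape[1]
    f0 = 1.0 / pitch_px                               # carrier frequency (cycles/px)
    spec = np.fft.fft(img, axis=1)
    freqs = np.fft.fftfreq(n)
    mask = (np.abs(freqs - f0) < 0.5 * f0).astype(float)   # keep the +f0 lobe
    analytic = np.fft.ifft(spec * mask, axis=1)
    phase = np.unwrap(np.angle(analytic), axis=1)
    phase -= 2 * np.pi * f0 * np.arange(n)            # remove the carrier ramp
    return phase * pitch_px / (2 * np.pi)             # displacement in pixels

# synthetic grid deformed by a uniform strain of 1e-3 along x
pitch = 8.0
x = np.arange(512)
img = np.tile(0.5 + 0.5 * np.cos(2 * np.pi * (x + 1e-3 * x) / pitch), (64, 1))
u = displacement_from_grid(img, pitch)
print(np.mean(np.gradient(u[32], 1.0)))               # recovered strain, ~1e-3
```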
In this study we investigate the CMAQ model response in terms of simulated mercury concentration and deposition to boundary/initial conditions (BC/IC), model grid resolution (12- versus 36-km), and two alternative Hg(II) reduction mechanisms. The model response to the change of g...
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term, high-spatial-resolution simulation is a common requirement in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, faces the same problem: long run times limit its application to very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) application programming interface was integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. This parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500 m resolution, SWATGP was up to nine times faster than SWATG when modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale, high-resolution water resources research and management, in addition to offering data fusion and model coupling capability.
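The paper parallelizes SWATG's internal HRU loop with OpenMP threads; since the examples in this document use Python, the sketch below shows only an analogous coarse-grained parallel map over independent HRU computations using a process pool, not OpenMP and not SWATG's actual code. The HRU data structure and `simulate_hru` routine are hypothetical placeholders.

```python
from multiprocessing import Pool

def simulate_hru(hru):
    """Placeholder for one HRU's per-timestep water-balance computation."""
    # e.g. runoff = f(precipitation, soil, land use) for this grid cell
    return hru["id"], sum(hru["precip"]) * hru["runoff_coeff"]

def run_timestep(hrus, n_workers=15):
    # HRUs are independent within a routing step, so they can be distributed
    # across workers, analogous to an OpenMP parallel-for over the HRU loop.
    with Pool(processes=n_workers) as pool:
        results = pool.map(simulate_hru, hrus)
    return dict(results)

if __name__ == "__main__":
    hrus = [{"id": i, "precip": [1.0, 2.0, 0.5], "runoff_coeff": 0.3}
            for i in range(1000)]
    print(len(run_timestep(hrus)))
```

The speedup reported in the abstract comes from exactly this kind of loop-level parallelism over grid HRUs on a shared-memory machine.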
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave-equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid cells in the model. We develop a novel high-order multiscale finite-element method that reduces the computational cost of time-domain acoustic-wave equation modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems that are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are determined not only by the order of the Legendre polynomials but also by the local medium properties, and can therefore effectively convey fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method significantly reduces computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media, by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
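As a small illustration of one ingredient mentioned above, the Gauss-Lobatto-Legendre (GLL) points that anchor the high-order basis functions within a coarse element are the interval endpoints plus the roots of the derivative of the Legendre polynomial of the chosen order. The sketch below computes them with plain NumPy; it is a generic helper, not the authors' code.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll_points(order):
    """Gauss-Lobatto-Legendre points of a given polynomial order on [-1, 1].

    The interior points are the roots of d/dx P_order(x); the endpoints -1
    and 1 are always included. Each node anchors one high-order basis
    function inside a coarse element.
    """
    interior = Legendre.basis(order).deriv().roots()
    return np.concatenate(([-1.0], np.sort(np.real(interior)), [1.0]))

print(gll_points(4))  # 5 nodes for a 4th-order element
```

In the multiscale method, the basis function attached to each node is then obtained by solving a local elliptic problem with the fine-scale medium properties, rather than by using the Legendre polynomial itself.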
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 micron channel was extracted and used together with the two reflective channels (0.58 - 0.68 micron and 0.725 - 1.1 micron) in a constrained least squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. Landsat Thematic Mapper data covering the Emas National Park region were used to estimate the spectral response of the mixture components and to evaluate the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of unmixing techniques when coarse-resolution data are used for global studies.
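A generic fully constrained unmixing sketch in the spirit of the constrained least squares model is given below. The non-negativity and (approximate) sum-to-one constraints are enforced with SciPy's `nnls` plus a heavily weighted extra row, which is a common trick rather than the authors' exact formulation, and the endmember spectra shown are illustrative numbers only.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, reflectance, sum_weight=100.0):
    """Estimate vegetation / soil / shade fractions for one pixel.

    endmembers  : (n_bands, n_components) matrix of endmember spectra,
                  e.g. derived from Landsat TM training areas.
    reflectance : (n_bands,) observed pixel reflectance (AVHRR channels).
    Fractions are constrained to be non-negative; the sum-to-one constraint
    is enforced approximately by the appended, heavily weighted row.
    """
    a = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(reflectance, sum_weight)
    fractions, _ = nnls(a, b)
    return fractions

# 3 bands, 3 endmembers (vegetation, soil, shade); illustrative values
E = np.array([[0.05, 0.25, 0.01],
              [0.40, 0.30, 0.02],
              [0.30, 0.35, 0.01]])
print(unmix_pixel(E, np.array([0.20, 0.30, 0.25])))
```

Applying this per pixel yields the vegetation, soil, and shade fraction images that the abstract compares against the Landsat-based classification.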
Improving Spectroscopic Performance of a Coplanar-Anode High-Pressure Xenon Gamma-Ray Spectrometer
NASA Astrophysics Data System (ADS)
Kiff, Scott Douglas; He, Zhong; Tepper, Gary C.
2007-08-01
High-pressure xenon (HPXe) gas is a desirable radiation detection medium for homeland security applications because of its good inherent room-temperature energy resolution, potential for large, efficient devices, and stability over a broad temperature range. Past work in HPXe has produced large-diameter gridded ionization chambers with energy resolution at 662 keV between 3.5 and 4% FWHM. However, one major limitation of these detectors is resolution degradation due to Frisch grid microphonics. A coplanar-anode HPXe detector has been developed as an alternative to gridded chambers. An investigation of this detector's energy resolution is reported in this submission. A simulation package is used to investigate the contributions of important physical processes to the measured photopeak broadening. Experimental data are presented for pure Xe and Xe + 0.2% H2 mixtures, including an analysis of interaction-location effects on the energy spectrum.
Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.
Effect of particle size distribution on the hydrodynamics of dense CFB risers
NASA Astrophysics Data System (ADS)
Bakshi, Akhilesh; Khanna, Samir; Venuturumilli, Raj; Altantzis, Christos; Ghoniem, Ahmed
2015-11-01
Circulating Fluidized Beds (CFBs) are favored in the energy and chemical industries because of their high efficiency. While accurate hydrodynamic modeling is essential for optimizing performance, most CFB riser simulations assume equally sized solid particles, owing to limited computational resources. Even though this approach yields reasonable predictions, it neglects the strong effect of the particle size distribution (PSD) on hydrodynamics and chemical conversion that experiments commonly report. This study therefore focuses on representing the PSD with discrete particle sizes and on its effect on fluidization via 2D numerical simulations. The particle sizes and corresponding mass fluxes are obtained from experimental data for a dense CFB riser, while the modeling framework is described in Bakshi et al. (2015). Simulations are conducted at two scales: (a) a fine grid that resolves heterogeneous structures and (b) a coarse grid using EMMS sub-grid modifications. Using suitable metrics that capture bed dynamics, this study provides insight into segregation and mixing of particles and highlights the need for improved sub-grid models.
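To make the idea of representing the PSD by a few discrete sizes concrete, the sketch below splits an assumed lognormal mass distribution into bins, each with a representative diameter and a mass fraction. The distribution form and its parameters are placeholders, not the experimental PSD of Bakshi et al. (2015).

```python
import numpy as np
from scipy import stats

def discretize_psd(d_min, d_max, n_classes, median_d, sigma_g):
    """Represent a continuous particle size distribution by discrete classes.

    A lognormal mass-based PSD (median diameter median_d, geometric standard
    deviation sigma_g) is split into n_classes bins between d_min and d_max;
    each class gets a representative diameter (geometric mean of the bin
    edges) and a mass fraction renormalised over the covered range.
    """
    dist = stats.lognorm(s=np.log(sigma_g), scale=median_d)
    edges = np.logspace(np.log10(d_min), np.log10(d_max), n_classes + 1)
    cdf = dist.cdf(edges)
    mass_frac = np.diff(cdf) / (cdf[-1] - cdf[0])
    d_rep = np.sqrt(edges[:-1] * edges[1:])
    return d_rep, mass_frac

d, w = discretize_psd(50e-6, 500e-6, 4, 150e-6, 1.6)
print(np.round(d * 1e6, 1), np.round(w, 3))  # diameters in microns, mass fractions
```

Each discrete class is then tracked as a separate solid phase in the two-fluid (or similar) simulation, with its own mass flux at the inlet.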
Coarsening of physics for biogeochemical model in NEMO
NASA Astrophysics Data System (ADS)
Bricaud, Clement; Le Sommer, Julien; Madec, Gurvan; Deshayes, Julie; Chanut, Jerome; Perruche, Coralie
2017-04-01
Ocean mesoscale and submesoscale turbulence contribute to ocean tracer transport and to shaping the distribution of ocean biogeochemical tracers. Adequately representing tracer transport in ocean models therefore requires increasing model resolution so that the impact of ocean turbulence is properly accounted for. But because of supercomputer power and storage limitations, global biogeochemical models are not yet run routinely at eddying resolution. Still, because the "effective resolution" of eddying ocean models is much coarser than the physical model grid resolution, tracer transport can be reconstructed to a large extent by computing tracer transport and diffusion on a grid whose resolution is close to the effective resolution of the physical model. This observation has motivated the implementation of a new capability in the NEMO ocean model (http://www.nemo-ocean.eu/) that allows the physical model and the tracer transport model to run at different grid resolutions. First, we present results obtained with this new capability applied to a synthetic age tracer in a global eddying model configuration in which the ocean dynamics are computed at 1/4 degree resolution but tracer transport is computed at 3/4 degree resolution. The solution is compared to two reference setups: one with both the physics and the passive tracer model at 1/4 degree resolution, and one with both at 3/4 degree resolution. We discuss possible options for defining the vertical diffusivity coefficient of the tracer transport model based on information from the high-resolution grid, and we describe the impact of this choice on the distribution and penetration of the age tracer. Second, we present results obtained by coupling the physics with the biogeochemical model PISCES and examine the impact of the methodology on the distribution and dynamics of selected tracers. The method described here can find applications in ocean forecasting, such as the Copernicus Marine service operated by Mercator-Ocean, and in Earth System Models for climate applications.
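A minimal sketch of the coarsening idea, box-averaging a fine-grid field onto a grid three times coarser (the ratio between 1/4 degree and 3/4 degree), is shown below. Real NEMO coarsening conserves transports by weighting with grid-cell areas and scale factors; this toy version ignores that and is only meant to illustrate the grid-ratio bookkeeping.

```python
import numpy as np

def coarsen_field(field, factor=3):
    """Box-average a 2-D model field onto a grid `factor` times coarser.

    Each coarse cell value is the plain mean of the factor x factor fine
    cells it covers; any trailing rows/columns that do not fill a complete
    coarse cell are discarded.
    """
    ny, nx = field.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = field[:ny_c * factor, :nx_c * factor]
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

fine = np.random.rand(12, 12)        # e.g. a 1/4 degree field
print(coarsen_field(fine).shape)     # -> (4, 4), the 3/4 degree grid
```

The tracer transport and diffusion are then time-stepped entirely on the coarse grid, using velocity and mixing fields coarsened in this spirit from the eddying physical model.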
Dynamical Downscaling of Typhoon Vera (1959) and related Storm Surge based on JRA-55 Reanalysis
NASA Astrophysics Data System (ADS)
Ninomiya, J.; Takemi, T.; Mori, N.; Shibutani, Y.; Kim, S.
2015-12-01
Typhoon Vera (1959) was a historic extreme typhoon that caused Japan's most severe typhoon damage, mainly due to a storm surge of up to 389 cm. Vera deepened to 895 hPa offshore and made landfall at 929.2 hPa. There are many studies of dynamical downscaling of Vera, but accurate simulation is difficult because of the limited accuracy of global reanalysis data. This study carried out a dynamical downscaling experiment for Vera using the WRF model, a state-of-the-art atmospheric model, forced by JRA-55, the latest reanalysis dataset. The reproducibility of Typhoon Vera in five global reanalysis datasets was first compared. The comparison shows that, except for JRA-55, the reanalyses do not contain a strong typhoon, so downscaling from conventional reanalysis data fails to reproduce the storm. Dynamical downscaling methods for storm surge have been studied extensively (e.g., choice of physics schemes, nudging, 4D-Var, bogus vortices, and so on). In this study, the size and resolution of the coarse domain were considered. The coarse domain size influences the typhoon track and central pressure, and a larger domain restrains the typhoon strength. Simulations with different domain sizes show that the threshold for this restraint on development is whether the coarse domain fully includes the area of wind speeds greater than 15 m/s around the typhoon. Simulations with different resolutions show that the resolution does not affect the typhoon track, and higher resolution gives a stronger simulated typhoon.
One-way coupling of an atmospheric and a hydrologic model in Colorado
Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.
2006-01-01
This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.
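The abstract does not spell out the bias-correction method applied to the MM5 output. Purely as an illustration of how daily model output might be adjusted toward station data, here is a simple additive monthly correction (temperature-style; precipitation is usually corrected multiplicatively instead). This is a sketch under that assumption, with synthetic data, not the procedure actually used in the paper.

```python
import numpy as np
import pandas as pd

def monthly_bias_correct(model, station):
    """Additive monthly bias correction of a daily model time series.

    model, station : pandas Series of daily values on a DatetimeIndex over a
        common training period (so every calendar month appears in both).
    """
    offsets = (station.groupby(station.index.month).mean()
               - model.groupby(model.index.month).mean())
    correction = np.array([offsets[m] for m in model.index.month])
    return model + correction

# tiny illustrative usage with synthetic data
idx = pd.date_range("1995-10-01", "1999-09-30", freq="D")
model = pd.Series(np.random.normal(0.0, 5.0, len(idx)), index=idx)
station = model + 2.0          # pretend the model runs 2 degrees too cold
print(monthly_bias_correct(model, station).mean() - station.mean())
```

Corrections of this kind reduce systematic offsets month by month but, as the paper's results suggest, cannot by themselves recover the accuracy of station-based forcing.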