Sample records for finer grid resolution

  1. High-resolution wavefront reconstruction using the frozen flow hypothesis

    NASA Astrophysics Data System (ADS)

    Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping

    2017-10-01

    This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the FFH, slope data from the WFS can be related to a finer, composite slope grid through translation and downsampling, with the elements of the transformation matrices determined by wind information. Frames of slopes are then combined, and slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least-squares problem. Using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that the method is robust to detector noise and inaccurate wind information, and that under bad seeing conditions high-frequency information in the wavefronts is recovered more accurately than when the correlations between WFS frames are ignored.
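
    The reconstruction step above boils down to a regularized, sparse least-squares solve. The following 1-D sketch is an assumed simplification (not the authors' code): invented integer wind shifts define translation/downsampling operators that stack coarse WFS frames against one finer slope grid, and SciPy's damped LSQR solves the resulting sparse system.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    def shift_downsample(nf, factor, shift):
        """Rows select the fine-grid samples seen by one coarse frame translated by `shift`."""
        rows, cols, vals = [], [], []
        for i in range(nf // factor):
            j = i * factor + shift
            if 0 <= j < nf:
                rows.append(i); cols.append(j); vals.append(1.0)
        return sp.coo_matrix((vals, (rows, cols)), shape=(nf // factor, nf)).tocsr()

    nf, factor = 64, 2                        # fine-grid size and downsampling factor
    rng = np.random.default_rng(0)
    true_fine = rng.standard_normal(nf)       # synthetic "true" fine-grid slopes
    shifts = [0, 1, 0, 1]                     # per-frame shifts implied by (assumed known) wind
    A = sp.vstack([shift_downsample(nf, factor, s) for s in shifts])
    b = A @ true_fine + 0.01 * rng.standard_normal(A.shape[0])   # stacked noisy WFS frames

    fine_hat = lsqr(A, b, damp=1e-2)[0]       # damping regularizes the ill-posed system
    print("relative error:", np.linalg.norm(fine_hat - true_fine) / np.linalg.norm(true_fine))
    ```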

  2. The influence of model resolution on ozone in industrial volatile organic compound plumes.

    PubMed

    Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G

    2010-09-01

    Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating such high O3 production requires a superfine (1-km) horizontal grid cell size. Compared with fine (4-km) modeling, the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) at both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration by 3% but decreases the maximum Ox concentration by only 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivities to oxides of nitrogen (NOx) and VOC emissions are resolution-specific. Different sensitivities to emissions result in different O3 responses to subsequently encountered emissions (within the city or downwind), and in different simulated O3 responses to the same control strategies. The resolution-specific sensitivity of O3 to NOx and VOC emission changes is attributed to the finer-resolved Eulerian grid and finer-resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with plume-in-grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dependence of VOC plume dynamics on grid size is the first step toward providing more detailed guidance on resolution. These results underscore VOC and NOx resolution interdependencies that are best addressed by finer resolution. On the basis of these results, the authors suggest a need for quantitative metrics for horizontal grid resolution in future model guidance.

  3. A low-rank control variate for multilevel Monte Carlo simulation of high-dimensional uncertain systems

    NASA Astrophysics Data System (ADS)

    Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca

    2017-07-01

    Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each pair of consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily from coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
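
    A minimal sketch of the plain MLMC estimator described above (not the paper's low-rank MLCV extension): the toy level-l "solver", sample counts, and random-number coupling are all assumed for illustration.

    ```python
    # Illustrative multilevel Monte Carlo estimator: E[Q] is estimated from the
    # coarsest-level mean plus the means of coupled level differences, with fewer
    # realizations on the finer (more expensive) levels.
    import numpy as np

    def qoi(level, z):
        """Toy level-l 'solution': bias decays with level; z is the shared random input."""
        h = 2.0 ** (-level)
        return 1.0 + h * z                 # mean tends to 1 as the grid is refined

    rng = np.random.default_rng(0)
    levels = [0, 1, 2, 3]
    samples = [4000, 1000, 250, 60]        # fewer realizations on finer grids

    z0 = rng.standard_normal(samples[0])
    estimate = qoi(0, z0).mean()           # coarsest-grid expectation
    for l, n in zip(levels[1:], samples[1:]):
        z = rng.standard_normal(n)         # the same random inputs drive both levels (coupling)
        estimate += (qoi(l, z) - qoi(l - 1, z)).mean()
    print("MLMC estimate:", estimate)
    ```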

  4. Influence of grid resolution in fluid-model simulation of nanosecond dielectric barrier discharge plasma actuator

    NASA Astrophysics Data System (ADS)

    Hua, Weizhuo; Fukagata, Koji

    2018-04-01

    Two-dimensional numerical simulation of a surface dielectric barrier discharge (SDBD) plasma actuator, driven by a nanosecond voltage pulse, is conducted, with special focus on the influence of grid resolution on the computational result. It is found that the result is not very sensitive to the streamwise grid spacing, whereas the wall-normal grid spacing has a critical influence. In particular, the computed propagation velocity changes discontinuously around a wall-normal grid spacing of about 2 μm due to a qualitative change of the discharge structure. The present result suggests that a computational grid finer than those used in most previous studies is required to correctly capture the structure and dynamics of the streamer: when a positive nanosecond voltage pulse is applied to the upper electrode, a streamer forms in the vicinity of the upper electrode and propagates along the dielectric surface with a maximum propagation velocity of 2 × 10⁸ cm/s, and a gap with low electron and ion density (i.e., a plasma sheath) exists between the streamer and the dielectric surface. Differences between the results obtained using the finer and coarser grids are discussed in detail in terms of the electron transport near the surface. When the finer grid is used, the low electron density near the surface is caused by the absence of an ionization avalanche: in that region, the electrons generated by ionization are compensated by the drift-diffusion flux. In contrast, when the coarser grid is used, the underestimated drift-diffusion flux cannot compensate for the electrons generated by ionization, leading to an incorrect increase of the electron density.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, P.

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
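
    A toy flag-and-refine step, sketched under the assumption of a 1-D grid and a gradient-based error indicator (an illustration of the overlay idea only, not Berger's full algorithm):

    ```python
    # Cells whose error indicator exceeds a threshold are flagged and covered by a
    # patch with half the grid spacing, mimicking the overlay of a finer grid.
    import numpy as np

    x = np.linspace(0.0, 1.0, 65)                 # coarse grid
    u = np.tanh(40.0 * (x - 0.5))                 # solution with a sharp front
    err = np.abs(np.gradient(u, x))               # generalized error criterion (here |du/dx|)
    flagged = err > 0.5 * err.max()               # flag cells needing refinement

    # cluster the flagged cells into one contiguous patch and overlay a 2x finer grid
    lo, hi = x[flagged].min(), x[flagged].max()
    fine_x = np.arange(lo, hi, (x[1] - x[0]) / 2.0)
    print(f"refining [{lo:.3f}, {hi:.3f}] with {fine_x.size} fine cells")
    ```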

  6. Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.

    Downscaling of water withdrawals from the regional/national to the local scale is a fundamental step, and also a common problem, when integrating large-scale economic and integrated assessment models with high-resolution, detailed sectoral models. Tethys, an open-access software package written in Python, implements statistical downscaling algorithms to downscale water withdrawal data spatially and temporally to a finer scale. The spatial resolution is downscaled from the region/basin scale to a 0.5 geographic degree grid, and the temporal resolution from year to month. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).

  7. Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals

    DOE PAGES

    Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.; ...

    2018-02-09

    Downscaling of water withdrawals from the regional/national to the local scale is a fundamental step, and also a common problem, when integrating large-scale economic and integrated assessment models with high-resolution, detailed sectoral models. Tethys, an open-access software package written in Python, implements statistical downscaling algorithms to downscale water withdrawal data spatially and temporally to a finer scale. The spatial resolution is downscaled from the region/basin scale to a 0.5 geographic degree grid, and the temporal resolution from year to month. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).
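
    A minimal illustration of the two downscaling steps (not Tethys' actual algorithms; the proxy weights, region total, and monthly profile are invented): a regional annual withdrawal is spread over grid cells in proportion to a proxy weight such as population, then split into months with a seasonal profile, so the regional total is preserved.

    ```python
    import numpy as np

    annual_regional_km3 = 12.0                    # one region's annual withdrawal
    pop = np.array([5.0, 1.0, 0.0, 3.0, 1.0])     # proxy weights for the region's 0.5-degree cells
    cell_annual = annual_regional_km3 * pop / pop.sum()       # spatial downscaling

    monthly_profile = np.array([4, 4, 6, 8, 10, 12, 14, 12, 10, 8, 6, 6], float)
    monthly_profile /= monthly_profile.sum()                  # temporal weights summing to 1
    cell_monthly = np.outer(cell_annual, monthly_profile)     # cells x 12 monthly values
    print(cell_monthly.sum(), "km^3 preserved after downscaling")
    ```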

  8. Simulation of Anomalous Regional Climate Events with a Variable Resolution Stretched Grid GCM

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.

    1999-01-01

    The stretched-grid approach provides efficient down-scaling and consistent interactions between global and regional scales by using a single variable-resolution model for the integrations. It is a workable alternative to the widely used nested-grid approach, introduced over a decade ago as a pioneering step in regional climate modeling. A variable-resolution General Circulation Model (GCM) employing a stretched grid, with enhanced resolution over the US as the area of interest, is used to simulate two anomalous regional climate events: the US summer drought of 1988 and flood of 1993. A special mode of integration using the stretched-grid GCM and a data assimilation system is developed that imitates the nested-grid framework; this mode is useful for inter-comparison purposes and for underlining the differences between the two approaches. The 1988 and 1993 integrations are performed for the two-month period starting in mid May. The regional resolution used in most of the experiments is 60 km. The major goal, and the result, of the study is efficient down-scaling over the area of interest. The monthly mean prognostic regional fields from the stretched-grid integrations are remarkably close to those of the verifying analyses, and simulated precipitation patterns are successfully verified against gauge precipitation observations. The impact of a finer 40 km regional resolution is investigated for the 1993 integration, and an example of recovering subregional precipitation is presented. The results show that the global variable-resolution stretched-grid approach is a viable candidate for regional and subregional climate studies and applications.

  9. On the impact of the resolution on the surface and subsurface Eastern Tropical Atlantic warm bias

    NASA Astrophysics Data System (ADS)

    Martín-Rey, Marta; Lazar, Alban

    2016-04-01

    Tropical variability is of great importance for the climate of adjacent areas. Its sea surface temperature anomalies (SSTA) affect in particular the Brazilian Nordeste and the Sahelian region, as well as the tropical Pacific and the Euro-Atlantic sector. Nevertheless, state-of-the-art climate models exhibit very large systematic errors in reproducing the seasonal cycle and inter-annual variability in the equatorial and coastal African upwelling zones (up to several °C for SST). These biases already exist, though in smaller proportion (several tenths of a °C), in forced ocean models, and affect not only the mixed layer but the whole thermocline. Here, we present an analysis of the impact of horizontal and vertical resolution changes on these biases. Three DRAKKAR NEMO OGCM simulations, forced with the same data set (DFS4.4) but with different grid resolutions, have been analysed: "REF", the reference (1/4°, 46 vertical levels); "HH", with a finer horizontal grid (1/12°, 46 v.l.); and "HV", with a finer vertical grid (1/4°, 75 v.l.). At the surface, a more realistic seasonal SST cycle is produced by HH in the three upwellings, where the warm bias decreases (by 10-20%) during boreal spring and summer. A notable result is that increasing the vertical resolution in HV causes the upwelling SST seasonal cycles to shift earlier. To better understand these results, we estimate the subsurface temperature errors in the three upwellings using various in-situ datasets, providing a three-dimensional view of the biases.

  10. Spatial Downscaling of Alien Species Presences using Machine Learning

    NASA Astrophysics Data System (ADS)

    Daliakopoulos, Ioannis N.; Katsanevakis, Stelios; Moustakas, Aristides

    2017-07-01

    Large-scale, high-resolution data on alien species distributions are essential for spatially explicit assessments of their environmental and socio-economic impacts, and for management interventions for mitigation. However, these data are often unavailable. This paper presents a method that relies on Random Forest (RF) models to distribute alien species presence counts onto a finer-resolution grid, thus achieving spatial downscaling. A sufficiently large number of RF models are trained on random subsets of the dataset, in a bootstrapping approach that accounts for the uncertainty introduced by the subset selection. The method is tested on an approximately 8×8 km2 grid containing presence counts of alien plant species together with several climatic, habitat, and land-use covariates for the Mediterranean island of Crete, Greece. Alien species presence is aggregated to 16×16 km2 and used as a predictor of presence at the original resolution, thus simulating spatial downscaling. Potential explanatory variables included habitat types, land cover richness, endemic species richness, soil type, temperature, precipitation, and freshwater availability. Uncertainty assessment of the spatial downscaling of alien species’ occurrences was also performed, and true/false presences and absences were quantified. The approach is promising for downscaling alien species datasets of larger spatial extent but coarse resolution, where the underlying environmental information is available at a finer resolution than the alien species data. Furthermore, the RF architecture allows for tuning towards operationally optimal sensitivity and specificity, thus providing a decision support tool for designing a resource-efficient alien species census.
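
    A scikit-learn sketch of the downscaling idea (an assumed stand-in for the authors' RF setup; the grid sizes, covariates, and counts are synthetic): presence counts aggregated to the coarse grid are used, together with fine-scale covariates, as predictors of counts on the fine grid, with a bootstrapped model ensemble giving a measure of uncertainty.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n_fine = 400                                      # fine-grid cells (~8x8 km)
    covariates = rng.random((n_fine, 5))              # climate / habitat / land-use covariates
    true_counts = rng.poisson(5 * covariates[:, 0])   # synthetic fine-scale presence counts

    coarse_id = np.arange(n_fine) // 4                # 4 fine cells per coarse (~16x16 km) cell
    coarse_counts = np.bincount(coarse_id, weights=true_counts)
    X = np.column_stack([covariates, coarse_counts[coarse_id]])   # coarse count as a predictor

    preds = []
    for seed in range(20):                            # bootstrap over random training subsets
        idx = rng.choice(n_fine, size=n_fine // 2, replace=False)
        rf = RandomForestRegressor(n_estimators=100, random_state=seed)
        rf.fit(X[idx], true_counts[idx])
        preds.append(rf.predict(X))
    mean_pred = np.mean(preds, axis=0)                # downscaled counts; spread gives uncertainty
    print("fine-grid RMSE:", np.sqrt(np.mean((mean_pred - true_counts) ** 2)))
    ```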

  11. Demeter-W

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-09-27

    Demeter-W, an open-access software package written in Python, consists of extensible module packages. It implements statistical downscaling algorithms to downscale water demand data spatially and temporally to a finer scale. The spatial resolution is downscaled from the region/basin scale to a 0.5 geographic degree grid, and the temporal resolution from year to month. To better understand the driving forces and patterns of global water withdrawal, researchers can use Demeter-W to reconstruct the data sets and examine issues related to water withdrawals at fine spatial and temporal scales.

  12. Enhanced-Resolution Satellite Microwave Brightness Temperature Records for Mapping Boreal-Arctic Landscape Freeze-Thaw Heterogeneity

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Du, J.; Kimball, J. S.

    2017-12-01

    The landscape freeze-thaw (FT) status derived from satellite microwave remote sensing is closely linked to vegetation phenology and productivity, surface energy exchange, evapotranspiration, snow/ice melt dynamics, and trace gas fluxes over land areas affected by seasonally frozen temperatures. A long-term global satellite microwave Earth System Data Record of daily landscape freeze-thaw status (FT-ESDR) was developed using similarly calibrated 37 GHz, vertically polarized (V-pol) brightness temperatures (Tb) from the SMMR, SSM/I, and SSMIS sensors. The FT-ESDR shows mean annual spatial classification accuracies of 90.3% and 84.3% for PM and AM overpass retrievals relative to surface air temperature (SAT) measurement-based FT estimates from global weather stations. However, the coarse FT-ESDR gridding (25 km) is insufficient to distinguish finer-scale FT heterogeneity. In this study, we tested alternative finer-scale FT estimates derived from two enhanced polar-grid (3.125-km and 6-km resolution), 36.5 GHz V-pol Tb records derived from calibrated AMSR-E and AMSR2 sensor observations. The daily FT estimates are derived using a modified seasonal threshold algorithm that classifies daily Tb variations in relation to grid-cell-wise FT thresholds calibrated using ERA-Interim reanalysis SAT, downscaled using a digital terrain map and estimated temperature lapse rates. The resulting polar-grid FT records for a selected study year (2004) show mean annual spatial classification accuracies of 90.1% (84.2%) and 93.1% (85.8%) for the respective PM (AM) 3.125-km and 6-km Tb retrievals relative to in situ SAT measurement-based FT estimates from regional weather stations. Areas with enhanced FT accuracy include water-land boundaries and mountainous terrain. Differences in FT patterns and relative accuracy obtained from the enhanced-grid Tb records were attributed to several factors, including different noise contributions from the underlying Tb processing and spatial mismatches between Tb retrievals and SAT-calibrated FT thresholds.
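
    A toy version of a seasonal threshold classification (a simplification of the algorithm described above; the synthetic Tb series, SAT series, and calibration are invented): each cell's daily Tb is compared to a cell-wise threshold chosen to best separate frozen from thawed days in the reference SAT record.

    ```python
    import numpy as np

    days = np.arange(365)
    rng = np.random.default_rng(0)
    tb = 250 + 15 * np.sin(2 * np.pi * (days - 100) / 365) + 2 * rng.standard_normal(365)  # daily Tb (K)
    sat = -5 + 20 * np.sin(2 * np.pi * (days - 100) / 365)                                  # reference SAT (deg C)

    # calibrate the cell-wise threshold: the Tb value that best splits SAT > 0 from SAT <= 0 days
    candidates = np.linspace(tb.min(), tb.max(), 200)
    accuracy = [np.mean((tb > t) == (sat > 0)) for t in candidates]
    threshold = candidates[int(np.argmax(accuracy))]

    ft_flag = np.where(tb > threshold, "thawed", "frozen")    # daily FT classification
    print("calibrated threshold:", round(threshold, 1), "K;",
          "agreement with SAT:", round(max(accuracy), 3))
    ```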

  13. High Quality Data for Grid Integration Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, Andrew; Draxl, Caroline; Sengupta, Manajit

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. The existing electric grid infrastructure in the US in particular poses significant limitations on wind power expansion. In this presentation we shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets are presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution are now publicly available for more than 126,000 land-based and offshore wind power production sites. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. The need for high-resolution weather data pushes modeling towards finer scales and closer synchronization. We also present how we anticipate such datasets developing in the future, their benefits, and the challenges with using and disseminating such large amounts of data.

  14. Eddy Effects in the General Circulation, Spanning Mean Currents, Mesoscale Eddies, and Topographic Generation, Including Submesoscale Nests

    DTIC Science & Technology

    2014-09-30

    against real-world data in cooperation with William S. Kessler and Hristina Hristova from PMEL (Solomon Sea), and Satoshi Mitarai and Taichi Sakagami from ... refined grids, starting with basin-wide eddy permitting resolutions (although substantially finer than that used in climate modeling), and downscaling it ...

  15. Regional Climate Simulation with a Variable Resolution Stretched Grid GCM: The Regional Down-Scaling Effects

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.

    1999-01-01

    The results obtained with the variable-resolution stretched-grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with emphasis on the regional down-scaling effects and their dependence on the stretched-grid design and parameters. A variable-resolution SG-GCM and SG-DAS using a global stretched grid with fine resolution over an area of interest are a viable new approach to regional and subregional climate studies and applications. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions, and an alternative to the widely used nested-grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation, and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested-grid approach is performed using the developed SG-DAS (Data Assimilation System), which incorporates the SG-GCM. The SG-DAS is run while withholding data over the area of interest; the design imitates the nested-grid framework with boundary conditions provided from analyses. No boundary-condition buffer is needed in this case because of the global domain of integration used for the SG-GCM and SG-DAS. Experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5 degree (and higher) regional resolution, are briefly discussed, and the major aspects of parallelization of the SG-GCM code are outlined. The key objectives of the study are: 1) obtaining efficient down-scaling over the area of interest with fine and very fine resolution; 2) providing consistent interactions between regional and global scales, including a consistent representation of regional energy and water balances; and 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using parallel codes.

  16. Convergence behavior of idealized convection-resolving simulations of summertime deep moist convection over land

    NASA Astrophysics Data System (ADS)

    Panosetti, Davide; Schlemmer, Linda; Schär, Christoph

    2018-05-01

    Convection-resolving models (CRMs) can explicitly simulate deep convection and resolve interactions between convective updrafts. They are thus increasingly used in numerous weather and climate applications. However, the truncation of the continuous energy cascade at scales of O(1 km) poses a serious challenge, as in kilometer-scale simulations the size and properties of the simulated convective cells are often determined by the horizontal grid spacing (Δx). In this study, idealized simulations of deep moist convection over land are performed to assess the convergence behavior of a CRM at Δx = 8, 4, 2, 1 km and 500 m. Two types of convergence estimates are investigated: bulk convergence, addressing domain-averaged and integrated variables related to the water and energy budgets, and structural convergence, addressing the statistics and scales of individual clouds and updrafts. Results show that bulk convergence generally begins at Δx = 4 km, while structural convergence is not yet fully achieved at the kilometer scale, despite some evidence that the resolution sensitivity of updraft velocities and convective mass fluxes decreases at finer resolution. In particular, at finer grid spacings the maximum updraft velocity generally increases, and the size of the smallest clouds is mostly determined by Δx. A number of different experiments are conducted, and it is found that the presence of orography and environmental vertical wind shear yields more energetic structures at scales much larger than Δx, sometimes reducing the resolution sensitivity. Overall the results lend support to the use of kilometer-scale resolutions in CRMs, despite the inability of these models to fully resolve the associated cloud field.

  17. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.

  18. The Sensitivity of Numerical Simulations of Cloud-Topped Boundary Layers to Cross-Grid Flow

    NASA Astrophysics Data System (ADS)

    Wyant, Matthew C.; Bretherton, Christopher S.; Blossey, Peter N.

    2018-02-01

    In mesoscale and global atmospheric simulations with large horizontal domains, strong horizontal flow across the grid is often unavoidable, but its effects on cloud-topped boundary layers have received comparatively little study. Here the effects of cross-grid flow on large-eddy simulations of stratocumulus and trade-cumulus marine boundary layers are studied across a range of grid resolutions (horizontal × vertical) between 500 m × 20 m and 35 m × 5 m. Three cases are simulated: DYCOMS nocturnal stratocumulus, BOMEX trade cumulus, and a GCSS stratocumulus-to-trade-cumulus case. Simulations are performed with a stationary grid (with 4-8 m s⁻¹ horizontal winds blowing through the cyclic domain) and a moving grid (equivalent to subtracting off a fixed, vertically uniform horizontal wind) approximately matching the mean boundary-layer wind speed. For stratocumulus, cross-grid flow produces two primary effects: a filtering of fine-scale resolved turbulent eddies, which reduces cloud-top entrainment, and a vertical broadening of the stratocumulus-top inversion, which enhances cloud-top entrainment. With a coarse (20 m) vertical grid, the former effect dominates and leads to strong increases in cloud cover and LWP, especially as horizontal resolution is coarsened. With a finer (5 m) vertical grid, the latter effect is stronger and leads to small reductions in cloud cover and LWP. For the BOMEX trade-cumulus case, cross-grid flow tends to produce fewer and larger clouds with higher LWP, especially for coarser vertical grid spacing. The results presented are robust to the choice of scalar advection scheme and Courant number.

  19. Sensitivity to grid resolution in the ability of a chemical transport model to simulate observed oxidant chemistry under high-isoprene conditions

    NASA Astrophysics Data System (ADS)

    Yu, Karen; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Miller, Christopher C.; Travis, Katherine R.; Zhu, Lei; Yantosca, Robert M.; Sulprizio, Melissa P.; Cohen, Ron C.; Dibb, Jack E.; Fried, Alan; Mikoviny, Tomas; Ryerson, Thomas B.; Wennberg, Paul O.; Wisthaler, Armin

    2016-04-01

    Formation of ozone and organic aerosol in continental atmospheres depends on whether isoprene emitted by vegetation is oxidized by the high-NOx pathway (where peroxy radicals react with NO) or by low-NOx pathways (where peroxy radicals react by alternate channels, mostly with HO2). We used mixed layer observations from the SEAC4RS aircraft campaign over the Southeast US to test the ability of the GEOS-Chem chemical transport model at different grid resolutions (0.25° × 0.3125°, 2° × 2.5°, 4° × 5°) to simulate this chemistry under high-isoprene, variable-NOx conditions. Observations of isoprene and NOx over the Southeast US show a negative correlation, reflecting the spatial segregation of emissions; this negative correlation is captured in the model at 0.25° × 0.3125° resolution but not at coarser resolutions. As a result, less isoprene oxidation takes place by the high-NOx pathway in the model at 0.25° × 0.3125° resolution (54 %) than at coarser resolution (59 %). The cumulative probability distribution functions (CDFs) of NOx, isoprene, and ozone concentrations show little difference across model resolutions and good agreement with observations, while formaldehyde is overestimated at coarse resolution because excessive isoprene oxidation takes place by the high-NOx pathway with high formaldehyde yield. The good agreement of simulated and observed concentration variances implies that smaller-scale non-linearities (urban and power plant plumes) are not important on the regional scale. Correlations of simulated vs. observed concentrations do not improve with grid resolution because finer modes of variability are intrinsically more difficult to capture. Higher model resolution leads to decreased conversion of NOx to organic nitrates and increased conversion to nitric acid, with total reactive nitrogen oxides (NOy) changing little across model resolutions. Model concentrations in the lower free troposphere are also insensitive to grid resolution. The overall low sensitivity of modeled concentrations to grid resolution implies that coarse resolution is adequate when modeling continental boundary layer chemistry for global applications.

  20. Disaggregating census data for population mapping using random forests with remotely-sensed and ancillary data.

    PubMed

    Stevens, Forrest R; Gaughan, Andrea E; Linard, Catherine; Tatem, Andrew J

    2015-01-01

    High resolution, contemporary data on human population distributions are vital for measuring impacts of population growth, monitoring human-environment interactions and for planning and policy development. Many methods are used to disaggregate census data and predict population densities for finer scale, gridded population data sets. We present a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible, "Random Forest" estimation technique. We outline the combination of widely available, remotely-sensed and geospatial data that contribute to the modeled dasymetric weights and then use the Random Forest model to generate a gridded prediction of population density at ~100 m spatial resolution. This prediction layer is then used as the weighting surface to perform dasymetric redistribution of the census counts at a country level. As a case study we compare the new algorithm and its products for three countries (Vietnam, Cambodia, and Kenya) with other common gridded population data production methodologies. We discuss the advantages of the new method and its gains in accuracy and flexibility over those previous approaches. Finally, we outline how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America.
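
    A sketch of the dasymetric step under assumed, synthetic inputs (the covariates, census units, and training densities are invented; the real workflow is far richer): a random forest predicts a density weighting surface from covariates, and each census unit's count is then redistributed in proportion to those weights so unit totals are preserved.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    n_cells = 600
    covariates = rng.random((n_cells, 6))                # e.g., lights, land cover, roads
    unit_id = rng.integers(0, 10, n_cells)               # census unit of each ~100 m cell
    census_total = {u: 1000.0 * (u + 1) for u in range(10)}

    # train on cells with "known" density (synthetic here), then predict a weight everywhere
    density = 50 * covariates[:, 0] + 10 * covariates[:, 1]
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(covariates, density)
    weights = np.clip(rf.predict(covariates), 1e-6, None)

    pop = np.empty(n_cells)
    for u, total in census_total.items():                # dasymetric redistribution per unit
        in_u = unit_id == u
        pop[in_u] = total * weights[in_u] / weights[in_u].sum()
    print("unit totals preserved:", np.allclose(
        [pop[unit_id == u].sum() for u in range(10)], list(census_total.values())))
    ```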

  1. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved at any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact on exposure estimation error due to these choices by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by a factor of 5. This improvement was due to the improved spatial gradients and model performance found in the finer-resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications on exposure error.

  2. Nested high-resolution large-eddy simulations in WRF to support wind power

    NASA Astrophysics Data System (ADS)

    Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.

    2009-12-01

    The WRF model’s grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is the computation of realistic inflow boundary conditions for large-eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied at GCM to mesoscale resolutions, the WRF model’s nesting behavior at the high-resolution (Δx < 1000 m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of turbulent inflows encountered by turbines. We investigate WRF’s grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-416482).

  3. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies.

    PubMed

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.

  4. Wind turbine wake interactions at field scale: An LES study of the SWiFT facility

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolei; Boomsma, Aaron; Barone, Matthew; Sotiropoulos, Fotis

    2014-06-01

    The University of Minnesota Virtual Wind Simulator (VWiS) code is employed to simulate turbine/atmosphere interactions in the Scaled Wind Farm Technology (SWiFT) facility developed by Sandia National Laboratories in Lubbock, TX, USA. The facility presently consists of three turbines, and the simulations consider the case of wind blowing from the south, such that two turbines are in the free stream and the third turbine is in the direct wake of one upstream turbine with a separation of 5 rotor diameters. Large-eddy simulation (LES) on two successively finer grids is carried out to examine the sensitivity of the computed solutions to grid refinement. It is found that the details of the break-up of the tip vortices into small-scale turbulence structures can only be resolved on the finer grid. It is also shown that the power coefficient CP of the downwind turbine predicted on the coarse grid is somewhat higher than that obtained on the fine mesh. On the other hand, the rms (root-mean-square) of the CP fluctuations is nearly the same on both grids, although more small-scale turbulence structures are resolved upwind of the downwind turbine on the finer grid.

  5. Preliminary Cost Benefit Assessment of Systems for Detection of Hazardous Weather. Volume I,

    DTIC Science & Technology

    1981-07-01

    not be sufficient for adequate stream flow forecasting, it has important potential for real-time flash flood warning. This was illustrated by the 1977...provide a finer spatial resolution of the gridded data. See Table 9. ...The results of a demonstration of the real-time capabilities of a radar-man system ...detailed real-time measurement capabilities and scope for quantitative forecasting is most likely to provide the degree of lead time required if maximum

  6. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used, and a vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form: a relaxation procedure with a first-order accurate inner iteration and a second-order correction, performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother; since the grid is unstructured, a Jacobi-type smoother is chosen, and the multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but different resolutions are generated by successive refinement of the coarsest grid, so that nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure, where the coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
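
    A generic two-grid defect-correction cycle for a 1-D Poisson problem with a weighted Jacobi smoother, given only as an illustration of the coarse-grid correction idea (it is not the authors' unstructured flux-difference-splitting solver; grid sizes and sweep counts are arbitrary):

    ```python
    import numpy as np

    def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
        """Weighted Jacobi sweeps for -u'' = f with homogeneous Dirichlet boundaries."""
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def two_grid(u, f, h):
        u = jacobi(u, f, h, sweeps=3)                    # pre-smooth the fine-grid error
        r = residual(u, f, h)
        rc = r[::2].copy()                               # restrict the residual (injection)
        nc, hc = rc.size, 2 * h
        A = (np.diag(np.full(nc - 2, 2.0)) - np.diag(np.ones(nc - 3), 1)
             - np.diag(np.ones(nc - 3), -1)) / hc ** 2   # coarse 1-D Laplacian (interior nodes)
        ec = np.zeros(nc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])          # solve the coarse defect equation
        e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongate the correction
        return jacobi(u + e, f, h, sweeps=3)             # defect correction + post-smooth

    n = 65
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.pi ** 2 * np.sin(np.pi * x)                   # -u'' = f with exact solution sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid(u, f, h)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```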

  7. Insights into the physico-chemical evolution of pyrogenic organic carbon emissions from biomass burning using coupled Lagrangian-Eulerian simulations

    NASA Astrophysics Data System (ADS)

    Suciu, L. G.; Griffin, R. J.; Masiello, C. A.

    2017-12-01

    Wildfires and prescribed burning are important sources of particulate and gaseous pyrogenic organic carbon (PyOC) emissions to the atmosphere. These emissions impact atmospheric chemistry, air quality and climate, but the spatial and temporal variabilities of these impacts are poorly understood, primarily because small, fresh fire plumes are not well predicted by three-dimensional Eulerian chemical transport models due to their coarse grid size. Generally, this results in underestimation of downwind deposition of PyOC, hydroxyl radical reactivity, secondary organic aerosol formation and ozone (O3) production. However, such models are very good for simulating the multiple atmospheric processes that could affect the lifetimes of PyOC emissions over large spatiotemporal scales. Finer-resolution models, such as Lagrangian reactive plume models (or plume-in-grid models), can be used to trace fresh emissions at the sub-grid level of the Eulerian model. Moreover, Lagrangian plume models need the background chemistry predicted by the Eulerian models to accurately simulate the interactions of the plume material with the background air during plume aging. Therefore, by coupling the two models, the physico-chemical evolution of biomass burning plumes can be tracked from local to regional scales. In this study, we focus on the physico-chemical changes of PyOC emissions from sub-grid to grid levels using an existing chemical mechanism. We hypothesize that finer-scale Lagrangian-Eulerian simulations of several prescribed burns in the U.S. will allow more accurate downwind predictions (validated by airborne observations from smoke plumes) of PyOC emissions (i.e., submicron particulate matter, organic aerosols, refractory black carbon) as well as O3 and other trace gases. Simulation results could be used to optimize the implementation of additional PyOC speciation in the existing chemical mechanism.

  8. Regional photochemical air quality modeling in the Mexico-US border area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, A.; Russell, A.G.; Mejia, G.M.

    1998-12-31

    The Mexico-United States border area has become an increasingly important region due to its commercial, industrial and urban growth. As a result, environmental concerns have risen. Treaties like the North American Free Trade Agreement (NAFTA) have further motivated the development of environmental impact assessment in the area. Of particular concern are air quality and how the activities on both sides of the border contribute to its degradation. This paper presents results of applying a three-dimensional photochemical airshed model to study air pollution dynamics along the Mexico-United States border. In addition, studies were conducted to assess how grid-size resolution impacts the model performance. The model performed within acceptable statistical limits using 12.5 × 12.5 km² grid cells, and the benefits of using finer grids were limited. Results were further used to assess the influence of grid-cell size on the modeling of control strategies, where coarser grids lead to a significant loss of information.

  9. Impact of buildings on surface solar radiation over urban Beijing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Bin; Liou, Kuo-Nan; Gu, Yu

    The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures, compared against the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and plane-parallel schemes are generally ±1–10 W m⁻² at 800 m grid resolution and within ±1 W m⁻² at 4 km resolution. Pairs of positive–negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grid cells relative to the buildings. Both the magnitude and spatiotemporal variations of the flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger with a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10–65%, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.

  10. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2013-12-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation type data from the European Space Agency (ESA) GlobCover Project, and 30 arc-sec Leaf Area Index and Fraction of Absorbed Photosynthetically Active Radiation data from the ESA GlobCarbon Project. Simulations are carried out for the Metropolitan Area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using Global Forecast System (GFS) 0.5°-resolution data from the National Centers for Environmental Prediction, which are also used every 3 h as lateral boundary conditions. Topographic shading is turned on and two soil layers with depths of 0.01 and 1.0 m are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering the period from 6 to 7 September 2007 are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, topographic and land-use databases and grid resolution. Our comparisons show overall good agreement between simulated and observed data and also indicate that the low resolution of the 30 arc-sec soil database from the United States Geological Survey, the soil moisture and skin temperature initial conditions assimilated from the GFS analyses, and the synoptic forcing on the lateral boundaries of the finer grids may affect an adequate spatial description of the meteorological variables.

  11. Disaggregating Census Data for Population Mapping Using Random Forests with Remotely-Sensed and Ancillary Data

    PubMed Central

    Stevens, Forrest R.; Gaughan, Andrea E.; Linard, Catherine; Tatem, Andrew J.

    2015-01-01

    High resolution, contemporary data on human population distributions are vital for measuring impacts of population growth, monitoring human-environment interactions and for planning and policy development. Many methods are used to disaggregate census data and predict population densities for finer scale, gridded population data sets. We present a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible, “Random Forest” estimation technique. We outline the combination of widely available, remotely-sensed and geospatial data that contribute to the modeled dasymetric weights and then use the Random Forest model to generate a gridded prediction of population density at ~100 m spatial resolution. This prediction layer is then used as the weighting surface to perform dasymetric redistribution of the census counts at a country level. As a case study we compare the new algorithm and its products for three countries (Vietnam, Cambodia, and Kenya) with other common gridded population data production methodologies. We discuss the advantages of the new method and its gains in accuracy and flexibility over those previous approaches. Finally, we outline how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America. PMID:25689585

  12. Enhancing GIS Capabilities for High Resolution Earth Science Grids

    NASA Astrophysics Data System (ADS)

    Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.

    2017-12-01

    Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. In addition, we'll provide a general overview of OCGIS's parallel subsetting capabilities including challenges in the design and implementation of a scientific data subsetter.
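
    A plain-NumPy sketch of the chunked-processing idea (this does not use the ESMF/OCGIS APIs; the grids, chunk size, and nearest-neighbour lookup are stand-ins for a real conservative regridding): the high-resolution destination grid is handled one latitude band at a time, so peak memory is bounded by a single chunk rather than the full field.

    ```python
    import numpy as np

    src_lat = np.linspace(-90, 90, 181)                     # 1-degree source latitudes
    src = np.cos(np.deg2rad(src_lat))[:, None] * np.ones((181, 360))   # source field

    dst_lat = np.linspace(-90, 90, 18001)                   # ~0.01-degree target latitudes
    chunk = 1000                                            # target rows processed per pass
    n_chunks = 0
    for start in range(0, dst_lat.size, chunk):
        lats = dst_lat[start:start + chunk]
        rows = np.searchsorted(src_lat, lats).clip(0, src_lat.size - 1)  # nearest source row
        band = src[rows, :]                                 # at most (chunk, 360) values in memory
        # ... write `band` to the output store here instead of holding the whole grid
        n_chunks += 1
    print("processed", dst_lat.size, "target rows in", n_chunks, "chunks")
    ```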

  13. SoilGrids250m: Global gridded soil information based on machine learning

    PubMed Central

    Mendes de Jesus, Jorge; Heuvelink, Gerard B. M.; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N.; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A.; Batjes, Niels H.; Leenaars, Johan G. B.; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods—random forest and gradient boosting and/or multinomial logistic regression—as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10–fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Data Base License. PMID:28207752

  14. SoilGrids250m: Global gridded soil information based on machine learning.

    PubMed

    Hengl, Tomislav; Mendes de Jesus, Jorge; Heuvelink, Gerard B M; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A; Batjes, Niels H; Leenaars, Johan G B; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods-random forest and gradient boosting and/or multinomial logistic regression-as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Data Base License.

  15. A Spectroscopic Catalog of Nearby, High Proper Motion M subdwarfs

    NASA Astrophysics Data System (ADS)

    Hejazi, Neda; Lepine, Sebastien; Homeier, Derek

    2018-01-01

    We present a catalog of 350 metal-poor M subdwarfs, most of them likely from the local Galactic halo population, assembled from medium-resolution observations made at the MDM observatory. All objects are high proper motion stars, with 257 of them having proper motions > 0.4"/yr. We have identified the brightest prototypes for each bin of a grid of 14 spectral subtypes (M0, M0.5, M1, … M6.5) and 9 metallicity bins that go from the moderately metal-poor subdwarfs (sdM), to the more metal-poor extreme subdwarfs (esdM), to the most metal-poor ultra subdwarfs (usdM), each of which is subdivided into three finer metallicity subclasses. The spectral classification by subtype and metallicity class has been determined by a template-fit method, and confirmed by synthetic-model fitting using the BT-Settl spectral grid. We provide the list of the brightest prototypes for each subtype/subclass, as a guide for future high-resolution surveys of low-mass, metal-poor stars.

  16. Grid Sensitivity Study for Slat Noise Simulations

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.; Buning, Pieter G.

    2014-01-01

    The slat noise from the 30P/30N high-lift system is being investigated through computational fluid dynamics simulations in conjunction with a Ffowcs Williams-Hawkings acoustics solver. Many previous simulations have been performed for the configuration, and the case was introduced as a new category for the Second AIAA workshop on Benchmark problems for Airframe Noise Configurations (BANC-II). However, the cost of the simulations has restricted the study of grid resolution effects to a baseline grid and coarser meshes. In the present study, two different approaches are being used to investigate the effect of finer resolution of near-field unsteady structures. First, a standard grid refinement by a factor of two is used, and the calculations are performed by using the same CFL3D solver employed in the majority of the previous simulations. Second, the OVERFLOW code is applied to the baseline grid, but with a 5th-order upwind spatial discretization as compared with the second-order discretization used in the CFL3D simulations. In general, the fine grid CFL3D simulation and OVERFLOW calculation are in very good agreement and exhibit the lowest levels of both surface pressure fluctuations and radiated noise. Although the smaller scales resolved by these simulations increase the velocity fluctuation levels, they appear to mitigate the influence of the larger scales on the surface pressure. These new simulations are used to investigate the influence of the grid on unsteady high-lift simulations and to gain a better understanding of the physics responsible for the noise generation and radiation.

  17. Continental-scale river flow in climate models

    NASA Technical Reports Server (NTRS)

    Miller, James R.; Russell, Gary L.; Caliri, Guilherme

    1994-01-01

    The hydrologic cycle is a major part of the global climate system. There is an atmospheric flux of water from the ocean surface to the continents. The cycle is closed by return flow in rivers. In this paper a river routing model is developed to use with grid box climate models for the whole earth. The routing model needs an algorithm for the river mass flow and a river direction file, which has been compiled for 4 deg x 5 deg and 2 deg x 2.5 deg resolutions. River basins are defined by the direction files. The river flow leaving each grid box depends on river and lake mass, downstream distance, and an effective flow speed that depends on topography. As input the routing model uses monthly land source runoff from a 5-yr simulation of the NASA/GISS atmospheric climate model (Hansen et al.). The land source runoff from the 4 deg x 5 deg resolution model is quartered onto a 2 deg x 2.5 deg grid, and the effect of grid resolution is examined. Monthly flow at the mouth of the world's major rivers is compared with observations, and a global error function for river flow is used to evaluate the routing model and its sensitivity to physical parameters. Three basinwide parameters are introduced: the river length weighted by source runoff, the turnover rate, and the basinwide speed. Although the values of these parameters depend on the resolution at which the rivers are defined, the values should converge as the grid resolution becomes finer. When the routing scheme described here is coupled with a climate model's source runoff, it provides the basis for closing the hydrologic cycle in coupled atmosphere-ocean models by realistically allowing water to return to the ocean at the correct location and with the proper magnitude and timing.
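
    A minimal sketch of one routing step consistent with the description above, in which outflow from each grid box scales with stored river mass and an effective flow speed and inversely with downstream distance. The linear-reservoir form and variable names here are illustrative assumptions, not the GISS routing scheme itself.

      import numpy as np

      def route_step(mass, downstream, speed, distance, runoff, dt):
          """mass: river+lake water per box (kg); downstream: index of the receiving
          box for each box (-1 for an ocean outlet); runoff: land source (kg/s)."""
          outflow = speed * mass / distance           # kg/s leaving each box
          outflow = np.minimum(outflow, mass / dt)    # cannot drain more than is stored
          new_mass = mass + dt * (runoff - outflow)
          for i, j in enumerate(downstream):
              if j >= 0:                              # deliver to the downstream box
                  new_mass[j] += dt * outflow[i]
          return new_mass, outflow

      # Example: four boxes draining in a chain; box 3 discharges to the ocean
      mass = np.array([1e9, 2e9, 1e9, 5e8])
      downstream = np.array([1, 2, 3, -1])
      speed, distance = np.full(4, 0.3), np.full(4, 3e5)   # m/s, m
      runoff = np.full(4, 100.0)                           # kg/s
      mass, discharge = route_step(mass, downstream, speed, distance, runoff, dt=3600.0)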

  18. Los Angeles and San Diego Margin High-Resolution Multibeam Bathymetry and Backscatter Data

    USGS Publications Warehouse

    Dartnell, Peter; Gardner, James V.; Mayer, Larry A.; Hughes-Clarke, John E.

    2004-01-01

    Summary -- The U.S. Geological Survey in cooperation with the University of New Hampshire and the University of New Brunswick mapped the nearshore regions off Los Angeles and San Diego, California using multibeam echosounders. Multibeam bathymetry and co-registered, corrected acoustic backscatter were collected in water depths ranging from about 3 to 900 m offshore Los Angeles and in water depths ranging from about 17 to 1230 m offshore San Diego. Continuous, 16-m spatial resolution, GIS ready format data of the entire Los Angeles Margin and San Diego Margin are available online as separate USGS Open-File Reports. For ongoing research, the USGS has processed sub-regions within these datasets at finer resolutions. The resolution of each sub-region was determined by the density of soundings within the region. This Open-File Report contains the finer resolution multibeam bathymetry and acoustic backscatter data that the USGS, Western Region, Coastal and Marine Geology Team has processed into GIS ready formats as of April 2004. The data are available in ArcInfo GRID and XYZ formats. See the Los Angeles or San Diego maps for the sub-region locations. These datasets in their present form were not originally intended for publication. The bathymetry and backscatter have data-collection and processing artifacts. These data are being made public to fulfill a Freedom of Information Act request. Care must be taken not to confuse artifacts with real seafloor morphology and acoustic backscatter.

  19. Large-Eddy Simulations of Atmospheric Flows Over Complex Terrain Using the Immersed-Boundary Method in the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Ma, Yulong; Liu, Heping

    2017-12-01

    Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.

  20. The T-REX valley wind intercomparison project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidli, J; Billings, B J; Burton, R

    2008-08-07

    An accurate simulation of the evolution of the atmospheric boundary layer is very important, as the evolution of the boundary layer sets the stage for many weather phenomena, such as deep convection. Over mountain areas the evolution of the boundary layer is particularly complex, due to the nonlinear interaction between boundary layer turbulence and thermally-induced mesoscale wind systems, such as the slope and valley winds. As the horizontal resolution of operational forecasts progresses to finer and finer resolution, more and more of the thermally-induced mesoscale wind systems can be explicitly resolved, and it is very timely to document the current state-of-the-art of mesoscale models at simulating the coupled evolution of the mountain boundary layer and the valley wind system. In this paper we present an intercomparison of valley wind simulations for an idealized valley-plain configuration using eight state-of-the-art mesoscale models with a grid spacing of 1 km. Different sets of three-dimensional simulations are used to explore the effects of varying model dynamical cores and physical parameterizations. This intercomparison project was conducted as part of the Terrain-induced Rotor Experiment (T-REX; Grubisic et al., 2008).

  1. Does resolution of flow field observation influence apparent habitat use and energy expenditure in juvenile coho salmon?

    USGS Publications Warehouse

    Tullos, Desiree D.; Walter, Cara; Dunham, Jason B.

    2016-01-01

    This study investigated how the resolution of observation influences interpretation of how fish, juvenile Coho Salmon (Oncorhynchus kisutch), exploit the hydraulic environment in streams. Our objectives were to evaluate how spatial resolution of the flow field observation influenced: (1) the velocities considered to be representative of habitat units; (2) patterns of use of the hydraulic environment by fish; and (3) estimates of energy expenditure. We addressed these objectives using observations within a 1:1 scale physical model of a full-channel log jam in an outdoor experimental stream. Velocities were measured with Acoustic Doppler Velocimetry at a 10 cm grid spacing, whereas fish locations and tailbeat frequencies were documented over time using underwater videogrammetry. Results highlighted that resolution of observation did impact perceived habitat use and energy expenditure, as did the location of measurement within habitat units and the use of averaging to summarize velocities within a habitat unit. In this experiment, the range of velocities and energy expenditure estimates increased with coarsening resolution (grid spacing from 10 to 100 cm), reducing the likelihood of measuring the velocities locally experienced by fish. In addition, the coarser resolutions contributed to fish appearing to select velocities that were higher than what was measured at finer resolutions. These findings indicate the need for careful attention to and communication of resolution of observation in investigating the hydraulic environment and in determining the habitat needs and bioenergetics of aquatic biota.

  2. Evaluating DEM conditioning techniques, elevation source data, and grid resolution for field-scale hydrological parameter extraction

    NASA Astrophysics Data System (ADS)

    Woodrow, Kathryn; Lindsay, John B.; Berg, Aaron A.

    2016-09-01

    Although digital elevation models (DEMs) prove useful for a number of hydrological applications, they are often the end result of numerous processing steps that each contains uncertainty. These uncertainties have the potential to greatly influence DEM quality and to further propagate to DEM-derived attributes including derived surface and near-surface drainage patterns. This research examines the impacts of DEM grid resolution, elevation source data, and conditioning techniques on the spatial and statistical distribution of field-scale hydrological attributes for a 12,000 ha watershed of an agricultural area within southwestern Ontario, Canada. Three conditioning techniques, including depression filling (DF), depression breaching (DB), and stream burning (SB), were examined. The catchments draining to each boundary of 7933 agricultural fields were delineated using the surface drainage patterns modeled from LiDAR data, interpolated to a 1 m, 5 m, and 10 m resolution DEMs, and from a 10 m resolution photogrammetric DEM. The results showed that variation in DEM grid resolution resulted in significant differences in the spatial and statistical distributions of contributing areas and the distributions of downslope flowpath length. Degrading the grid resolution of the LiDAR data from 1 m to 10 m resulted in a disagreement in mapped contributing areas of between 29.4% and 37.3% of the study area, depending on the DEM conditioning technique. The disagreements among the field-scale contributing areas mapped from the 10 m LiDAR DEM and photogrammetric DEM were large, with nearly half of the study area draining to alternate field boundaries. Differences in derived contributing areas and flowpaths among various conditioning techniques increased substantially at finer grid resolutions, with the largest disagreement among mapped contributing areas occurring between the 1 m resolution DB DEM and the SB DEM (37% disagreement) and the DB-DF comparison (36.5% disagreement in mapped areas). These results demonstrate that the decision to use one DEM conditioning technique over another, and the constraints of available DEM data resolution and source, can greatly impact the modeled surface drainage patterns at the scale of individual fields. This work has significance for applications that attempt to optimize best-management practices (BMPs) for reducing soil erosion and runoff contamination within agricultural watersheds.
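
    For reference, one common implementation of the depression-filling (DF) conditioning step is the priority-flood algorithm sketched below: cells are visited outward from the DEM edge in order of increasing elevation, and any cell lower than the level it is reached from is raised to that level. This is a standard DF approach, not necessarily the specific implementation used in the study.

      import heapq
      import numpy as np

      def fill_depressions(dem):
          nrows, ncols = dem.shape
          filled = dem.astype(float).copy()
          visited = np.zeros(dem.shape, dtype=bool)
          heap = []
          # Seed the priority queue with all edge cells
          for r in range(nrows):
              for c in range(ncols):
                  if r in (0, nrows - 1) or c in (0, ncols - 1):
                      heapq.heappush(heap, (filled[r, c], r, c))
                      visited[r, c] = True
          while heap:
              z, r, c = heapq.heappop(heap)
              for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                             (-1, -1), (-1, 1), (1, -1), (1, 1)):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < nrows and 0 <= cc < ncols and not visited[rr, cc]:
                      filled[rr, cc] = max(filled[rr, cc], z)  # raise cells inside pits
                      visited[rr, cc] = True
                      heapq.heappush(heap, (filled[rr, cc], rr, cc))
          return filled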

  3. LES-based filter-matrix lattice Boltzmann model for simulating fully developed turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Zhuo, Congshan; Zhong, Chengwen

    2016-11-01

    In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model was employed in the present FMLB-LES framework, which has been shown to predict the turbulent near-wall region accurately. Fully developed turbulent channel flow simulations were performed at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean streamwise velocity profile, Reynolds stress profile, and root-mean-square velocity fluctuations, agreed well with the LES results of a multiple-relaxation-time (MRT) LB model, and some discrepancies with the direct numerical simulation (DNS) data of Kim et al. were also observed due to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS on a finer grid was also carried out with the present FMLB-D3Q19 model. Comparisons of the various computed turbulence statistics with available DNS benchmark data showed good agreement.

  4. Modeled Full-Flight Aircraft Emissions Impacts on Air Quality and Their Sensitivity to Grid Resolution

    NASA Astrophysics Data System (ADS)

    Vennam, L. P.; Vizuete, W.; Talgo, K.; Omary, M.; Binkowski, F. S.; Xing, J.; Mathur, R.; Arunachalam, S.

    2017-12-01

    Aviation is a unique anthropogenic source with four-dimensional varying emissions, peaking at cruise altitudes (9-12 km). Aircraft emission budgets in the upper troposphere lower stratosphere region and their potential impacts on upper troposphere and surface air quality are not well understood. Our key objective is to use chemical transport models (with prescribed meteorology) to predict aircraft emissions impacts on the troposphere and surface air quality. We quantified the importance of including full-flight intercontinental emissions and increased horizontal grid resolution. The full-flight aviation emissions in the Northern Hemisphere contributed 1.3% (mean, min-max: 0.46, 0.3-0.5 ppbv) and 0.2% (0.013, 0.004-0.02 μg/m3) of total O3 and PM2.5 concentrations at the surface, with Europe showing slightly higher impacts (1.9% (O3 0.69, 0.5-0.85 ppbv) and 0.5% (PM2.5 0.03, 0.01-0.05 μg/m3)) than North America (NA) and East Asia. We computed seasonal aviation-attributable mass flux vertical profiles and aviation perturbations along isentropic surfaces to quantify the transport of cruise altitude emissions at the hemispheric scale. The comparison of coarse (108 × 108 km2) and fine (36 × 36 km2) grid resolutions in NA showed 70 times and 13 times higher aviation impacts for O3 and PM2.5 in coarser domain. These differences are mainly due to the inability of the coarse resolution simulation to capture nonlinearities in chemical processes near airport locations and other urban areas. Future global studies quantifying aircraft contributions should consider model resolution and perhaps use finer scales near major aviation source regions.

  5. Modeled Full-Flight Aircraft Emissions Impacts on Air Quality and Their Sensitivity to Grid Resolution

    PubMed Central

    Vennam, L. P.; Vizuete, W.; Talgo, K.; Omary, M.; Binkowski, F. S.; Xing, J.; Mathur, R.; Arunachalam, S.

    2018-01-01

    Aviation is a unique anthropogenic source with four-dimensional varying emissions, peaking at cruise altitudes (9–12 km). Aircraft emission budgets in the upper troposphere lower stratosphere region and their potential impacts on upper troposphere and surface air quality are not well understood. Our key objective is to use chemical transport models (with prescribed meteorology) to predict aircraft emissions impacts on the troposphere and surface air quality. We quantified the importance of including full-flight intercontinental emissions and increased horizontal grid resolution. The full-flight aviation emissions in the Northern Hemisphere contributed ~1.3% (mean, min–max: 0.46, 0.3–0.5 ppbv) and 0.2% (0.013, 0.004–0.02 μg/m3) of total O3 and PM2.5 concentrations at the surface, with Europe showing slightly higher impacts (1.9% (O3 0.69, 0.5–0.85 ppbv) and 0.5% (PM2.5 0.03, 0.01–0.05 μg/m3)) than North America (NA) and East Asia. We computed seasonal aviation-attributable mass flux vertical profiles and aviation perturbations along isentropic surfaces to quantify the transport of cruise altitude emissions at the hemispheric scale. The comparison of coarse (108 × 108 km2) and fine (36 × 36 km2) grid resolutions in NA showed ~70 times and ~13 times higher aviation impacts for O3 and PM2.5 in coarser domain. These differences are mainly due to the inability of the coarse resolution simulation to capture nonlinearities in chemical processes near airport locations and other urban areas. Future global studies quantifying aircraft contributions should consider model resolution and perhaps use finer scales near major aviation source regions. PMID:29707471

  6. Modeled Full-Flight Aircraft Emissions Impacts on Air Quality and Their Sensitivity to Grid Resolution.

    PubMed

    Vennam, L P; Vizuete, W; Talgo, K; Omary, M; Binkowski, F S; Xing, J; Mathur, R; Arunachalam, S

    2017-01-01

    Aviation is a unique anthropogenic source with four-dimensional varying emissions, peaking at cruise altitudes (9-12 km). Aircraft emission budgets in the upper troposphere lower stratosphere region and their potential impacts on upper troposphere and surface air quality are not well understood. Our key objective is to use chemical transport models (with prescribed meteorology) to predict aircraft emissions impacts on the troposphere and surface air quality. We quantified the importance of including full-flight intercontinental emissions and increased horizontal grid resolution. The full-flight aviation emissions in the Northern Hemisphere contributed ~1.3% (mean, min-max: 0.46, 0.3-0.5 ppbv) and 0.2% (0.013, 0.004-0.02 μg/m 3 ) of total O 3 and PM 2.5 concentrations at the surface, with Europe showing slightly higher impacts (1.9% (O 3 0.69, 0.5-0.85 ppbv) and 0.5% (PM 2.5 0.03, 0.01-0.05 μg/m 3 )) than North America (NA) and East Asia. We computed seasonal aviation-attributable mass flux vertical profiles and aviation perturbations along isentropic surfaces to quantify the transport of cruise altitude emissions at the hemispheric scale. The comparison of coarse (108 × 108 km 2 ) and fine (36 × 36 km 2 ) grid resolutions in NA showed ~70 times and ~13 times higher aviation impacts for O 3 and PM 2.5 in coarser domain. These differences are mainly due to the inability of the coarse resolution simulation to capture nonlinearities in chemical processes near airport locations and other urban areas. Future global studies quantifying aircraft contributions should consider model resolution and perhaps use finer scales near major aviation source regions.

  7. Hexagonal Pixels and Indexing Scheme for Binary Images

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    2004-01-01

    A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches for line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, then rotate the rectangular-grid image ±45° before sampling to perform checking for line segments at angular intervals of 15°.
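
    A minimal sketch of the resampling step described above: each hexagon centre, on a lattice whose cell width is one third of the square-pixel width, takes the value of the nearest rectangular pixel. The offset-row centre layout used here is a generic hexagonal lattice, not necessarily the exact indexing scheme of the cited work.

      import numpy as np

      def resample_to_hex(binary_img, rect_px=1.0):
          hex_w = rect_px / 3.0                      # hexagon width <= 1/3 of pixel width
          dy = hex_w * np.sqrt(3.0) / 2.0            # vertical spacing of hexagon rows
          ny = int(binary_img.shape[0] * rect_px / dy)
          nx = int(binary_img.shape[1] * rect_px / hex_w)
          hex_vals = {}
          for j in range(ny):
              x_off = 0.5 * hex_w if j % 2 else 0.0  # odd rows shifted by half a cell
              for i in range(nx):
                  x, y = i * hex_w + x_off, j * dy
                  r, c = int(y / rect_px), int(x / rect_px)   # nearest source pixel
                  if r < binary_img.shape[0] and c < binary_img.shape[1]:
                      hex_vals[(j, i)] = bool(binary_img[r, c])
          return hex_vals

      img = np.zeros((8, 8), dtype=int); img[3, :] = 1       # a horizontal line segment
      hexes = resample_to_hex(img)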

  8. Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.

    2001-01-01

    A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.

  9. Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    NASA Technical Reports Server (NTRS)

    Moon, Young J.; Liou, Meng-Sing

    1989-01-01

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.

  10. On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Jan, A.; Painter, S. L.; Coon, E. T.

    2017-12-01

    Increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling on finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and their implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model parameterized by small-scale spatial heterogeneities for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which act as a benchmark for our simulations) and hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to efficiently move forward to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the DOE Office of Science.
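
    For orientation, a standard Manning-based diffusion-wave equation for overland flow is given below, with two illustrative subgrid factors of the kind the abstract alludes to: f(h) for the inundated fraction of the cell in the storage term and Φ(h) scaling the conveyance for microtopographic obstructions. The specific forms of f and Φ are placeholders, not the parameterization developed by the authors.

      % Diffusion-wave surface flow with illustrative subgrid storage/flow factors
      \[
        \frac{\partial\,[\,f(h)\,h\,]}{\partial t} \;+\; \nabla \cdot \mathbf{q} \;=\; Q_{\mathrm{src}},
        \qquad
        \mathbf{q} \;=\; -\,\Phi(h)\,\frac{h^{5/3}}{n_{\mathrm{man}}\,\sqrt{\lvert \nabla (z_b + h) \rvert}}\;\nabla (z_b + h),
      \]
      % where h is ponded depth, z_b the bed elevation, and n_man the Manning coefficient.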

  11. Impacts of spatial resolution and representation of flow connectivity on large-scale simulation of floods

    NASA Astrophysics Data System (ADS)

    Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan

    2017-10-01

    Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
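
    The connectivity difference discussed above can be illustrated by the data structures alone: with single downstream connectivity (SDC) each unit catchment passes all of its outflow to one neighbour, whereas with multiple downstream connectivity (MDC) the outflow is split over several neighbours with fractional weights. The sketch below is purely illustrative and does not reflect CaMa-Flood internals.

      def route_sdc(outflow, downstream):
          """downstream[i] is the single receiving cell of cell i (None at the outlet)."""
          inflow = [0.0] * len(outflow)
          for i, j in enumerate(downstream):
              if j is not None:
                  inflow[j] += outflow[i]
          return inflow

      def route_mdc(outflow, downstream_fracs):
          """downstream_fracs[i] is a list of (receiving cell, fraction) pairs summing to 1."""
          inflow = [0.0] * len(outflow)
          for i, targets in enumerate(downstream_fracs):
              for j, frac in targets:
                  inflow[j] += frac * outflow[i]
          return inflow

      outflow = [10.0, 5.0, 2.0]
      print(route_sdc(outflow, [2, 2, None]))
      print(route_mdc(outflow, [[(1, 0.3), (2, 0.7)], [(2, 1.0)], []]))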

  12. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  13. Searching for the right scale in catchment hydrology: the effect of soil spatial variability in simulated states and fluxes

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-04-01

    The advances in computer science and the availability of new detailed datasets have led to a growing number of distributed hydrological models applied to finer and finer grid resolutions for larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee a better understanding of the hydrological processes and may not even be necessary for specific modelling applications. In the present study, this topic is further discussed in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used for the characterization of the soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed regarding their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale (RES) concept proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output also at different temporal scales. The results show that small scale soil variabilities are not relevant when the integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output by spatial and temporal scales. Although the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer resolution input. For this reason, the integration in this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide strong support for defining the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029
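
    A sketch of an RES-style spatial aggregation analysis of the kind described above: outputs from several model runs (different soil realizations) are block-averaged at increasing aggregation scales and the spread among the runs is recorded, showing at which scale the input variability stops mattering. The data and the spread measure are synthetic placeholders, not the mHM analysis itself.

      import numpy as np

      def spread_vs_scale(runs, scales):
          """runs: array (n_runs, ny, nx) of a simulated state (e.g. soil moisture)."""
          result = {}
          for s in scales:
              ny, nx = runs.shape[1] // s, runs.shape[2] // s
              blocks = runs[:, :ny * s, :nx * s].reshape(runs.shape[0], ny, s, nx, s)
              agg = blocks.mean(axis=(2, 4))              # block-average each run at scale s
              result[s] = agg.std(axis=0).mean()          # mean spread among realizations
          return result

      rng = np.random.default_rng(1)
      runs = rng.normal(0.3, 0.05, size=(5, 128, 128))    # five soil realizations (synthetic)
      for scale, spread in spread_vs_scale(runs, [1, 4, 16, 64]).items():
          print(f"aggregation scale {scale:>3} cells: spread {spread:.4f}")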

  14. Fine-scale application of WRF-CAM5 during a dust storm episode over East Asia: Sensitivity to grid resolutions and aerosol activation parameterizations

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin

    2018-03-01

    An advanced online-coupled meteorology and chemistry model WRF-CAM5 has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36-, 12-, and 4-km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance and investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations and to examine aerosol-cloud interactions and their impacts on the air quality. A comprehensive model evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model can well predict major meteorological variables such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation across different resolutions with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP) due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolutions, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations with an alternative (i.e., the Fountoukis and Nenes (FN) series scheme) and the default (i.e., AG scheme) aerosol activation scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. Sensitivity simulations using the FN series scheme also have large impacts on radiations, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on the meteorology and air quality but to lesser extent as compared to differences between the FN series and AG schemes. Compared to the overall differences between the FN series and AG schemes, impacts of adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events and indicate their importance in modulating regional climate over East Asia.

  15. Performance of European chemistry transport models as function of horizontal resolution

    NASA Astrophysics Data System (ADS)

    Schaap, M.; Cuvelier, C.; Hendriks, C.; Bessagnet, B.; Baldasano, J. M.; Colette, A.; Thunis, P.; Karam, D.; Fagerli, H.; Graff, A.; Kranenburg, R.; Nyiri, A.; Pay, M. T.; Rouïl, L.; Schulz, M.; Simpson, D.; Stern, R.; Terrenoire, E.; Wind, P.

    2015-07-01

    Air pollution causes adverse effects on human health as well as ecosystems and crop yield and also has an impact on climate change through short-lived climate forcers. To design mitigation strategies for air pollution, 3D Chemistry Transport Models (CTMs) have been developed to support the decision process. Increases in model resolution may provide more accurate and detailed information, but will cubically increase computational costs and pose additional challenges concerning high resolution input data. The motivation for the present study was therefore to explore the impact of using finer horizontal grid resolution for policy support applications of the European Monitoring and Evaluation Programme (EMEP) model within the Long Range Transboundary Air Pollution (LRTAP) convention. The goal was to determine the "optimum resolution" at which additional computational efforts do not provide increased model performance using presently available input data. Five regional CTMs performed four runs for 2009 over Europe at different horizontal resolutions. The responses to an increase in resolution are broadly consistent across all models. The largest response was found for NO2 followed by PM10 and O3. Model resolution does not impact model performance for rural background conditions. However, increasing model resolution improves the model performance at stations in and near large conglomerations. The statistical evaluation showed that the increased resolution better reproduces the spatial gradients in pollution regimes, but does not significantly improve the model performance in reproducing observed temporal variability. This study clearly shows that increasing model resolution is advantageous, and that leaving a resolution of 50 km in favour of a resolution between 10 and 20 km is practical and worthwhile. As about 70% of the model response to grid resolution is determined by the difference in the spatial emission distribution, improved emission allocation procedures at high spatial and temporal resolution are a crucial factor for further model resolution improvements.

  16. Development of Finer Spatial Resolution Optical Properties from MODIS

    DTIC Science & Technology

    2008-02-04

    infrared (SWIR) channels at 1240 nm and 2130 nm. The increased resolution spectral Rrs channels are input into bio-optical algorithms (Quasi...processes. Additionally, increased resolution is required for validation of ocean color products in coastal regions due to the shorter spatial scales of...with in situ Rrs data to determine the "best" method in coastal regimes. We demonstrate that finer resolution is required for validation of coastal

  17. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    NASA Astrophysics Data System (ADS)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

    Regional climate models (RCMs) are used at increasingly fine resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacing approaching the convective-permitting regimes are representative examples. The climate runs are computationally very demanding and do not always show improvements. These depend on the region, variable and object of study. The gains or losses associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different resolution RCM simulations, are known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer resolution simulations, in comparison to their forcing data or coarser resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between diverse resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are here presented. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons, for precipitation. When both RCM resolutions are directly compared the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the improvement at higher resolution is generally greater than at lower resolution across seasons and regions. For temperature, the added value is smaller. Acknowledgments: The authors wish to acknowledge the SOLAR (PTDC/GEOMET/7078/2014) and FCT UID/GEO/50019/2013 (Instituto Dom Luiz) projects.
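
    A sketch of a distribution added value (DAV) computation built on a PDF matching score: histograms of the high- and low-resolution simulations are each compared with the observed histogram via the overlap score S = Σ_bins min(p_model, p_obs), and DAV is the relative change in S. Using this particular overlap (Perkins-type) score is an assumption about the "PDF matching score" named in the abstract, not a statement of the authors' exact formula.

      import numpy as np

      def pdf_match_score(model, obs, bins):
          p_m, _ = np.histogram(model, bins=bins)
          p_o, _ = np.histogram(obs, bins=bins)
          p_m = p_m / p_m.sum()
          p_o = p_o / p_o.sum()
          return np.minimum(p_m, p_o).sum()          # 1.0 means identical binned PDFs

      def dav(hi_res, lo_res, obs, bins):
          s_hi = pdf_match_score(hi_res, obs, bins)
          s_lo = pdf_match_score(lo_res, obs, bins)
          return 100.0 * (s_hi - s_lo) / s_lo        # percent added value of the finer run

      rng = np.random.default_rng(2)
      obs = rng.gamma(2.0, 3.0, 5000)                # daily precipitation (synthetic)
      hi = rng.gamma(2.0, 3.1, 5000)                 # finer grid: closer to obs
      lo = rng.gamma(1.5, 3.5, 5000)                 # coarser grid: more biased
      bins = np.linspace(0, 40, 41)
      print(f"DAV = {dav(hi, lo, obs, bins):.1f}%")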

  18. High resolution energy analyzer for broad ion beam characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanarov, V.; Hayes, A.; Yevtukhov, R.

    2008-09-15

    Characterization of the ion energy distribution function (IEDF) of low energy high current density ion beams by conventional retarding field and deflection type energy analyzers is limited due to finite ion beam emittance and beam space charge spreading inside the analyzer. These deficiencies are, to a large extent, overcome with the recent development of the variable-focusing retarding field energy analyzer (RFEA), which has a cylindrical focusing electrode preceding the planar retarding grid. The principal concept of this analyzer is conversion of a divergent charged particle beam into a quasiparallel beam before analyzing it by the planar retarding field. This allows analysis of the beam particle total kinetic energy distribution with greatly improved energy resolution. Whereas this concept was first applied to analyze 5-10 keV pulsed electron beams, the present authors have adapted it to analyze the energy distribution of a low energy (≤1 keV) broad ion beam. In this paper we describe the RFEA design, which was modified from the original, mainly as required by the specifics of broad ion beam energy analysis, and the device experimental characterization and modeling results. Among the modifications, an orifice electrode placed in front of the RFEA provides better spatial resolution of the broad ion beam ion optics emission region and reduces the beam plasma density in the vicinity of the analyzer entry. An electron repeller grid placed in front of the RFEA collector was found critical for suppressing secondary electrons, both those incoming to the collector and those released from its surface, and improved energy spectrum measurement repeatability and accuracy. The use of finer mesh single- and double-grid retarding structures reduces the retarding grid lens effect and improves the analyzer energy resolution and accuracy of the measured spectrum mean energy. However, additional analyzer component and configuration improvements did not further change the analyzed IEDF shape or mean energy value. This led us to conclude that the optimized analyzer construction provides an energy resolution considerably narrower than the investigated ion beam energy spectrum full width at half maximum, and the derived energy spectrum is an objective and accurate representation of the analyzed broad ion beam energy distribution characteristics. A quantitative study of the focusing voltage and retarding grid field effects based on the experimental data and modeling results has supported this conclusion.

  19. An economic prediction of the finer resolution level wavelet coefficients in electronic structure calculations.

    PubMed

    Nagy, Szilvia; Pipek, János

    2015-12-21

    In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to compute with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution-level coefficients in a very economical way. In the present contribution, we determine whether the method can be applied to predict not only the first, but also the other, higher resolution-level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.

  20. Decadal Variability of Temperature and Salinity in the Northwest Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Mishonov, A. V.; Seidov, D.; Reagan, J. R.; Boyer, T.; Parsons, A. R.

    2017-12-01

    There are only a few regions in the World Ocean where the density of observations collected over the past 60 years is sufficient for reliable data mapping with spatial resolutions finer than one degree. The Northwest Atlantic basin is one such region, where a spatial resolution of gridded temperature and salinity fields comparable to those generated by eddy-resolving numerical models of ocean circulation has recently become available. Using the new high-resolution Northwest Atlantic Regional Climatology, built on quarter-degree and one-tenth-degree resolution fields, we analyzed decadal variability and trends of temperature and salinity over 60 years in the Northwest Atlantic, and two 30-year ocean climates of 1955-1984 and 1985-2012 to evaluate the oceanic climate shift in this region. The 30-year climate shift is demonstrated using an innovative 3-D visualization of temperature and salinity. Spatial and temporal variability of heat accumulation found in previous research of the entire North Atlantic Ocean persists in the Northwest Atlantic Ocean. Salinity changes between the two 30-year climates were also computed and are discussed.

  1. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between around 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. To have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).
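
    A generic version of the spectral diagnostic discussed above: a radially binned turbulent kinetic energy spectrum E(k) computed from a periodic velocity field with an FFT, against which a Kolmogorov k^(-5/3) range can be checked. This is a standard diagnostic sketch, not the Miranda or Ares analysis code.

      import numpy as np

      def energy_spectrum(u, v, w):
          n = u.shape[0]                                     # assume a cubic periodic box
          ke = sum(np.abs(np.fft.fftn(c) / c.size) ** 2 for c in (u, v, w)) * 0.5
          k = np.fft.fftfreq(n, d=1.0 / n)                   # integer wavenumbers
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          kmag = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
          E = np.bincount(kmag.ravel(), weights=ke.ravel())  # sum KE into wavenumber shells
          return np.arange(E.size), E

      rng = np.random.default_rng(3)
      u, v, w = (rng.normal(size=(64, 64, 64)) for _ in range(3))
      k, E = energy_spectrum(u, v, w)
      # Compare E over an inertial range against a C * k**(-5/3) reference line.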

  2. A Validation Study of the Compressible Rayleigh–Taylor Instability Comparing the Ares and Miranda Codes

    DOE PAGES

    Rehagen, Thomas J.; Greenough, Jeffrey A.; Olson, Britton J.

    2017-04-20

    In this paper, the compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between around 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. To have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares).

  3. Cause and Cure-Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji

    2017-01-01

    In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can be easily applied to solve problems with complex geometries. However, (a) accurate solution of a high-Reynolds-number flow field near a solid wall requires that the grid intervals along the direction normal to the wall be much finer than those in a direction parallel to the wall, so the use of grid cells with extremely high aspect ratio (10^3 to 10^6) may become mandatory; and (b) unlike for quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations involving triangular/tetrahedral grids tends to deteriorate rapidly as cell aspect ratio increases. As a result, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development, and (b) the importance of accurate resolution of high-Reynolds-number flow fields near a solid wall, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells, as will be presented in the main paper. By avoiding the pitfalls identified by the 2D framework and its 3D extension, it has been shown numerically that this accuracy deterioration can be avoided.

  4. Bringing the Coastal Zone into Finer Focus

    NASA Astrophysics Data System (ADS)

    Guild, L. S.; Hooker, S. B.; Kudela, R. M.; Morrow, J. H.; Torres-Perez, J. L.; Palacios, S. L.; Negrey, K.; Dungan, J. L.

    2015-12-01

    Measurements over extents from submeter to 10s of meters are critical science requirements for the design and integration of remote sensing instruments for coastal zone research. Various coastal ocean phenomena operate at different scales (e.g. meters to kilometers). For example, river plumes and algal blooms have typical extents of 10s of meters and therefore can be resolved with satellite data; however, shallow benthic ecosystem (e.g., coral, seagrass, and kelp) biodiversity and change are best studied at resolutions of submeter to meter, below the pixel size of typical satellite products. The delineation of natural phenomena does not fit neatly into gridded pixels, and the coastal zone is complicated by mixed pixels at the land-sea interface with a range of bio-optical signals from terrestrial and water components. In many standard satellite products, these coastal mixed pixels are masked out because they confound algorithms for the ocean color parameter suite. In order to obtain data at the land/sea interface, finer spatial resolution satellite data can be used, yet spectral resolution is sacrificed. This remote sensing resolution challenge thwarts the advancement of research in the coastal zone. Further, remote sensing of benthic ecosystems and shallow sub-surface phenomena is challenged by the requirement to sense through the sea surface and through a water column with varying light conditions from the open ocean to the water's edge. For coastal waters, >80% of the remote sensing signal is scattered or absorbed by atmospheric constituents, sun glint from the sea surface, and water column components. In addition to in-water measurements from various platforms (e.g., ship, glider, mooring, and divers), low-altitude aircraft outfitted with high-quality bio-optical radiometer sensors, with targeted channels matched to in-water sensors and to higher-altitude platform sensors for ocean color products, bridge the sea-truth measurements to the pixels acquired from satellite and high-altitude platforms. We highlight a novel NASA airborne calibration, validation, and research capability for addressing the coastal remote sensing resolution challenge.

  5. Application of FUN3D Solver for Aeroacoustics Simulation of a Nose Landing Gear Configuration

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.

    2011-01-01

    Numerical simulations have been performed for a nose landing gear configuration corresponding to the experimental tests conducted in the Basic Aerodynamic Research Tunnel at NASA Langley Research Center. A widely used unstructured grid code, FUN3D, is examined for solving the unsteady flow field associated with this configuration. A series of successively finer unstructured grids has been generated to assess the effect of grid refinement. Solutions have been obtained on purely tetrahedral grids as well as mixed element grids using hybrid RANS/LES turbulence models. The agreement of FUN3D solutions with experimental data on the same size mesh is better on mixed element grids compared to pure tetrahedral grids, and in general improves with grid refinement.

  6. The added value of stochastic spatial disaggregation for short-term rainfall forecasts currently available in Canada

    NASA Astrophysics Data System (ADS)

    Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René

    2017-11-01

    Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, their spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data are freely available on a 1-degree grid (about 100 km), while data from the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation and/or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10 km) within a forecast framework (6 h). For 30 6-h rainfall events occurring within a 40,000-km² area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated to 100 km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.
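    For illustration only, here is a minimal sketch of mass-conserving stochastic disaggregation, which conveys the general idea but not GSDM's actual Gibbs-sampling formulation: each coarse cell's 6-h depth is spread over a block of fine cells using random weights that are rescaled so the coarse-cell mean is preserved exactly.

      import numpy as np

      def disaggregate(coarse, factor=10, seed=0):
          """Spread each coarse-cell depth over a factor x factor block of fine cells."""
          rng = np.random.default_rng(seed)
          ny, nx = coarse.shape
          fine = np.empty((ny * factor, nx * factor))
          for j in range(ny):
              for i in range(nx):
                  w = rng.gamma(shape=2.0, scale=1.0, size=(factor, factor))
                  w *= coarse[j, i] / w.mean()        # enforce mass conservation
                  fine[j*factor:(j+1)*factor, i*factor:(i+1)*factor] = w
          return fine

      coarse = np.array([[4.0, 0.5], [12.0, 2.0]])    # 6-h depths (mm) on ~100-km cells
      fine = disaggregate(coarse)                     # ~10-km cells
      # Block means of the fine field reproduce the input coarse depths.
      print(fine.reshape(2, 10, 2, 10).mean(axis=(1, 3)))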

  7. Optimal Grid Size for Inter-Comparability of MODIS And VIIRS Vegetation Indices at Level 2G or Higher

    NASA Astrophysics Data System (ADS)

    Campagnolo, M.; Schaaf, C.

    2016-12-01

    Due to the necessity of time compositing and other user requirements, vegetation indices, as well as many other EOS derived products, are distributed in a gridded format (level L2G or higher) using an equal area sinusoidal grid, at grid sizes of 232 m, 463 m or 926 m. In this process, the actual surface signal suffers some degradation, caused by both the sensor's point spread function and this resampling from swath to the regular grid. The magnitude of that degradation depends on a number of factors, such as surface heterogeneity, band nominal resolution, observation geometry and grid size. In this research, the effect of grid size is quantified for MODIS and VIIRS (at five EOS validation sites with distinct land covers), for the full range of view zenith angles, and at grid sizes of 232 m, 253 m, 309 m, 371 m, 397 m and 463 m. This allows us to compare MODIS and VIIRS gridded products for the same scenes, and to determine the grid size at which these products are most similar. Towards that end, simulated MODIS and VIIRS bands are generated from Landsat 8 surface reflectance images at each site and gridded products are then derived by using maximum obscov resampling. Then, for every grid size, the original Landsat 8 NDVI and the derived MODIS and VIIRS NDVI products are compared. This methodology can be applied to other bands and products, to determine which spatial aggregation overall is best suited for EOS to S-NPP product continuity. Results for MODIS (250 m bands) and VIIRS (375 m bands) NDVI products show that finer grid sizes tend to be better at preserving the original signal. Significant degradation for gridded NDVI occurs when grid size is larger than 253 m (MODIS) and 371 m (VIIRS). Our results suggest that the current MODIS "500 m" (actually 463 m) grid size is best for product continuity. Note, however, that up to that grid size value, MODIS gridded products are somewhat better at preserving the surface signal than VIIRS, except at very high VZA.
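    As a hedged sketch of the comparison logic only (synthetic reflectances, simple block averaging rather than the study's maximum-obscov gridding), the example below computes NDVI from red and NIR bands, aggregates it to coarser grid sizes, and measures how much of the fine-scale signal is lost at each size.

      import numpy as np

      def ndvi(red, nir):
          return (nir - red) / (nir + red + 1e-12)

      def block_mean(a, f):
          ny, nx = a.shape
          return a[:ny - ny % f, :nx - nx % f].reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))

      rng = np.random.default_rng(1)
      red = 0.05 + 0.10 * rng.random((240, 240))      # stand-in fine-resolution reflectances
      nir = 0.30 + 0.30 * rng.random((240, 240))
      fine = ndvi(red, nir)
      for f in (2, 4, 8):                             # coarsening factors (larger grid sizes)
          coarse = block_mean(fine, f)
          back = np.kron(coarse, np.ones((f, f)))     # re-expand for a per-pixel comparison
          rmse = np.sqrt(np.mean((fine - back)**2))
          print(f"aggregation factor {f}: RMSE vs fine NDVI = {rmse:.4f}")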

  8. Local Data Integration in East Central Florida

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Manobianco, John T.

    1999-01-01

    The Applied Meteorology Unit has configured a Local Data Integration System (LDIS) for east central Florida which assimilates in-situ and remotely-sensed observational data into a series of high-resolution gridded analyses. The ultimate goal for running LDIS is to generate products that may enhance weather nowcasts and short-range (less than 6 h) forecasts issued in support of the 45th Weather Squadron (45 WS), Spaceflight Meteorology Group (SMG), and the Melbourne National Weather Service (NWS MLB) operational requirements. LDIS has the potential to provide added value for nowcasts and short-term forecasts for two reasons. First, it incorporates all data operationally available in east central Florida. Second, it is run at finer spatial and temporal resolutions than current national-scale operational models such as the Rapid Update Cycle and Eta models. LDIS combines all available data to produce grid analyses of primary variables (wind, temperature, etc.) at specified temporal and spatial resolutions. These analyses of primary variables can be used to compute diagnostic quantities such as vorticity and divergence. This paper demonstrates the utility of LDIS over east central Florida for a warm season case study. The evolution of a significant thunderstorm outflow boundary is depicted through horizontal and vertical cross section plots of wind speed, divergence, and circulation. In combination with a suitable visualization tool, LDIS may provide users with a more complete and comprehensive understanding of evolving mesoscale weather than could be developed by individually examining the disparate data sets over the same area and time.

  9. NASCAP simulation of PIX 2 experiments

    NASA Technical Reports Server (NTRS)

    Roche, J. C.; Mandell, M. J.

    1985-01-01

    The latest version of the NASCAP/LEO digital computer code used to simulate the PIX 2 experiment is discussed. NASCAP is a finite-element code and previous versions were restricted to a single fixed mesh size. As a consequence the resolution was dictated by the largest physical dimension to be modeled. The latest version of NASCAP/LEO can subdivide selected regions. This permitted the modeling of the overall Delta launch vehicle in the primary computational grid at a coarse resolution, with subdivided regions at finer resolution being used to pick up the details of the experiment module configuration. Langmuir probe data from the flight were used to estimate the space plasma density and temperature and the Delta ground potential relative to the space plasma. This information is needed for input to NASCAP. Because of the uncertainty or variability in the values of these parameters, it was necessary to explore a range around the nominal value in order to determine the variation in current collection. The flight data from PIX 2 were also compared with the results of the NASCAP simulation.

  10. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  11. SoilInfo App: global soil information on your palm

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav; Mendes de Jesus, Jorge

    2015-04-01

    ISRIC - World Soil Information released in 2014 an app for mobile devices called 'SoilInfo' (http://soilinfo-app.org), which aims at providing free access to global soil data. The SoilInfo App (available for Android v.4.0 Ice Cream Sandwich or higher, and Apple iOS v.6.x and v.7.x) currently serves the SoilGrids1km data - a stack of soil property and class maps at six standard depths at a resolution of 1 km (30 arc seconds), predicted using automated geostatistical mapping and global soil data models. The list of served soil data includes: soil organic carbon, soil pH, sand, silt and clay fractions (%), bulk density (kg/m3), cation exchange capacity of the fine earth fraction (cmol+/kg), coarse fragments (%), World Reference Base soil groups, and USDA Soil Taxonomy suborders (DOI: 10.1371/journal.pone.0105992). New soil properties and classes will be continuously added to the system. SoilGrids1km data are available for download under a Creative Commons non-commercial license via http://soilgrids.org. They are also accessible via a Representational State Transfer (REST) API service (http://rest.soilgrids.org). The SoilInfo App mimics common weather apps, but is also largely inspired by crowdsourcing systems such as OpenStreetMap, Geo-wiki and similar. Two development aspects of the SoilInfo App and SoilGrids are constantly being worked on: data quality, in terms of accuracy of spatial predictions and derived information, and data usability, in terms of ease of access and ease of use (i.e. flexibility of the cyberinfrastructure / functionalities such as the REST SoilGrids API, the SoilInfo App, etc.). The development focus in 2015 is on improving the thematic and spatial accuracy of SoilGrids predictions, primarily by using finer resolution covariates (250 m) and machine learning algorithms (such as random forests) to improve spatial predictions.

  12. The National Map - Orthoimagery

    USGS Publications Warehouse

    Mauck, James; Brown, Kim; Carswell, William J.

    2009-01-01

    Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to measure distance, calculate areas, determine shapes of features, calculate directions, determine accurate coordinates, determine land cover and use, perform change detection, and update maps. The standard digital orthoimage is a 1-m or finer resolution, natural color or color infra-red product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.

  13. Global spectroscopic survey of cloud thermodynamic phase at high spatial resolution, 2005-2015

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Kahn, Brian H.; Green, Robert O.; Chien, Steve A.; Middleton, Elizabeth M.; Tran, Daniel Q.

    2018-02-01

    The distribution of ice, liquid, and mixed phase clouds is important for Earth's planetary radiation budget, impacting cloud optical properties, evolution, and solar reflectivity. Most remote orbital thermodynamic phase measurements observe kilometer scales and are insensitive to mixed phases. This under-constrains important processes with outsize radiative forcing impact, such as spatial partitioning in mixed phase clouds. To date, the fine spatial structure of cloud phase has not been measured at global scales. Imaging spectroscopy of reflected solar energy from 1.4 to 1.8 µm can address this gap: it directly measures ice and water absorption, a robust indicator of cloud top thermodynamic phase, with spatial resolution of tens to hundreds of meters. We report the first such global high spatial resolution survey based on data from 2005 to 2015 acquired by the Hyperion imaging spectrometer onboard NASA's Earth Observer 1 (EO-1) spacecraft. Seasonal and latitudinal distributions corroborate observations by the Atmospheric Infrared Sounder (AIRS). For extratropical cloud systems, just 25 % of variance observed at GCM grid scales of 100 km was related to irreducible measurement error, while 75 % was explained by spatial correlations possible at finer resolutions.
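    The variance partitioning quoted above can be illustrated with a short, self-contained sketch (placeholder data, not the Hyperion retrievals): by the law of total variance, the fraction explained at a coarse grid scale is one minus the mean within-cell variance divided by the total variance.

      import numpy as np

      def variance_explained(fine, factor):
          """Fraction of total variance captured by coarse-cell means."""
          ny, nx = fine.shape
          blocks = fine.reshape(ny // factor, factor, nx // factor, factor)
          within = blocks.var(axis=(1, 3)).mean()     # mean sub-grid variance
          return 1.0 - within / fine.var()

      rng = np.random.default_rng(2)
      x = np.linspace(0.0, 4.0 * np.pi, 200)
      large_scale = np.sin(x)[:, None] * np.cos(x)[None, :]          # smooth structure
      field = large_scale + 0.5 * rng.standard_normal((200, 200))    # plus fine-scale noise
      print("variance explained at the coarse scale:",
            round(variance_explained(field, 20), 3))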

  14. Impact of Resolution on Simulation of Closed Mesoscale Cellular Convection Identified by Dynamically Guided Watershed Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martini, Matus N.; Gustafson, William I.; Yang, Qing

    2014-11-18

    Organized mesoscale cellular convection (MCC) is a common feature of marine stratocumulus that forms in response to a balance between mesoscale dynamics and smaller scale processes such as cloud radiative cooling and microphysics. We use the Weather Research and Forecasting model with chemistry (WRF-Chem) and fully coupled cloud-aerosol interactions to simulate marine low clouds during the VOCALS-REx campaign over the southeast Pacific. A suite of experiments with 3- and 9-km grid spacing indicates resolution-dependent behavior. The simulations with finer grid spacing have smaller liquid water paths and cloud fractions, while cloud tops are higher. The observed diurnal cycle is reasonably well simulated. To isolate organized MCC characteristics we develop a new automated method, which uses a variation of the watershed segmentation technique that combines the detection of cloud boundaries with a test for coincident vertical velocity characteristics. This ensures that the detected cloud fields are dynamically consistent for closed MCC, the most common MCC type over the VOCALS-REx region. We demonstrate that the 3-km simulation is able to reproduce the scaling between horizontal cell size and boundary layer height seen in satellite observations. However, the 9-km simulation is unable to resolve smaller circulations corresponding to shallower boundary layers, instead producing an invariant MCC horizontal scale for all simulated boundary layer depths. The results imply that climate models with grid spacing of roughly 3 km or smaller may be needed to properly simulate the MCC structure in the marine stratocumulus regions.

  15. Does resolution of flow field observation influence apparent habitat use and energy expenditure in juvenile coho salmon?

    NASA Astrophysics Data System (ADS)

    Tullos, D. D.; Walter, C.; Dunham, J.

    2016-12-01

    This study investigated how the resolution of observation influences interpretation of how fish, juvenile Coho Salmon (Oncorhynchus kisutch), exploit the hydraulic environment in streams. Our objectives were to evaluate how spatial resolution of the flow field observation influenced: 1) the velocities considered to be representative of habitat units; 2) patterns of use of the hydraulic environment by fish; and 3) estimates of energy expenditure. We addressed these objectives using observations within a 1:1 scale physical model of a full-channel log jam in an outdoor experimental stream. Velocities were measured with Acoustic Doppler Velocimetry at a 10 cm grid spacing, whereas fish locations and tailbeat frequencies were documented over time using underwater videogrammetry. Results highlighted that resolution of observation did impact perceived habitat use and energy expenditure, as did the location of measurement within habitat units and the use of averaging to summarize velocities within a habitat unit. In this experiment, the range of velocities and energy expenditure estimates increased with coarsening resolution, reducing the likelihood of measuring the velocities locally experienced by fish. In addition, the coarser resolutions contributed to fish appearing to select velocities that were higher than what was measured at finer resolutions. These findings indicate the need for careful attention to and communication of resolution of observation in investigating the hydraulic environment and in determining the habitat needs and bioenergetics of aquatic biota.

  16. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It has been shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive, multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in locations of fine-scale development and removing them in locations of smooth solution behavior. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are completely incapable of reproducing these layered structures, due to the strong numerical plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude fewer grid points; the adaptive algorithm is therefore capable of producing accurate results at relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform grids.

  17. The Australian National Airborne Field Experiment 2005: Soil Moisture Remote Sensing at 60 Meter Resolution and Up

    NASA Technical Reports Server (NTRS)

    Kim, E. J.; Walker, J. P.; Panciera, R.; Kalma, J. D.

    2006-01-01

    Spatially-distributed soil moisture observations have applications spanning a wide range of spatial resolutions from the very local needs of individual farmers to the progressively larger areas of interest to weather forecasters, water resource managers, and global climate modelers. To date, the most promising approach for space-based remote sensing of soil moisture makes use of passive microwave emission radiometers at L-band frequencies (1-2 GHz). Several soil moisture-sensing satellites have been proposed in recent years, with the European Space Agency's Soil Moisture Ocean Salinity (SMOS) mission scheduled to be launched first in a couple of years. While such a microwave-based approach has the advantage of essentially all-weather operation, satellite size limits spatial resolution to 10s of km. Whether used at this native resolution or in conjunction with some type of downscaling technique to generate soil moisture estimates on a finer-scale grid, the effects of subpixel spatial variability play a critical role. The soil moisture variability is typically affected by factors such as vegetation, topography, surface roughness, and soil texture. Understanding these factors is the key to achieving accurate soil moisture retrievals at any scale. Indeed, the ability to compensate for these factors ultimately limits the achievable spatial resolution and/or accuracy of the retrieval. Over the last 20 years, a series of airborne campaigns in the USA have supported the development of algorithms for spaceborne soil moisture retrieval. The most important observations involved imagery from passive microwave radiometers. The early campaigns proved that the retrieval worked for larger and larger footprints, up to satellite-scale footprints. These provided the solid basis for proposing the satellite missions. More recent campaigns have explored other aspects such as retrieval performance through greater amounts of vegetation. All of these campaigns featured extensive ground truth collection over a range of grid spacings, to provide a basis for examining the effects of subpixel variability. However, the native footprint size of the airborne L-band radiometers was always a few hundred meters. During the recently completed (November, 2005) National Airborne Field Experiment (NAFE) campaign in Australia, a compact L-band radiometer was deployed on a small aircraft. This new combination permitted routine observations at native resolutions as high as 60 meters, substantially finer than in previous airborne soil moisture campaigns, as well as satellite footprint areal coverage. The radiometer, the Polarimetric L-band Microwave Radiometer (PLMR), performed extremely well and operations included extensive calibration-related observations. Thus, along with the extensive fine-scale ground truth, the NAFE dataset includes all the ingredients for the first scaling studies involving very high native-resolution soil moisture observations and the effects of vegetation, roughness, etc. A brief overview of the NAFE will be presented, then examples of the airborne observations with resolutions from 60 m to 1 km will be shown, and early results from scaling studies will be discussed.

  18. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
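    A standard way to quantify this, sketched below with made-up numbers, is to compute the observed order of accuracy from three systematically refined grids and a Richardson-extrapolated estimate of the grid-converged value; this illustrates common practice rather than the specific examples of the paper.

      import math

      def observed_order(f_coarse, f_medium, f_fine, r):
          """Observed order p from three solutions with a constant refinement ratio r."""
          return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

      # Example: an integrated quantity computed on grids with spacing h, h/2, h/4.
      f_h, f_h2, f_h4 = 1.0500, 1.0130, 1.0032
      p = observed_order(f_h, f_h2, f_h4, r=2.0)
      f_extrap = f_h4 + (f_h4 - f_h2) / (2.0**p - 1.0)   # Richardson extrapolation
      print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_extrap:.4f}")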

  19. An Eulerian/Lagrangian method for computing blade/vortex impingement

    NASA Technical Reports Server (NTRS)

    Steinhoff, John; Senge, Heinrich; Yonghu, Wenren

    1991-01-01

    A combined Eulerian/Lagrangian approach to calculating helicopter rotor flows with concentrated vortices is described. The method computes a general evolving vorticity distribution without any significant numerical diffusion. Concentrated vortices can be accurately propagated over long distances on relatively coarse grids with cores only several grid cells wide. The method is demonstrated for a blade/vortex impingement case in 2D and 3D where a vortex is cut by a rotor blade, and the results are compared to previous 2D calculations involving a fifth-order Navier-Stokes solver on a finer grid.

  20. Investigation of Grid Adaptation to Reduce Computational Efforts for a 2-D Hydrogen-Fueled Dual-Mode Scramjet

    NASA Astrophysics Data System (ADS)

    Foo, Kam Keong

    A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.

  1. A dynamic subgrid-scale parameterization of the effective wall stress in atmospheric boundary layer flows over multiscale, fractal-like surfaces

    NASA Astrophysics Data System (ADS)

    Anderson, William; Meneveau, Charles

    2010-05-01

    A dynamic subgrid-scale (SGS) parameterization for hydrodynamic surface roughness is developed for large-eddy simulation (LES) of atmospheric boundary layer (ABL) flow over multiscale, fractal-like surfaces. The model consists of two parts. First, a baseline model represents surface roughness at horizontal length-scales that can be resolved in the LES. This model takes the form of a force using a prescribed drag coefficient. This approach is tested in LES of flow over cubes, wavy surfaces, and ellipsoidal roughness elements for which there are detailed experimental data available. Secondly, a dynamic roughness model is built, accounting for SGS surface details of finer resolution than the LES grid width. The SGS boundary condition is based on the logarithmic law of the wall, where the unresolved roughness of the surface is modeled as the product of local root-mean-square (RMS) of the unresolved surface height and an unknown dimensionless model coefficient. This coefficient is evaluated dynamically by comparing the plane-average hydrodynamic drag at two resolutions (grid- and test-filter scale, Germano et al., 1991). The new model is tested on surfaces generated through superposition of random-phase Fourier modes with prescribed, power-law surface-height spectra. The results show that the method yields convergent results and correct trends. Limitations and further challenges are highlighted. Supported by the US National Science Foundation (EAR-0609690).

  2. The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.

    2008-01-01

    Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two components of simulated convection on a large domain: sensitivities of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and to determine if these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.

  3. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combination of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. Additionally, a finer-grid H matrix down to 0.5-mm spacing, interpolated by the proposed method, would shorten the acquisition time by a factor of 8.
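    A heavily simplified sketch of the interpolation step only (hypothetical parameter values; the projected-centroid weighting and geometric-parameter estimation that distinguish DW-GIMGPE are omitted): Gaussian PRF parameters at an unmeasured voxel are estimated here by plain inverse-distance weighting of parameters measured at nearby voxels.

      import numpy as np

      def interpolate_prf_params(measured_pos, measured_params, query_pos, power=2.0):
          """Inverse-distance-weighted average of Gaussian PRF parameter vectors."""
          d = np.linalg.norm(measured_pos - query_pos, axis=1)
          if np.any(d < 1e-12):                        # query coincides with a measurement
              return measured_params[np.argmin(d)]
          w = 1.0 / d**power
          return (w[:, None] * measured_params).sum(axis=0) / w.sum()

      # Per measured voxel: (centroid_u, centroid_v, sigma_u, sigma_v, amplitude), made up.
      measured_pos = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])  # mm
      measured_params = np.array([[10.0, 12.0, 1.1, 1.0, 0.90],
                                  [14.0, 12.0, 1.2, 1.0, 0.80],
                                  [10.0, 16.0, 1.1, 1.1, 0.85]])
      query = np.array([1.0, 1.0, 0.0])                # unmeasured voxel on the finer grid
      print(interpolate_prf_params(measured_pos, measured_params, query))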

  4. Mapping Mars' northern plains: origins, evolution and response to climate change - an overview of the grid mapping method.

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason; Balme, Matthew; Conway, Susan

    2015-04-01

    An International Space Science Institute (ISSI) team project has been convened to study the northern plains of Mars. The northern plains are younger and at lower elevation than the majority of the martian surface and are thought to be the remnants of an ancient ocean. Understanding the surface geology and geomorphology of the northern plains is complex, because the surface has been subtly modified many times, making traditional unit boundaries hard to define. Our ISSI team project aims to answer the following questions: 1) "What is the distribution of ice-related landforms in the northern plains, and can it be related to distinct latitude bands or different geological or geomorphological units?" 2) "What is the relationship between the latitude dependent mantle (LDM; a draping unit believed to comprise ice and dust deposited during periods of high axial obliquity) and (i) landforms indicative of ground ice, and (ii) other geological units in the northern plains?" 3) "What are the distributions and associations of recent landforms indicative of thaw of ice or snow?" With increasing coverage of high-resolution images of the martian surface, we are able to identify increasing numbers and varieties of small-scale landforms on Mars. Many such landforms are too small to represent on regional maps, yet determining their presence or absence across large areas can form the observational basis for developing hypotheses on the nature and history of an area. The combination of improved spatial resolution with near-continuous coverage increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre-scale landforms. Here, we describe an approach to mapping small features across large areas. Rather than traditional mapping with points, lines and polygons, we used a grid "tick box" approach to locate specific landforms. The mapping strips were divided into a 15×150 grid of squares, each approximately 20×20 km, for each study area. Orbital images at 6-15 m/pix were then viewed systematically for each grid square and the presence or absence of each of the basic suite of landforms recorded. The landforms were recorded as being either "present", "dominant", "possible", or "absent" in each grid square. The result is a series of coarse-resolution "rasters" showing the distribution of the different types of landforms across the strip. We have found this approach to be efficient, scalable and appropriate for teams of people mapping remotely. It is easily scalable because carrying the "absent" values forward from the larger grids to finer grids means that only areas with positive values for a given landform need to be examined to increase the resolution for the whole strip. As each sub-grid square only requires ascertaining the presence or absence of a landform, the method also removes an individual's decision as to where to draw boundaries, making it efficient and repeatable.
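    A minimal sketch of the bookkeeping this implies (the 15×150 strip, the ~20×20 km squares and the four states come from the abstract; the landform names and values are placeholders): one small integer raster per landform, with only non-"absent" squares carried forward for re-examination on finer sub-grids.

      import numpy as np

      STATES = {"absent": 0, "possible": 1, "present": 2, "dominant": 3}
      landforms = ["polygonised_ground", "scalloped_depressions", "gullies"]   # placeholders
      n_rows, n_cols = 15, 150                        # one strip of ~20x20 km squares

      # One coarse "raster" of state codes per landform.
      grid = {lf: np.zeros((n_rows, n_cols), dtype=np.int8) for lf in landforms}

      # Recording what was seen in the square at row 3, column 47.
      grid["gullies"][3, 47] = STATES["present"]
      grid["scalloped_depressions"][3, 47] = STATES["possible"]

      # Scalability: only squares with a non-"absent" value need re-examination
      # when moving to a finer sub-grid.
      print(np.argwhere(grid["gullies"] > STATES["absent"]))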

  5. The 2017 México Tsunami Record, Numerical Modeling and Threat Assessment in Costa Rica

    NASA Astrophysics Data System (ADS)

    Chacón-Barrantes, Silvia

    2018-03-01

    An Mw 8.2 earthquake and tsunami occurred offshore the Pacific coast of México on 2017-09-08, at 04:49 UTC. Costa Rican tide gauges have registered a total of 21 local, regional and far-field tsunamis. The Quepos gauge registered 12 tsunamis between 1960 and 2014, before it was relocated inside a harbor in late 2014, where it registered two more tsunamis. This paper analyzes the 2017 México tsunami as recorded by the Quepos gauge. It took 2 h for the tsunami to arrive at Quepos, with a first peak height of 9.35 cm and a maximum amplitude of 18.8 cm occurring about 6 h later. As a decision support tool, this tsunami was modeled for Quepos in real time using ComMIT (Community Model Interface for Tsunami), with the finer grid having a resolution of 1 arcsec (~30 m). However, the model did not replicate the tsunami record well, probably due to the lack of a finer and more accurate bathymetry. In 2014, the National Tsunami Monitoring System of Costa Rica (SINAMOT) was created, acting as a national tsunami warning center. The occurrence of the 2017 México tsunami raised concerns about warning dissemination mechanisms for most coastal communities in Costa Rica, due to its short travel time.

  6. Evaluation of precipitation trends from high-resolution satellite precipitation products over Mainland China

    NASA Astrophysics Data System (ADS)

    Chen, Fengrui; Gao, Yongqi

    2018-01-01

    Many studies have reported the excellent ability of high-resolution satellite precipitation products (0.25° or finer) to capture the spatial distribution of precipitation. However, it is not known whether the precipitation trends derived from them are reliable. For the first time, we have evaluated the annual and seasonal precipitation trends from two typical sources of high-resolution satellite-gauge products, TRMM 3B43 and PERSIANN-CDR, using rain gauge observations over China, and they were also compared with those from gauge-only products (0.25° and 0.5° precipitation products, hereafter called CN25 and CN50). The evaluation focused mainly on the magnitude, significance, sign, and relative order of the precipitation trends, and was conducted at gridded and regional scales. The following results were obtained: (1) at the gridded scale, neither satellite-gauge product precisely measures the magnitude of precipitation trends, but both reproduce their sign and relative order; in capturing the significance of trends, they exhibit relatively acceptable performance only over regions with a sufficient number of significant precipitation trends; (2) at the regional scale, both satellite-gauge products generally provide reliable precipitation trends, although they do not reproduce the magnitude of trends in winter precipitation; and (3) overall, CN50 and TRMM 3B43 outperform the others in reproducing all four aspects of the precipitation trends. Compared with CN25, PERSIANN-CDR performs better in determining the magnitude of precipitation trends but marginally worse in reproducing their sign and relative order; moreover, the two are comparable in capturing the significance of precipitation trends.
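    A hedged sketch of a per-cell trend evaluation of this general kind (synthetic gridded series, not the study's exact methodology): fit a linear trend in every grid cell, then compare sign agreement and relative order between two products.

      import numpy as np
      from scipy import stats

      def cell_trends(annual, years):
          """Slope and p-value of the linear trend for every grid cell."""
          ny, nx = annual.shape[1:]
          slope, pval = np.empty((ny, nx)), np.empty((ny, nx))
          for j in range(ny):
              for i in range(nx):
                  res = stats.linregress(years, annual[:, j, i])
                  slope[j, i], pval[j, i] = res.slope, res.pvalue
          return slope, pval

      rng = np.random.default_rng(3)
      years = np.arange(1998, 2016)
      trend = 2.0 * (years - years[0])[:, None, None]                    # imposed mm/yr signal
      gauge = 800.0 + trend + 50.0 * rng.standard_normal((18, 6, 8))     # reference product
      satellite = gauge + 30.0 * rng.standard_normal(gauge.shape)        # stand-in product
      s_gauge, _ = cell_trends(gauge, years)
      s_sat, _ = cell_trends(satellite, years)
      rho, _ = stats.spearmanr(s_gauge.ravel(), s_sat.ravel())           # relative order
      print("sign agreement:", np.mean(np.sign(s_gauge) == np.sign(s_sat)),
            "  rank correlation:", round(rho, 2))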

  7. Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step

    NASA Astrophysics Data System (ADS)

    Jayakumar, J. S.; Kumar, Inder; Eswaran, V.

    2010-12-01

    Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures for a structured-grid finite-volume solver is easier than for an unstructured-grid one. But real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain; thus only the region that requires a finer grid is meshed more finely, not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flow involving separation and subsequent reattachment occurring due to sudden expansion or contraction. A significant effect of mixing high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using a two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied as Re and the solid-to-fluid thermal conductivity ratio are varied. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.

  8. Assessment of the effects of horizontal grid resolution on long ...

    EPA Pesticide Factsheets

    The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirement, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. The National Exposure Research Laboratory's Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment.

  9. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    NASA Astrophysics Data System (ADS)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

    Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (of which the resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated on the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves, and the wave-induced flows like the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a realistic coastal region.

  10. Hierarchical nucleus segmentation in digital pathology images

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Ratner, Vadim; Zhu, Liangjia; Diprima, Tammy; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel

    2016-03-01

    Extracting nuclei is one of the most actively studied topics in digital pathology research. Most studies directly search for the nuclei (or seeds for the nuclei) at the finest resolution available. While such approaches utilize the richest information, it is sometimes difficult for them to address the heterogeneity of nuclei in different tissues. In this work, we propose a hierarchical approach which starts at a lower resolution level and adaptively adjusts the parameters while progressing into finer and finer resolution. The algorithm is tested on brain and lung cancer images from The Cancer Genome Atlas data set.
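    A toy illustration of the coarse-to-fine idea (not the authors' algorithm): a detection threshold is re-estimated at each level of a simple decimation pyramid and candidate dark objects are counted at progressively finer resolution; a production method would additionally restrict the finer passes to the neighbourhoods of the coarse candidates.

      import numpy as np
      from scipy import ndimage

      def segment_level(img, threshold):
          """Label connected regions darker than the threshold (nuclei assumed dark)."""
          labels, n = ndimage.label(img < threshold)
          return labels, n

      def hierarchical_segment(img, levels=(8, 4, 1)):
          results = []
          for f in levels:
              coarse = img[::f, ::f]                       # simple decimation pyramid
              thr = coarse.mean() - 3.0 * coarse.std()     # parameter re-estimated per level
              _, n = segment_level(coarse, thr)
              results.append((f, n))
          return results

      rng = np.random.default_rng(4)
      img = 0.9 + 0.02 * rng.standard_normal((512, 512))   # bright synthetic background
      img[100:140, 200:240] = 0.3                          # two dark "nuclei"
      img[300:330, 50:80] = 0.3
      for factor, count in hierarchical_segment(img):
          print(f"downsample factor {factor}: {count} candidate objects")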

  11. Examining Extreme Events Using Dynamically Downscaled 12-km WRF Simulations

    EPA Science Inventory

    Continued improvements in the speed and availability of computational resources have allowed dynamical downscaling of global climate model (GCM) projections to be conducted at increasingly finer grid scales and over extended time periods. The implementation of dynamical downscal...

  12. On Improving 4-km Mesoscale Model Simulations

    NASA Astrophysics Data System (ADS)

    Deng, Aijun; Stauffer, David R.

    2006-03-01

    A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6) Although use of current CPSs may violate their underlying assumptions related to the size of the convective element relative to the grid size, the gridpoint storm problem was greatly reduced by applying a CPS to the 4-km grid.

  13. HiPS - Hierarchical Progressive Survey Version 1.0

    NASA Astrophysics Data System (ADS)

    Fernique, Pierre; Allen, Mark; Boch, Thomas; Donaldson, Tom; Durand, Daniel; Ebisawa, Ken; Michel, Laurent; Salgado, Jesus; Stoehr, Felix; Fernique, Pierre

    2017-05-01

    This document presents HiPS, a hierarchical scheme for the description, storage and access of sky survey data. The system is based on hierarchical tiling of sky regions at finer and finer spatial resolution which facilitates a progressive view of a survey, and supports multi-resolution zooming and panning. HiPS uses the HEALPix tessellation of the sky as the basis for the scheme and is implemented as a simple file structure with a direct indexing scheme that leads to practical implementations.
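    A small sketch of the tile bookkeeping such a scheme implies, following the HiPS file-naming convention (NorderK/DirD/NpixN, with directories grouping 10000 tiles) and the NESTED HEALPix rule that each tile splits into four children at the next order; no sky projection or HEALPix geometry is computed here.

      def tile_path(order, ipix, ext="fits"):
          """Relative path of a HiPS tile: NorderK/DirD/NpixN.ext."""
          directory = (ipix // 10000) * 10000
          return f"Norder{order}/Dir{directory}/Npix{ipix}.{ext}"

      def children(ipix):
          """NESTED-scheme children of a tile at the next (finer) HEALPix order."""
          return [4 * ipix + i for i in range(4)]

      ipix = 283
      for order in range(3, 6):              # progressively finer spatial resolution
          print(tile_path(order, ipix))
          ipix = children(ipix)[0]           # descend into the first child tile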

  14. Wiener-matrix image restoration beyond the sampling passband

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    A finer-than-sampling-lattice resolution image can be obtained using multiresponse image gathering and Wiener-matrix restoration. The multiresponse image gathering weighs the within-passband and aliased signal components differently, allowing the Wiener-matrix restoration filter to unscramble these signal components and restore spatial frequencies beyond the sampling passband of the photodetector array. The multiresponse images can be reassembled into a single minimum-mean-square-error image with a resolution that is √A times finer than the photodetector-array sampling lattice.

  15. The influence of model spatial resolution on simulated ozone and fine particulate matter for Europe: implications for health impact assessments

    NASA Astrophysics Data System (ADS)

    Fenech, Sara; Doherty, Ruth M.; Heaviside, Clare; Vardoulakis, Sotiris; Macintyre, Helen L.; O'Connor, Fiona M.

    2018-04-01

    We examine the impact of model horizontal resolution on simulated concentrations of surface ozone (O3) and particulate matter less than 2.5 µm in diameter (PM2.5), and the associated health impacts over Europe, using the HadGEM3-UKCA chemistry-climate model to simulate pollutant concentrations at a coarse (˜ 140 km) and a finer (˜ 50 km) resolution. The attributable fraction (AF) of total mortality due to long-term exposure to warm season daily maximum 8 h running mean (MDA8) O3 and annual-average PM2.5 concentrations is then calculated for each European country using pollutant concentrations simulated at each resolution. Our results highlight a seasonal variation in simulated O3 and PM2.5 differences between the two model resolutions in Europe. Compared to the finer resolution results, simulated European O3 concentrations at the coarse resolution are higher on average in winter and spring (˜ 10 and ˜ 6 %, respectively). In contrast, simulated O3 concentrations at the coarse resolution are lower in summer and autumn (˜ -1 and ˜ -4 %, respectively). These differences may be partly explained by differences in nitrogen dioxide (NO2) concentrations simulated at the two resolutions. Compared to O3, we find the opposite seasonality in simulated PM2.5 differences between the two resolutions. In winter and spring, simulated PM2.5 concentrations are lower at the coarse compared to the finer resolution (˜ -8 and ˜ -6 %, respectively) but higher in summer and autumn (˜ 29 and ˜ 8 %, respectively). Simulated PM2.5 values are also mostly related to differences in convective rainfall between the two resolutions for all seasons. These differences between the two resolutions exhibit clear spatial patterns for both pollutants that vary by season, and exert a strong influence on country to country variations in estimated AF for the two resolutions. Warm season MDA8 O3 levels are higher in most of southern Europe, but lower in areas of northern and eastern Europe when simulated at the coarse resolution compared to the finer resolution. Annual-average PM2.5 concentrations are higher across most of northern and eastern Europe but lower over parts of southwest Europe at the coarse compared to the finer resolution. Across Europe, differences in the AF associated with long-term exposure to population-weighted MDA8 O3 range between -0.9 and +2.6 % (largest positive differences in southern Europe), while differences in the AF associated with long-term exposure to population-weighted annual mean PM2.5 range from -4.7 to +2.8 % (largest positive differences in eastern Europe) of the total mortality. Therefore this study, with its unique focus on Europe, demonstrates that health impact assessments calculated using modelled pollutant concentrations, are sensitive to a change in model resolution by up to ˜ ±5 % of the total mortality across Europe.

  16. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2011-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
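
    The coarse-grid correction idea that underlies such multigrid cycles can be illustrated with a generic geometric two-grid sketch for a 1-D Poisson problem; this is a textbook illustration under simplified assumptions, not the node-centered agglomerated formulation studied here.

```python
# Minimal two-grid correction cycle for -u'' = f on a uniform 1-D grid,
# illustrating the coarse-grid correction idea behind multigrid.  Generic
# sketch only: injection restriction, linear prolongation, weighted Jacobi.
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                                            # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2      # residual
    rc = r[::2].copy()                                             # restrict (injection)
    ec = np.zeros_like(rc)
    ec = jacobi(ec, rc, 2 * h, sweeps=50)                          # approximate coarse solve
    e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)   # prolongate correction
    u += e
    return jacobi(u, f, h)                                         # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # error at the discretization level
```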

  17. GEWEX SRB Shortwave Release 4

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Stackhouse, P. W., Jr.; Mikovitz, J. C.; Zhang, T.

    2017-12-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the period from 1983 to the near present. Spatial resolution is 1 degree. The new Release 4 uses the newly processed ISCCP HXS product as its primary input for cloud and radiance data. The ninefold increase in pixel number compared to the previous ISCCP DX allows finer gradations in cloud fraction in each grid box. It will also allow higher spatial resolutions (0.5 degree) in future releases. In addition to the input data improvements, several important algorithm improvements have been made since Release 3. These include recalculated atmospheric transmissivities and reflectivities, yielding a less transmissive atmosphere. The calculations also include variable aerosol composition, allowing for the use of a detailed aerosol history from the Max Planck Institut Aerosol Climatology (MAC). Ocean albedo and snow/ice albedo are also improved from Release 3. Total solar irradiance is now variable, averaging 1361 W m-2. Water vapor is taken from ISCCP's nnHIRS product. Results from GSW Release 4 are presented and analyzed. Early comparisons with surface measurements show improved agreement.

  18. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast-Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g. confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
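
    The pseudo-spectral update mentioned above can be sketched for the modified diffusion equation ∂q/∂s = ∇²q − wq; the grid, toy potential field and contour step below are illustrative assumptions, not part of the PolySwift++ code.

```python
# Minimal sketch of one pseudo-spectral (operator-splitting) step for the
# modified diffusion equation dq/ds = lap(q) - w*q that arises in SCFT.
# The field `w`, grid size and step `ds` are illustrative assumptions.
import numpy as np

def ps_step(q, w, k2, ds):
    """Advance the chain propagator q by one contour step ds (Strang splitting)."""
    q = np.exp(-0.5 * ds * w) * q                                # half step in real space
    q = np.fft.ifft2(np.exp(-ds * k2) * np.fft.fft2(q))          # exact diffusion in k-space
    return np.real(np.exp(-0.5 * ds * w) * q)                    # second half step

n, L, ds = 64, 8.0, 0.01
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
k2 = k[:, None] ** 2 + k[None, :] ** 2
x = np.linspace(0.0, L, n, endpoint=False)
w = 0.5 * np.cos(2.0 * np.pi * x / L)[:, None] * np.ones(n)      # toy potential field
q = np.ones((n, n))                                              # q(r, s=0) = 1
for _ in range(100):                                             # integrate to s = 1
    q = ps_step(q, w, k2, ds)
```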

  19. Simulation of Boundary-Layer Cumulus and Stratocumulus Clouds using a Cloud-Resolving Model With Low- and Third-Order Turbulence Closures

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Cheng, Anning

    2007-01-01

    The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.

  20. Analysis of Surface Heterogeneity Effects with Mesoscale Terrestrial Modeling Platforms

    NASA Astrophysics Data System (ADS)

    Simmer, C.

    2015-12-01

    An improved understanding of the full variability in the weather and climate system is crucial for reducing the uncertainty in weather forecasting and climate prediction, and for aiding policy makers in developing adaptation and mitigation strategies. A yet unknown part of the uncertainty in the predictions from the numerical models is caused by the neglect of non-resolved land surface heterogeneity and the sub-surface dynamics and their potential impact on the state of the atmosphere. At the same time, mesoscale numerical models using finer horizontal grid resolution [O(1)km] can suffer from inconsistencies and neglected scale-dependencies in ABL parameterizations and non-resolved effects of integrated surface-subsurface lateral flow at this scale. Our present knowledge suggests large-eddy simulation (LES) as an eventual solution to overcome the inadequacy of the physical parameterizations in the atmosphere in this transition scale, yet we are constrained by computational resources, memory management, and big data when using LES for regional domains. For the present, there is a need for scale-aware parameterizations not only in the atmosphere but also in the land surface and subsurface model components. In this study, we use the recently developed Terrestrial Systems Modeling Platform (TerrSysMP) as a numerical tool to analyze the uncertainty in the simulation of surface exchange fluxes and boundary layer circulations at grid resolutions of the order of 1 km, and explore the sensitivity of the atmospheric boundary layer evolution and convective rainfall processes to land surface heterogeneity.

  1. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Thomas, James L.; Nishikawa, Hiroaki; Diskin, Boris

    2009-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and highly stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Actual cycle results are verified using quantitative analysis methods in which parts of the cycle are replaced by their idealized counterparts.

  2. Generation of High Resolution Land Surface Parameters in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ke, Y.; Coleman, A. M.; Wigmosta, M. S.; Leung, L.; Huang, M.; Li, H.

    2010-12-01

    The Community Land Model (CLM) is the land surface model used for the Community Atmosphere Model (CAM) and the Community Climate System Model (CCSM). It examines the physical, chemical, and biological processes across a variety of spatial and temporal scales. Currently, efforts are being made to improve the spatial resolution of the CLM, in part, to represent finer scale hydrologic characteristics. Current land surface parameters of CLM4.0, in particular plant functional types (PFT) and leaf area index (LAI), are generated from MODIS and calculated at a 0.05 degree resolution. These MODIS-derived land surface parameters have also been aggregated to coarser resolutions (e.g., 0.5, 1.0 degrees). To evaluate the response of CLM across various spatial scales, higher spatial resolution land surface parameters need to be generated. In this study we examine the use of Landsat TM/ETM+ imagery and data fusion techniques for generating land surface parameters at a 1km resolution within the Pacific Northwest United States. Land cover types and PFTs are classified based on Landsat multi-season spectral information, DEM, National Land Cover Database (NLCD) and the USDA-NASS Crop Data Layer (CDL). For each PFT, relationships between MOD15A2 high quality LAI values, Landsat-based vegetation indices, climate variables, terrain, and laser-altimeter derived vegetation height are used to generate monthly LAI values at a 30m resolution. The high-resolution PFT and LAI data are aggregated to create a 1km model grid resolution. An evaluation and comparison of CLM land surface response at both fine and moderate scale is presented.

  3. Three-dimensional fusion of spaceborne and ground radar reflectivity data using a neural network-based approach

    NASA Astrophysics Data System (ADS)

    Kou, Leilei; Wang, Zhuihui; Xu, Fen

    2018-03-01

    The spaceborne precipitation radar onboard the Tropical Rainfall Measuring Mission satellite (TRMM PR) can provide good measurement of the vertical structure of reflectivity, while ground radar (GR) has a relatively high horizontal resolution and greater sensitivity. Fusion of TRMM PR and GR reflectivity data may maximize the advantages from both instruments. In this paper, TRMM PR and GR reflectivity data are fused using a neural network (NN)-based approach. The main steps are: quality control of TRMM PR and GR reflectivity data; spatiotemporal matchup; GR calibration bias correction; conversion of TRMM PR data from Ku to S band; fusion of TRMM PR and GR reflectivity data with an NN method; interpolation of reflectivity data that are below PR's sensitivity; blind-area compensation with a distance weighting-based merging approach; and combination of three types of data: data with the NN method, data below PR's sensitivity and data within compensated blind areas. During the NN fusion step, the TRMM PR data are taken as targets of the training NNs, and gridded GR data after horizontal downsampling at different heights are used as the input. The trained NNs are then used to obtain 3D high-resolution reflectivity from the original GR gridded data. After 3D fusion of the TRMM PR and GR reflectivity data, a more complete and finer-scale 3D radar reflectivity dataset incorporating characteristics from both the TRMM PR and GR observations can be obtained. The fused reflectivity data are evaluated based on a convective precipitation event through comparison with the high-resolution TRMM PR and GR data obtained with an interpolation algorithm.
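
    A hedged sketch of the fusion step is given below: a small regression network is trained with downsampled GR reflectivity as input and matched PR reflectivity as the target, then applied to the full-resolution GR grid. The scikit-learn regressor, the array shapes and the synthetic values are assumptions for illustration only, not the authors' code.

```python
# Minimal sketch of the NN fusion step: learn a mapping from coarsened
# ground-radar (GR) reflectivity patches to spaceborne (PR) reflectivity,
# then apply it to the full-resolution GR grid.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Assumed training pairs: rows are matched grid points, columns of X are
# downsampled GR reflectivity at several heights (dBZ); y is the PR target.
X_train = rng.normal(25.0, 8.0, size=(5000, 6))
y_train = 0.8 * X_train.mean(axis=1) + rng.normal(0.0, 2.0, size=5000)   # stand-in PR target

nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
nn.fit(X_train, y_train)

# Apply the trained network to an (assumed) full-resolution GR grid.
X_full = rng.normal(25.0, 8.0, size=(200 * 200, 6))
fused = nn.predict(X_full).reshape(200, 200)
```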

  4. The effects of digital elevation model resolution on the calculation and predictions of topographic wetness indices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drover, Damion, Ryan

    2011-12-01

    One of the largest exports in the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into better forest management BMPs. The USDA Forest Service, along with the Oak Ridge National Laboratory, University of Georgia and Oregon State University, is researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff of saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools makes it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any resolution of DEM desired. Additionally, there exist many maps that are in various resolutions of DEM, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different resolution DEMs. For example, saturated areas can be under- or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes like nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a laser altimetry remote sensing method, obtained from the USDA Forest Service at the Savannah River Site. The specific DEM resolutions were chosen because they are common grid cell sizes (10m, 30m, and 50m) used in mapping for management applications and in research. The finer resolutions (2m and 5m) were chosen for the purpose of determining how finer resolutions performed compared with coarser resolutions at predicting wetness and related soil attributes. The wetness indices were compared across DEMs and with each other in terms of quantile and distribution differences, then in terms of how well they each correlated with measured soil attributes. Spatial and non-spatial analyses were performed, and predictions using regression and geostatistics were examined for efficacy relative to each DEM resolution. Trends in the raw data and analysis results were also revealed.
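
    For reference, the topographic wetness index is conventionally TWI = ln(a / tan β), with a the specific catchment area and β the local slope; the sketch below computes it from a DEM-derived slope and a precomputed (here synthetic) catchment area, so the flow-routing step that produces the catchment area is assumed to have been done elsewhere.

```python
# Minimal sketch of the topographic wetness index, TWI = ln(a / tan(beta)),
# from a DEM-derived slope and a precomputed specific catchment area `sca`.
# Flow routing (e.g. D8/D-infinity) is not shown; values below are synthetic.
import numpy as np

def wetness_index(dem, sca, cellsize):
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))            # slope angle (radians)
    tan_b = np.maximum(np.tan(slope), 1e-6)            # avoid division by zero on flats
    return np.log(np.maximum(sca, cellsize) / tan_b)

rng = np.random.default_rng(2)
dem = np.cumsum(rng.normal(0.0, 0.2, (100, 100)), axis=0) + 50.0   # synthetic low-relief DEM
sca = rng.lognormal(mean=3.0, sigma=1.0, size=dem.shape)           # synthetic catchment area (m)
twi_10m = wetness_index(dem, sca, cellsize=10.0)
```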

  5. Observation-Corrected Precipitation Estimates in GEOS-5

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Liu, Qing

    2014-01-01

    Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
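
    The core rescaling step can be sketched as follows: hourly background precipitation is scaled so that each day's total matches the observed daily total in every grid cell. Variable names, shapes and the dry-background fallback below are illustrative assumptions, not the exact GEOS-5 rules.

```python
# Minimal sketch of the temporal disaggregation idea: rescale hourly
# background precipitation so each day's total matches the observed total.
import numpy as np

def disaggregate_daily(background_hourly, observed_daily):
    """background_hourly: (ndays, 24, ny, nx); observed_daily: (ndays, ny, nx)."""
    bg_daily = background_hourly.sum(axis=1)
    scale = np.where(bg_daily > 0.0, observed_daily / np.maximum(bg_daily, 1e-12), 0.0)
    corrected = background_hourly * scale[:, None, ...]
    # If the background is dry but the observation is wet, spread the observed
    # total uniformly over the day (a simple fallback, not the GEOS-5 rule).
    dry_bg_wet_obs = (bg_daily == 0.0) & (observed_daily > 0.0)
    corrected += np.where(dry_bg_wet_obs[:, None, ...], observed_daily[:, None, ...] / 24.0, 0.0)
    return corrected

rng = np.random.default_rng(3)
bg = rng.gamma(0.3, 1.0, size=(5, 24, 10, 10))     # mm/hour background, 5 days, 10x10 cells
obs = rng.gamma(2.0, 3.0, size=(5, 10, 10))        # mm/day observed product
hourly = disaggregate_daily(bg, obs)
assert np.allclose(hourly.sum(axis=1), obs)        # daily totals now match the observations
```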

  6. An integral conservative gridding--algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
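
    The conservation requirement can be illustrated with a much simpler scheme than the paper's parametrized Hermitian curve: interpolate the cumulative integral of the binned data and difference it on the new bin edges, which preserves the total by construction. The sketch below shows only this conservation idea, with linear interpolation standing in for the Hermitian curve.

```python
# Minimal sketch of integral-conserving re-binning: interpolate the cumulative
# integral of the histogrammed quantity and difference it on the new bin edges.
import numpy as np

def rebin_conservative(old_edges, old_values, new_edges):
    """old_values[i] is the integral content of bin [old_edges[i], old_edges[i+1])."""
    cum = np.concatenate(([0.0], np.cumsum(old_values)))   # cumulative integral at old edges
    cum_new = np.interp(new_edges, old_edges, cum)          # cumulative integral at new edges
    return np.diff(cum_new)                                 # content of the new bins

old_edges = np.linspace(0.0, 10.0, 11)          # 10 coarse bins
old_values = np.array([1, 4, 9, 16, 9, 4, 1, 0, 2, 5], dtype=float)
new_edges = np.linspace(0.0, 10.0, 41)          # 40 finer bins
new_values = rebin_conservative(old_edges, old_values, new_edges)
assert np.isclose(new_values.sum(), old_values.sum())       # total is conserved
```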

  7. Influence of resolution in irrigated area mapping and area estimation

    USGS Publications Warehouse

    Velpuri, N.M.; Thenkabail, P.S.; Gumma, M.K.; Biradar, C.; Dheeravath, V.; Noojipady, P.; Yuanjie, L.

    2009-01-01

    The overarching goal of this paper was to determine how irrigated areas change with resolution (or scale) of imagery. Specific objectives investigated were to (a) map irrigated areas using four distinct spatial resolutions (or scales), (b) determine how irrigated areas change with resolutions, and (c) establish the causes of differences in resolution-based irrigated areas. The study was conducted in the very large Krishna River basin (India), which has a high degree of formal contiguous, and informal fragmented irrigated areas. The irrigated areas were mapped using satellite sensor data at four distinct resolutions: (a) NOAA AVHRR Pathfinder 10,000 m, (b) Terra MODIS 500 m, (c) Terra MODIS 250 m, and (d) Landsat ETM+ 30 m. The proportions of irrigated area relative to the Landsat 30 m derived irrigated area (9.36 million hectares for the Krishna basin) were (a) 95 percent using MODIS 250 m, (b) 93 percent using MODIS 500 m, and (c) 86 percent using AVHRR 10,000 m. In this study, it was found that the precise locations of the irrigated areas were better established using finer spatial resolution data. A strong relationship (R2 = 0.74 to 0.95) was observed between irrigated areas determined using various resolutions. This study proved the hypothesis that "the finer the spatial resolution of the sensor used, the greater was the irrigated area derived," since at finer spatial resolutions, fragmented areas are detected better. Accuracies and errors were established consistently for three classes (surface water irrigated, ground water/conjunctive use irrigated, and nonirrigated) across the four resolutions mentioned above. The results showed that the Landsat data provided significantly higher overall accuracies (84 percent) when compared to MODIS 500 m (77 percent), MODIS 250 m (79 percent), and AVHRR 10,000 m (63 percent). © 2009 American Society for Photogrammetry and Remote Sensing.

  8. PDF added value of a high resolution climate simulation for precipitation

    NASA Astrophysics Data System (ADS)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    General Circulation Models (GCMs) are models suitable to study the global atmospheric system, its evolution and response to changes in external forcing, namely to increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer-scale features of the atmospheric flow related to complex topography, coastal processes and boundary layer processes, and higher resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs) and are widely used to downscale GCM results for many regions of the globe, capturing physically consistent regional and local circulations. Most RCM evaluations rely on the comparison of their results with observations, either from weather station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties, and most of the time assuming their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs when compared with the results of the driving models is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies, for different applications and processes, to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, showing that RCMs may add value to GCM results but that improvements depend basically on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations. Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible manner to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high resolution simulation with the WRF model. Results are from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, an outer one at 27 km and an inner one at 9 km. This simulation is forced by ERA-Interim. The observational data used cover rain gauge precipitation records as well as observational regular grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese gridded precipitation dataset developed at 0.2° × 0.2° from observed rain gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher resolution simulation, regarding both the full PDF and the extremes. This method has a high potential to be applied to other simulation exercises and to evaluate other variables.
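
    A hedged sketch of one simple PDF skill score in this spirit is shown below: the overlap between modelled and observed daily-precipitation histograms, with an optional re-weighting of the wet tail. The binning and the weighting function are illustrative assumptions, not the exact metric proposed by the authors.

```python
# Minimal sketch of a histogram-overlap PDF skill score with optional tail weighting.
import numpy as np

def pdf_skill(model_daily, obs_daily, bins, tail_weight=None):
    fm, _ = np.histogram(model_daily, bins=bins)
    fo, _ = np.histogram(obs_daily, bins=bins)
    fm = fm / fm.sum()                                   # normalized model PDF
    fo = fo / fo.sum()                                   # normalized observed PDF
    w = np.ones(len(bins) - 1) if tail_weight is None else tail_weight
    return np.sum(w * np.minimum(fm, fo)) / np.sum(w * fo)   # 1 means perfect overlap

rng = np.random.default_rng(4)
obs = rng.gamma(0.4, 8.0, size=4000)            # synthetic observed daily precipitation (mm)
mod = rng.gamma(0.5, 6.0, size=4000)            # synthetic downscaled model precipitation (mm)
bins = np.concatenate((np.arange(0.0, 50.0, 1.0), [200.0]))
score_all = pdf_skill(mod, obs, bins)                                           # full PDF
score_tail = pdf_skill(mod, obs, bins, tail_weight=bins[1:] / bins[1:].max())   # weight extremes
```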

  9. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid per implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers are computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations, in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand if the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.

  10. Three-dimensional hydrodynamic Bondi-Hoyle accretion. 2: Homogeneous medium at Mach 3 with gamma = 5/3

    NASA Technical Reports Server (NTRS)

    Ruffert, Maximilian; Arnett, David

    1994-01-01

    We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing spheres of varying sizes (from 10 down to 0.01 accretion radii) move at Mach 3 relative to a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (gamma = 5/3). To accommodate the long-range gravitational forces, the extent of the computational volume is 32^3 accretion radii. We examine the influence of numerical procedure on physical behavior. The hydrodynamics is modeled by the 'piecewise parabolic method.' No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (5-10) grids around the sphere, each finer grid being a factor of 2 smaller in zone dimension than the next coarser grid. The largest dynamic range (ratio of size of the largest grid to size of the finest zone) is 16,384. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid, while at the same time evolving the gas on the coarser grids. Initially (at time t = 0-10), a shock front is set up, a Mach cone develops, and the accretion column is observable. Eventually the flow becomes unstable, destroying axisymmetry. This happens approximately when the mass accretion rate reaches the values (+/- 10%) predicted by the Bondi-Hoyle accretion formula (factor of 2 included). However, our three-dimensional models do not show the highly dynamic flip-flop flow so prominent in two-dimensional calculations performed by other authors. The flow, and thus the accretion rate of all quantities, shows quasi-periodic (P approximately equals 5) cycles between quiescent and active states. The interpolation formula proposed in an accompanying paper is found to follow the collected numerical data to within approximately 30%. The specific angular momentum accreted is of the same order of magnitude as the values previously found for two-dimensional flows.

  11. LLNL Scientists Use NERSC to Advance Global Aerosol Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, D J; Chuang, C; Rotman, D

    2004-10-13

    While "greenhouse gases" have been the focus of climate change research for a number of years, DOE's "Aerosol Initiative" is now examining how aerosols (small particles of approximately micron size) affect the climate on both a global and regional scale. Scientists in the Atmospheric Science Division at Lawrence Livermore National Laboratory (LLNL) are using NERSC's IBM supercomputer and LLNL's IMPACT (atmospheric chemistry) model to perform simulations showing the historic effects of sulfur aerosols at a finer spatial resolution than ever done before. Simulations were carried out for five decades, from the 1950s through the 1990s. The results clearly show the effects of the changing global pattern of sulfur emissions. Whereas in 1950 the United States emitted 41 percent of the world's sulfur aerosols, this figure had dropped to 15 percent by 1990, due to conservation and anti-pollution policies. By contrast, the fraction of total sulfur emissions of European origin has only dropped by a factor of 2 and the Asian emission fraction jumped six fold during the same time, from 7 percent in 1950 to 44 percent in 1990. Under a special allocation of computing time provided by the Office of Science INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, Dan Bergmann, working with a team of LLNL scientists including Cathy Chuang, Philip Cameron-Smith, and Bala Govindasamy, was able to carry out a large number of calculations during the past month, making the aerosol project one of the largest users of NERSC resources. The applications ran on 128 and 256 processors. The objective was to assess the effects of anthropogenic (man-made) sulfate aerosols. The IMPACT model calculates the rate at which SO2 (a gas emitted by industrial activity) is oxidized and forms particles known as sulfate aerosols. These particles have a short lifespan in the atmosphere, often washing out in about a week. This means that their effects on climate tend to be more regional, occurring near the area where the SO2 is emitted. To accurately study these regional effects, Bergmann needed to run the simulations at a finer horizontal resolution, as the coarser resolution (typically 300km by 300km) of other climate models is insufficient for studying changes on a regional scale. Livermore's use of CAM3, the Community Atmospheric Model which is a high-resolution climate model developed at NCAR (with collaboration from DOE), allows a 100km by 100km grid to be applied. NERSC's terascale computing capability provided the needed computational horsepower to run the application at the finer level.

  12. Mapping implicit spectral methods to distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea L.; Vanrosendale, John

    1991-01-01

    Spectral methods were proven invaluable in numerical simulation of PDEs (Partial Differential Equations), but the frequent global communication required raises a fundamental barrier to their use on highly parallel architectures. To explore this issue, a 3-D implicit spectral method was implemented on an Intel hypercube. Utilization of about 50 percent was achieved on a 32 node iPSC/860 hypercube, for a 64 x 64 x 64 Fourier-spectral grid; finer grids yield higher utilizations. Chebyshev-spectral grids are more problematic, since plane-relaxation based multigrid is required. However, by using a semicoarsening multigrid algorithm, and by relaxing all multigrid levels concurrently, relatively high utilizations were also achieved in this harder case.

  13. Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.

  14. Modelling the urban air quality in Hamburg with the new city-scale chemistry transport model CityChem

    NASA Astrophysics Data System (ADS)

    Karl, Matthias; Ramacher, Martin; Aulinger, Armin; Matthias, Volker; Quante, Markus

    2017-04-01

    Air quality modelling plays an important role by providing guidelines for efficient air pollution abatement measures. Currently, most urban dispersion models treat air pollutants as passive tracer substances or use highly simplified chemistry when simulating air pollutant concentrations on the city-scale. The newly developed urban chemistry-transport model CityChem has the capability of modelling the photochemical transformation of multiple pollutants along with atmospheric diffusion to produce pollutant concentration fields for the entire city on a horizontal resolution of 100 m or even finer and a vertical resolution of 24 layers up to 4000 m height. CityChem is based on the Eulerian urban dispersion model EPISODE of the Norwegian Institute for Air Research (NILU). CityChem treats the complex photochemistry in cities using detailed EMEP chemistry on an Eulerian 3-D grid, while using simple photo-stationary equilibrium on a much higher resolution grid (receptor grid), i.e. close to industrial point sources and traffic sources. The CityChem model takes into account that long-range transport contributes to urban pollutant concentrations. This is done by using 3-D boundary concentrations for the city domain derived from chemistry-transport simulations with the regional air quality model CMAQ. For the study of the air quality in Hamburg, CityChem was set up with a main grid of 30×30 grid cells of 1×1 km2 each and a receptor grid of 300×300 grid cells of 100×100 m2. The CityChem model was driven with meteorological data generated by the prognostic meteorology component of the Australian chemistry-transport model TAPM. Bottom-up inventories of emissions from traffic, industry, and households were based on data of the municipality of Hamburg. Shipping emissions for the port of Hamburg were taken from the Clean North Sea Shipping project. Episodes with elevated ozone (O3) were of specific interest for this study, as these are associated with exceedances of the World Health Organization (WHO) guideline concentration limits for O3 and of the regulatory limits for NO2. Model tests were performed with CityChem to study the ozone formation rate with simultaneous variation of emissions of nitrogen oxides (NOx) and volatile organic compounds (VOC). Emissions of VOC in urban areas are not well quantified as they may originate from various sources, including solvent usage, industry, combustion plants and vehicular traffic. The employed chemical mechanism contains large uncertainties with respect to ozone formation. Observed high-O3 episodes were analyzed by comparing modelled pollutant concentrations with concentration data from the Hamburg air quality surveillance network (http://luft.hamburg.de/). The analysis examined possible reasons for modelled O3 being too low in summer, such as missing emissions of VOC from natural sources like green parks and the vertical exchange of O3 towards the surface.

  15. Weather Observation Systems and Efficiency of Fighting Forest Fires

    NASA Astrophysics Data System (ADS)

    Khabarov, N.; Moltchanova, E.; Obersteiner, M.

    2007-12-01

    Weather observation is an essential component of modern forest fire management systems. Satellite and in-situ based weather observation systems might help to reduce forest loss, human casualties and destruction of economic capital. In this paper, we develop and apply a methodology to assess the benefits of various weather observation systems on reductions of burned area due to early fire detection. In particular, we consider a model where the air patrolling schedule is determined by a fire hazard index. The index is computed from gridded daily weather data for an area covering parts of Spain and Portugal. We conduct a number of simulation experiments. First, the resolution of the original data set is artificially reduced. The reduction of the total forest burned area associated with air patrolling based on a finer weather grid indicates the benefit of using higher spatially resolved weather observations. Second, we consider a stochastic model to simulate forest fires and explore the sensitivity of the model with respect to the quality of input data. The analysis of a combination of satellite and ground monitoring reveals potential cost savings due to a "system of systems" effect and a substantial reduction in burned area. Finally, we estimate the marginal improvement schedule for loss of life and economic capital as a function of the improved fire observing system.

  16. Data for Figures and Tables in Journal Article Assessment of the Effects of Horizontal Grid Resolution on Long-Term Air Quality Trends using Coupled WRF-CMAQ Simulations, doi:10.1016/j.atmosenv.2016.02.036

    EPA Pesticide Factsheets

    The dataset represents the data depicted in the Figures and Tables of a Journal Manuscript with the following abstract: The objective of this study is to determine the adequacy of using a relatively coarse horizontal resolution (i.e. 36 km) to simulate long-term trends of pollutant concentrations and radiation variables with the coupled WRF-CMAQ model. WRF-CMAQ simulations over the continental United States are performed over the 2001 to 2010 time period at two different horizontal resolutions of 12 and 36 km. Both simulations used the same emission inventory and model configurations. Model results are compared both in space and time to assess the potential weaknesses and strengths of using coarse resolution in long-term air quality applications. The results show that the 36 km and 12 km simulations are comparable in terms of trends analysis for both pollutant concentrations and radiation variables. The advantage of using the coarser 36 km resolution is a significant reduction of computational cost, time and storage requirements, which are key considerations when performing multiple years of simulations for trend analysis. However, if such simulations are to be used for local air quality analysis, finer horizontal resolution may be beneficial since it can provide information on local gradients. In particular, divergences between the two simulations are noticeable in urban, complex terrain and coastal regions. This dataset is associated with the following publication

  17. In need of combined topography and bathymetry DEM

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Hilde, T.

    2003-04-01

    In many geoscience applications, digital elevation models (DEMs) are now more commonly used at different scales and greater resolution due to the great advancement in computer technology. Increasing the accuracy/resolution of the model and the coverage of the terrain (global model) has been the goal of users as mapping technology has improved and computers get faster and cheaper. The ETOPO5 (5 arc minutes spatial resolution land and seafloor model), initially developed in 1988 by Margo Edwards, then at Washington University, St. Louis, MO, has been the only global terrain model for a long time, and it is now being replaced by three new topographic and bathymetric DEMs, i.e.; the ETOPO2 (2 arc minutes spatial resolution land and seafloor model), the GTOPO30 land model with a spatial resolution of 30 arc seconds (c.a. 1km at equator) and the 'GEBCO 1-MINUTE GLOBAL BATHYMETRIC GRID' ocean floor model with a spatial resolution of 1 arc minute (c.a. 2 km at equator). These DEMs are products of projects through which compilation and reprocessing of existing and/or new datasets were made to meet user's new requirements. These ongoing efforts are valuable and support should be continued to refine and update these DEMs. On the other hand, a different approach to create a global bathymetric (seafloor) database exists. A method to estimate the seafloor topography from satellite altimetry combined with existing ships' conventional sounding data was devised and a beautiful global seafloor database created and made public by W.H. Smith and D.T. Sandwell in 1997. The big advantage of this database is the uniformity of coverage, i.e. there is no large area where depths are missing. It has a spatial resolution of 2 arc minute. Another important effort is found in making regional, not global, seafloor databases with much finer resolutions in many countries. The Japan Hydrographic Department has compiled and released a 500m-grid topography database around Japan, J-EGG500, in 1999. Although the coverage of this database is only a small portion of the Earth, the database has been highly appreciated in the academic community, and accepted in surprise by the general public when the database was displayed in 3D imagery to show its quality. This database could be rather smoothly combined with the finer land DEM of 250m spatial resolution (Japan250m.grd, K. Kisimoto, 2000). One of the most important applications of this combined DEM of topography and bathymetry is tsunami modeling. Understanding of the coastal environment, management and development of the coastal region are other fields in need of these data. There is, however, an important issue to consider when we create a combined DEM of topography and bathymetry in finer resolutions. The problem arises from the discrepancy of the standard datum planes or reference levels used for topographic leveling and bathymetric sounding. Land topography (altitude) is defined by leveling from the single reference point determined by average mean sea level, in other words, land height is measured from the geoid. On the other hand, depth charts are made based on depth measured from locally determined reference sea surface level, and this value of sea surface level is taken from the long term average of the lowest tidal height. So, to create a combined DEM of topography and bathymetry in very fine scale, we need to avoid this inconsistency between height and depth across the coastal region. 
Height and depth should be physically continuous relative to a single reference datum across the coast within such new high-resolution DEMs. (N.B. The coastline is not equal to the 'altitude-zero line' nor the 'depth-zero line'; it is defined locally as the long-term average of the highest tide level.) All of this said, we still need a lot of work on the ocean side. Global coverage with detailed bathymetric mapping is still poor. Seafloor imaging and other geophysical measurements and experiments should be organized and conducted in international and interdisciplinary ways more than ever. We always need greater technological advancement and application of this technology in the marine sciences, and more enthusiastic seagoing researchers as well. Recent seafloor mapping technology and quality, both in bathymetry and imagery, are very promising and even compare favorably with terrain mapping. At the poster session we discuss and present recent achievements and needs in seafloor mapping using several of the most up-to-date global and regional DEMs available to the science community.

  18. CFD Simulation On The Pressure Distribution For An Isolated Single-Story House With Extension: Grid Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.

    2018-04-01

    Damage due to wind-related disasters is increasing due to global climate change. Many studies have been conducted to study the wind effect surrounding low-rise buildings using wind tunnel tests or numerical simulations. The use of numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters and obtaining the optimum grid or mesh. However, before a study can be conducted, a grid sensitivity test must be carried out to determine a suitable cell count for the final mesh, ensuring accurate results with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and is able to produce results of accuracy comparable to the finer grid scheme, as the difference in the CP values was found to be insignificant.

  19. Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression

    NASA Astrophysics Data System (ADS)

    Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang

    2018-02-01

    Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes of sample temperature, morphology and volume. Then, the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 10^6 frames/s is required to image the falling droplet. However, in the high-speed mode, very few pixels, approximately 48-120, will be obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single-image super-resolution reconstruction in microgravity and electrostatic levitation for the first time. Here, using the image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employing Yang's related-dictionary model, high- and low-resolution image patches were combined for dictionary training, and related high- and low-resolution dictionaries were obtained. The online double-sparse dictionary training algorithm was used in the study of the related dictionaries and overcomes the shortcomings of the traditional training algorithm with small image patches. During the image reconstruction stage, a kernel regression algorithm is added, which effectively overcomes the edge blurring of Yang's method.
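
    The sparse-coding step can be sketched as below: a low-resolution patch is coded over a low-resolution dictionary and the same code is applied to the coupled high-resolution dictionary. The random dictionaries, patch sizes and the scikit-learn OMP solver are stand-ins for illustration; the coupled/double-sparse dictionary training described above is not shown.

```python
# Minimal sketch of sparse-representation super-resolution for one patch.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
n_atoms = 256
D_lo = rng.normal(size=(5 * 5, n_atoms))        # low-res patch dictionary (5x5 patches)
D_hi = rng.normal(size=(15 * 15, n_atoms))      # coupled high-res dictionary (15x15 patches)
D_lo /= np.linalg.norm(D_lo, axis=0)            # unit-norm atoms

def super_resolve_patch(lo_patch, n_nonzero=8):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_lo, lo_patch.ravel())              # sparse code alpha over the low-res dictionary
    alpha = omp.coef_
    return (D_hi @ alpha).reshape(15, 15)        # same code applied to the high-res dictionary

hi_patch = super_resolve_patch(rng.normal(size=(5, 5)))
```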

  1. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process, and results in improved computational efficiency of CFD simulations.

  2. How Much Can Remotely-Sensed Natural Resource Inventories Benefit from Finer Spatial Resolutions?

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Xu, Q.; McRoberts, R. E.; Ståhl, G.; Greenberg, J. A.

    2017-12-01

    For remote sensing facilitated natural resource inventories, the effects of spatial resolution in the form of pixel size and the effects of subpixel information on estimates of population parameters were evaluated by comparing results obtained using Landsat 8 and RapidEye auxiliary imagery. The study area was in Burkina Faso, and the variable of interest was the stem volume (m3/ha) convertible to the woodland aboveground biomass. A sample consisting of 160 field plots was selected and measured from the population following a two-stage sampling design. Models were fit using weighted least squares; the population mean, mu, and the variance of the estimator of the population mean, Var(mu.hat), were estimated in two inferential frameworks, model-based and model-assisted, and compared; for each framework, Var(mu.hat) was estimated both analytically and empirically. Empirical variances were estimated with bootstrapping that for resampling takes clustering effects into account. The primary results were twofold. First, for the effects of spatial resolution and subpixel information, four conclusions are relevant: (1) finer spatial resolution imagery indeed contributes to greater precision for estimators of population parameters, but this increase is slight, at a maximum rate of 20%, considering that RapidEye data are 36 times finer resolution than Landsat 8 data; (2) subpixel information on texture is marginally beneficial when it comes to making inference for populations of large areas; (3) cost-effectiveness is more favorable for the free-of-charge Landsat 8 imagery than RapidEye imagery; and (4) for a given plot size, candidate remote sensing auxiliary datasets are more cost-effective when their spatial resolutions are similar to the plot size than with much finer alternatives. Second, for the comparison between estimators, three conclusions are relevant: (1) model-based variance estimates are consistent with each other and about half as large as stabilized model-assisted estimates, suggesting superior effectiveness of model-based inference to model-assisted inference; (2) bootstrapping is an effective alternative to analytical variance estimators; and (3) prediction accuracy expressed by RMSE is useful for screening candidate models to be used for population inferences.
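
    The bootstrap mentioned above resamples whole clusters to respect the two-stage design; a generic sketch with synthetic data is given below, illustrating only the idea of a cluster bootstrap for the variance of an estimated mean, not the study's estimators.

```python
# Minimal sketch of a cluster bootstrap: resample whole clusters (first-stage
# units) with replacement, which preserves within-cluster correlation.
import numpy as np

def cluster_bootstrap_var(values_by_cluster, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n_clusters = len(values_by_cluster)
    means = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.integers(0, n_clusters, size=n_clusters)      # resample clusters
        sample = np.concatenate([values_by_cluster[i] for i in picks])
        means[b] = sample.mean()
    return means.var(ddof=1)

rng = np.random.default_rng(6)
clusters = [rng.normal(loc=rng.normal(80.0, 15.0), scale=10.0, size=rng.integers(5, 12))
            for _ in range(20)]                                    # 20 synthetic clusters of plots
var_mean = cluster_bootstrap_var(clusters)
```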

  3. Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values

    NASA Astrophysics Data System (ADS)

    López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.

    2018-02-01

    The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering an ˜30 % lower computational cost than direct Monte Carlo integration in our tests.
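
    The contrast between a quadrature grid and direct Monte Carlo integration can be sketched for a 1-D harmonic ground state (density proportional to e^{-q^2} in dimensionless coordinates); the observable below is an arbitrary stand-in, and a denser Gauss-Hermite grid removes the bias of the single-point rule, mirroring the point made above about finer grids along selected modes.

```python
# Minimal sketch: quadrature-grid vs Monte Carlo evaluation of a 1-D
# vibrational expectation value over a harmonic ground state.
import numpy as np

def O(q):                       # assumed observable with a mildly anharmonic shape
    return 1.0 + 0.3 * q + 0.1 * q**2 + 0.05 * q**3

# Gauss-Hermite quadrature grids of increasing size (exact here from 3 points).
for npts in (1, 3, 9):
    x, w = np.polynomial.hermite.hermgauss(npts)
    print(npts, np.sum(w * O(x)) / np.sqrt(np.pi))

# Direct Monte Carlo sampling of the same density exp(-q^2)/sqrt(pi).
rng = np.random.default_rng(7)
q = rng.normal(0.0, np.sqrt(0.5), size=100_000)    # variance 1/2 gives density exp(-q^2)
print("MC", O(q).mean())
```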

  4. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, combined with a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and they require some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on grids twice as fine. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.
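
    Verification tests of this kind typically report an observed order of accuracy from errors on successively refined grids. The sketch below (error values are hypothetical, not from the paper) shows the standard Richardson-style estimate p = log(e_coarse/e_fine)/log(r) for a grid refinement ratio r.

```python
import numpy as np

def observed_order(errors, refinement=2.0):
    """Observed order of accuracy p from errors on successively refined grids,
    assuming error ~ C * h^p:  p = log(e_coarse / e_fine) / log(refinement)."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(refinement)

# Hypothetical L1 errors from a grid-convergence study (each grid twice finer).
errors_tvd = [4.0e-3, 1.1e-3, 2.9e-4]   # roughly 2nd-order behaviour
errors_mp5 = [2.0e-4, 6.8e-6, 2.2e-7]   # roughly 5th-order behaviour
print("TVD observed order:", observed_order(errors_tvd))
print("MP5 observed order:", observed_order(errors_mp5))
```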

  5. Using adaptive-mesh refinement in SCFT simulations of surfactant adsorption

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Kumar, Rajeev; Jamroz, Ben; Crockett, Robert; Pletzer, Alex

    2013-03-01

    Adsorption of surfactants at interfaces is relevant to many applications such as detergents, adhesives, emulsions and ferrofluids. Atomistic simulations of interface adsorption are challenging due to the difficulty of modeling the wide range of length scales in these problems: the thin interface region in equilibrium with a large bulk region that serves as a reservoir for the adsorbed species. Self-consistent field theory (SCFT) has been extremely useful for studying the morphologies of dense block copolymer melts. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. However, even SCFT methods can be difficult to apply to systems in which small spatial regions might require finer resolution than most of the simulation grid (e.g., interface adsorption and confinement). We will present results on interface adsorption simulations using PolySwift++, an object-oriented polymer SCFT simulation code, aided by the Tech-X Chompst library, which enables block-structured AMR calculations via PETSc.

  6. High-resolution mapping of motor vehicle carbon dioxide emissions

    NASA Astrophysics Data System (ADS)

    McDonald, Brian C.; McBride, Zoe C.; Martin, Elliot W.; Harley, Robert A.

    2014-05-01

    A fuel-based inventory for vehicle emissions is presented for carbon dioxide (CO2) and mapped at various spatial resolutions (10 km, 4 km, 1 km, and 500 m) using fuel sales and traffic count data. The mapping is done separately for gasoline-powered vehicles and heavy-duty diesel trucks. Emission estimates from this study are compared with the Emissions Database for Global Atmospheric Research (EDGAR) and VULCAN. All three inventories agree at the national level within 5%. EDGAR uses road density as a surrogate to apportion vehicle emissions, which leads to 20-80% overestimates of on-road CO2 emissions in the largest U.S. cities. High-resolution emission maps are presented for Los Angeles, New York City, San Francisco-San Jose, Houston, and Dallas-Fort Worth. Sharp emission gradients that exist near major highways are not apparent when emissions are mapped at 10 km resolution. High CO2 emission fluxes over highways become apparent at grid resolutions of 1 km and finer. Temporal variations in vehicle emissions are characterized using extensive day- and time-specific traffic count data and are described over diurnal, day of week, and seasonal time scales. Clear differences are observed when comparing light- and heavy-duty vehicle traffic patterns and comparing urban and rural areas. Decadal emission trends were analyzed from 2000 to 2007 when traffic volumes were increasing and a more recent period (2007-2010) when traffic volumes declined due to recession. We found large nonuniform changes in on-road CO2 emissions over a period of 5 years, highlighting the importance of timely updates to motor vehicle emission inventories.
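
    The resolution effect described above is easy to see with a toy aggregation exercise (the grid and flux values below are hypothetical, not the study's inventory): summing a fine emission grid containing a single narrow "highway" column into progressively coarser cells dilutes the peak per-area flux, which is why sharp near-road gradients only appear at roughly 1 km and finer.

```python
import numpy as np

def block_aggregate(field, factor):
    """Sum a fine emission grid into coarser cells (factor x factor blocks)."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).sum(axis=(1, 3))

# Hypothetical 500 m grid with one narrow highway of high CO2 flux.
ny = nx = 80                       # 80 x 80 cells of 500 m = 40 km domain
fine = np.full((ny, nx), 1.0)      # background emissions (arbitrary units)
fine[:, 40] = 50.0                 # one 500 m wide "highway" column

for factor, label in [(2, "1 km"), (8, "4 km"), (20, "10 km")]:
    coarse = block_aggregate(fine, factor)
    # total emissions are conserved; the peak per-area flux is diluted
    peak = coarse.max() / factor**2
    print(f"{label:>5} grid: peak cell flux = {peak:.1f} (fine-grid peak = 50.0)")
```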

  7. A New Stellar Atmosphere Grid and Comparisons with HST /STIS CALSPEC Flux Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohlin, Ralph C.; Fleming, Scott W.; Gordon, Karl D.

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli and Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz and Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of Teff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.

  8. A New Stellar Atmosphere Grid and Comparisons with HST/STIS CALSPEC Flux Distributions

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph C.; Mészáros, Szabolcs; Fleming, Scott W.; Gordon, Karl D.; Koekemoer, Anton M.; Kovács, József

    2017-05-01

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli & Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz & Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.

  9. Modeling and assessment of civil aircraft evacuation based on finer-grid

    NASA Astrophysics Data System (ADS)

    Fang, Zhi-Ming; Lv, Wei; Jiang, Li-Xue; Xu, Qing-Feng; Song, Wei-Guo

    2016-04-01

    Computer models are an effective way to study the civil aircraft emergency evacuation process. In this study, the evacuation of an Airbus A380 is simulated using a Finer-Grid Civil Aircraft Evacuation (FGCAE) model. The model considers the effects of the seat area and other factors on the escape process, as well as pedestrians' "hesitation" before leaving exits, and defines an optimized rule of exit choice. Simulations reproduce typical characteristics of aircraft evacuation, such as movement synchronization between adjacent pedestrians and route choice, and indicate that evacuation efficiency is determined by pedestrians' "preference" and "hesitation". Based on the model, an assessment procedure for aircraft evacuation safety is presented. The assessment, and a comparison with an actual evacuation test, demonstrate that the available-exit setting of "one exit from each exit pair" used in the practical demonstration test is not the worst scenario. The worst case is when all exits at one end of the cabin are unavailable; this scenario deserves more attention and could even be adopted in the certification test. The model and method presented in this study could be useful for assessing, validating and improving the evacuation performance of aircraft.
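
    For illustration only, the sketch below implements a generic finer-grid cellular-automaton evacuation toy (the layout, exits, and hesitation delay are invented; this is not the FGCAE model): occupants on a 0.5 m cell grid step toward the nearest exit along a BFS distance field and pause briefly at the exit cell before leaving.

```python
import numpy as np
from collections import deque

FREE, SEAT = 0, 1

def distance_field(layout, exits):
    """4-neighbour BFS distance (in cells) from every free cell to the exits."""
    dist = np.full(layout.shape, np.inf)
    q = deque()
    for cell in exits:
        dist[cell] = 0
        q.append(cell)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < layout.shape[0] and 0 <= cc < layout.shape[1]:
                if layout[rr, cc] != SEAT and dist[rr, cc] > dist[r, c] + 1:
                    dist[rr, cc] = dist[r, c] + 1
                    q.append((rr, cc))
    return dist

def evacuate(layout, exits, occupants, hesitation=2):
    dist = distance_field(layout, exits)
    occupied = set(occupants)
    wait = {}                      # occupants currently "hesitating" at an exit
    t = 0
    while occupied or wait:
        t += 1
        wait = {p: w - 1 for p, w in wait.items() if w > 1}
        for p in sorted(occupied, key=lambda p: dist[p]):
            moves = [(p[0] + dr, p[1] + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            moves = [m for m in moves
                     if 0 <= m[0] < layout.shape[0] and 0 <= m[1] < layout.shape[1]
                     and layout[m] != SEAT and m not in occupied and m not in wait
                     and dist[m] < dist[p]]
            if moves:
                target = min(moves, key=lambda m: dist[m])
                occupied.remove(p)
                if target in exits:
                    wait[target] = hesitation   # pause at the exit before leaving
                else:
                    occupied.add(target)
    return t

# Hypothetical 10 m x 4 m cabin section on a 0.5 m grid with two seat blocks.
layout = np.zeros((8, 20), dtype=int)
layout[1:3, 2:18] = SEAT
layout[5:7, 2:18] = SEAT
exits = [(3, 0), (4, 0)]
occupants = [(r, c) for r in (0, 3, 4, 7) for c in range(2, 18, 2)]
print("evacuation time (steps):", evacuate(layout, exits, occupants))
```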

  10. Satellite microwave detection of contrasting changes in surface inundation across pan-Arctic permafrost zones

    NASA Astrophysics Data System (ADS)

    Watts, J.; Kimball, J. S.; Jones, L. A.; Schroeder, R.; McDonald, K. C.

    2012-12-01

    Surface water inundation in the Arctic is concomitant with soil permafrost and strongly influences land-atmosphere water, energy and carbon (CO2, CH4) exchange, and plant community structure. We examine recent (2003-2010) surface water inundation patterns across the pan-Arctic (≥ 50 deg.N) and within major permafrost zones using satellite passive microwave remote sensing retrievals of fractional open water extent (Fw) derived from Advanced Microwave Scanning Radiometer for EOS (AMSR-E) 18.7 and 23.8 GHz brightness temperatures. The AMSR-E Fw retrievals are insensitive to atmosphere contamination and solar illumination effects, enabling daily Fw monitoring across the Arctic. The Fw retrievals are sensitive to sub-grid scale open water inundation area, including lakes and wetlands, within the relatively coarse (~25-km resolution) satellite footprint. A forward model error sensitivity analysis indicates that total Fw retrieval uncertainty is within ±4.1% (RMSE), and AMSR-E Fw compares favorably (0.71 < R2 < 0.84) with alternative static open water maps derived from finer scale (30-m to 250-m resolution) Landsat, MODIS and SRTM radar-based products. The Fw retrievals also show dynamic seasonal and annual variability in surface inundation that corresponds (0.71 < R < 0.87) with regional wet/dry cycles inferred from basin discharge records, including the Yukon, Mackenzie, Ob, Yenisei, and Lena basins. A regional change analysis of the 8-yr AMSR-E record shows no significant trend in pan-Arctic wide Fw, and instead reveals contrasting inundation changes within permafrost zones. Widespread Fw wetting is observed within continuous (92% of grid cells with significant trend show wetting; p < 0.1) and discontinuous (82%) permafrost zones, while areas with sporadic/isolated permafrost show widespread (71%) Fw drying. These results are consistent with previous studies showing evidence of changes in regional surface hydrology influenced by permafrost degradation under recent climate warming. Changes in Fw may also be linked to shifts in regional precipitation patterns and a lengthening non-frozen season. Regional changes observed in the AMSR-E Fw record complement finer-scale permafrost monitoring efforts, and documented variability in surface inundation extent may help constrain pan-Arctic lake and wetland CO2, CH4 emission estimates. This work was supported under the Jet Propulsion Laboratory, California Institute of Technology under contract to the National Aeronautics and Space Administration, NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) programs.

  11. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.

  12. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
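
    A rough sketch of the idea of tying dissipation to local resolution follows; the scaling exponent, reference coefficient, and metric values are placeholders, not the CAM-SE settings. A scalar coefficient scales with local grid spacing, and the tensor variant assigns one coefficient per principal direction of the local element metric tensor.

```python
import numpy as np

def hyperviscosity_coefficient(dx, dx_ref=1.0, nu_ref=1.0e15, exponent=3.2):
    """Resolution-dependent coefficient, nu ~ nu_ref * (dx / dx_ref)**p.
    Reference value and exponent are illustrative placeholders only."""
    return nu_ref * (np.asarray(dx) / dx_ref) ** exponent

def tensor_coefficients(metric):
    """Anisotropic (tensor) variant: one coefficient per principal direction of
    the local element metric tensor, from its eigenvalues (~ squared lengths)."""
    eigvals, eigvecs = np.linalg.eigh(metric)
    lengths = np.sqrt(eigvals)          # local resolution in each direction
    return hyperviscosity_coefficient(lengths), eigvecs

# Uniform 1-degree region vs a refined 1/8-degree patch (dx in degrees here).
print(hyperviscosity_coefficient(np.array([1.0, 0.125])))

# A distorted element: fine in one direction, coarse in the other.
metric = np.array([[0.25**2, 0.0],
                   [0.0,     1.0**2]])
nus, dirs = tensor_coefficients(metric)
print(nus)
```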

  13. Influence of air quality model resolution on uncertainty associated with health impacts

    NASA Astrophysics Data System (ADS)

    Thompson, T. M.; Selin, N. E.

    2012-06-01

    We use regional air quality modeling to evaluate the impact of model resolution on uncertainty associated with the human health benefits resulting from proposed air quality regulations. Using a regional photochemical model (CAMx), we ran a modeling episode with meteorological inputs representing conditions as they occurred during August through September 2006, and two emissions inventories (a 2006 base case and a 2018 proposed control scenario, both for Houston, Texas) at 36, 12, 4 and 2 km resolution. The base case model performance was evaluated for each resolution against daily maximum 8-h averaged ozone measured at monitoring stations. Results from each resolution were more similar to each other than they were to measured values. Population-weighted ozone concentrations were calculated for each resolution and applied to concentration response functions (with 95% confidence intervals) to estimate the health impacts of modeled ozone reduction from the base case to the control scenario. We found that estimated avoided mortalities were not significantly different between the 2, 4 and 12 km resolution runs, but 36 km resolution may over-predict some potential health impacts. Given the cost/benefit analysis requirements of the Clean Air Act, and the uncertainty associated with human health impacts and therefore with the results reported in this study, we conclude that health impacts calculated from population-weighted ozone concentrations obtained using regional photochemical models at 36 km resolution fall within the range of values obtained using fine (12 km or finer) resolution modeling. However, in some cases, 36 km resolution may not be fine enough to statistically replicate the results achieved using 2 and 4 km resolution. On average, when modeling at 36 km resolution, 7 deaths per ozone month were avoided because of ozone reductions resulting from the proposed emissions reductions (the 95% confidence interval was 2-9). When modeling at the finer 2, 4 or 12 km resolutions, on average 5 deaths were avoided due to the same reductions (the 95% confidence interval was 2-7). Initial results for this specific region show that modeling at a resolution finer than 12 km is unlikely to improve uncertainty in benefits analysis. We suggest that 12 km resolution may be appropriate for uncertainty analyses in areas with similar chemistry, but that resolution requirements should be assessed on a case-by-case basis and revised as confidence intervals for concentration-response functions are updated.
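
    For context, the sketch below shows the general shape of such a calculation with synthetic data; the grids, baseline mortality rate, and concentration-response coefficient are placeholders, not the values or functions used in the study.

```python
import numpy as np

def population_weighted(conc, pop):
    """Population-weighted mean concentration over the model grid."""
    return (conc * pop).sum() / pop.sum()

def avoided_mortality(delta_o3_ppb, pop_total, baseline_rate, beta):
    """Log-linear concentration-response: avoided deaths for an ozone reduction.
    baseline_rate and beta are illustrative placeholders only."""
    return pop_total * baseline_rate * (1.0 - np.exp(-beta * delta_o3_ppb))

rng = np.random.default_rng(3)
pop = rng.integers(0, 5000, size=(50, 50)).astype(float)      # people per cell
o3_base = rng.normal(75.0, 8.0, size=(50, 50))                 # ppb, base case
o3_ctrl = o3_base - rng.normal(4.0, 1.0, size=(50, 50))        # ppb, control case

delta = population_weighted(o3_base, pop) - population_weighted(o3_ctrl, pop)
deaths = avoided_mortality(delta, pop.sum(), baseline_rate=5e-5, beta=0.0005)
print(f"pop-weighted ozone reduction: {delta:.2f} ppb, avoided deaths: {deaths:.1f}")
```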

  14. Microbialite Morphologies and Distributions-Geoacoustic Survey with an AUV of Pavilion Lake, British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Gutsche, J. R.; Trembanis, A. C.

    2010-12-01

    With advances in lake bottom mapping it has been observed that modern microbialites, much like the ancient stromatolites, thrive in freshwater lake environments. Previously collected data show that a diverse community of living stromatolites is present within Pavilion Lake (Laval et al., 2000, Lim et al., 2009). An additional comprehensive data set was collected in June-July 2010. By building on the previous dataset it is possible to compare two high-resolution geoacoustic datasets. Using Autonomous Underwater Vehicles (AUVs) as exploration platforms to conduct surveys of the lake bottom, very high-resolution sonar data has been collected. The data collected in June-July 2010 is composed of 125 km of AUV trackline. This length of trackline allowed for survey coverage of nearly the entire lake bottom. The Gavia AUV used for this survey collected bathymetry data collocated with backscatter information. The data has been processed and gridded to 1 m, with specific high-value areas gridded to a finer 0.5 m. The bathymetric data was compiled to create a base map of the floor of Pavilion Lake. Backscatter data was also collected and processed using the same 1 m grid resolution. After the backscatter data was processed, it was draped over the bathymetry map of Pavilion Lake. The tools offered within the Fledermaus software package allow for the bathymetry data to be analyzed with respect to slope and rugosity. By analyzing these dense phase-measuring bathymetric sonar data of the lake bottom with respect to slope and rugosity, it is possible to map the morphological trends of the stromatolites. Additionally, the ability to compare two datasets allows for erosional changes in the lake bottom to be identified. The bathymetry data allows for the quantitative analysis of bed forms within Pavilion Lake, allowing for a better understanding of microbialite morphologies. The backscatter data is increasingly important to the Pavilion Lake project because of the location and general surroundings of the lake. The lake itself is located in a limestone canyon, which frequently sustains erosional episodes. The backscatter data allows for the differentiation between erosional deposits and microbial mounds. The combination of backscatter and bathymetry allows for a further understanding of bedforms and microbialite growth patterns.

  15. Analysis of the sweeped actuator line method

    DOE PAGES

    Nathan, Jörn; Masson, Christian; Dufresne, Louis; ...

    2015-10-16

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the tip blades, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But with the actuator line come also temporal and spatial constraints, such as the need for a much smaller time step than with the actuator disk. While the latter only has to obey the Courant-Friedrichs-Lewy condition, the former is also restricted by the grid resolution and the rotor tip-speed. Additionally, the spatial resolution has to be finer for the actuator line than for the actuator disk, in order to resolve the tip vortices well. Therefore this work is dedicated to examining a method in between the actuator line and the actuator disk, which is able to model the transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. Therefore a larger time-step is used and the blade forces are swept over a certain area. As a result, the main focus of this article is on the aspect of the blade tip vortex generation in comparison with the standard actuator line and actuator disk.
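
    As a back-of-the-envelope illustration of the two time-step constraints mentioned above (all numbers are hypothetical and the tip-sweep rule is a common rule of thumb, not necessarily the criterion used in the paper), the sketch compares the flow CFL limit with the requirement that the blade tip sweeps no more than about one grid cell per step.

```python
import numpy as np

def actuator_line_dt(dx, u_max, tip_speed, cfl=0.5):
    """Illustrative time-step limits for an actuator-line run: the flow CFL
    constraint plus a tip-sweep constraint of ~one grid cell per step."""
    dt_cfl = cfl * dx / u_max      # Courant-Friedrichs-Lewy constraint
    dt_tip = dx / tip_speed        # tip moves at most one cell per time step
    return dt_cfl, dt_tip, min(dt_cfl, dt_tip)

# Hypothetical numbers: 2 m cells, 12 m/s inflow, 80 m rotor spinning at 8 rpm.
omega = 8.0 * 2.0 * np.pi / 60.0          # rad/s
tip_speed = omega * 40.0                   # m/s at the blade tip
dt_cfl, dt_tip, dt = actuator_line_dt(dx=2.0, u_max=12.0, tip_speed=tip_speed)
print(f"CFL limit: {dt_cfl:.3f} s, tip-sweep limit: {dt_tip:.3f} s, use {dt:.3f} s")
```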

  16. Analysis of the sweeped actuator line method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nathan, Jörn; Masson, Christian; Dufresne, Louis

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the tip blades, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But with the actuator line come also temporal and spatial constraints, such as the need for a much smaller time step than with the actuator disk. While the latter only has to obey the Courant-Friedrichs-Lewy condition, the former is also restricted by the grid resolution and the rotor tip-speed. Additionally, the spatial resolution has to be finer for the actuator line than for the actuator disk, in order to resolve the tip vortices well. Therefore this work is dedicated to examining a method in between the actuator line and the actuator disk, which is able to model the transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. Therefore a larger time-step is used and the blade forces are swept over a certain area. As a result, the main focus of this article is on the aspect of the blade tip vortex generation in comparison with the standard actuator line and actuator disk.

  17. WRF-Fire: coupled weather-wildland fire modeling with the weather research and forecasting model

    Treesearch

    Janice L. Coen; Marques Cameron; John Michalakes; Edward G. Patton; Philip J. Riggan; Kara M. Yedinak

    2012-01-01

    A wildland fire behavior module (WRF-Fire) was integrated into the Weather Research and Forecasting (WRF) public domain numerical weather prediction model. The fire module is a surface fire behavior model that is two-way coupled with the atmospheric model. Near-surface winds from the atmospheric model are interpolated to a finer fire grid and used, with fuel properties...

  18. California's Snow Gun and its implications for mass balance predictions under greenhouse warming

    NASA Astrophysics Data System (ADS)

    Howat, I.; Snyder, M.; Tulaczyk, S.; Sloan, L.

    2003-12-01

    Precipitation has received limited treatment in glacier and snowpack mass balance models, largely due to the poor resolution and confidence of precipitation predictions relative to temperature predictions derived from atmospheric models. Most snow and glacier mass balance models rely on statistical or lapse rate-based downscaling of general or regional circulation models (GCMs and RCMs), essentially decoupling sub-grid scale, orographically-driven evolution of atmospheric heat and moisture. Such models invariably predict large losses in snow and ice volume under greenhouse warming. However, positive trends in the mass balance of glaciers in some warming maritime climates, as well as at high elevations of the Greenland Ice Sheet, suggest that increased precipitation may play an important role in snow- and glacier-climate interactions. Here, we present a half century of April snowpack data from the Sierra Nevada and Cascade mountains of California, USA. This high-density network of snow-course data indicates that a gain in winter snow accumulation at higher elevations has compensated for the loss in snow volume at lower elevations by over 50% and has led to glacier expansion on Mt. Shasta. These trends are concurrent with a region-wide increase in winter temperatures of up to 2 °C. They result from the orographic lifting and saturation of warmer, more humid air leading to increased precipitation at higher elevations. Previous studies have invoked such a "Snow Gun" effect to explain contemporaneous records of Tertiary ocean warming and rapid glacial expansion. A climatological context for California's "snow gun" effect is elucidated by correlation between the elevation distribution of April SWE observations and the phase of the Pacific Decadal Oscillation and the El Nino Southern Oscillation, both controlling the heat and moisture delivered to the U.S. Pacific coast. The existence of a significant "Snow Gun" effect presents two challenges to snow and glacier mass balance modeling. Firstly, the link between amplification of orographic precipitation and the temporal evolution of ocean-climate oscillations indicates that prediction of future mass balance trends requires consideration of the timing and amplitude of such oscillations. Only recently have ocean-atmosphere models begun to realistically produce such temporal variability. Secondly, the steepening snow mass-balance elevation-gradient associated with the "Snow Gun" implies greater spatial variability in balance with warming. In a warming climate, orographic processes at a scale finer than the highest-resolution RCM (>20 km grid) become increasingly important and predictions based on lower elevations become increasingly inaccurate for higher elevations. Therefore, thermodynamic interaction between atmospheric heat, moisture and topography must be included in downscaling techniques. In order to demonstrate the importance of thermodynamic downscaling in mass balance predictions, we nest a high-resolution (100 m grid), coupled Orographic Precipitation and Surface Energy balance Model (OPSEM) into the RegCM2.5 RCM (40 km grid) and compare results. We apply this nesting technique to Mt. Shasta, California, an area of high topography (~4000 m) relative to its RegCM2.5 grid elevation (1289 m). These models compute average April snow volume under present and doubled present atmospheric CO2 concentrations. While the RegCM2.5 regional model predicts an 83% decrease in April SWE, OPSEM predicts a 16% increase. These results indicate that thermodynamic interactions between the atmosphere and topography at sub-RCM grid resolution must be considered in mass balance models.

  19. Development and Application of a Soil Moisture Downscaling Method for Mobility Assessment

    DTIC Science & Technology

    2011-05-01

    Soil...cells). Thus, a method is required to downscale intermediate-resolution patterns to finer resolutions. Fortunately, fine-resolution variations in ...

  20. Importance of Grid Center Arrangement

    NASA Astrophysics Data System (ADS)

    Pasaogullari, O.; Usul, N.

    2012-12-01

    In Digital Elevation Modeling, grid size is accepted to be the most important parameter. Regardless of the point density and/or scale of the source data, it is freely decided by the user. Most of the time, the arrangement of the grid centers is ignored, and most GIS packages even omit the choice of grid center coordinate selection. In our study, the importance of the arrangement of grid centers is investigated. Using the analogy between "Raster Grid DEM" and "Bitmap Image", the importance of the placement of grid centers in DEMs is measured. The study has been conducted on four different grid DEMs obtained from a half ellipsoid. These grid DEMs are obtained in such a way that they are half a grid size apart from each other. The resulting grid DEMs are investigated through similarity measures. Image processing scientists use different measures to investigate the dis/similarity between images and the amount of different information they carry. The grid DEMs are projected to a finer grid in order to co-center them. Similarity measures are then applied to each pair of grid DEMs. These similarity measures are adapted to DEMs with band reduction and real-number operations. One of the measures gives a function graph and the others give measure matrices. Application of the similarity measures to six grid DEM pairs shows interesting results. These four different grid DEMs are created with the same method for the same area; surprisingly, 13 out of 14 measures state that the grid DEMs half a grid size apart are different from each other. The results indicated that although the grid DEMs carry mutual information, they also have additional individual information. In other words, grid DEMs constructed half a grid size apart contain non-redundant information.
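
    The effect is easy to reproduce in a toy setting. The sketch below uses illustrative dimensions and a crude cell-by-cell comparison rather than the study's projection to a finer common grid and its full set of measures: a half ellipsoid is sampled on two grids whose centers are offset by half a cell, and the resulting DEMs are compared.

```python
import numpy as np

def half_ellipsoid(x, y, a=500.0, b=300.0, c=100.0):
    """Elevation of the upper half of an ellipsoid centred at the origin."""
    r2 = 1.0 - (x / a) ** 2 - (y / b) ** 2
    return c * np.sqrt(np.clip(r2, 0.0, None))

def sample_grid(dx, x0=0.0, y0=0.0, extent=600.0):
    """Grid-centre sampling; (x0, y0) shifts where the cell centres fall."""
    coords = np.arange(-extent, extent, dx)
    X, Y = np.meshgrid(coords + x0 + dx / 2, coords + y0 + dx / 2)
    return half_ellipsoid(X, Y)

dx = 25.0
dem_a = sample_grid(dx)                        # reference grid centres
dem_b = sample_grid(dx, x0=dx / 2, y0=dx / 2)  # centres shifted by half a cell

diff = dem_a - dem_b
print("RMS elevation difference between the two samplings:",
      round(float(np.sqrt((diff ** 2).mean())), 3))
print("correlation:",
      round(float(np.corrcoef(dem_a.ravel(), dem_b.ravel())[0, 1]), 5))
```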

  1. A review of potential image fusion methods for remote sensing-based irrigation management: Part II

    USDA-ARS?s Scientific Manuscript database

    Satellite-based sensors provide data at either greater spectral and coarser spatial resolutions, or lower spectral and finer spatial resolutions due to complementary spectral and spatial characteristics of optical sensor systems. In order to overcome this limitation, image fusion has been suggested ...

  2. Improvements and Lingering Challenges with Modeling Low-Level Winds Over Complex Terrain during the Wind Forecast Improvement Project 2

    NASA Astrophysics Data System (ADS)

    Olson, J.; Kenyon, J.; Brown, J. M.; Angevine, W. M.; Marquis, M.; Pichugina, Y. L.; Choukulkar, A.; Bonin, T.; Banta, R. M.; Bianco, L.; Djalalova, I.; McCaffrey, K.; Wilczak, J. M.; Lantz, K. O.; Long, C. N.; Redfern, S.; McCaa, J. R.; Stoelinga, M.; Grimit, E.; Cline, J.; Shaw, W. J.; Lundquist, J. K.; Lundquist, K. A.; Kosovic, B.; Berg, L. K.; Kotamarthi, V. R.; Sharp, J.; Jiménez, P.

    2017-12-01

    The Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR) are NOAA real-time operational hourly updating forecast systems run at 13- and 3-km grid spacing, respectively. Both systems use the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model as the model component of the forecast system. During the second installment of the Wind Forecast Improvement Project (WFIP 2), the RAP/HRRR have been targeted for the improvement of low-level wind forecasts in the complex terrain within the Columbia River Basin (CRB), which requires much finer grid spacing to resolve important terrain peaks in the Cascade Mountains as well as the Columbia River Gorge. Therefore, this project provides a unique opportunity to test and develop the RAP/HRRR physics suite within a very high-resolution nest (Δx = 750 m) over the northwestern US. Special effort is made to incorporate scale-aware aspects into the model physical parameterizations to improve RAP/HRRR wind forecasts for any application at any grid spacing. Many wind profiling and scanning instruments have been deployed in the CRB in support of the WFIP 2 field project, which spanned 01 October 2015 to 31 March 2017. During the project, several forecast error modes were identified, such as: (1) too-shallow cold pools during the cool season, which can mix out more frequently than observed, and (2) a low wind speed bias in thermal trough-induced gap flows during the warm season. Development has been focused on the column-based turbulent mixing scheme to improve upon these biases, but investigating the effects of horizontal (and 3D) mixing has also helped improve some of the common forecast failure modes. This presentation will highlight the testing and development of various model components, showing the improvements over original versions for temperature and wind profiles. Examples of case studies and retrospective periods will be presented to illustrate the improvements. We will demonstrate that the improvements made in WFIP 2 will be extendable to other regions, complex or flat terrain. Ongoing and future challenges in RAP/HRRR physics development will be touched upon.

  3. Mapping and detecting bark beetle-caused tree mortality in the western United States

    NASA Astrophysics Data System (ADS)

    Meddens, Arjan J. H.

    Recently, insect outbreaks across North America have dramatically increased and the forest area affected by bark beetles is similar to that affected by fire. Remote sensing offers the potential to detect insect outbreaks with high accuracy. Chapter 1 involved detection of insect-caused tree mortality at the tree level for a 90 km2 area in north-central Colorado. Classes of interest included green trees, multiple stages of post-insect attack tree mortality including dead trees with red needles ("red-attack") and dead trees without needles ("gray-attack"), and non-forest. The results illustrated that classification of an image with a spatial resolution similar to the area of a tree crown outperformed that from finer and coarser resolution imagery for mapping tree mortality and non-forest classes. I also demonstrated that multispectral imagery could be used to separate multiple post-outbreak attack stages (i.e., red-attack and gray-attack) from other classes in the image. In Chapter 2, I compared and improved methods for detecting bark beetle-caused tree mortality using medium-resolution satellite data. I found that overall classification accuracy was similar between single-date and multi-date classification methods. I developed regression models to predict percent red attack within a 30-m grid cell and these models explained >75% of the variance using three Landsat spectral explanatory variables. Results of the final product showed that approximately 24% of the forest within the Landsat scene was comprised of tree mortality caused by bark beetles. In Chapter 3, I developed a gridded data set with 1-km2 resolution using aerial survey data and improved estimates of tree mortality across the western US and British Columbia. In the US, I also produced an upper estimate by forcing the mortality area to match that from high-resolution imagery in Idaho, Colorado, and New Mexico. Cumulative mortality area from all bark beetles was 5.46 Mha in British Columbia in 2001-2010 and 0.47-5.37 Mha (lower and upper estimate) in the western conterminous US during 1997-2010. Improved methods for detection and mapping of insect outbreak areas will lead to improved assessments of the effects of these forest disturbances on the economy, carbon cycle (and feedback to climate change), fuel loads, hydrology and forest ecology.

  4. Scanning Backscatter Lidar Observations for Characterizing 4-D Cloud and Aerosol Fields to Improve Radiative Transfer Parameterizations

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary K.; Miller, David O.

    2005-01-01

    Clouds have a powerful influence on atmospheric radiative transfer and hence are crucial to understanding and interpreting the exchange of radiation between the Earth's surface, the atmosphere, and space. Because clouds are highly variable in space, time and physical makeup, it is important to be able to observe them in three dimensions (3-D) with sufficient resolution that the data can be used to generate and validate parameterizations of cloud fields at the resolution scale of global climate models (GCMs). Simulations of photon transport in three-dimensionally inhomogeneous cloud fields show that spatial inhomogeneities tend to decrease cloud reflection and absorption and increase direct and diffuse transmission. Therefore, it is an important task to characterize cloud spatial structures in three dimensions on the scale of GCM grid elements. In order to validate cloud parameterizations that represent the ensemble, or mean and variance, of cloud properties within a GCM grid element, measurements of the parameters must be obtained on a much finer scale so that the statistics on those measurements are truly representative. High spatial sampling resolution is required, on the order of 1 km or less. Since the radiation fields respond almost instantaneously to changes in the cloud field, and cloud changes occur on scales of seconds and less when viewed on scales of approximately 100 m, the temporal resolution of cloud properties should be measured and characterized on second time scales. GCM time steps are typically on the order of an hour, but in order to obtain sufficient statistical representations of cloud properties in the parameterizations that are used as model inputs, averaged values of cloud properties should be calculated on time scales on the order of 10-100 s. The Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE) provides exceptional temporal (100 ms) and spatial (30 m) resolution measurements of aerosol and cloud backscatter in three dimensions. HARLIE was used in a ground-based configuration in several recent field campaigns. Principal data products include aerosol backscatter profiles, boundary layer heights, entrainment zone thickness, cloud fraction as a function of altitude, and horizontal wind vector profiles based on correlating the motions of clouds and aerosol structures across portions of the scan. Comparisons will be made between various cloud-detecting instruments to develop a baseline performance metric.

  5. Hybrid finite difference/finite element immersed boundary method.

    PubMed

    E Griffith, Boyce; Luo, Xiaoyu

    2017-12-01

    The immersed boundary method is an approach to fluid-structure interaction that uses a Lagrangian description of the structural deformations, stresses, and forces along with an Eulerian description of the momentum, viscosity, and incompressibility of the fluid-structure system. The original immersed boundary methods described immersed elastic structures using systems of flexible fibers, and even now, most immersed boundary methods still require Lagrangian meshes that are finer than the Eulerian grid. This work introduces a coupling scheme for the immersed boundary method to link the Lagrangian and Eulerian variables that facilitates independent spatial discretizations for the structure and background grid. This approach uses a finite element discretization of the structure while retaining a finite difference scheme for the Eulerian variables. We apply this method to benchmark problems involving elastic, rigid, and actively contracting structures, including an idealized model of the left ventricle of the heart. Our tests include cases in which, for a fixed Eulerian grid spacing, coarser Lagrangian structural meshes yield discretization errors that are as much as several orders of magnitude smaller than errors obtained using finer structural meshes. The Lagrangian-Eulerian coupling approach developed in this work enables the effective use of these coarse structural meshes with the immersed boundary method. This work also contrasts two different weak forms of the equations, one of which is demonstrated to be more effective for the coarse structural discretizations facilitated by our coupling approach. © 2017 The Authors International  Journal  for  Numerical  Methods  in  Biomedical  Engineering Published by John Wiley & Sons Ltd.

  6. Horizontal Residual Mean Circulation: Evaluation of Spatial Correlations in Coarse Resolution Ocean Models

    NASA Astrophysics Data System (ADS)

    Li, Y.; McDougall, T. J.

    2016-02-01

    Coarse resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Some researchers have shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated using model data directly. In this study, data from a high resolution ocean model have been used to estimate the accuracy of this HRM approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine grid scale. The transport calculated on the coarse grid is then compared to the transport on the original high resolution grid scale accumulated over a corresponding number of grid boxes. The preliminary results have shown that the estimates on coarse resolution grids roughly match the corresponding transports on high resolution grids.
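
    The box-car coarse-graining step lends itself to a short illustration. In the sketch below (synthetic fields rather than ocean model output), the sub-grid or "eddy" part of the heat transport is the difference between the box-car average of v*T and the product of the box-car averages, i.e. the part of the flux a coarse grid cannot see.

```python
import numpy as np

def boxcar(field, factor):
    """Box-car average a fine-grid field onto a coarser grid."""
    n = field.shape[0]
    return field.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(4)
n, factor = 256, 16     # e.g. a fine grid averaged into 16x16-cell coarse boxes

# Synthetic fine-grid velocity v and temperature T sharing an eddy component,
# so sub-grid correlations carry a real transport.
eddy = rng.normal(size=(n, n))
v = 0.05 + 0.3 * eddy + 0.1 * rng.normal(size=(n, n))
T = 15.0 + 2.0 * eddy + 0.5 * rng.normal(size=(n, n))

flux_true = boxcar(v * T, factor)                      # mean of the product
flux_coarse = boxcar(v, factor) * boxcar(T, factor)    # product of the means
flux_eddy = flux_true - flux_coarse                    # what coarse models miss

print("mean resolved-only flux:", flux_coarse.mean())
print("mean eddy (sub-grid) contribution:", flux_eddy.mean())
```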

  7. Recent variations in seasonality of temperature and precipitation in Canada, 1976-95

    NASA Astrophysics Data System (ADS)

    Whitfield, Paul H.; Bodtker, Karin; Cannon, Alex J.

    2002-11-01

    A previously reported analysis of rehabilitated monthly temperature and precipitation time series for several hundred stations across Canada showed generally spatially coherent patterns of variation between two decades (1976-85 and 1986-95). The present work expands that analysis to finer time scales and a greater number of stations. We demonstrate how the finer temporal resolution, at 5 day or 11 day intervals, increases the separation between clusters of recent variations in seasonal patterns of temperature and precipitation. We also expand the analysis by increasing the number of stations from only rehabilitated monthly data sets to rehabilitated daily sets, then to approximately 1500 daily observation stations. This increases the spatial density of data and allows a finer spatial resolution of patterns between the two decades. We also examine the success of clustering partial records, i.e. sites where the data record is incomplete. The intent of this study was to be consistent with previous work and explore how greater temporal and spatial detail in the climate data affects the resolution of patterns of recent climate variations. The variations we report for temperature and precipitation are taking place at different temporal and spatial scales. Further, the spatial patterns are much broader than local climate regions and ecozones, indicating that the differences observed may be the result of variations in atmospheric circulation.

  8. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which gives accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  9. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  10. The Kain-Fritsch Scheme: Science Updates & Revisiting Gray-Scale Issues from the NWP & Regional Climatae Perspectives

    EPA Science Inventory

    It’s just a matter of time before we see global climate models increasing their spatial resolution to that now typical of regional models. This encroachment brings in an urgent need for making regional NWP and climate models applicable at certain finer resolutions. One of the hin...

  11. Cloud-Free Satellite Image Mosaics with Regression Trees and Histogram Matching.

    Treesearch

    E.H. Helmer; B. Ruefenacht

    2005-01-01

    Cloud-free optical satellite imagery simplifies remote sensing, but land-cover phenology limits existing solutions to persistent cloudiness to compositing temporally resolute, spatially coarser imagery. Here, a new strategy for developing cloud-free imagery at finer resolution permits simple automatic change detection. The strategy uses regression trees to predict...

  12. Unweighted least squares phase unwrapping by means of multigrid techniques

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1995-11-01

    We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
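
    The smoother at the heart of such a scheme is easy to sketch. The code below is a simplified single-grid illustration with a synthetic wrapped ramp, not the authors' multigrid implementation: it builds the Poisson right-hand side from wrapped phase differences and applies Gauss-Seidel relaxation; a multigrid method would apply the same relaxation on a hierarchy of coarser grids and transfer corrections back to the finer ones.

```python
import numpy as np

def wrap(x):
    """Wrap values into (-pi, pi]."""
    return np.angle(np.exp(1j * x))

def unwrap_gauss_seidel(psi, n_iter=2000):
    """Unweighted least-squares phase unwrapping by Gauss-Seidel relaxation of
    the discrete Poisson equation with Neumann boundary conditions."""
    M, N = psi.shape
    # Wrapped phase differences, zero-padded so the boundary fluxes vanish.
    dx = np.zeros((M, N + 1)); dx[:, 1:N] = wrap(np.diff(psi, axis=1))
    dy = np.zeros((M + 1, N)); dy[1:M, :] = wrap(np.diff(psi, axis=0))
    rho = (dx[:, 1:] - dx[:, :-1]) + (dy[1:, :] - dy[:-1, :])

    phi = np.zeros_like(psi)
    for _ in range(n_iter):
        for i in range(M):
            for j in range(N):
                nbrs, s = 0, 0.0
                if i > 0:     nbrs += 1; s += phi[i - 1, j]
                if i < M - 1: nbrs += 1; s += phi[i + 1, j]
                if j > 0:     nbrs += 1; s += phi[i, j - 1]
                if j < N - 1: nbrs += 1; s += phi[i, j + 1]
                phi[i, j] = (s - rho[i, j]) / nbrs
    return phi

# Synthetic test: a smooth ramp wrapped into (-pi, pi], then unwrapped.
y, x = np.mgrid[0:32, 0:32]
true_phase = 0.4 * x + 0.2 * y
phi = unwrap_gauss_seidel(wrap(true_phase))
err = (phi - true_phase) - (phi - true_phase).mean()   # defined up to a constant
print("max deviation from the true phase:", float(np.abs(err).max()))
```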

  13. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  14. Solar-induced chlorophyll fluorescence is strongly correlated with terrestrial photosynthesis for a wide variety of biomes: First global analysis based on OCO-2 and flux tower observations.

    PubMed

    Li, Xing; Xiao, Jingfeng; He, Binbin; Arain, M Altaf; Beringer, Jason; Desai, Ankur R; Emmel, Carmen; Hollinger, David Y; Krasnova, Alisa; Mammarella, Ivan; Noe, Steffen M; Serrano Ortiz, Penélope; Rey-Sanchez, Camilo; Rocha, Adrian V; Varlagin, Andrej

    2018-05-07

    Solar-induced chlorophyll fluorescence (SIF) has been increasingly used as a proxy for terrestrial gross primary productivity (GPP). Previous work mainly evaluated the relationship between satellite-observed SIF and gridded GPP products, both based on coarse spatial resolutions. Finer-resolution SIF (1.3 km × 2.25 km) measured from the Orbiting Carbon Observatory-2 (OCO-2) provides the first opportunity to examine the SIF-GPP relationship at the ecosystem scale using flux tower GPP data. However, it remains unclear how strong the relationship is for each biome and whether a robust, universal relationship exists across a variety of biomes. Here we conducted the first global analysis of the relationship between OCO-2 SIF and tower GPP for a total of 64 flux sites across the globe encompassing eight major biomes. OCO-2 SIF showed strong correlations with tower GPP at both mid-day and daily timescales, with the strongest relationship observed for daily SIF at 757 nm (R^2 = 0.72, p < 0.0001). Strong linear relationships between SIF and GPP were consistently found for all biomes (R^2 = 0.57-0.79, p < 0.0001) except for evergreen broadleaf forests (R^2 = 0.16, p < 0.05) at the daily timescale. A higher slope was found for C4 grasslands and croplands than for C3 ecosystems. The generally consistent slope of the relationship among biomes suggests a nearly universal rather than biome-specific SIF-GPP relationship, and this finding is an important distinction and simplification compared to previous results. OCO-2 SIF generally had a better performance for predicting GPP than satellite-derived vegetation indices and a light use efficiency model. The universal SIF-GPP relationship can potentially lead to more accurate GPP estimates regionally or globally. Our findings revealed the remarkable ability of finer-resolution SIF observations from OCO-2 and other new or future missions (e.g., TROPOMI, FLEX) for estimating terrestrial photosynthesis across a wide variety of biomes and identified their potential and limitations for ecosystem functioning and carbon cycle studies. This article is protected by copyright. All rights reserved.
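
    A sketch of the per-biome slope comparison with synthetic data follows (site groupings, units, sample sizes, and slope values are illustrative only): ordinary least squares gives a slope and R^2 for GPP versus SIF in each biome group, with the C4 group deliberately given a steeper underlying slope.

```python
import numpy as np

def fit_slope(sif, gpp):
    """Ordinary least-squares slope and R^2 for GPP = a + b * SIF."""
    A = np.column_stack([np.ones_like(sif), sif])
    (a, b), *_ = np.linalg.lstsq(A, gpp, rcond=None)
    resid = gpp - (a + b * sif)
    r2 = 1.0 - resid.var() / gpp.var()
    return b, r2

# Synthetic per-biome samples (the real analysis pairs OCO-2 SIF soundings
# with flux-tower GPP); the C4 group gets a steeper underlying slope.
rng = np.random.default_rng(5)
biomes = {"C3 forest": 15.0, "C3 grassland": 16.0, "C4 cropland": 22.0}
for name, true_slope in biomes.items():
    sif = rng.uniform(0.1, 1.2, 300)                  # W m-2 sr-1 um-1
    gpp = true_slope * sif + rng.normal(0, 1.5, 300)  # g C m-2 d-1
    slope, r2 = fit_slope(sif, gpp)
    print(f"{name:12s}  slope = {slope:5.1f}  R^2 = {r2:.2f}")
```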

  15. Estimating Top-of-Atmosphere Thermal Infrared Radiance Using MERRA-2 Atmospheric Data

    NASA Astrophysics Data System (ADS)

    Kleynhans, Tania

    Space-borne thermal infrared sensors have been extensively used for environmental research as well as cross-calibration of other thermal sensing systems. Thermal infrared data from satellites such as Landsat and Terra/MODIS have limited temporal resolution (with a repeat cycle of 1 to 2 days for Terra/MODIS, and 16 days for Landsat). Thermal instruments with finer temporal resolution on geostationary satellites have limited utility for cross-calibration due to their large view angles. Reanalysis atmospheric data is available on a global spatial grid at three-hour intervals, making it a potential alternative to existing satellite image data. This research explores using the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product to predict top-of-atmosphere (TOA) thermal infrared radiance globally at time scales finer than available satellite data. The MERRA-2 data product provides global atmospheric data every three hours from 1980 to the present. Due to the high temporal resolution of the MERRA-2 data product, opportunities for novel research and applications are presented. While MERRA-2 has been used in renewable energy and hydrological studies, this work seeks to leverage the model to predict TOA thermal radiance. Two approaches have been followed, namely a physics-based approach and a supervised learning approach, using Terra/MODIS band 31 thermal infrared data as reference. The first physics-based model uses forward modeling to predict TOA thermal radiance. The second model infers the presence of clouds from the MERRA-2 atmospheric data before applying an atmospheric radiative transfer model. The last physics-based model parameterized the previous model to minimize computation time. The second approach applied four different supervised learning algorithms to the atmospheric data. The algorithms included a linear least squares regression model, a non-linear support vector regression (SVR) model, a multi-layer perceptron (MLP), and a convolutional neural network (CNN). This research found that the multi-layer perceptron model produced the lowest error rates overall, with an RMSE of 1.22 W/(m2 sr μm) when compared to actual Terra/MODIS band 31 image data. This research further aimed to characterize the errors associated with each method so that any potential user will have the best information available should they wish to apply these methods towards their own application.
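
    As a rough illustration of the supervised learning approach only (synthetic predictors and a toy radiance target, not MERRA-2 or MODIS data; the architecture and accuracy are not meant to match the study), the sketch below trains a small scikit-learn MLP regressor on column-style features and reports a test RMSE.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for reanalysis column predictors and a TOA band radiance.
rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.normal(288, 12, n),    # skin temperature (K)
    rng.uniform(0, 60, n),     # total column water vapour (kg m-2)
    rng.uniform(0, 1, n),      # cloud fraction
])
# Toy radiance: warmer, drier columns emit more; clouds reduce the signal.
y = 0.03 * X[:, 0] - 0.02 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"test RMSE: {rmse:.3f} (same units as the toy radiance)")
```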

  16. Demand Response Potential for California SubLAPs and Local Capacity Planning Areas: An Addendum to the 2025 California Demand Response Potential Study – Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alstone, Peter; Potter, Jennifer; Piette, Mary Ann

    The 2025 California Demand Response Potential Study Phase 2 Report was released on March 1, 2017, and described a range of pathways for Demand Response (DR) to support a clean, stable, and cost-effective electric grid for California. One of the Report’s key findings was that while there appears to be very low future value for untargeted DR Shed aimed at system-wide peak load conditions, there could be significant value for locally focused Shed resources. Although the dynamics of renewable capacity expansion have reduced the pressure to build new thermal generation in general, there are still transmission-constrained areas of the state where load growth needs to be managed with the addition of new local capacity, which could include DERs and/or DR. This Addendum to the Phase 2 Report presents a breakdown of the expected future “Local Shed” DR potential at a finer geographic resolution than what is available in the original report, with results summarized by SubLAP and Local Capacity Area (LCA).

  17. A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.

    2000-01-01

    The development of and results obtained with a variable-resolution stretched-grid GCM for the regional climate simulation mode are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global-scale interactions. It is an alternative to the widely used nested grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns, and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides an efficient down-scaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation extended to one year. The stretched-grid GCM, developed and tested in a simulation mode, is a viable tool for regional and subregional climate studies and applications.

  18. Unstructured grid research and use at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Potapczuk, Mark G.

    1993-01-01

    Computational fluid dynamics applications of grid research at LRC include inlets, nozzles, and ducts; turbomachinery; propellers - ducted and unducted; and aircraft icing. Some issues related to internal flow grid generation are resolution requirements on several boundaries, shock resolution vs. grid periodicity, grid spacing at blade/shroud gap, grid generation in turbine blade passages, and grid generation for inlet/nozzle geometries. Aircraft icing grid generation issues include (1) small structures relative to airfoil chord must be resolved; (2) excessive number of grid points in far-field using structured grid; and (3) grid must be recreated as ice shape grows.

  19. SHORT RANGE ENSEMBLE Products

    Science.gov Websites

    Product listing (fragmentary): short-range ensemble guidance from the NEMS Non-hydrostatic Multiscale Model on the B grid, provided on AWIPS grid 212 (Regional - CONUS double resolution, Lambert conformal, 40 km) and AWIPS grid 132 (double resolution, Lambert conformal, 16 km).

  20. Two decades [1992-2012] of surface wind analyses based on satellite scatterometer observations

    NASA Astrophysics Data System (ADS)

    Desbiolles, Fabien; Bentamy, Abderrahim; Blanke, Bruno; Roy, Claude; Mestas-Nuñez, Alberto M.; Grodsky, Semyon A.; Herbette, Steven; Cambon, Gildas; Maes, Christophe

    2017-04-01

    Surface winds (equivalent neutral wind velocities at 10 m) from scatterometer missions since 1992 have been used to build up a 20-year climate series. Optimal interpolation and kriging methods have been applied to continuously provide surface wind speed and direction estimates over the global ocean on a regular grid in space and time. The use of other data sources such as radiometer data (SSM/I) and atmospheric wind reanalyses (ERA-Interim) has allowed building a blended product available at 1/4° spatial resolution and every 6 h from 1992 to 2012. Sampling issues throughout the different missions (ERS-1, ERS-2, QuikSCAT, and ASCAT) and their possible impact on the homogeneity of the gridded product are discussed. In addition, we carefully assess the quality of the blended product in the absence of scatterometer data (1992 to 1999). Data selection experiments show that the description of the surface wind is significantly improved by including the scatterometer winds. The blended winds compare well with buoy winds (1992-2012) and they resolve finer spatial scales than atmospheric reanalyses, which makes them suitable for studying air-sea interactions at the mesoscale. The seasonal cycle and interannual variability of the product compare well with other long-term wind analyses. The product is used to calculate 20-year trends in wind speed, as well as in zonal and meridional wind components. These trends show an important asymmetry between the southern and northern hemispheres, which may be an important issue for climate studies.

  1. Modeling and simulation of storm surge on Staten Island to understand inundation mitigation strategies

    USGS Publications Warehouse

    Kress, Michael E.; Benimoff, Alan I.; Fritz, William J.; Thatcher, Cindy A.; Blanton, Brian O.; Dzedzits, Eugene

    2016-01-01

    Hurricane Sandy made landfall on October 29, 2012, near Brigantine, New Jersey, and had a transformative impact on Staten Island and the New York metropolitan area. Of the 43 New York City fatalities, 23 occurred on Staten Island. The borough, with a population of approximately 500,000, experienced some of the most devastating impacts of the storm. Since Hurricane Sandy, protective dunes have been constructed on the southeast shore of Staten Island. ADCIRC+SWAN model simulations run on The City University of New York's Cray XE6M, housed at the College of Staten Island, using updated topographic data show that the coast of Staten Island is still susceptible to tidal surges similar to those generated by Hurricane Sandy. Sandy hindcast simulations of storm surges focusing on Staten Island are in good agreement with observed storm tide measurements. Model results calculated from fine-scaled and coarse-scaled computational grids demonstrate that finer grids better resolve small differences in the topography of critical hydraulic control structures, which affect storm surge inundation levels. The storm surge simulations, based on post-storm topography obtained from high-resolution lidar, provide much-needed information to understand Staten Island's changing vulnerability to storm surge inundation. The results of fine-scale storm surge simulations can be used to inform efforts to improve resiliency to future storms. For example, the protective barriers contain planned gaps in the dunes to provide beach access, which may inadvertently increase the vulnerability of the area.

  2. Importance of Winds and Soil Moistures to the US Summertime Drought of 1988: A GCM Simulation Study

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    The climate version of NASA's GEOS 2 GCM did not simulate a realistic 1988 summertime drought in the central United States (Mocko et al., 1999). Despite several new upgrades to the model's parameterizations, as well as finer grid spacing from 4x5 degrees to 2x2.5 degrees, no significant improvements were noted in the model's simulation of the U.S. drought.

  3. A neural-network approach to robotic control

    NASA Technical Reports Server (NTRS)

    Graham, D. P. W.; Deleuterio, G. M. T.

    1993-01-01

    An artificial neural-network paradigm for the control of robotic systems is presented. The approach is based on the Cerebellar Model Articulation Controller created by James Albus and incorporates several extensions. First, recognizing the essential structure of multibody equations of motion, two parallel modules are used that directly reflect the dynamical characteristics of multibody systems. Second, the architecture of the proposed network is imbued with a self-organizational capability which improves efficiency and accuracy. Also, the networks can be arranged in hierarchical fashion with each subsequent network providing finer and finer resolution.

  4. Running GCM physics and dynamics on different grids: Algorithm and tests

    NASA Astrophysics Data System (ADS)

    Molod, A.

    2006-12-01

    The major drawback in the use of sigma coordinates in atmospheric GCMs, namely the error in the pressure gradient term near sloping terrain, leaves the use of eta coordinates as an important alternative. A central disadvantage of an eta coordinate, the inability to retain fine resolution in the vertical as the surface rises above sea level, is addressed here. An `alternate grid' technique is presented which allows the tendencies of state variables due to the physical parameterizations to be computed on a vertical grid (the `physics grid') which retains fine resolution near the surface, while the remaining terms in the equations of motion are computed using an eta coordinate (the `dynamics grid') with coarser vertical resolution. As a simple test of the technique, a set of perpetual equinox experiments using a simplified lower boundary condition with no land and no topography was performed. The results show that for both low and high resolution alternate grid experiments, much of the benefit of increased vertical resolution for the near surface meridional wind (and mass streamfield) can be realized by enhancing the vertical resolution of the `physics grid' in the manner described here. In addition, approximately half of the increase in zonal jet strength seen with increased vertical resolution can be realized using the `alternate grid' technique. A pair of full GCM experiments with realistic lower boundary conditions and topography was also performed. It is concluded that the use of the `alternate grid' approach offers a promising way forward to alleviate a central problem associated with the use of the eta coordinate in atmospheric GCMs.

  5. On the downscaling of actual evapotranspiration maps based on combination of MODIS and landsat-based actual evapotranspiration estimates

    USGS Publications Warehouse

    Singh, Ramesh K.; Senay, Gabriel B.; Velpuri, Naga Manohar; Bohms, Stefanie; Verdin, James P.

    2014-01-01

     Downscaling is one of the important ways of utilizing the combined benefits of the high temporal resolution of Moderate Resolution Imaging Spectroradiometer (MODIS) images and fine spatial resolution of Landsat images. We have evaluated the output regression with intercept method and developed the Linear with Zero Intercept (LinZI) method for downscaling MODIS-based monthly actual evapotranspiration (AET) maps to the Landsat-scale monthly AET maps for the Colorado River Basin for 2010. We used the 8-day MODIS land surface temperature product (MOD11A2) and 328 cloud-free Landsat images for computing AET maps and downscaling. The regression with intercept method does have limitations in downscaling if the slope and intercept are computed over a large area. A good agreement was obtained between downscaled monthly AET using the LinZI method and the eddy covariance measurements from seven flux sites within the Colorado River Basin. The mean bias ranged from −16 mm (underestimation) to 22 mm (overestimation) per month, and the coefficient of determination varied from 0.52 to 0.88. Some discrepancies between measured and downscaled monthly AET at two flux sites were found to be due to the prevailing flux footprint. A reasonable comparison was also obtained between downscaled monthly AET using LinZI method and the gridded FLUXNET dataset. The downscaled monthly AET nicely captured the temporal variation in sampled land cover classes. The proposed LinZI method can be used at finer temporal resolution (such as 8 days) with further evaluation. The proposed downscaling method will be very useful in advancing the application of remotely sensed images in water resources planning and management.
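
    The sketch below illustrates one plausible reading of a zero-intercept downscaling step of the kind described above: a no-intercept linear fit between coarse- and fine-scale AET, whose slope is then applied to the coarse map. The data, window handling, and slope application are illustrative assumptions, not the authors' exact procedure.

      # Hedged sketch of a zero-intercept ("LinZI"-style) fit: Landsat-scale AET ~ b * MODIS-scale
      # AET with no intercept, with the slope then applied to redistribute the coarse values.
      # The sample data and the pixel matching are illustrative assumptions only.
      import numpy as np

      def zero_intercept_slope(x, y):
          """Least-squares slope b for the no-intercept model y ~ b * x."""
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          return float(np.sum(x * y) / np.sum(x * x))

      rng = np.random.default_rng(1)
      modis_aet = rng.uniform(20.0, 120.0, size=200)               # mm/month, coarse pixels
      landsat_aet = 0.9 * modis_aet + rng.normal(0.0, 5.0, 200)    # co-located fine-scale sample

      b = zero_intercept_slope(modis_aet, landsat_aet)
      downscaled = b * modis_aet                                   # slope applied to the coarse map
      print(f"zero-intercept slope: {b:.3f}")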

  6. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization packages ParaView and Vapor, which allow us to read and handle that much data. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  7. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the IC/BC option. Simulation generally benefits from finer resolutions up to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm event forecasting and analyses for design, operations and risk assessment of large water infrastructures.

  8. Establishing an Operational Data System for Surface Currents Derived from Satellite Altimeters and Scatterometers; Pilot Study for the Tropical Pacific

    NASA Astrophysics Data System (ADS)

    Lagerloef, G. S.; Cheney, R.; Mitchum, G. T.

    2001-12-01

    We are initiating a pilot processing system and data center to provide operational ocean surface velocity fields from satellite altimeter and vector wind data. The team includes the above authors plus M. Bourassa (FSU), V.Kousky (NOAA/NCEP), J.Polovina (NOAA/NMFS/Hawaii CoastWatch), R.Legeckis (NOAA/NESDIS), G. Jacobs (NRL), F. Bonjean (ESR), E.Johnson (ESR) and J.Gunn (ESR). Methods to derive surface currents are the outcome of several years of NASA sponsored research and the pilot project will transition that capability to operational oceanographic applications. The regional focus will be the tropical Pacific. Data applications include large scale climate diagnostics and prediction, fisheries management and recruitment, monitoring debris drift, larvae drift, oil spills, fronts and eddies. Additional uses for search and rescue, naval and maritime operations will be investigated. The pilot study will produce velocity maps to be updated on a weekly basis initially, with a goal for eventual 2-day maximum delay from time of satellite measurement. Grid resolution will be 100 km for the basin scale, and finer resolution in the vicinity of the Pacific Islands. Various illustrations of the velocity maps and their applications will be presented. The project's goal is to leave in place an automated system running at NOAA/NESDIS, with an established user clientele and open Internet data access.

  9. Influence of reanalysis datasets on dynamically downscaling the recent past

    NASA Astrophysics Data System (ADS)

    Moalafhi, Ditiro B.; Evans, Jason P.; Sharma, Ashish

    2017-08-01

    Multiple reanalysis datasets currently exist that can provide boundary conditions for dynamic downscaling and simulating local hydro-climatic processes at finer spatial and temporal resolutions. Previous work has suggested that there are two reanalysis alternatives that provide the best lateral boundary conditions for downscaling over southern Africa. This study dynamically downscales these reanalyses (ERA-I and MERRA) over southern Africa to a high resolution (10 km) grid using the WRF model. Simulations cover the period 1981-2010. Multiple observation datasets were used for both surface temperature and precipitation to account for observational uncertainty when assessing results. Generally, temperature is simulated quite well, except over the Namibian coastal plain where the simulations show anomalously warm temperatures related to the failure to propagate the influence of the cold Benguela Current inland. Precipitation tends to be overestimated in high altitude areas, and most of southern Mozambique. This could be attributed to challenges in handling complex topography and capturing large-scale circulation patterns. While MERRA-driven WRF exhibits slightly less bias in temperature, especially for La Nina years, ERA-I-driven simulations are on average superior in terms of RMSE. When considering multiple variables and metrics, ERA-I is found to produce the best simulation of the climate over the domain. The influence of the regional model appears to be large enough to overcome the small difference in relative errors present in the lateral boundary conditions derived from these two reanalyses.

  10. Scaling range sizes to threats for robust predictions of risks to biodiversity.

    PubMed

    Keith, David A; Akçakaya, H Resit; Murray, Nicholas J

    2018-04-01

    Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81<|r|<0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are <1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  11. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    NASA Astrophysics Data System (ADS)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  12. Improvement of satellite-based gross primary production through incorporation of high resolution input data over east asia

    NASA Astrophysics Data System (ADS)

    Park, Haemi; Im, Jungho; Kim, Miae

    2016-04-01

    Photosynthesis of plants is the main mechanism of carbon absorption from the atmosphere into the terrestrial ecosystem, and it contributes to removing greenhouse gases such as carbon dioxide. Annually, about 120 Gt of carbon is estimated to be assimilated through the photosynthetic activity of plants as gross primary production (GPP) over the global land area. In terms of climate change, GPP modelling is essential to understand the carbon cycle and the balance of the carbon budget over various ecosystems. One GPP modelling approach uses light use efficiency (LUE), in which each vegetation type has a specific efficiency for converting absorbed solar radiation, modulated by temperature and humidity. Satellite data can be used to measure various meteorological and biophysical factors over vast areas, which can be used to quantify GPP. The NASA Earth Observing System (EOS) program provides a Moderate Resolution Imaging Spectroradiometer (MODIS)-derived global GPP product, namely MOD17A2H, on a daily basis. However, significant underestimation of MOD17A2H has been reported in Eastern Asia due to its dense forest distribution and humid conditions during the monsoon rainy season in summer. The objective of this study was to reduce the underestimation of MODIS GPP (MOD17A2H) by incorporating meteorological data (temperature, relative humidity, and solar radiation) of higher spatial resolution than the data used in MOD17A2H. Landsat-based land cover maps from the Finer Resolution Observation and Monitoring - Global Land Cover (FROM-GLC) product at 30 m resolution were used for the selection of light use efficiency (LUE). In this study, GPP is computed as GPP = APAR × LUE, where APAR = IPAR × fPAR and LUE = ε_max × T_scalar × VPD_scalar, with T the temperature and VPD the vapour pressure deficit. Meteorological data from the Japanese 55-year Reanalysis (JRA-55, 0.56° grid, 3-hourly) were used for the calculation of GPP in East Asia, including the eastern part of China, the Korean Peninsula, and Japan. Results were validated using flux tower-observed GPP data from AsiaFlux. Results confirmed an underestimation of about 40% in the monthly average of MOD17A2H; the underestimation was reduced from 42.3% and 60.4% to 8.3% and -26.2% at two flux tower sites (the API site in Japan and the GCK site in Korea), respectively. These improvements suggest that correction of LUE by finer land cover classification and/or higher-frequency solar radiation data is effective where MOD17A2H does not work well. Further research will include evaluation of the proposed approach over areas in different climate conditions and environments.
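
    A minimal sketch of the stated light-use-efficiency formulation follows; the temperature and VPD ramp limits and the maximum efficiency are placeholder values of the kind a MOD17-style biome lookup table would supply, not the values used in the study.

      # Sketch of the light-use-efficiency calculation stated above: GPP = APAR * LUE with
      # APAR = IPAR * fPAR and LUE = eps_max * T_scalar * VPD_scalar. The ramp limits and
      # eps_max below are illustrative; MOD17-style lookup tables assign them per land-cover class.
      import numpy as np

      def ramp(x, lo, hi, increasing=True):
          """Linear scalar clipped to [0, 1] between lo and hi."""
          s = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
          return s if increasing else 1.0 - s

      def gpp_lue(ipar, fpar, tmin_c, vpd_pa, eps_max=0.0012):        # eps_max in kgC/MJ (assumed)
          t_scalar = ramp(tmin_c, -8.0, 12.0, increasing=True)        # colder -> lower efficiency
          vpd_scalar = ramp(vpd_pa, 650.0, 4000.0, increasing=False)  # drier -> lower efficiency
          apar = ipar * fpar                                          # absorbed PAR, MJ m-2 d-1
          return apar * eps_max * t_scalar * vpd_scalar               # kgC m-2 d-1

      print(gpp_lue(ipar=10.0, fpar=0.8, tmin_c=15.0, vpd_pa=1200.0))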

  13. A multi-resolution approach to electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-07-01

    We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  14. Large uncertainties in observed daily precipitation extremes over land

    NASA Astrophysics Data System (ADS)

    Herold, Nicholas; Behrangi, Ali; Alexander, Lisa V.

    2017-01-01

    We explore uncertainties in observed daily precipitation extremes over the terrestrial tropics and subtropics (50°S-50°N) based on five commonly used products: the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) dataset, the Global Precipitation Climatology Centre-Full Data Daily (GPCC-FDD) dataset, the Tropical Rainfall Measuring Mission (TRMM) multi-satellite research product (T3B42 v7), the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), and the Global Precipitation Climatology Project's One-Degree Daily (GPCP-1DD) dataset. We use the precipitation indices R10mm and Rx1day, developed by the Expert Team on Climate Change Detection and Indices, to explore the behavior of "moderate" and "extreme" extremes, respectively. In order to assess the sensitivity of extreme precipitation to different grid sizes we perform our calculations on four common spatial resolutions (0.25° × 0.25°, 1° × 1°, 2.5° × 2.5°, and 3.75° × 2.5°). The impact of the chosen "order of operation" in calculating these indices is also determined. Our results show that moderate extremes are relatively insensitive to product and resolution choice, while extreme extremes can be very sensitive. For example, at 0.25° × 0.25° quasi-global mean Rx1day values vary from 37 mm in PERSIANN-CDR to 62 mm in T3B42. We find that the interproduct spread becomes prominent at resolutions of 1° × 1° and finer, thus establishing a minimum effective resolution at which observational products agree. Without improvements in interproduct spread, these exceedingly large observational uncertainties at high spatial resolution may limit the usefulness of model evaluations. As has been found previously, resolution sensitivity can be largely eliminated by applying an order of operation where indices are calculated prior to regridding. However, this approach is not appropriate when true area averages are desired (e.g., for model evaluations).
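
    As a concrete illustration of the order-of-operation point above, the sketch below computes Rx1day on a synthetic fine-resolution daily field both before and after block-averaging to a coarser grid; the field, block size, and averaging scheme are assumptions for illustration, not the study's processing chain.

      # Sketch of the "order of operation" point: Rx1day computed before vs. after spatial
      # aggregation of a synthetic daily precipitation field. The field, block size and
      # simple block averaging stand in for the real data and regridding.
      import numpy as np

      rng = np.random.default_rng(2)
      daily = rng.gamma(shape=0.6, scale=8.0, size=(365, 8, 8))    # mm/day on a fine grid

      def block_mean(field, k):
          """Average k x k blocks of fine cells into one coarse cell (trailing dims)."""
          n0, n1 = field.shape[-2] // k, field.shape[-1] // k
          trimmed = field[..., :n0 * k, :n1 * k]
          return trimmed.reshape(*field.shape[:-2], n0, k, n1, k).mean(axis=(-3, -1))

      index_then_regrid = block_mean(daily.max(axis=0), 4)         # Rx1day first, then coarsen
      regrid_then_index = block_mean(daily, 4).max(axis=0)         # coarsen daily data, then Rx1day

      # Averaging first damps the extremes, so the second estimate is never larger.
      print(index_then_regrid.mean(), regrid_then_index.mean())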

  15. Using High Resolution Model Data to Improve Lightning Forecasts across Southern California

    NASA Astrophysics Data System (ADS)

    Capps, S. B.; Rolinski, T.

    2014-12-01

    Dry lightning often results in a significant amount of fire starts in areas where the vegetation is dry and continuous. Meteorologists from the USDA Forest Service Predictive Services' program in Riverside, California are tasked to provide southern and central California's fire agencies with fire potential outlooks. Logistic regression equations were developed by these meteorologists several years ago, which forecast probabilities of lightning as well as lightning amounts, out to seven days across southern California. These regression equations were developed using ten years of historical gridded data from the Global Forecast System (GFS) model on a coarse scale (0.5 degree resolution), correlated with historical lightning strike data. These equations do a reasonably good job of capturing a lightning episode (3-5 consecutive days or greater of lightning), but perform poorly regarding more detailed information such as exact location and amounts. It is postulated that the inadequacies in resolving the finer details of episodic lightning events are due to the coarse resolution of the GFS data, along with limited predictors. Stability parameters such as the Lifted Index (LI), the Total Totals index (TT), and Convective Available Potential Energy (CAPE), along with Precipitable Water (PW), are the only parameters currently considered as predictors. It is hypothesized that the statistical forecasts will benefit from higher resolution data both in training and in implementing the statistical model. We have dynamically downscaled NCEP FNL (Final) reanalysis data using the Weather Research and Forecasting model (WRF) to 3 km spatial and hourly temporal resolution across a decade. This dataset will be used to evaluate the contribution of additional predictors at higher vertical, spatial and temporal resolution to the success of the statistical model. If successful, we will implement an operational dynamically downscaled GFS forecast product to generate predictors for the resulting statistical lightning model. This data will help fire agencies be better prepared to pre-deploy resources in advance of these events. Specific information regarding duration, amount, and location will be especially valuable.
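
    As a schematic of the statistical approach described above, the sketch below fits a logistic regression of lightning occurrence on the four stated predictors using synthetic training data; the coefficients, data and thresholds are illustrative assumptions only.

      # Sketch of the statistical model described above: logistic regression mapping stability
      # and moisture predictors (LI, Total Totals, CAPE, precipitable water) to a probability of
      # lightning occurrence. The training data here are synthetic stand-ins, not the GFS/WRF record.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 2000
      li = rng.normal(0.0, 4.0, n)          # lifted index (K); negative = unstable
      tt = rng.normal(48.0, 6.0, n)         # Total Totals index
      cape = rng.gamma(2.0, 300.0, n)       # J/kg
      pw = rng.uniform(5.0, 45.0, n)        # precipitable water (mm)
      X = np.column_stack([li, tt, cape, pw])

      # Synthetic "truth": lightning is more likely with instability and moisture
      logit = -6.0 - 0.3 * li + 0.05 * tt + 0.002 * cape + 0.05 * pw
      y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

      model = LogisticRegression(max_iter=1000).fit(X, y)
      print(model.predict_proba([[-5.0, 55.0, 1500.0, 35.0]])[0, 1])   # P(lightning) for one case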

  16. Effect of elevation resolution on evapotranspiration simulations using MODFLOW.

    PubMed

    Kambhammettu, B V N P; Schmid, Wolfgang; King, James P; Creel, Bobby J

    2012-01-01

    Surface elevations represented in MODFLOW head-dependent packages are usually derived from digital elevation models (DEMs) that are available at much higher resolution. Conventional grid refinement techniques to simulate the model at DEM resolution increase computational time and input file size, and in many cases are not feasible for regional applications. This research aims at utilizing the increasingly available high resolution DEMs for effective simulation of evapotranspiration (ET) in MODFLOW as an alternative to grid refinement techniques. The source code of the evapotranspiration package is modified to account, for a fixed MODFLOW grid resolution and for different DEM resolutions, for the effect of variability in elevation data on ET estimates. The piezometric head at each DEM cell location is corrected by considering the gradient along the row and column directions. Applicability of the research is tested for the lower Rio Grande (LRG) Basin in southern New Mexico. The DEM at 10 m resolution is aggregated to resampled DEM grid resolutions which are integer multiples of the MODFLOW grid resolution. Cumulative outflows and ET rates are compared at different coarse resolution grids. The analysis shows that variability in depth-to-groundwater within the MODFLOW cell is a major contributing parameter to ET outflows in shallow groundwater regions. DEM aggregation methods for the LRG Basin resulted in decreased volumetric outflow due to a smoothing error, which lowered the position of the water table to a level below the extinction depth. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
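
    The sketch below illustrates the core idea with a toy example: evaluating a linear ET-versus-depth function on DEM subcells and averaging, versus evaluating it once for the whole MODFLOW cell; the parameter values are assumptions and the within-cell head-gradient correction is omitted.

      # Sketch of the idea described above: evaluate a MODFLOW-style linear ET function on the
      # DEM subcells inside one model cell instead of once per cell. Elevations, head and the
      # ET parameters are illustrative values, and the head-gradient correction is omitted.
      import numpy as np

      def et_rate(depth_to_water, et_max, extinction_depth):
          """Full rate at the land surface, decreasing linearly to zero at the extinction depth."""
          frac = np.clip(1.0 - depth_to_water / extinction_depth, 0.0, 1.0)
          return et_max * frac

      rng = np.random.default_rng(4)
      dem = 1210.0 + rng.normal(0.0, 1.5, size=(10, 10))   # land-surface elevation (m), 10 x 10 subcells
      head = 1208.0                                        # simulated head for the coarse cell (m)
      et_max, ext_depth = 5.0e-3, 3.0                      # m/day and m

      et_cell_average = et_rate(dem.mean() - head, et_max, ext_depth)     # one depth per coarse cell
      et_subcell_average = et_rate(dem - head, et_max, ext_depth).mean()  # per DEM pixel, then averaged
      print(f"cell-average ET: {et_cell_average:.2e}  subcell-average ET: {et_subcell_average:.2e} m/day")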

  17. Application of WRF/Chem-MADRID and WRF/Polyphemus in Europe - Part 1: Model description and evaluation of meteorological predictions

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Sartelet, K.; Wu, S.-Y.; Seigneur, C.

    2013-02-01

    Comprehensive model evaluation and comparison of two 3-D air quality modeling systems (i.e. the Weather Research and Forecast model (WRF)/Polyphemus and WRF with chemistry and the Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) (WRF/Chem-MADRID) are conducted over western Europe. Part 1 describes the background information for the model comparison and simulation design, as well as the application of WRF for January and July 2001 over triple-nested domains in western Europe at three horizontal grid resolutions: 0.5°, 0.125°, and 0.025°. Six simulated meteorological variables (i.e. temperature at 2 m (T2), specific humidity at 2 m (Q2), relative humidity at 2 m (RH2), wind speed at 10 m (WS10), wind direction at 10 m (WD10), and precipitation (Precip)) are evaluated using available observations in terms of spatial distribution, domainwide daily and site-specific hourly variations, and domainwide performance statistics. WRF demonstrates its capability in capturing diurnal/seasonal variations and spatial gradients of major meteorological variables. While the domainwide performance of T2, Q2, RH2, and WD10 at all three grid resolutions is satisfactory overall, large positive or negative biases occur in WS10 and Precip even at 0.025°. In addition, discrepancies between simulations and observations exist in T2, Q2, WS10, and Precip at mountain/high altitude sites and large urban center sites in both months, in particular, during snow events or thunderstorms. These results indicate the model's difficulty in capturing meteorological variables in complex terrain and subgrid-scale meteorological phenomena, due to inaccuracies in model initialization parameterization (e.g. lack of soil temperature and moisture nudging), limitations in the physical parameterizations of the planetary boundary layer (e.g. cloud microphysics, cumulus parameterizations, and ice nucleation treatments) as well as limitations in surface heat and moisture budget parameterizations (e.g. snow-related processes, subgrid-scale surface roughness elements, and urban canopy/heat island treatments and CO2 domes). While the use of finer grid resolutions of 0.125° and 0.025° shows some improvement for WS10, Precip, and some mesoscale events (e.g. strong forced convection and heavy precipitation), it does not significantly improve the overall statistical performance for all meteorological variables except for Precip. These results indicate a need to further improve the model representations of the above parameterizations at all scales.

  18. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    NASA Astrophysics Data System (ADS)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because the identification of islanding is easily confounded by grid disturbances, an islanding detection device may misclassify events, with the consequence that photovoltaic generation is unnecessarily taken out of service. The detection device must therefore be able to distinguish islanding from grid disturbances. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. Applied as a signal processing step after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the features that intrinsically distinguish islanding from grid disturbance can be extracted. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method can achieve its goal while being highly accurate, so that photovoltaic systems mistakenly withdrawing from the power grid can be avoided.
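
    The sketch below is one plausible reading of the wavelet plus multi-resolution singular spectrum entropy front end described above (wavelet decomposition, a trajectory-matrix SVD per sub-band, and the Shannon entropy of the normalised singular values); the wavelet choice, depth and window length are assumptions, and the deep classifier itself is not shown.

      # Hedged sketch of a wavelet / multi-resolution singular spectrum entropy front end, one
      # plausible reading of the feature extractor described above. The wavelet family, the
      # decomposition depth and the embedding window are illustrative choices; the resulting
      # entropy vector would feed the deep classifier.
      import numpy as np
      import pywt

      def singular_spectrum_entropy(x, window=20):
          """Shannon entropy of the normalised singular values of the trajectory matrix of x."""
          n = len(x) - window + 1
          traj = np.stack([x[i:i + window] for i in range(n)])   # Hankel-like embedding
          s = np.linalg.svd(traj, compute_uv=False)
          p = s / s.sum()
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())

      def mrsse_features(signal, wavelet="db4", level=4):
          """One entropy value per wavelet sub-band (approximation plus details)."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          return np.array([singular_spectrum_entropy(c) for c in coeffs if len(c) > 25])

      t = np.linspace(0.0, 0.2, 2000)   # synthetic point-of-coupling waveform, 50 Hz plus noise
      signal = np.sin(2 * np.pi * 50.0 * t) + 0.05 * np.random.default_rng(5).normal(size=t.size)
      print(mrsse_features(signal))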

  19. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    Here, High Energy Physics (HEP) is a very data intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer details requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  20. Fine resolution 3D temperature fields off Kerguelen from instrumented penguins

    NASA Astrophysics Data System (ADS)

    Charrassin, Jean-Benoît; Park, Young-Hyang; Le Maho, Yvon; Bost, Charles-André

    2004-12-01

    The use of diving animals as autonomous vectors of oceanographic instruments is rapidly increasing, because this approach yields cost-efficient new information and can be used in previously poorly sampled areas. However, methods for analyzing the collected data are still under development. In particular, difficulties may arise from the heterogeneous data distribution linked to the animals' behavior. Here we show how raw temperature data collected by penguin-borne loggers were transformed to a regular gridded dataset that provided new information on the local circulation off Kerguelen. A total of 16 king penguins (Aptenodytes patagonicus) were equipped with satellite-positioning transmitters and with temperature-time-depth recorders (TTDRs) to record dive depth and sea temperature. The penguins' foraging trips recorded during five summers ranged from 140 to 600 km from the colony, and 11,000 dives >100 m were recorded. Temperature measurements recorded during diving were used to produce detailed 3D temperature fields of the area (0-200 m). The data treatment included dive location, determination of the vertical profile for each dive, averaging and gridding of those profiles onto 0.1°×0.1° cells, and optimal interpolation in both the horizontal and vertical using an objective analysis. Horizontal fields of temperature at the surface and 100 m are presented, as well as a vertical section along the main foraging direction of the penguins. Compared to conventional temperature databases (the Levitus World Ocean Atlas and historical stations available in the area), the 3D temperature fields collected from penguins are about one order of magnitude more finely resolved. Although TTDRs were less accurate than conventional instruments, such a high spatial resolution of penguin-derived data provided unprecedented detailed information on the upper level circulation pattern east of Kerguelen, as well as the iron-enrichment mechanism leading to a high primary production over the Kerguelen Plateau.
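
    The sketch below illustrates only the bin-averaging step onto 0.1° cells using synthetic sample positions and temperatures; the dive-location processing and the objective analysis applied in the study are not reproduced.

      # Sketch of the gridding step described above: bin-average irregularly located temperature
      # samples onto regular 0.1-degree cells for one depth level. Positions and temperatures are
      # synthetic, and the subsequent objective analysis / optimal interpolation is not shown.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 5000
      lon = rng.uniform(69.0, 76.0, n)                            # deg E (illustrative Kerguelen-area box)
      lat = rng.uniform(-50.5, -47.5, n)                          # deg N
      temp = 4.0 + 0.5 * (lon - 72.0) + rng.normal(0.0, 0.3, n)   # synthetic temperature (deg C)

      res = 0.1
      lon_edges = np.arange(69.0, 76.0 + res, res)
      lat_edges = np.arange(-50.5, -47.5 + res, res)

      sum_t, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges], weights=temp)
      count, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
      mean_t = np.where(count > 0, sum_t / np.maximum(count, 1), np.nan)   # NaN where no dives fell

      print(mean_t.shape, float(np.nanmean(mean_t)))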

  1. A non-parametric, supervised classification of vegetation types on the Kaibab National Forest using decision trees

    Treesearch

    Suzanne M. Joy; R. M. Reich; Richard T. Reynolds

    2003-01-01

    Traditional land classification techniques for large areas that use Landsat Thematic Mapper (TM) imagery are typically limited to the fixed spatial resolution of the sensors (30m). However, the study of some ecological processes requires land cover classifications at finer spatial resolutions. We model forest vegetation types on the Kaibab National Forest (KNF) in...

  2. Snow water equivalent in the Alps as seen by gridded data sets, CMIP5 and CORDEX climate models

    NASA Astrophysics Data System (ADS)

    Terzago, Silvia; von Hardenberg, Jost; Palazzi, Elisa; Provenzale, Antonello

    2017-07-01

    The estimate of the current and future conditions of snow resources in mountain areas would require reliable, kilometre-resolution, regional-observation-based gridded data sets and climate models capable of properly representing snow processes and snow-climate interactions. At the moment, the development of such tools is hampered by the sparseness of station-based reference observations. In past decades passive microwave remote sensing and reanalysis products have mainly been used to infer information on the snow water equivalent distribution. However, the investigation has usually been limited to flat terrains as the reliability of these products in mountain areas is poorly characterized. This work considers the available snow water equivalent data sets from remote sensing and from reanalyses for the greater Alpine region (GAR), and explores their ability to provide a coherent view of the snow water equivalent distribution and climatology in this area. Further we analyse the simulations from the latest-generation regional and global climate models (RCMs, GCMs), participating in the Coordinated Regional Climate Downscaling Experiment over the European domain (EURO-CORDEX) and in the Fifth Coupled Model Intercomparison Project (CMIP5), respectively. We evaluate their reliability in reproducing the main drivers of snow processes - near-surface air temperature and precipitation - against the observational data set EOBS, and compare the snow water equivalent climatology with the remote sensing and reanalysis data sets previously considered. We critically discuss the model limitations in the historical period and we explore their potential in providing reliable future projections. The results of the analysis show that the time-averaged spatial distribution of snow water equivalent and the amplitude of its annual cycle are reproduced quite differently by the different remote sensing and reanalysis data sets, which in fact exhibit a large spread around the ensemble mean. We find that GCMs at spatial resolutions equal to or finer than 1.25° longitude are in closer agreement with the ensemble mean of satellite and reanalysis products in terms of root mean square error and standard deviation than lower-resolution GCMs. The set of regional climate models from the EURO-CORDEX ensemble provides estimates of snow water equivalent at 0.11° resolution that are locally much larger than those indicated by the gridded data sets, and only in a few cases are these differences smoothed out when snow water equivalent is spatially averaged over the entire Alpine domain. ERA-Interim-driven RCM simulations show an annual snow cycle that is comparable in amplitude to those provided by the reference data sets, while GCM-driven RCMs present a large positive bias. RCMs and higher-resolution GCM simulations are used to provide an estimate of the snow reduction expected by the mid-21st century (RCP 8.5 scenario) compared to the historical climatology, with the main purpose of highlighting the limits of our current knowledge and the need for developing more reliable snow simulations.

  3. Influence of air quality model resolution on uncertainty associated with health impacts

    NASA Astrophysics Data System (ADS)

    Thompson, T. M.; Selin, N. E.

    2012-10-01

    We use regional air quality modeling to evaluate the impact of model resolution on uncertainty associated with the human health benefits resulting from proposed air quality regulations. Using a regional photochemical model (CAMx), we ran a modeling episode with meteorological inputs simulating conditions as they occurred during August through September 2006 (a period representative of conditions leading to high ozone), and two emissions inventories (a 2006 base case and a 2018 proposed control scenario, both for Houston, Texas) at 36, 12, 4 and 2 km resolution. The base case model performance was evaluated for each resolution against daily maximum 8-h averaged ozone measured at monitoring stations. Results from each resolution were more similar to each other than they were to measured values. Population-weighted ozone concentrations were calculated for each resolution and applied to concentration-response functions (with 95% confidence intervals) to estimate the health impacts of modeled ozone reduction from the base case to the control scenario. We found that estimated avoided mortalities were not significantly different between the 2, 4 and 12 km resolution runs, but the 36 km resolution may over-predict some potential health impacts. Given the cost/benefit analysis requirements motivated by Executive Order 12866 as it applies to the Clean Air Act, and given the uncertainty associated with human health impacts and therefore with the results reported in this study, we conclude that health impacts calculated from population-weighted ozone concentrations obtained using regional photochemical models at 36 km resolution fall within the range of values obtained using fine (12 km or finer) resolution modeling. However, in some cases, 36 km resolution may not be fine enough to statistically replicate the results achieved using 2, 4 or 12 km resolution. On average, when modeling at 36 km resolution, an estimated 5 deaths per week during the May through September ozone season are avoided because of ozone reductions resulting from the proposed emissions reductions (95% confidence interval of 2-8). When modeling at the finer 2, 4 or 12 km resolutions, on average 4 deaths are avoided due to the same reductions (95% confidence interval of 1-7). Study results show that ozone modeling at a resolution finer than 12 km is unlikely to reduce uncertainty in benefits analysis for this specific region. We suggest that 12 km resolution may be appropriate for uncertainty analyses of health impacts due to ozone control scenarios, in areas with similar chemistry, meteorology and population density, but that resolution requirements should be assessed on a case-by-case basis and revised as confidence intervals for concentration-response functions are updated.
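
    As a schematic of the benefits calculation described above, the sketch below applies a log-linear concentration-response function, with low, central and high coefficients standing in for a 95% confidence interval, to a population-weighted ozone change; all numbers are illustrative assumptions rather than the study's inputs.

      # Sketch of the health-impact step described above: a population-weighted ozone change pushed
      # through a log-linear concentration-response function evaluated at low, central and high
      # coefficient values. Populations, ozone changes, the baseline rate and the betas are
      # illustrative assumptions, not the study's numbers.
      import numpy as np

      def avoided_deaths_per_day(delta_ppb, population, y0, beta):
          """Log-linear CRF: y0 * (1 - exp(-beta * delta_C)) * population."""
          return y0 * (1.0 - np.exp(-beta * delta_ppb)) * population

      pop = np.array([2.0e6, 1.5e6, 0.8e6])                # people per modelled sub-region
      d_o3 = np.array([3.0, 5.0, 2.0])                     # base-minus-control ozone change (ppb)
      d_weighted = float(np.sum(pop * d_o3) / pop.sum())   # population-weighted ozone change

      y0 = 2.4e-5                                  # baseline daily mortality rate per person (assumed)
      season_days = 153                            # May through September
      for label, beta in [("low", 2e-4), ("central", 5e-4), ("high", 8e-4)]:
          total = avoided_deaths_per_day(d_weighted, pop.sum(), y0, beta) * season_days
          print(label, round(total, 1))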

  4. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
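
    As a small illustration of how a stretched horizontal coordinate can be constructed, the sketch below generates a 1-D longitude array whose spacing tightens smoothly over a region of interest; the stretching function and parameters are assumptions for illustration and are not the grid generator developed in this work.

      # Sketch of one common way to build a variable-resolution (stretched) 1-D coordinate: the
      # spacing is fine inside a region of interest and relaxes smoothly to a coarse value outside.
      # The Gaussian weighting and all parameter values are illustrative, not the model's scheme.
      import numpy as np

      def stretched_longitudes(n, lon_center=255.0, fine_dx=0.1, coarse_dx=1.0, half_width=15.0):
          """March eastward from lon_center with spacing that grows away from the region of interest."""
          lons = [lon_center]
          while len(lons) < n:
              d = abs(lons[-1] - lon_center)
              w = np.exp(-(d / half_width) ** 2)        # ~1 inside the region, -> 0 far away
              dx = w * fine_dx + (1.0 - w) * coarse_dx
              lons.append(lons[-1] + dx)
          return np.array(lons)

      lons = stretched_longitudes(200)
      print(np.diff(lons)[:3], np.diff(lons)[-3:])       # fine spacing near the centre, coarse far away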

  5. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    NASA Astrophysics Data System (ADS)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.

  6. The need to consider temporal variability when modelling exchange at the sediment-water interface

    USGS Publications Warehouse

    Rosenberry, Donald O.

    2011-01-01

    Most conceptual or numerical models of flows and processes at the sediment-water interface assume steady-state conditions and do not consider temporal variability. The steady-state assumption is required because temporal variability, if quantified at all, is usually determined on a seasonal or inter-annual scale. In order to design models that can incorporate finer-scale temporal resolution we first need to measure variability at a finer scale. Automated seepage meters that can measure flow across the sediment-water interface with temporal resolution of seconds to minutes were used in a variety of settings to characterize seepage response to rainfall, wind, and evapotranspiration. Results indicate that instantaneous seepage fluxes can be much larger than values commonly reported in the literature, although seepage does not always respond to hydrological processes. Additional study is needed to understand the reasons for the wide range and types of responses to these hydrologic and atmospheric events.

  7. Verification Test of the SURF and SURFplus Models in xRage: Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2016-06-20

    The previous study used an underdriven detonation wave (steady ZND reaction zone profile followed by a scale invariant rarefaction wave) for PBX 9502 as a validation test of the implementation of the SURF and SURFplus models in the xRage code. Even with a fairly fine uniform mesh (12,800 cells for 100 mm) the detonation wave profile had limited resolution due to the thin reaction zone width (0.18 mm) for the fast SURF burn rate. Here we study the effect of finer resolution by comparing results of simulations with cell sizes of 8, 2 and 1 μm, which correspond to 25, 100 and 200 points within the reaction zone. With finer resolution the lead shock pressure is closer to the von Neumann spike pressure, and there is less noise in the rarefaction wave due to fluctuations within the reaction zone. As a result the average error decreases. The pointwise error is still dominated by the smearing of the pressure kink in the vicinity of the sonic point, which occurs at the end of the reaction zone.

  8. An Evaluation of Recently Developed RANS-Based Turbulence Models for Flow Over a Two-Dimensional Block Subjected to Different Mesh Structures and Grid Resolutions

    NASA Astrophysics Data System (ADS)

    Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando

    2016-04-01

    Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ε, SST BSL k-ω and SST γ-Reθ models, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model in order to analyze grid independence. Three different grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain and nx × nz = 26 × 32, 13 × 16 and 6 × 8, which correspond to the number of grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ε model at the fine resolution and on regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ε model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.

  9. A variable resolution right TIN approach for gridded oceanographic data

    NASA Astrophysics Data System (ADS)

    Marks, David; Elmore, Paul; Blain, Cheryl Ann; Bourgeois, Brian; Petry, Frederick; Ferrini, Vicki

    2017-12-01

    Many oceanographic applications require multi-resolution representation of gridded data, such as bathymetric data. Although triangular irregular networks (TINs) allow for variable resolution, they do not provide a gridded structure. Right TINs (RTINs) are compatible with a gridded structure. We explored the use of two approaches for RTINs, termed top-down and bottom-up implementations. We illustrate why the latter is most appropriate for gridded data and describe for this technique how the data can be thinned. While both the top-down and bottom-up approaches accurately preserve the surface morphology of any given region, the top-down method of vertex placement can fail to match the actual vertex locations of the underlying grid in many instances, resulting in obscured topology/bathymetry. Finally, we describe the use of the bottom-up approach and data thinning in two applications. The first is to provide thinned, variable resolution bathymetry data for tests of storm surge and inundation modeling, in particular Hurricane Katrina. The second is the application of the approach to an oceanographic data grid of 3-D ocean temperature.

  10. Multi-level adaptive finite element methods. 1: Variation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1979-01-01

    A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
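
    As an illustration of the coarse/fine cycling described in this record, the following Python sketch implements a minimal multigrid V-cycle for the 1-D Poisson problem -u'' = f with homogeneous boundary conditions; the weighted-Jacobi smoother, full-weighting restriction and linear prolongation are generic textbook choices, not the finite-element formulation of the report.

      import numpy as np

      def relax(u, f, h, sweeps=3, omega=2.0/3.0):
          # weighted-Jacobi smoothing for -u'' = f with u(0) = u(1) = 0
          for _ in range(sweeps):
              u_new = u.copy()
              u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
              u = u_new
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
          return r

      def restrict(v):
          # full-weighting transfer of a fine-grid residual to the coarse grid
          return np.concatenate(([0.0], 0.25*v[1:-2:2] + 0.5*v[2:-1:2] + 0.25*v[3::2], [0.0]))

      def prolong(vc):
          # linear interpolation of a coarse-grid correction to the fine grid
          v = np.zeros(2*(len(vc) - 1) + 1)
          v[::2] = vc
          v[1::2] = 0.5*(vc[:-1] + vc[1:])
          return v

      def v_cycle(u, f, h):
          u = relax(u, f, h)
          if len(u) <= 3:                        # coarsest level: smooth to convergence
              return relax(u, f, h, sweeps=20)
          r = restrict(residual(u, f, h))
          e = v_cycle(np.zeros_like(r), r, 2*h)  # coarse-grid correction
          return relax(u + prolong(e), f, h)

      n = 64
      h = 1.0 / n
      x = np.linspace(0.0, 1.0, n + 1)
      f = np.pi**2 * np.sin(np.pi * x)           # exact solution u = sin(pi x)
      u = np.zeros(n + 1)
      for _ in range(10):
          u = v_cycle(u, f, h)
      print(np.max(np.abs(u - np.sin(np.pi * x))))   # error at the discretization level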

  11. An Upgrade of the Aeroheating Software ''MINIVER''

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2-D to 3-D geometries. Computational time increases as finer grids are used (for accuracy). A strong tool, but it takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.

  12. The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework

    NASA Astrophysics Data System (ADS)

    Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.

    2016-12-01

    The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new, and there has been a significant effort within the geospatial community over many years to develop nested gridding standards to tackle these issues. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard, developed under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed, areal-based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell is the principal aspect of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During the 6th Session of the UN-GGIM in August 2016 the role of DGGS in the context of the GSGF was formally acknowledged. This paper highlights the synergies and role of DGGS in the Global Statistical Geospatial Framework and shows examples of the use of DGGS to combine geospatial statistics with traditional geoscientific data.

  13. Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations

    NASA Astrophysics Data System (ADS)

    Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan

    2017-08-01

    Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent, finer-resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases have been noted in the global reanalysis data sets that form these boundaries, biases which can be carried into the downscaled simulations, thereby reducing their accuracy or efficacy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting the European Centre for Medium-Range Weather Forecasts ERA-Interim (ERA-I) reanalysis atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered over southern Africa between the years 2003 and 2012. Two corrections are applied at each grid cell for each variable: one using the sample mean only and one using both the mean and the standard deviation. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving field. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performances, however, differ when assessing different attributes of the downscaled field. This raises questions about the efficacy of the correction procedures adopted.
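
    A minimal sketch of the mean-and-standard-deviation correction referred to above is given below; the array names and the assumption that the reference statistics come from AIRS-derived climatologies on the same grid are illustrative, not the study's actual processing chain.

      import numpy as np

      def bias_correct(model, ref_mean, ref_std):
          # model: reanalysis field with time as the first axis (e.g. ERA-I temperature
          # on one pressure level); ref_mean, ref_std: reference (e.g. AIRS-derived)
          # climatological statistics on the same grid
          m_mean = model.mean(axis=0)
          m_std = model.std(axis=0)
          # rescale anomalies so the corrected field matches the reference mean and
          # standard deviation at every grid cell
          return ref_mean + (model - m_mean) * (ref_std / m_std)

      # the mean-only variant keeps the model variability: ref_mean + (model - m_mean)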

  14. Validation of satellite daily rainfall estimates in complex terrain of Bali Island, Indonesia

    NASA Astrophysics Data System (ADS)

    Rahmawati, Novi; Lubczynski, Maciek W.

    2017-11-01

    Satellite rainfall products have different performances in different geographic regions under different physical and climatological conditions. In this study, the objective was to select the most reliable and accurate satellite rainfall products for the specific environmental conditions of Bali Island. The performances of four spatio-temporal satellite rainfall products, i.e., CMORPH25, CMORPH8, TRMM, and PERSIANN, were evaluated at the island, zonation (applying elevation and climatology as constraints), and pixel scales, using (i) descriptive statistics and (ii) categorical statistics, including bias decomposition. The results showed that all the satellite products had low accuracy because of the spatial scale effect, the daily resolution and the island's complexity. That accuracy was relatively lower in (i) dry seasons and dry climatic zones than in wet seasons and wet climatic zones; (ii) pixels jointly covered by sea and mountainous land than in pixels covered by land or by sea only; and (iii) topographically diverse than uniform terrains. CMORPH25, CMORPH8, and TRMM underestimated and PERSIANN overestimated rainfall when compared to gauged rain. The CMORPH25 had relatively the best performance and the PERSIANN had the worst performance on Bali Island. The CMORPH25 had the lowest statistical errors, the lowest miss, and the highest hit rainfall events; it also had the lowest miss rainfall bias and was relatively the most accurate in detecting the ≤ 20 mm day-1 rain events that are frequent in Bali. Lastly, the CMORPH25 coarse grid better represented rainfall events from coastal to inland areas than the other satellite products, including the finer grid CMORPH8.
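
    The categorical statistics mentioned above reduce to contingency-table counts of hits, misses and false alarms at a rain/no-rain threshold; the sketch below shows common scores derived from those counts (the threshold value and score selection are illustrative).

      import numpy as np

      def categorical_scores(sat, gauge, threshold=1.0):
          # sat, gauge: matched daily rainfall samples (mm/day) for one pixel or zone
          hit = np.sum((sat >= threshold) & (gauge >= threshold))
          miss = np.sum((sat < threshold) & (gauge >= threshold))
          false_alarm = np.sum((sat >= threshold) & (gauge < threshold))
          pod = hit / (hit + miss)                        # probability of detection
          far = false_alarm / (hit + false_alarm)         # false-alarm ratio
          freq_bias = (hit + false_alarm) / (hit + miss)  # frequency bias
          return pod, far, freq_bias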

  15. One-way coupling of an atmospheric and a hydrologic model in Colorado

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.

    2006-01-01

    This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.

  16. SU-C-209-03: Anti-Scatter Grid-Line Artifact Minimization for Removing the Grid Lines for Three Different Grids Used with a High Resolution CMOS Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    Purpose: Demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids when used with a high resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5 cm × 6.5 cm) to image a simulated artery block phantom (Nuclear Associates, Stenosis/Aneurysm Artery Block 76–705) combined with a frontal head phantom used as the scattering source. The x-ray parameters were 98 kVp, 200 mA, and 16 ms for all grids. With each of the three grids, two images were acquired: the first for a scatter-less flat field including the grid and the second of the object with the grid, which may still have some scatter transmission. Because scatter has a low spatial frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with the grid before dividing by an average frame of the grid flat field with no scatter. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact minimization process was used for all three grids. Results: Anti-scatter grid-line artifacts were successfully eliminated in all three final images taken with the three different grids. The image contrast and CNR were also compared before and after the correction, and also compared with those from the image of the object when no grid was used. The corrected images showed an increase in CNR of approximately 28%, 33% and 25% for the three grids, as compared to the images when no grid at all was used. Conclusion: Anti-scatter grid-artifact minimization works effectively irrespective of the specifications of the grid when it is used with a high spatial resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
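
    The iterative artifact-minimization step can be pictured as a one-dimensional search over the scatter constant, choosing the value whose flat-field-corrected image shows the weakest residual line pattern; the residual metric and search strategy below are illustrative simplifications, not the authors' exact procedure.

      import numpy as np

      def gridline_residual(img):
          # grid lines are assumed to run along the rows, so the standard deviation of
          # the column-mean profile measures the remaining line pattern
          return np.std(img.mean(axis=0))

      def correct_grid_image(obj_with_grid, flat_with_grid, scatter_candidates):
          # try each candidate scatter constant and keep the correction with the
          # smallest residual grid-line artifact
          best = None
          for s in scatter_candidates:
              corrected = (obj_with_grid - s) / flat_with_grid
              r = gridline_residual(corrected)
              if best is None or r < best[0]:
                  best = (r, s, corrected)
          residual, scatter_estimate, corrected_image = best
          return corrected_image, scatter_estimate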

  17. Regional Data Assimilation Using a Stretched-Grid Approach and Ensemble Calculations

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, M. S.; Takacs, L. L.; Govindaraju, R. C.; Atlas, Robert (Technical Monitor)

    2002-01-01

    The global variable resolution stretched grid (SG) version of the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS) incorporating the GEOS SG-GCM (Fox-Rabinovitz 2000, Fox-Rabinovitz et al. 2001a,b), has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The major area of interest with enhanced regional resolution used in different SG-DAS experiments includes a rectangle over the U.S. with 50 or 60 km horizontal resolution. The analyses and diagnostics are produced for all mandatory levels from the surface to 0.2 hPa. The assimilated regional mesoscale products are consistent with global scale circulation characteristics due to using the SG-approach. Both the stretched grid and basic uniform grid DASs use the same amount of global grid-points and are compared in terms of regional product quality.

  18. Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walko, Robert

    2016-11-07

    The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.

  19. Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution

    NASA Astrophysics Data System (ADS)

    Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.

    2017-12-01

    We present research employing single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. Our numerical experiments perform x15 upscaling of the GEBCO grid over three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have these 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry versus bicubic or spline-in-tension algorithms through upscaling under these conditions: 1) rough topography is present in both training and testing areas and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when Student's hypothesis testing showed significant improvement of the root-mean squared error (RMSE) between upscaled bathymetry and 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
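
    The RMSE-based comparison described above can be sketched as a paired test of SISR against bicubic errors over matched sub-regions; the function below is a simplified stand-in, and the study's exact sampling and test setup may differ.

      import numpy as np
      from scipy import stats

      def compare_upscaling(sisr_grids, bicubic_grids, truth_grids, alpha=0.05):
          # sisr_grids, bicubic_grids, truth_grids: matched lists of gridded depths
          rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
          e_sisr = np.array([rmse(s, t) for s, t in zip(sisr_grids, truth_grids)])
          e_bicubic = np.array([rmse(b, t) for b, t in zip(bicubic_grids, truth_grids)])
          # one-sided paired t-test: is the SISR error significantly smaller?
          t_stat, p_two_sided = stats.ttest_rel(e_bicubic, e_sisr)
          improved = (p_two_sided / 2 < alpha) and (e_sisr.mean() < e_bicubic.mean())
          return e_sisr.mean(), e_bicubic.mean(), improved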

  20. Development of a United States - Mexico emissions inventory for the Big Bend Regional Aerosol and Visibility Observational (BRAVO) Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampden Kuhns; Eladio M. Knipping; Jeffrey M. Vukovich,

    2005-05-01

    The Big Bend Regional Aerosol and Visibility Observational (BRAVO) Study investigated the sources of haze at Big Bend National Park in southwest Texas. The modeling domain includes most of the continental United States and Mexico. The BRAVO emissions inventory was constructed from the 1999 National Emission Inventory for the United States, modified to include finer-resolution data for Texas and 13 U.S. states in close proximity. The inventory includes emissions for CO, nitrogen oxides, sulfur dioxide, volatile organic compounds (VOCs), ammonia, particulate matter (PM) <10 μm in aerodynamic diameter, and PM <2.5 μm in aerodynamic diameter. The SMOKE modeling system was used to generate gridded emissions fields for use with the Regional Modeling System for Aerosols and Deposition (REMSAD) and the Community Multiscale Air Quality model modified with the Model of Aerosol Dynamics, Reaction, Ionization and Dissolution (CMAQ-MADRID). The compilation of the inventory, supporting model input data, and issues encountered during the development of the inventory are documented. A comparison of the BRAVO emissions inventory for Mexico with other emerging Mexican emission inventories illustrates their uncertainty. 65 refs., 4 figs., 9 tabs.

  1. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  2. Active Deformation along the Southern End of the Tosco-Abreojos Fault System: New Insights from Multibeam Swath Bathymetry

    NASA Astrophysics Data System (ADS)

    Michaud, François; Calmus, Thierry; Ratzov, Gueorgui; Royer, Jean-Yves; Sosson, Marc; Bigot-Cormier, Florence; Bandy, William; Mortera Gutiérrez, Carlos

    2011-08-01

    The relative motion of the Pacific plate with respect to the North America plate is partitioned between transcurrent faults located along the western margin of Baja California and transform faults and spreading ridges in the Gulf of California. However, the amount of right lateral offset along the Baja California western margin is still debated. We revisited multibeam swath bathymetry data along the southern end of the Tosco-Abreojos fault system. In this area the depths are less than 1,000 m and allow a finer gridding at 60 m cell spacing. This improved resolution unveils several transcurrent right lateral faults offsetting the seafloor and canyons, which can be used as markers to quantify local offsets. The seafloor of the southern end of the Tosco-Abreojos fault system (south of 24°N) displays NW-SE elongated bathymetric highs and lows, suggesting a transtensional tectonic regime associated with the formation of pull-apart basins. In such an active tectonic context, submarine canyon networks are unstable. Using the deformation rate inferred from kinematic predictions and pull-apart geometry, we suggest a minimum age for the reorganization of the canyon network.

  3. A Variable-Resolution Stretched-Grid General Circulation Model and Data Assimilation System with Multiple Areas of Interest: Studying the Anomalous Regional Climate Events of 1998

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence; Govindaraju, Ravi C.; Atlas, Robert (Technical Monitor)

    2002-01-01

    The new stretched-grid design with multiple (four) areas of interest, one at each global quadrant, is implemented into both a stretched-grid GCM (general circulation model) and a stretched-grid data assimilation system (DAS). The four areas of interest include: the U.S./Northern Mexico, the El Nino area/Central South America, India/China, and the Eastern Indian Ocean/Australia. Both the stretched-grid GCM and DAS annual (November 1997 through December 1998) integrations are performed with 50 km regional resolution. The efficient regional down-scaling to mesoscales is obtained for each of the four areas of interest while the consistent interactions between regional and global scales and the high quality of the global circulation are preserved. This is the advantage of the stretched-grid approach. The global variable resolution DAS incorporating the stretched-grid GCM has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The anomalous regional climate events of 1998 that occurred over the U.S., Mexico, South America, China, India, the African Sahel, and Australia are investigated in both simulation and data assimilation modes. Three assimilated products are also used, along with gauge precipitation data, for validating the simulation results. The obtained results show that the stretched-grid GCM and DAS are capable of producing realistic high quality simulated and assimilated products at mesoscale resolution for regional climate studies and applications.

  4. The influence of model grid resolution on estimation of national scale nitrogen deposition and exceedance of critical levels

    NASA Astrophysics Data System (ADS)

    Dore, A. J.; Kryza, M.; Hall, J. R.; Hallsworth, S.; Keller, V. J. D.; Vieno, M.; Sutton, M. A.

    2011-12-01

    The Fine Resolution Atmospheric Multi-pollutant Exchange model (FRAME) has been applied to model the spatial distribution of nitrogen deposition and air concentration over the UK at a 1 km spatial resolution. The modelled deposition and concentration data were gridded at resolutions of 1 km, 5 km and 50 km to test the sensitivity of calculations of the exceedance of critical loads for nitrogen deposition to the deposition data resolution. The modelled concentrations of NO2 were validated by comparison with measurements from the rural sites in the national monitoring network and were found to achieve better agreement with the high resolution 1 km data. High resolution plots were found to represent a more physically realistic distribution of nitrogen air concentrations and deposition resulting from use of 1 km resolution precipitation and emissions data as compared to 5 km resolution data. Summary statistics for national scale exceedance of the critical load for nitrogen deposition were not highly sensitive to the grid resolution of the deposition data but did show greater area exceedance with coarser grid resolution due to spatial averaging of high nitrogen deposition hot spots. Local scale deposition at individual Sites of Special Scientific Interest and high precipitation upland sites was sensitive to choice of grid resolution of deposition data. Use of high resolution data tended to generate lower deposition values in sink areas for nitrogen dry deposition (Sites of Scientific Interest) and higher values in high precipitation upland areas. In areas with generally low exceedance (Scotland) and for certain vegetation types (montane), the exceedance statistics were more sensitive to model data resolution.
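
    The resolution sensitivity discussed above comes down to aggregating a 1 km deposition field by block averaging and recomputing exceedance of the critical load on the coarser grid; a minimal sketch, with hypothetical field names, is given below.

      import numpy as np

      def block_average(field, factor):
          # aggregate a fine grid (e.g. 1 km) to a coarser grid by spatial averaging
          n, m = field.shape
          trimmed = field[:n - n % factor, :m - m % factor]
          return trimmed.reshape(n // factor, factor, m // factor, factor).mean(axis=(1, 3))

      def exceedance_area(deposition, critical_load, cell_area_km2):
          # area where nitrogen deposition exceeds the critical load, in km2
          return np.sum(deposition > critical_load) * cell_area_km2

      # e.g. area_1km = exceedance_area(dep_1km, cl_1km, 1.0)
      #      area_5km = exceedance_area(block_average(dep_1km, 5),
      #                                 block_average(cl_1km, 5), 25.0)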

  5. The influence of model grid resolution on estimation of national scale nitrogen deposition and exceedance of critical loads

    NASA Astrophysics Data System (ADS)

    Dore, A. J.; Kryza, M.; Hall, J. R.; Hallsworth, S.; Keller, V. J. D.; Vieno, M.; Sutton, M. A.

    2012-05-01

    The Fine Resolution Atmospheric Multi-pollutant Exchange model (FRAME) was applied to model the spatial distribution of reactive nitrogen deposition and air concentration over the United Kingdom at a 1 km spatial resolution. The modelled deposition and concentration data were gridded at resolutions of 1 km, 5 km and 50 km to test the sensitivity of calculations of the exceedance of critical loads for nitrogen deposition to the deposition data resolution. The modelled concentrations of NO2 were validated by comparison with measurements from the rural sites in the national monitoring network and were found to achieve better agreement with the high resolution 1 km data. High resolution plots were found to represent a more physically realistic distribution of reactive nitrogen air concentrations and deposition resulting from use of 1 km resolution precipitation and emissions data as compared to 5 km resolution data. Summary statistics for national scale exceedance of the critical load for nitrogen deposition were not highly sensitive to the grid resolution of the deposition data but did show greater area exceedance with coarser grid resolution due to spatial averaging of high nitrogen deposition hot spots. Local scale deposition at individual Sites of Special Scientific Interest and high precipitation upland sites was sensitive to choice of grid resolution of deposition data. Use of high resolution data tended to generate lower deposition values in sink areas for nitrogen dry deposition (Sites of Scientific Interest) and higher values in high precipitation upland areas. In areas with generally low exceedance (Scotland) and for certain vegetation types (montane), the exceedance statistics were more sensitive to model data resolution.

  6. MISR 17.6 KM Gridded Cloud Motion Vectors: Overview and Assessment

    NASA Technical Reports Server (NTRS)

    Mueller, Kevin; Garay, Michael; Moroney, Catherine; Jovanovic, Veljko

    2012-01-01

    The MISR (Multi-angle Imaging SpectroRadiometer) instrument on the Terra satellite has been retrieving cloud motion vectors (CMVs) globally and almost continuously since early in 2000. In February 2012 the new MISR Level 2 Cloud product was publicly released, providing cloud motion vectors at 17.6 km resolution with improved accuracy and roughly threefold increased coverage relative to the 70.4 km resolution vectors of the current MISR Level 2 Stereo product (which remains available). MISR retrieves both horizontal cloud motion and height from the apparent displacement due to parallax and movement of cloud features across three visible channel (670 nm) camera views over a span of 200 seconds. The retrieval has comparable accuracy to operational atmospheric motion vectors from other current sensors, but holds the additional advantage of global coverage and finer precision height retrieval that is insensitive to radiometric calibration. The MISR mission is expected to continue operation for many more years, possibly until 2019, and Level 2 Cloud has the possibility of being produced with a sensing-to-availability lag of 5 hours. This report compares MISR CMV with collocated motion vectors from Arctic rawinsonde sites, and from the GOES and MODIS-Terra instruments. CMV at heights below 3 km exhibit the smallest differences, as small as 3.3 m/s for MISR and GOES. Clouds above 3 km exhibit larger differences, as large as 8.9 m/s for MISR and MODIS. Typical differences are on the order of 6 m/s.

  7. LES on unstructured deforming meshes: Towards reciprocating IC engines

    NASA Technical Reports Server (NTRS)

    Haworth, D. C.; Jansen, K.

    1996-01-01

    A variable explicit/implicit characteristics-based advection scheme that is second-order accurate in space and time has been developed recently for unstructured deforming meshes (O'Rourke & Sahota 1996a). To explore the suitability of this methodology for Large-Eddy Simulation (LES), three subgrid-scale turbulence models have been implemented in the CHAD CFD code (O'Rourke & Sahota 1996b): a constant-coefficient Smagorinsky model, a dynamic Smagorinsky model for flows having one or more directions of statistical homogeneity, and a Lagrangian dynamic Smagorinsky model for flows having no spatial or temporal homogeneity (Meneveau et al. 1996). Computations have been made for three canonical flows, progressing towards the intended application of in-cylinder flow in a reciprocating engine. Grid sizes were selected to be comparable to the coarsest meshes used in earlier spectral LES studies. Quantitative results are reported for decaying homogeneous isotropic turbulence, and for a planar channel flow. Computations are compared to experimental measurements, to Direct-Numerical Simulation (DNS) data, and to Rapid-Distortion Theory (RDT) where appropriate. Generally satisfactory evolution of first and second moments is found on these coarse meshes; deviations are attributed to insufficient mesh resolution. Issues include mesh resolution and computational requirements for a specified level of accuracy, analytic characterization of the filtering implied by the numerical method, wall treatment, and inflow boundary conditions. To resolve these issues, finer-mesh simulations and computations of a simplified axisymmetric reciprocating piston-cylinder assembly are in progress.

  8. Snow and Ice Products from the Moderate Resolution Imaging Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Salomonson, Vincent V.; Riggs, George A.; Klein, Andrew G.

    2003-01-01

    Snow and sea ice products, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument, flown on the Terra and Aqua satellites, are or will be available through the National Snow and Ice Data Center Distributed Active Archive Center (DAAC). The algorithms that produce the products are automated, thus providing a consistent global data set that is suitable for climate studies. The suite of MODIS snow products begins with a 500-m resolution, 2330-km swath snow-cover map that is then projected onto a sinusoidal grid to produce daily and 8-day composite tile products. The sequence proceeds to daily and 8-day composite climate-modeling grid (CMG) products at 0.05° resolution. A daily snow albedo product will be available in early 2003 as a beta test product. The sequence of sea ice products begins with a swath product at 1-km resolution that provides sea ice extent and ice-surface temperature (IST). The sea ice swath products are then mapped onto the Lambert azimuthal equal area or EASE-Grid projection to create a daily and 8-day composite sea ice tile product, also at 1-km resolution. Climate-Modeling Grid (CMG) sea ice products in the EASE-Grid projection at 4-km resolution are planned for early 2003.

  9. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  10. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6% and 3.1% with SRT, and 8.3%, 7.5% and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  11. Vertical resolution of baroclinic modes in global ocean models

    NASA Astrophysics Data System (ADS)

    Stewart, K. D.; Hogg, A. McC.; Griffies, S. M.; Heerdegen, A. P.; Ward, M. L.; Spence, P.; England, M. H.

    2017-05-01

    Improvements in the horizontal resolution of global ocean models, motivated by the horizontal resolution requirements for specific flow features, have advanced modelling capabilities into the dynamical regime dominated by mesoscale variability. In contrast, the choice of the vertical grid remains subjective, and it is not clear that efforts to improve vertical resolution adequately support their horizontal counterparts. Indeed, considering that the bulk of the vertical ocean dynamics (including convection) are parameterized, it is not immediately obvious what the vertical grid is supposed to resolve. Here, we propose that the primary purpose of the vertical grid in a hydrostatic ocean model is to resolve the vertical structure of horizontal flows, rather than to resolve vertical motion. With this principle we construct vertical grids based on their abilities to represent baroclinic modal structures commensurate with the theoretical capabilities of a given horizontal grid. This approach is designed to ensure that the vertical grids of global ocean models complement (and, importantly, do not undermine) the resolution capabilities of the horizontal grid. We find that for z-coordinate global ocean models, at least 50 well-positioned vertical levels are required to resolve the first baroclinic mode, with an additional 25 levels per subsequent mode. High-resolution ocean-sea ice simulations are used to illustrate some of the dynamical enhancements gained by improving the vertical resolution of a 1/10° global ocean model. These enhancements include substantial increases in the sea surface height variance (∼30% increase south of 40°S), the barotropic and baroclinic eddy kinetic energies (up to 200% increase on and surrounding the Antarctic continental shelf and slopes), and the overturning streamfunction in potential density space (near-tripling of the Antarctic Bottom Water cell at 65°S).

  12. Development of a MODIS-Derived Surface Albedo Data Set: An Improved Model Input for Processing the NSRDB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclaurin, Galen; Sengupta, Manajit; Xie, Yu

    A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus they require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the Northern Hemisphere for the temporal extent of the NSRDB (1998-2015). We provide a review of validation studies conducted on these two products and describe the methodology developed by NREL to remap the data products to the NSRDB grid and integrate them into a seamless daily data set.

  13. Performance measures for freight & general traffic : investigating similarities and differences using alternate data sources.

    DOT National Transportation Integrated Search

    2015-06-01

    Recent advances in probe vehicle data collection systems have enabled monitoring traffic : conditions at finer temporal and spatial resolution. The primary objective of the current study is : to leverage these probe data sources to understand if ther...

  14. HPC Aspects of Variable-Resolution Global Climate Modeling using a Multi-scale Convection Parameterization

    EPA Science Inventory

    High performance computing (HPC) requirements for the new generation variable grid resolution (VGR) global climate models differ from that of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...

  15. Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Reed, Seann M.

    2003-09-01

    The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT) has been developed to automatically, accurately, and efficiently assign flow directions to any coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency for producing diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
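
    A simplified sketch of the outlet-tracing idea is shown below: each coarse cell's outlet pixel (the highest fine-grid flow accumulation in the cell) is traced downstream until an accumulation threshold is exceeded, and the coarse direction points toward the coarse cell that was reached. The D8 encoding and stopping rule are illustrative simplifications of the published COTAT algorithm, not its exact implementation.

      import numpy as np

      # D8 neighbour offsets; the coarse direction stored is an index into this list
      # (an illustrative encoding, not the one used operationally)
      D8 = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

      def cotat_like(fine_dir, fine_acc, factor, area_threshold):
          # fine_dir: D8 indices on the fine grid; fine_acc: fine-grid flow accumulation
          nrow, ncol = fine_dir.shape
          crows, ccols = nrow // factor, ncol // factor
          coarse_dir = np.full((crows, ccols), -1, dtype=int)
          for ci in range(crows):
              for cj in range(ccols):
                  block = fine_acc[ci*factor:(ci+1)*factor, cj*factor:(cj+1)*factor]
                  k = int(np.argmax(block))           # cell-outlet pixel
                  i, j = ci*factor + k // factor, cj*factor + k % factor
                  start_acc = fine_acc[i, j]
                  for _ in range(nrow * ncol):        # safety bound on the trace
                      di, dj = D8[fine_dir[i, j]]
                      ni, nj = i + di, j + dj
                      if not (0 <= ni < nrow and 0 <= nj < ncol):
                          break
                      i, j = ni, nj
                      outside = (i // factor, j // factor) != (ci, cj)
                      if outside and fine_acc[i, j] - start_acc >= area_threshold:
                          break
                  step = (int(np.sign(i // factor - ci)), int(np.sign(j // factor - cj)))
                  if step != (0, 0):
                      coarse_dir[ci, cj] = D8.index(step)
          return coarse_dir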

  16. Uncertainty of future projections of species distributions in mountainous regions.

    PubMed

    Tang, Ying; Winkler, Julie A; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang

    2018-01-01

    Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution.
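
    The variance partitioning described above can be sketched with a two-factor ANOVA over the projected suitable areas; the column names and model formula below are hypothetical, and the exact design used in the study may differ.

      from statsmodels.formula.api import ols
      from statsmodels.stats.anova import anova_lm

      def variance_fractions(df):
          # df: pandas DataFrame with one row per projection and hypothetical columns
          # 'suitable_area', 'baseline' (climate dataset used for calibration) and 'gcm'
          model = ols('suitable_area ~ C(baseline) * C(gcm)', data=df).fit()
          table = anova_lm(model, typ=2)
          # share of total variance attributed to each factor, their interaction,
          # and the residual
          return table['sum_sq'] / table['sum_sq'].sum()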

  17. Uncertainty of future projections of species distributions in mountainous regions

    PubMed Central

    Tang, Ying; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang

    2018-01-01

    Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution. PMID:29320501

  18. A proxy for high-resolution regional reanalysis for the Southeast United States: assessment of precipitation variability in dynamically downscaled reanalyses

    USGS Publications Warehouse

    Stefanova, Lydia; Misra, Vasubandhu; Chan, Steven; Griffin, Melissa; O'Brien, James J.; Smith, Thomas J.

    2012-01-01

    We present an analysis of the seasonal, subseasonal, and diurnal variability of rainfall from the COAPS Land-Atmosphere Regional Reanalysis for the Southeast at 10-km resolution (CLARReS10). Most of our assessment focuses on the representation of summertime subseasonal and diurnal variability. Summer precipitation in the Southeast United States is a particularly challenging modeling problem because of the variety of regional-scale phenomena, such as sea breezes, thunderstorms and squall lines, which are not adequately resolved in coarse atmospheric reanalyses but contribute significantly to the hydrological budget over the region. We find that the dynamically downscaled reanalyses are in good agreement with station and gridded observations in terms of both the relative seasonal distribution and the diurnal structure of precipitation, although total precipitation amounts tend to be systematically overestimated. The diurnal cycle of summer precipitation in the downscaled reanalyses is in very good agreement with station observations and a clear improvement both over their "parent" reanalyses and over newer-generation reanalyses. The seasonal cycle of precipitation is particularly well simulated in Florida; this we attribute to the ability of the regional model to provide a more accurate representation of the spatial and temporal structure of finer-scale phenomena such as fronts and sea breezes. Over the northern portion of the domain, summer precipitation in the downscaled reanalyses remains, as in the "parent" reanalyses, overestimated. Given the degree of success that dynamical downscaling of reanalyses demonstrates in the simulation of the characteristics of regional precipitation, its favorable comparison to conventional newer-generation reanalyses and its cost-effectiveness, we conclude that for the Southeast United States such downscaling is a viable proxy for high-resolution conventional reanalysis.

  19. Interpolation of diffusion weighted imaging datasets.

    PubMed

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R

    2014-12-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibility of achieving high image resolution for finer anatomical details and the signal-to-noise ratio needed for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with a typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue compartments. Copyright © 2014. Published by Elsevier Inc.
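
    In practice, this kind of interpolation can be done per diffusion-weighted volume with a higher-order spline before fitting the tensor model; the sketch below uses scipy's zoom as a generic stand-in for the interpolation methods assessed (a zoom factor of 2 per axis yields eight times as many voxels).

      import numpy as np
      from scipy.ndimage import zoom

      def upsample_dwi(dwi, factor=2, order=3):
          # dwi: 4-D array (x, y, z, diffusion direction); only the spatial axes are
          # interpolated, each diffusion-weighted volume independently
          return np.stack([zoom(dwi[..., k], zoom=factor, order=order)
                           for k in range(dwi.shape[-1])], axis=-1)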

  20. Multiscale registration algorithm for alignment of meshes

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-03-01

    Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. This algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair a surrogate scan is created from the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, then those two points are accepted as a pair of surrogate corresponding points. The rotation and translation values between the surrogate scan pair are determined by using a set of surrogate corresponding points. Using the same rotation and translation values the original scan pairs are aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated. Otherwise the above process is continued with cubes of smaller and smaller sizes until the algorithm is terminated. However, at each finer resolution the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.
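
    Two of the building blocks above, pairing centroids by similar Gaussian curvature and recovering the rotation and translation from the matched pairs, can be sketched as follows; the curvature matching is a deliberately naive nearest-value search, and the least-squares fit is the standard Kabsch solution rather than the authors' specific estimator.

      import numpy as np

      def match_by_curvature(curv1, curv2, pts1, pts2, tol):
          # pair each centroid in scan 1 with the centroid in scan 2 whose Gaussian
          # curvature is closest, keeping pairs that differ by less than tol
          pairs = []
          for i, k in enumerate(curv1):
              j = int(np.argmin(np.abs(curv2 - k)))
              if abs(curv2[j] - k) < tol:
                  pairs.append((i, j))
          P = np.array([pts1[i] for i, _ in pairs])
          Q = np.array([pts2[j] for _, j in pairs])
          return P, Q

      def rigid_fit(P, Q):
          # least-squares rotation R and translation t such that R @ P[i] + t ~ Q[i]
          Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
          H = (P - Pc).T @ (Q - Qc)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          return R, Qc - R @ Pc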

  1. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) data vegetative index calculation processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACVIN, calculates vegetative index numbers on limited area coverage/high resolution picture transmission data for selected IJ grid sections. The IJ grid sections were previously extracted from the full resolution data tapes and stored on disk files.
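
    The record does not spell out the index formula; a common vegetative index for AVHRR-type data is the normalized difference of the visible and near-infrared channels, sketched below as an assumption rather than the documented LACVIN formulation.

      import numpy as np

      def vegetative_index(ch1_visible, ch2_near_ir):
          # normalized difference vegetation index (NDVI) per grid element
          ch1 = np.asarray(ch1_visible, dtype=float)
          ch2 = np.asarray(ch2_near_ir, dtype=float)
          return (ch2 - ch1) / (ch2 + ch1)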

  2. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    NASA Astrophysics Data System (ADS)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation at the unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and the reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using a reduced lat-lon grid and the variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of the orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to the 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of a solution in the region of interest.

  3. Impact of surface coupling grids on tropical cyclone extremes in high-resolution atmospheric simulations

    DOE PAGES

    Zarzycki, Colin M.; Reed, Kevin A.; Bacmeister, Julio T.; ...

    2016-02-25

    This article discusses the sensitivity of tropical cyclone climatology to surface coupling strategy in high-resolution configurations of the Community Earth System Model. Using two supported model setups, we demonstrate that the choice of grid on which the lowest model level wind stress and surface fluxes are computed may lead to differences in cyclone strength in multi-decadal climate simulations, particularly for the most intense cyclones. Using a deterministic framework, we show that when these surface quantities are calculated on an ocean grid that is coarser than the atmosphere, the computed frictional stress is misaligned with wind vectors in individual atmospheric grid cells. This reduces the effective surface drag, and results in more intense cyclones when compared to a model configuration where the ocean and atmosphere are of equivalent resolution. Our results demonstrate that the choice of computation grid for atmosphere–ocean interactions is non-negligible when considering climate extremes at high horizontal resolution, especially when model components are on highly disparate grids.
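
    The toy calculation below illustrates the mechanism described above under a simple quadratic bulk-drag assumption: block-averaging the winds onto a coarser coupling grid before computing stress yields a smaller mean stress than computing it on the native atmospheric cells, because the averaged vector is misaligned with (and weaker than) the individual cell winds. The drag constant and the block-averaging are illustrative assumptions, not the CESM coupling algorithm.

    ```python
    import numpy as np

    def stress_ratio_coarse_vs_fine(u_fine, v_fine, factor, cd_rho=1.8e-3):
        """Ratio of mean stress magnitude computed from block-averaged winds to the
        mean stress computed cell by cell (array shapes assumed divisible by factor)."""
        ny, nx = u_fine.shape
        uc = u_fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))
        vc = v_fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))
        tau_fine = cd_rho * (u_fine ** 2 + v_fine ** 2).mean()
        tau_coarse = cd_rho * (uc ** 2 + vc ** 2).mean()
        return tau_coarse / tau_fine   # <= 1, and < 1 when winds vary within a block
    ```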

  4. Study of a high-resolution PET system using a Silicon detector probe

    NASA Astrophysics Data System (ADS)

    Brzeziński, K.; Oliver, J. F.; Gillam, J.; Rafecas, M.

    2014-10-01

    A high-resolution silicon detector probe, in coincidence with a conventional PET scanner, is expected to provide images of higher quality than those achievable using the scanner alone. Spatial resolution should improve due to the finer pixelization of the probe detector, while increased sensitivity in the probe vicinity is expected to decrease noise. A PET-probe prototype is being developed utilizing this principle. The system includes a probe consisting of ten layers of silicon detectors, each an 80 × 52 array of 1 × 1 × 1 mm³ pixels, to be operated in coincidence with a modern clinical PET scanner. Detailed simulation studies of this system have been performed to assess the effect of the additional probe information on the quality of the reconstructed images. A grid of point sources was simulated to study the contribution of the probe to the system resolution at different locations over the field of view (FOV). A resolution phantom was used to demonstrate the effect on image resolution for two probe positions. A homogeneous source distribution with hot and cold regions was used to demonstrate that the localized improvement in resolution does not come at the expense of the overall quality of the image. Since the improvement is constrained to an area close to the probe, breast imaging is proposed as a potential application for the novel geometry. In this sense, a simplified breast phantom, adjacent to heart and torso compartments, was simulated and the effect of the probe on lesion detectability, through measurements of the local contrast recovery coefficient-to-noise ratio (CNR), was observed. The list-mode ML-EM algorithm was used for image reconstruction in all cases. As expected, the point spread function of the PET-probe system was found to be non-isotropic and to vary with position, offering improvement in specific regions. An increase in resolution by factors of up to 2 was observed in the region close to the probe. Images of the resolution phantom showed visible improvement in resolution when including the probe in the simulations. The image quality study demonstrated that contrast and spill-over ratio in other areas of the FOV were not sacrificed for this enhancement. The CNR study performed on the breast phantom indicates increased lesion detectability provided by the probe.
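
    For readers unfamiliar with the reconstruction step, the sketch below shows the basic multiplicative ML-EM update in its binned form on a toy dense system matrix; the study itself uses the list-mode variant, which applies the same ratio summed over detected events rather than sinogram bins.

    ```python
    import numpy as np

    def mlem(system_matrix, counts, n_iter=20):
        """ML-EM image reconstruction. system_matrix[i, j]: probability that an
        emission in voxel j is detected in bin/LOR i; counts[i]: measured counts."""
        a = np.asarray(system_matrix, dtype=float)
        y = np.asarray(counts, dtype=float)
        sensitivity = a.sum(axis=0)                 # per-voxel detection probability
        x = np.ones(a.shape[1])                     # uniform initial image
        for _ in range(n_iter):
            forward = a @ x                         # expected counts per bin
            ratio = np.divide(y, forward, out=np.zeros_like(y), where=forward > 0)
            x *= (a.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x
    ```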

  5. VRLA Refined™ lead — A must for VRLA batteries. Specification and Performance

    NASA Astrophysics Data System (ADS)

    Stevenson, M. W.; Lakshmi, C. S.; Manders, J. E.; Lam, L. T.

    VRLA Refined™ lead, produced and marketed by Pasminco since 1997, is a very high purity lead with guaranteed low levels of the gassing elements and an optimum bismuth content. This composition produces an oxide of finer particle size with higher acid absorption, and imparts outstanding electrical performance and endurance, especially under conditions of deep cycling. VRLA batteries suffer dry-out, self-discharge, negative plate capacity loss and poor cycle life unless special lead is used for the grids and active material. This paper addresses the lead used for active material.

  6. Is there potential added value in COSMO-CLM forced by ERA reanalysis data?

    NASA Astrophysics Data System (ADS)

    Lenz, Claus-Jürgen; Früh, Barbara; Adalatpanah, Fatemeh Davary

    2017-12-01

    The potential added value (PAV) concept suggested by Di Luca et al. (Clim Dyn 40:443-464, 2013a) is applied to ERA-Interim-driven runs of the regional climate model COSMO-CLM. The runs are performed for the period 1979-2013 over the EURO-CORDEX domain at horizontal grid resolutions of 0.11°, 0.22°, and 0.44°, such that each higher-resolution model grid fits into the next coarser grid. The PAV concept is applied to annual, seasonal, and monthly means of the 2 m air temperature. Results show the highest potential added value for the run with the finest grid and a generally increasing PAV with increasing resolution. The potential added value depends strongly on the season as well as on the region of consideration. The gain in PAV is larger when enhancing the resolution from 0.44° to 0.22° than from 0.22° to 0.11°. For aggregations to 0.88° and 1.76° the differences in PAV between the COSMO-CLM runs on the mentioned grid resolutions are largest; they nearly vanish for aggregations to even coarser grids. In all cases at least 80% of the PAV comes from its stationary part.
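
    A minimal numerical reading of the PAV idea is sketched below: aggregate a high-resolution field to a coarser grid by block averaging and measure the variance of the fine-scale detail that the coarse grid cannot represent. This follows the spirit, not the exact formulation, of Di Luca et al. (2013a); the block factor and the variance measure are assumptions.

    ```python
    import numpy as np

    def potential_added_value(fine_field, factor):
        """Mean squared fine-scale deviation from the block-mean (coarse) field."""
        ny, nx = fine_field.shape
        ny_c, nx_c = ny // factor, nx // factor
        f = fine_field[:ny_c * factor, :nx_c * factor]        # trim to full blocks
        blocks = f.reshape(ny_c, factor, nx_c, factor)
        coarse = blocks.mean(axis=(1, 3))                     # aggregated field
        residual = blocks - coarse[:, None, :, None]          # detail the coarse grid misses
        return float(np.mean(residual ** 2))
    ```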

  7. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  8. A Variable Resolution Atmospheric General Circulation Model for a Megasite at the North Slope of Alaska

    NASA Astrophysics Data System (ADS)

    Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.

    2016-12-01

    The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high resolution modeling to produce high resolution data products for the climate community. Such a data product requires high resolution modeling over the area of the megasite. We present three variable resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, with a refinement in resolution down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. The grids differ in the selection of areas of refinement which capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable resolution model at the NSA is investigated.

  9. FitEM2EM—Tools for Low Resolution Study of Macromolecular Assembly and Dynamics

    PubMed Central

    Frankenstein, Ziv; Sperling, Joseph; Sperling, Ruth; Eisenstein, Miriam

    2008-01-01

    Studies of the structure and dynamics of macromolecular assemblies often involve comparison of low resolution models obtained using different techniques such as electron microscopy or atomic force microscopy. We present new computational tools for comparing (matching) and docking of low resolution structures, based on shape complementarity. The matched or docked objects are represented by three dimensional grids where the value of each grid point depends on its position with regard to the interior, surface or exterior of the object. The grids are correlated using fast Fourier transformations producing either matches of related objects or docking models depending on the details of the grid representations. The procedures incorporate thickening and smoothing of the surfaces of the objects which effectively compensates for differences in the resolution of the matched/docked objects, circumventing the need for resolution modification. The presented matching tool FitEM2EMin successfully fitted electron microscopy structures obtained at different resolutions, different conformers of the same structure and partial structures, ranking correct matches at the top in every case. The differences between the grid representations of the matched objects can be used to study conformation differences or to characterize the size and shape of substructures. The presented low-to-low docking tool FitEM2EMout ranked the expected models at the top. PMID:18974836
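
    The core of the FFT-based correlation step can be sketched as below for two equally sized 3-D grid representations: the translational search is done in Fourier space and the best integer shift is read off the correlation maximum. The rotational search and the interior/surface/exterior grid weighting described above are omitted, and the function name is illustrative.

    ```python
    import numpy as np

    def best_translation_by_fft(grid_a, grid_b):
        """Integer shift to apply to grid_b so it best matches grid_a (circular)."""
        corr = np.fft.ifftn(np.fft.fftn(grid_a) * np.conj(np.fft.fftn(grid_b))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # convert wrap-around indices to signed shifts
        shift = tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
        return shift, float(corr.max())
    ```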

  10. STAMMEX high resolution gridded daily precipitation dataset over Germany: a new potential for regional precipitation climate research

    NASA Astrophysics Data System (ADS)

    Zolina, Olga; Simmer, Clemens; Kapala, Alice; Mächel, Hermann; Gulev, Sergey; Groisman, Pavel

    2014-05-01

    We present new high resolution daily precipitation grids developed at the Meteorological Institute, University of Bonn, and the German Weather Service (DWD) under the STAMMEX project (Spatial and Temporal Scales and Mechanisms of Extreme Precipitation Events over Central Europe). The daily precipitation grids have been developed from the daily-observing precipitation network of DWD, which runs one of the world's densest rain gauge networks, comprising more than 7500 stations. Several quality-controlled daily gridded products with homogenized sampling were developed, covering the periods 1931-onwards (with 0.5 degree resolution), 1951-onwards (0.25 degree and 0.5 degree), and 1971-2000 (0.1 degree). Different methods were tested to select the best gridding methodology that minimizes errors of integral grid estimates over hilly terrain. Besides daily precipitation values with uncertainty estimates (which include standard estimates of the kriging uncertainty as well as error estimates derived by a bootstrapping algorithm), the STAMMEX data sets include a variety of statistics that characterize the temporal and spatial dynamics of the precipitation distribution (quantiles, extremes, wet/dry spells, etc.). Comparisons with existing continental-scale daily precipitation grids (e.g., CRU, ECA E-OBS, GCOS), which include considerably fewer observations than those used in STAMMEX, demonstrate the added value of high-resolution grids for extreme rainfall analyses. These data exhibit spatial variability patterns and trends in precipitation extremes which are missed or incorrectly reproduced over Central Europe by coarser resolution grids based on sparser networks. The STAMMEX dataset can be used for high-quality climate diagnostics of precipitation variability, as a reference for reanalyses and remotely sensed precipitation products (including the upcoming Global Precipitation Mission products), and as input into regional climate and operational weather forecast models. We will present numerous applications of the STAMMEX grids, spanning from case studies of the major Central European floods to long-term changes in different precipitation statistics, including those accounting for the alternation of dry and wet periods and the precipitation intensities associated with prolonged rainy episodes.

  11. Fine Particulate Matter and Cardiovascular Disease: Comparison of Assessment Methods for Long-term Exposure

    EPA Science Inventory

    Background Adverse cardiovascular events have been linked with PM2.5 exposure obtained primarily from air quality monitors, which rarely co-locate with participant residences. Modeled PM2.5 predictions at finer resolution may more accurately predict residential exposure; however...

  12. Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models

    NASA Astrophysics Data System (ADS)

    Xu, Shiming

    2015-04-01

    We propose new grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.

  13. Wide-angle display-type retarding field analyzer with high energy and angular resolutions

    NASA Astrophysics Data System (ADS)

    Muro, Takayuki; Ohkochi, Takuo; Kato, Yukako; Izumi, Yudai; Fukami, Shun; Fujiwara, Hidenori; Matsushita, Tomohiro

    2017-12-01

    Deployments of spherical grids to obtain high energy and angular resolutions for retarding field analyzers (RFAs) having acceptance angles as large as or larger than ±45° were explored under the condition of using commercially available microchannel plates with effective diameters of approximately 100 mm. As a result of electron trajectory simulations, a deployment of three spherical grids with significantly different grid separations instead of conventional equidistant separations showed an energy resolving power (E/ΔE) of 3200 and an angular resolution of 0.6°. The mesh number of the wire mesh retarding grid used for the simulation was 250. An RFA constructed with the simulated design experimentally showed an E/ΔE of 1100 and an angular resolution of 1°. Using the RFA and synchrotron radiation of 900 eV, photoelectron diffraction (PED) measurements were performed for single-crystal graphite. A clear C 1s PED pattern was observed even when the differential energy of the RFA was set at 0.5 eV. Further improvement of the energy resolution was theoretically examined under the assumption of utilizing a retarding grid fabricated by making a large number of radially directed cylindrical holes through a partial spherical shell instead of using a wire mesh retarding grid. An E/ΔE of 14 500 was predicted for a hole design with a diameter of 60 μm and a depth of 100 μm. A retarding grid with this hole design and a holed area corresponding to an acceptance angle of ±7° was fabricated. An RFA constructed with this retarding grid experimentally showed an E/ΔE of 1800. Possible reasons for the experimental E/ΔE lower than the theoretical values are discussed.

  14. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  15. CryoSat Plus For Oceans: an ESA Project for CryoSat-2 Data Exploitation Over Ocean

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Cotton, D.; Clarizia, M.; Roca, M.; Gommenginger, C. P.; Naeije, M. C.; Labroue, S.; Picot, N.; Fernandes, J.; Andersen, O. B.; Cancet, M.; Dinardo, S.; Lucas, B. M.

    2012-12-01

    The ESA CryoSat-2 mission is the first space mission to carry a space-borne radar altimeter that is able to operate in the conventional pulsewidth-limited (LRM) mode and in the novel Synthetic Aperture Radar (SAR) mode. Although the prime objective of the CryoSat-2 mission is dedicated to monitoring land and marine ice, the SAR mode capability of the CryoSat-2 SIRAL altimeter also presents the possibility of demonstrating significant potential benefits of SAR altimetry for ocean applications, based on expected performance enhancements which include improved range precision and finer along-track spatial resolution. With this scope in mind, the "CryoSat Plus for Oceans" (CP4O) Project, dedicated to the exploitation of CryoSat-2 data over ocean and supported by the ESA STSE (Support To Science Element) programme, brings together an expert European consortium comprising DTU Space, isardSAT, National Oceanography Centre, Noveltis, SatOC, Starlab, TU Delft, the University of Porto and CLS (supported by CNES). The objectives of CP4O are: to build a sound scientific basis for new scientific and operational applications of CryoSat-2 data over the open ocean, polar ocean, coastal seas and for sea-floor mapping; to generate and evaluate new methods and products that will enable the full exploitation of the capabilities of the CryoSat-2 SIRAL altimeter, and extend their application beyond the initial mission objectives; and to ensure that the scientific return of the CryoSat-2 mission is maximised. In particular four themes will be addressed. Open Ocean Altimetry: combining the GOCE geoid model with CryoSat oceanographic LRM products for the retrieval of a CryoSat MSS/MDT model over open ocean surfaces and for analysis of prominent mesoscale and large-scale open ocean features; under this priority the project will also foster the exploitation of the finer resolution and higher SNR of novel CryoSat SAR data to detect short-spatial-scale open ocean features. High Resolution Polar Ocean Altimetry: combination of the GOCE geoid model with CryoSat oceanographic SAR products over polar oceans for the retrieval of CryoSat MSS/MDT and current circulation systems, improving the polar tide models and studying the coupling between blowing wind and current patterns. High Resolution Coastal Zone Altimetry: exploitation of the finer resolution and higher SNR of novel CryoSat SAR data to get the radar altimetry closer to the shore, exploiting the SARIn mode for the discrimination of off-nadir land targets (e.g. steep cliffs) in the radar footprint from the nadir sea return. High Resolution Sea-Floor Altimetry: exploitation of the finer resolution and higher SNR of novel CryoSat SAR data to resolve the weak short-wavelength sea surface signals caused by sea-floor topography elements and to map uncharted sea-mounts/trenches. One of the first project activities is the consolidation of preliminary scientific requirements for the four themes under investigation. This paper will present the CP4O project content and objectives and will address the initial results from the ongoing work to define the scientific requirements.

  16. Modeling the near-ultraviolet band of GK stars. III. Dependence on abundance pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Short, C. Ian; Campbell, Eamonn A., E-mail: ishort@ap.smu.ca

    2013-06-01

    We extend the grid of non-LTE (NLTE) models presented in Paper II to explore variations in abundance pattern in two ways: (1) the adoption of the Asplund et al. (GASS10) abundances, (2) for stars of metallicity, [M/H], of –0.5, the adoption of a non-solar enhancement of α-elements by +0.3 dex. Moreover, our grid of synthetic spectral energy distributions (SEDs) is interpolated to a finer numerical resolution in both T_eff (ΔT_eff = 25 K) and log g (Δlog g = 0.25). We compare the values of T_eff and log g inferred from fitting LTE and NLTE SEDs to observed SEDs throughout the entire visible band, and in an ad hoc 'blue' band. We compare our spectrophotometrically derived T_eff values to a variety of T_eff calibrations, including more empirical ones, drawn from the literature. For stars of solar metallicity, we find that the adoption of the GASS10 abundances lowers the inferred T_eff value by 25-50 K for late-type giants, and NLTE models computed with the GASS10 abundances give T_eff results that are marginally in better agreement with other T_eff calibrations. For stars of [M/H] = –0.5 there is marginal evidence that adoption of α-enhancement further lowers the derived T_eff value by 50 K. Stellar parameters inferred from fitting NLTE models to SEDs are more dependent than LTE models on the wavelength region being fitted, and we find that the effect depends on how heavily line blanketed the fitting region is, whether the fitting region is to the blue of the Wien peak of the star's SED, or both.

  17. A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene Marie

    1992-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. The Lagrangian correction technique does not require any a-priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing, with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.

  18. Recommended aquifer grid resolution for E-Area PA revision transport simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.

    This memorandum addresses portions of Section 3.5.2 of SRNL (2016) by recommending horizontal and vertical grid resolution for aquifer transport, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.

  19. Filter and Grid Resolution in DG-LES

    NASA Astrophysics Data System (ADS)

    Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman

    2017-11-01

    The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is usually set to be proportional to the grid spacing. In this work, the DG method is combined with a subgrid scale (SGS) closure which is equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of spectral approximation increases. Different cases for LES of a three-dimensional temporally developing mixing layer are appraised and a systematic parametric study is conducted to investigate the effects of grid resolution, the filter width size, and the order of spectral discretization. Comparative assessments are also made via the use of high resolution direct numerical simulation (DNS) data.

  20. Downscaling of Remotely Sensed Land Surface Temperature with multi-sensor based products

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Baik, J.; Choi, M.

    2016-12-01

    Remotely sensed satellite data provide a bird's eye view, which allows us to understand the spatiotemporal behavior of hydrologic variables at the global scale. In particular, geostationary satellites continuously observing specific regions are useful for monitoring fluctuations of hydrologic variables as well as meteorological factors. However, there are still problems regarding spatial resolution, namely whether fine-scale land cover can be represented at the spatial resolution of the satellite sensor, especially in areas of complex topography. To solve these problems, many researchers have tried to establish relationships among various hydrological factors and to combine images from multiple sensors to downscale land surface products. One geostationary satellite, the Communication, Ocean and Meteorological Satellite (COMS), carries the Meteorological Imager (MI) and the Geostationary Ocean Color Imager (GOCI). MI, performing the meteorological mission, produces Rainfall Intensity (RI), Land Surface Temperature (LST), and many other products every 15 minutes. Even though it has high temporal resolution, the low spatial resolution of MI data is treated as a major problem in many studies. This study suggests a methodology to downscale 4 km LST datasets derived from MI to a finer resolution (500 m) by using GOCI datasets over Northeast Asia. The Normalized Difference Vegetation Index (NDVI), recognized as a variable with a significant relationship to LST, is chosen to estimate LST at the finer resolution. Pixels of NDVI and LST are separated according to land cover provided by the MODerate resolution Imaging Spectroradiometer (MODIS) to achieve a more accurate relationship. The downscaled LST is compared with LST observed by the Automated Synoptic Observing System (ASOS) to assess its accuracy. The downscaled LST results of this study, coupled with the advantages of a geostationary satellite, can be applied to observe hydrologic processes efficiently.
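
    The downscaling step can be sketched as a per-land-cover linear regression between coarse-scale LST and NDVI that is then applied to the fine-scale NDVI, in the spirit of the methodology above. The linear form, the flattened-array interface and the absence of a residual correction are simplifying assumptions.

    ```python
    import numpy as np

    def downscale_lst(lst_coarse, ndvi_coarse, lc_coarse, ndvi_fine, lc_fine):
        """Fit LST ~ NDVI per land-cover class at 4 km and apply at 500 m.
        All inputs are flattened 1-D sample arrays; lc_* hold land-cover codes."""
        lst_fine = np.full(ndvi_fine.shape, np.nan)
        for lc in np.unique(lc_coarse):
            sel = lc_coarse == lc
            if sel.sum() < 2:
                continue                                    # not enough samples to fit
            slope, intercept = np.polyfit(ndvi_coarse[sel], lst_coarse[sel], 1)
            lst_fine[lc_fine == lc] = slope * ndvi_fine[lc_fine == lc] + intercept
        return lst_fine
    ```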

  1. NPP-VIIRS DNB-based reallocating subpopulations to mercury in Urumqi city cluster, central Asia

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Feng, X. B.; Dai, W.; Li, P.; Ju, C. Y.; Bao, Z. D.; Han, Y. L.

    2017-02-01

    Accurate and up-to-date assignment of population-related environmental matters onto fine grid cells in oasis cities of arid areas remains challenging. We present an approach based on the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) to reallocate population onto a regular finer surface. The number of people potentially exposed to mercury was reallocated onto a 0.1 x 0.1 km reference grid in the Urumqi city cluster of China’s Xinjiang, central Asia. The result of Monte Carlo modelling indicated that a range of 0.5 to 2.4 million people was reliable. The study highlights that the NPP-VIIRS DNB-based multi-layered, dasymetric, spatial method enhances our ability to remotely estimate the distribution and size of a target population at the street-level scale and has the potential to transform control strategies for epidemiology, public policy and other socioeconomic fields.
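
    The dasymetric reallocation step can be sketched as a simple proportional weighting of an administrative population total by DNB radiance over habitable reference cells. The study's multi-layered method adds further constraints and a Monte Carlo treatment of uncertainty; the proportional rule below is an assumption.

    ```python
    import numpy as np

    def reallocate_population(unit_total, dnb_radiance, habitable_mask):
        """Spread one administrative unit's population over its 0.1 km cells in
        proportion to VIIRS DNB radiance, restricted to habitable cells."""
        weights = np.where(habitable_mask, np.maximum(dnb_radiance, 0.0), 0.0)
        total_weight = weights.sum()
        if total_weight == 0:
            return np.zeros_like(weights)
        return unit_total * weights / total_weight
    ```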

  2. Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains

    NASA Astrophysics Data System (ADS)

    Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.

    2017-12-01

    In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented with a low resolution grid. This issue can be overcome by using high resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest. It is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source codes of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressures and velocity fields through its APIs (Application Programming Interface) between the parent and the child components. The developed two-way nesting framework has been validated with an idealized test case where the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.

  3. Spatial forecasting of disease risk and uncertainty

    USGS Publications Warehouse

    De Cola, L.

    2002-01-01

    Because maps typically represent the value of a single variable over 2-dimensional space, cartographers must simplify the display of multiscale complexity, temporal dynamics, and underlying uncertainty. A choropleth disease risk map based on data for polygonal regions might depict incidence (cases per 100,000 people) within each polygon for a year but ignore the uncertainty that results from finer-scale variation, generalization, misreporting, small numbers, and future unknowns. In response to such limitations, this paper reports on the bivariate mapping of data "quantity" and "quality" of Lyme disease forecasts for states of the United States. Historical state data for 1990-2000 are used in an autoregressive model to forecast 2001-2010 disease incidence and a probability index of confidence, each of which is then kriged to provide two spatial grids representing continuous values over the nation. A single bivariate map is produced from the combination of the incidence grid (using a blue-to-red hue spectrum) and a probabilistic confidence grid (used to control the saturation of the hue at each grid cell). The resultant maps are easily interpretable, and the approach may be applied to such problems as detecting unusual disease occurrences, visualizing past and future incidence, and assembling a consistent regional disease atlas showing patterns of forecasted risks in light of probabilistic confidence.

  4. LES-based generation of high-frequency fluctuation in wind turbulence obtained by meteorological model

    NASA Astrophysics Data System (ADS)

    Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao

    2017-11-01

    The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is significant for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoons or tornadoes. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF. In LES, when the grid is suddenly refined at an interface of nested grids that is normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delay in the generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be correctly predicted. In the case of a meteorological model, the velocity fluctuations tend to vary diffusively, without the high-frequency components, due to numerical filtering effects. In order to predict the peak value of the wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.

  5. The impact of wave number selection and spin up time when using spectral nudging for dynamical downscaling applications

    NASA Astrophysics Data System (ADS)

    Gómez, Breogán; Miguez-Macho, Gonzalo

    2017-04-01

    Nudging techniques are commonly used to constrain the evolution of numerical models to a reference dataset that is typically of a lower resolution. The nudged model retains some of the features of the reference field while incorporating its own dynamics into the solution. These characteristics have made nudging very popular in dynamic downscaling applications that range from short, single case studies to multi-decadal regional climate simulations. Recently, a variation of this approach, called Spectral Nudging, has gained popularity for its ability to maintain the higher temporal and spatial variability of the model results while forcing the large scales in the solution with a coarser resolution field. In this work, we focus on a little-explored aspect of this technique: the impact of selecting different cut-off wave numbers and spin-up times. We perform four-day-long simulations with the WRF model, daily for three different one-month periods, that include a free run and several Spectral Nudging experiments with cut-off wave numbers ranging from the smallest to the largest possible (full Grid Nudging). Results show that Spectral Nudging is very effective at imposing the selected scales onto the solution, while allowing the limited area model to incorporate finer scale features. The model error diminishes rapidly as the nudging expands over broader parts of the spectrum, but this decreasing trend ceases sharply at cut-off wave numbers equivalent to a length scale of about 1000 km, and the error magnitude changes minimally thereafter. This scale corresponds to the Rossby radius of deformation, separating synoptic from convective scales in the flow. When nudging above this value is applied, a shifting of the synoptic patterns can occur in the solution, yielding large model errors. However, when selecting smaller scales, the fine-scale contribution of the model is damped, thus making 1000 km the appropriate scale threshold to nudge in order to balance both effects. Finally, we note that longer spin-up times are needed for model errors to stabilize when using Spectral Nudging than with Grid Nudging. Our results suggest that this time is between 36 and 48 hours.
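
    The essence of spectral nudging can be sketched as below for a doubly periodic 2-D field: only Fourier components with wavelengths longer than the cutoff (about 1000 km in the study above) are relaxed toward the coarse driving field. The sharp spectral cutoff, periodic boundaries and single relaxation time are simplifying assumptions, not the WRF implementation.

    ```python
    import numpy as np

    def spectral_nudging_tendency(model_field, driving_field, cutoff_km, dx_km, tau_s):
        """Nudging tendency that acts only on scales longer than cutoff_km."""
        diff_hat = np.fft.fft2(driving_field - model_field)
        ky = np.fft.fftfreq(model_field.shape[0], d=dx_km)     # cycles per km
        kx = np.fft.fftfreq(model_field.shape[1], d=dx_km)
        kmag = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
        keep_large_scales = kmag <= 1.0 / cutoff_km
        return np.fft.ifft2(diff_hat * keep_large_scales).real / tau_s
    ```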

  6. A graphene oxide-carbon nanotube grid for high-resolution transmission electron microscopy of nanomaterials.

    PubMed

    Zhang, Lina; Zhang, Haoxu; Zhou, Ruifeng; Chen, Zhuo; Li, Qunqing; Fan, Shoushan; Ge, Guanglu; Liu, Renxiao; Jiang, Kaili

    2011-09-23

    A novel grid for use in transmission electron microscopy is developed. The supporting film of the grid is composed of thin graphene oxide films overlying a super-aligned carbon nanotube network. The composite film combines the advantages of graphene oxide and carbon nanotube networks and has the following properties: it is ultra-thin, it has a large flat and smooth effective supporting area with a homogeneous amorphous appearance, high stability, and good conductivity. The graphene oxide-carbon nanotube grid has a distinct advantage when characterizing the fine structure of a mass of nanomaterials over conventional amorphous carbon grids. Clear high-resolution transmission electron microscopy images of various nanomaterials are obtained easily using the new grids.

  7. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  8. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.

  9. Verification of Global Assimilation of Ionospheric Measurements Gauss Markov (GAIM-GM) Model Forecast Accuracy

    DTIC Science & Technology

    2011-09-01

    [Figure 25 residue: scatter plots of the number of occurrences, panels (a) Kp 0-3 and (b) Kp 4-9.] ...dependent physics-based model that uses the Ionospheric Forecast Model (IFM) as a background model upon which perturbations are imposed via a Kalman filter... vertical output resolution as the IFM. GAIM-GM can also be run in a regional mode with a finer resolution (Scherliess et al., 2006). GAIM-GM is

  10. Downscaling Global Emissions and Its Implications Derived from Climate Model Experiments

    PubMed Central

    Abe, Manabu; Kinoshita, Tsuguki; Hasegawa, Tomoko; Kawase, Hiroaki; Kushida, Kazuhide; Masui, Toshihiko; Oka, Kazutaka; Shiogama, Hideo; Takahashi, Kiyoshi; Tatebe, Hiroaki; Yoshikawa, Minoru

    2017-01-01

    In climate change research, future scenarios of greenhouse gas and air pollutant emissions generated by integrated assessment models (IAMs) are used in climate models (CMs) and earth system models to analyze future interactions and feedback between human activities and climate. However, the spatial resolutions of IAMs and CMs differ. IAMs usually disaggregate the world into 10–30 aggregated regions, whereas CMs require a grid-based spatial resolution. Therefore, downscaling emissions data from IAMs into a finer scale is necessary to input the emissions into CMs. In this study, we examined whether differences in downscaling methods significantly affect climate variables such as temperature and precipitation. We tested two downscaling methods using the same regionally aggregated sulfur emissions scenario obtained from the Asian-Pacific Integrated Model/Computable General Equilibrium (AIM/CGE) model. The downscaled emissions were fed into the Model for Interdisciplinary Research on Climate (MIROC). One of the methods assumed a strong convergence of national emissions intensity (e.g., emissions per gross domestic product), while the other was based on inertia (i.e., the base-year remained unchanged). The emissions intensities in the downscaled spatial emissions generated from the two methods markedly differed, whereas the emissions densities (emissions per area) were similar. We investigated whether the climate change projections of temperature and precipitation would significantly differ between the two methods by applying a field significance test, and found little evidence of a significant difference between the two methods. Moreover, there was no clear evidence of a difference between the climate simulations based on these two downscaling methods. PMID:28076446
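
    The contrast between the two downscaling rules can be sketched as two different weightings used to split a regional emissions total over grid cells: the inertia rule keeps the base-year spatial pattern, while the intensity-convergence rule distributes emissions in proportion to GDP (i.e., toward a common emissions-per-GDP intensity). Both rules are schematic assumptions; the AIM/CGE-to-MIROC workflow applies them per country with convergence over time.

    ```python
    import numpy as np

    def downscale_emissions(regional_total, base_year_emissions, gdp, method):
        """Split a regional emissions total across cells with one of two weightings."""
        if method == "inertia":
            weights = np.maximum(base_year_emissions, 0.0)   # keep the historical pattern
        elif method == "intensity":
            weights = np.maximum(gdp, 0.0)                   # equal emissions per GDP
        else:
            raise ValueError(f"unknown method: {method}")
        return regional_total * weights / weights.sum()
    ```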

  11. File Specification for the 7-km GEOS-5 Nature Run, Ganymed Release Non-Hydrostatic 7-km Global Mesoscale Simulation

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Putman, William; Nattala, J.

    2014-01-01

    This document describes the gridded output files produced by a two-year global, non-hydrostatic mesoscale simulation for the period 2005-2006, produced with the non-hydrostatic version of the GEOS-5 Atmospheric Global Climate Model (AGCM). In addition to standard meteorological parameters (wind, temperature, moisture, surface pressure), this simulation includes 15 aerosol tracers (dust, sea-salt, sulfate, black and organic carbon), O3, CO and CO2. This model simulation is driven by prescribed sea-surface temperature and sea-ice, daily volcanic and biomass burning emissions, as well as high-resolution inventories of anthropogenic sources. A description of the GEOS-5 model configuration used for this simulation can be found in Putman et al. (2014). The simulation is performed at a horizontal resolution of 7 km using a cubed-sphere horizontal grid with 72 vertical levels, extending up to 0.01 hPa (approximately 80 km). For user convenience, all data products are generated on two logically rectangular longitude-latitude grids: a full-resolution 0.0625 deg grid that approximately matches the native cubed-sphere resolution, and another 0.5 deg reduced-resolution grid. The majority of the full-resolution data products are instantaneous, with some fields being time-averaged. The reduced-resolution datasets are mostly time-averaged, with some fields being instantaneous. Hourly data intervals are used for the reduced-resolution datasets, while 30-minute intervals are used for the full-resolution products. All full-resolution output is on the model's native 72-layer hybrid sigma-pressure vertical grid, while the reduced-resolution output is given on native vertical levels and on 48 pressure surfaces extending up to 0.02 hPa. Section 4 presents additional details on the horizontal and vertical grids. Information on the model surface representation can be found in Appendix B. The GEOS-5 product is organized into file collections that are described in detail in Appendix C. Additional details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.

  12. Large-Eddy Simulation Comparison of Neutral Flow Over a Canopy: Sensitivities to Physical and Numerical Conditions, and Similarity to Other Representations

    NASA Astrophysics Data System (ADS)

    Ouwersloot, H. G.; Moene, A. F.; Attema, J. J.; de Arellano, J. Vilà-Guerau

    2017-01-01

    The representation of a neutral atmospheric flow over roughness elements simulating a vegetation canopy is compared between two large-eddy simulation models, wind-tunnel data and recently updated empirical flux-gradient relationships. Special attention is devoted to the dynamics in the roughness sublayer above the canopy layer, where turbulence is most intense. By demonstrating that the flow properties are consistent across these different approaches, confidence in the individual independent representations is bolstered. Systematic sensitivity analyses with the Dutch Atmospheric Large-Eddy Simulation model show that the transition in the one-sided plant-area density from the canopy layer to unobstructed air potentially alters the flow in the canopy and roughness sublayer. Anomalously induced fluctuations can be fully suppressed by spreading the transition over four steps. Finer vertical resolutions only serve to reduce the magnitude of these fluctuations, but do not prevent them. To capture the general dynamics of the flow, a resolution of 10 % of the canopy height is found to suffice, while a finer resolution still improves the representation of the turbulent kinetic energy. Finally, quadrant analyses indicate that momentum transport is dominated by the mean velocity components within each quadrant. Consequently, a mass-flux approach can be applied to represent the momentum flux.

  13. WE-G-204-06: Grid-Line Artifact Minimization for High Resolution Detectors Using Iterative Residual Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    2015-06-15

    Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field with grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid-line artifacts with high resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
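
    The iterative search described in the Methods can be sketched as a one-dimensional scan over trial scatter values: each value is subtracted, the image is divided by the flat-field-with-grid image, and the value that minimizes the standard deviation inside a fixed region (i.e., the residual grid-line structure) is kept. Array names and the brute-force scan are illustrative assumptions.

    ```python
    import numpy as np

    def optimal_scatter_value(phantom_image, flatfield_with_grid, roi_mask, scatter_values):
        """Return (scatter value, residual std) minimizing grid-line structure in the ROI."""
        best = None
        for s in scatter_values:
            corrected = (phantom_image - s) / flatfield_with_grid
            sigma = float(corrected[roi_mask].std())
            if best is None or sigma < best[1]:
                best = (float(s), sigma)
        return best
    ```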

  14. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-02-01

    In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues such as a smoothed scaling factor, or the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where a complex land-ocean distribution is present.

  15. Micro-Slit Collimators for X-Ray/Gamma-Ray Imaging

    NASA Technical Reports Server (NTRS)

    Appleby, Michael; Fraser, Iain; Klinger, Jill

    2011-01-01

    A hybrid photochemical-machining process is coupled with precision stack lamination to allow for the fabrication of multiple ultra-high-resolution grids on a single array substrate. In addition, special fixturing and etching techniques have been developed that allow higher-resolution multi-grid collimators to be fabricated. Building on past work developing a manufacturing technique for fabricating multi-grid, high-resolution coating modulation collimators for arcsecond and subarcsecond x-ray and gamma-ray imaging, the current work reduces the grid pitch by almost a factor of two, down to 22 microns. Additionally, a process was developed for reducing thin, high-Z (tungsten or molybdenum) foil from the thinnest commercially available thickness (25 microns) down to approximately 10 microns using precisely controlled chemical etching.

  16. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

    Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there have been very few modeling efforts that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and, finally, gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fuyu; Collins, William D.; Wehner, Michael F.

    High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution in the tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and emulate better storm structures in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, and mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to exert more important control on these meteorological statistics than the horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariance.

  18. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    NASA Astrophysics Data System (ADS)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid scale one-equation model by using a blending function that is coordinate system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third one has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM-SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25 deg ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blend function and yielded further improved agreement with experiment in the recovery region. In the 20 deg ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an "appropriately" fine grid, current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
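
    A blending function of the kind described above can be sketched as a monotonic function of the ratio between the turbulence dissipation length scale and the local grid size, switching from RANS (0) toward LES (1) where the grid can resolve the energetic eddies. The tanh form and its sharpness constant are assumptions, not the dissertation's exact expressions.

    ```python
    import numpy as np

    def rans_les_blend(dissipation_length, grid_size, sharpness=4.0):
        """Monotonic blend: ~0 (RANS) when the grid is coarse relative to the
        dissipation length scale, ~1 (LES) when it is fine."""
        ratio = dissipation_length / np.maximum(grid_size, 1e-12)
        return 0.5 * (1.0 + np.tanh(sharpness * (ratio - 1.0)))
    ```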

  19. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  20. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  1. Multi-resolution MPS method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid

    2018-04-01

    In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions in different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries in the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. The accuracy of the proposed method was verified by simulations of channel flows and free-surface flows.

  2. An investigation of the impact of variations of DVH calculation algorithms on DVH dependent radiation therapy plan evaluation metrics

    NASA Astrophysics Data System (ADS)

    Kennedy, A. M.; Lane, J.; Ebert, M. A.

    2014-03-01

    Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar. One notable point of variation between implementations is the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (Normal Tissue Complication Probability (NTCP), min, mean and max dose) for a plan with small structures placed over areas of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one of the plan review systems (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in the dose grid resolution, despite the extreme conditions. Differences became noticeable, however, when resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared with dose-grid-based resolutions. This suggests that if DVHs are being compared between systems that use a different basis for selecting sampling resolution, it may be important to confirm that a similar resolution was used during calculation.
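    The sensitivity described above can be reproduced with a small sketch: an analytic dose distribution with a steep gradient is sampled inside a small structure at several grid spacings, and the min/mean/max dose change with the sampling resolution. The dose function, structure extent, and spacings are all invented for illustration.

```python
import numpy as np

def dose(x, y, z):
    """Analytic stand-in for a planned dose with a steep gradient along x
    (mm in, Gy out); purely illustrative."""
    return 60.0 / (1.0 + np.exp((x - 10.0) / 1.5))

def point_metrics(spacing_mm, lo=(8.3, 0.0, 0.0), hi=(11.7, 4.0, 4.0)):
    """Sample a small box-shaped structure straddling the gradient on a
    regular grid of the given spacing and return min/mean/max dose."""
    axes = [np.arange(a, b + 1e-9, spacing_mm) for a, b in zip(lo, hi)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    d = dose(X, Y, Z).ravel()
    return d.min(), d.mean(), d.max()

# coarser -> finer dose sampling changes the DVH-derived point metrics
for s in (2.0, 1.0, 0.5, 0.25):
    mn, mean, mx = point_metrics(s)
    print(f"spacing {s:4.2f} mm  min {mn:5.1f}  mean {mean:5.1f}  max {mx:5.1f} Gy")
```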

  3. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.

  4. Process-based model predictions of hurricane induced morphodynamic change on low-lying barrier islands

    USGS Publications Warehouse

    Plant, Nathaniel G.; Thompson, David M.; Elias, Edwin; Wang, Ping; Rosati, Julie D.; Roberts, Tiffany M.

    2011-01-01

    Using Delft3D, a Chandeleur Island model was constructed to examine the sediment-transport patterns and morphodynamic change caused by Hurricane Katrina and similar storm events. The model setup included a coarse Gulf of Mexico domain and a nested finer-resolution Chandeleur Island domain. The finer-resolution domain resolved morphodynamic processes driven by storms and tides. A sensitivity analysis of the simulated morphodynamic response was performed to investigate the effects of variations in surge levels. The Chandeleur morphodynamic model reproduced several important features that matched observed morphodynamic changes. A simulation of bathymetric change driven by storm surge alone (no waves) along the central portion of the Chandeleur Islands showed (1) a general landward retreat and lowering of the island chain and (2) multiple breaches that increased the degree of island dissection. The locations of many of the breaches correspond with the low-lying or narrow sections of the initial bathymetry. The major part of the morphological change occurred prior to the peak of the surge when overtopping of the islands produced a strong water-level gradient and induced significant flow velocities.

  5. Analysis of improved government geological map information for mineral exploration: Incorporating efficiency, productivity, effectiveness, and risk considerations

    USGS Publications Warehouse

    Bernknopf, R.L.; Wein, A.M.; St-Onge, M. R.; Lucas, S.B.

    2007-01-01

    This bulletin/professional paper focuses on the value of geoscientific information and knowledge, as provided in published government bedrock geological maps, to the mineral exploration sector. An economic model is developed that uses an attribute-ranking approach to convert geological maps into domains of mineral favourability. Information about known deposits in these (or analogous) favourability domains allows the calculation of exploration search statistics that provide input into measures of exploration efficiency, productivity, effectiveness, risk, and cost stemming from the use of the published geological maps. Two case studies, the Flin Flon Belt (Manitoba and Saskatchewan) and the south Baffin Island area (Nunavut), demonstrate that updated, finer resolution maps can be used to identify more exploration campaign options, and campaigns that are more efficient, more effective, and less risky than those guided by old, coarser resolution maps. The Flin Flon Belt study illustrates that an updated, coarser resolution bedrock map enables improved mineral exploration efficiency, productivity, and effectiveness by locating 60% more targets and supporting an exploration campaign that is 44% more efficient. Refining the map resolution provides an additional 17% reduction in search effort across all favourable domains and a 55% reduction in search effort in the most favourable domain. The south Baffin Island case study projects a 40% increase in expected targets and a 27% reduction in search effort when the updated, finer resolution map is used in lieu of the old, coarser resolution map. On southern Baffin Island, the economic value of the updated map ranges from CAN$2.28 million to CAN$15.21 million, which can be compared with the CAN$1.86 million that it cost to produce the map (a multiplier effect of up to eight).

  6. Air Quality Science and Regulatory Efforts Require Geostationary Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.; Allen, D. J.; Stehr, J. W.

    2006-01-01

    Air quality scientists and regulatory agencies would benefit from the high spatial and temporal resolution trace gas and aerosol data that could be provided by instruments on a geostationary platform. More detailed time-resolved data from a geostationary platform could be used in tracking regional transport and in evaluating mesoscale air quality model performance in terms of photochemical evolution throughout the day. The diurnal cycle of photochemical pollutants is currently missing from the data provided by the current generation of atmospheric chemistry satellites, which provide only one measurement per day. Peak surface ozone mixing ratios are often reached much earlier in the day during major regional pollution episodes than during local episodes, due to downward mixing of ozone that had been transported above the boundary layer overnight. The regional air quality models often do not simulate this downward mixing well enough and underestimate surface ozone in regional episodes. Having high time-resolution geostationary data will make it possible to determine the magnitude of this lower- and mid-tropospheric transport that contributes to peak eight-hour average ozone and 24-hour average PM2.5 concentrations. We will show ozone and PM2.5 episodes from the CMAQ model and suggest ways in which geostationary satellite data would improve air quality forecasting. Current regulatory modeling is typically performed at 12 km horizontal resolution. State and regional air quality regulators in regions with complex topography and/or land-sea breezes are eager to move to 4 km or finer resolution simulations. Geostationary data at these or finer resolutions will be useful in evaluating such models.

  7. Validation of a novel robot-assisted 3DUS system for real-time planning and guidance of breast interstitial HDR brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulin, Eric; Beaulieu, Luc, E-mail: Luc.Beaulieu@phy.ulaval.ca; Gardi, Lori

    Purpose: In current clinical practice, there is no integrated 3D ultrasound (3DUS) guidance system clinically available for breast brachytherapy. In this study, the authors present a novel robot-assisted 3DUS system for real-time planning and guidance of breast interstitial high dose rate (HDR) brachytherapy treatment. Methods: For this work, a new computer-controlled robotic 3DUS system was built to perform a hybrid motion scan, which combines a 6 cm linear translation with a 30° rotation at both ends. The new 3DUS scanner was designed to fit on a modified Kuske assembly, keeping the current template grid configuration but modifying the frame to allow the mounting of the 3DUS system at several positions. A finer grid was also tested. A user interface was developed to perform image reconstruction, semiautomatic segmentation of the surgical bed, and catheter reconstruction and tracking. A 3D string phantom was used to validate the geometric accuracy of the reconstruction. The volumetric accuracy of the system was validated with phantoms using magnetic resonance imaging (MRI) and computed tomography (CT) images. In order to determine whether 3DUS can effectively replace CT for treatment planning, the authors compared the 3DUS catheter reconstruction to the one obtained from CT images. In addition, in agarose-based phantoms, an end-to-end procedure was performed by executing six independent complete procedures with both 14 and 16 catheters, and for both the standard and finer Kuske grids. Finally, in phantoms, five end-to-end procedures were performed with final CT planning for the validation of 3DUS preplanning. Results: The 3DUS acquisition time is approximately 10 s. A paired Student t-test showed no statistically significant difference between known and measured values of string separations in each direction. Both MRI and CT volume measurements were not statistically different from the 3DUS volumes (Student t-test: p > 0.05) and were significantly correlated with the 3DUS measurements (Pearson test: MRI p < 0.05 and CT p < 0.001). The mean angular separation between catheter trajectories segmented from 3DUS and CT images was 0.42° ± 0.24°, while the maximum and mean trajectory separations were 0.51 ± 0.19 and 0.37 ± 0.17 mm, respectively. Overall, the new finer grid performed significantly better in terms of dosimetric indices. The planning target volume dosimetric indices were not found to be statistically different between 3DUS and CT planning (Student t-test, p > 0.05). Both the skin and the pectoral muscle dosimetric indices were within ABS guidelines. Conclusions: A novel robot-assisted 3DUS system was designed and validated. To their knowledge, this is the first system capable of performing real-time guidance and planning of breast multicatheter HDR brachytherapy treatments. Future investigation will test the feasibility of using the system in the clinic and for permanent breast brachytherapy.
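    For the catheter-comparison metric quoted in the Results, the sketch below computes the angular separation between two reconstructed catheter trajectories, each defined by two 3D points in a common frame. The coordinates are hypothetical; only the geometry of the calculation is intended to match the description.

```python
import numpy as np

def trajectory_angle_deg(p0_a, p1_a, p0_b, p1_b):
    """Angle (degrees) between two catheter trajectories, each given by two
    reconstructed 3-D points (e.g., tip and tail) in a common frame."""
    va = np.asarray(p1_a, float) - np.asarray(p0_a, float)
    vb = np.asarray(p1_b, float) - np.asarray(p0_b, float)
    cosang = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical 3DUS vs CT reconstructions of the same catheter (mm)
us_traj = ((0.0, 0.0, 0.0), (0.5, 0.3, 60.0))
ct_traj = ((0.1, -0.1, 0.0), (0.7, 0.2, 60.0))
print(round(trajectory_angle_deg(*us_traj, *ct_traj), 3), "degrees")
```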

  8. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.

  9. Spatiotemporal Variability of Drought in Pakistan through High-Resolution Daily Gridded In-Situ Observations

    NASA Astrophysics Data System (ADS)

    Bashir, F.; Zeng, X.; Gupta, H. V.; Hazenberg, P.

    2017-12-01

    Drought as an extreme event may have far-reaching socio-economic impacts on agriculture-based economies like Pakistan's. Effective assessment of drought requires high-resolution, spatiotemporally continuous hydrometeorological information. For this purpose, new gridded analyses of precipitation, maximum, minimum and mean temperature, and diurnal temperature range, based on in-situ daily observations, are developed covering all of Pakistan at 0.01° latitude-longitude resolution for a 54-year period (1960-2013). The number of participating meteorological observatories used in these gridded analyses is 2 to 6 times greater than in any other similar product available. This data set is used to identify extreme wet and dry periods and their spatial patterns across Pakistan using the Palmer Drought Severity Index (PDSI) and the Standardized Precipitation Index (SPI). The periodicity of extreme events is estimated at seasonal to decadal scales. Spatiotemporal signatures of drought incidence, indicating its extent and longevity in different areas, may help water resource managers and policy makers mitigate the severity of drought and its impact on food security through suitable adaptive techniques. Moreover, these high-resolution gridded in-situ observations of precipitation and temperature are used to evaluate other coarser-resolution gridded products.

  10. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
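    A toy version of the upscaling idea is sketched below: block means summarize elevation within each coarse cell, while a face statistic (here a minimum along the shared column of fine cells) lets a narrow channel continue to control conveyance between coarse cells. This is an illustration of the concept only, not the paper's actual mixed cell-block and cell-face averaging scheme.

```python
import numpy as np

def cell_block_average(z_fine, factor):
    """Average fine elevation cells over each coarse cell (block mean)."""
    ny, nx = (z_fine.shape[0] // factor) * factor, (z_fine.shape[1] // factor) * factor
    z = z_fine[:ny, :nx]
    return z.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def cell_face_minimum(z_fine, factor, j_face):
    """Representative elevation of the vertical coarse-cell faces lying on fine
    column j_face: the minimum over each block of fine rows, so a narrow low
    channel crossing the face still controls the connection (illustrative)."""
    col = z_fine[:, j_face]
    col = col[: (len(col) // factor) * factor]
    return col.reshape(-1, factor).min(axis=1)

# fine DEM with a one-cell-wide low channel that block averaging smears out
z = np.full((8, 8), 5.0)
z[:, 3] = 1.0
print(cell_block_average(z, 4))      # channel nearly disappears in block means
print(cell_face_minimum(z, 4, 3))    # face statistic preserves the channel
```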

  11. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    NASA Astrophysics Data System (ADS)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.

  12. Using multi-scale sampling and spatial cross-correlation to investigate patterns of plant species richness

    USGS Publications Warehouse

    Kalkhan, M.A.; Stohlgren, T.J.

    2000-01-01

    Land managers need better techniques to assess exotic plant invasions. We used the cross-correlation statistic, IYZ, to test for the presence of spatial cross-correlation between pair-wise combinations of soil characteristics, topographic variables, plant species richness, and cover of vascular plants in a 754 ha study site in Rocky Mountain National Park, Colorado, U.S.A. Using 25 large plots (1000 m2) in five vegetation types, 8 of 12 variables showed significant spatial cross-correlation with at least one other variable, while 6 of 12 variables showed significant spatial auto-correlation. Elevation and slope showed significant spatial cross-correlation with all variables except percent cover of native and exotic species. Percent cover of native species had significant spatial cross-correlations with soil variables, but not with exotic species. This was probably because of the patchy distributions of vegetation types in the study area. At a finer resolution, using data from ten 1 m2 subplots within each of the 1000 m2 plots, all variables showed significant spatial auto- and cross-correlation. Large-plot sampling was more affected by topographic factors than species distribution patterns, while with finer resolution sampling, the opposite was true. However, the statistically and biologically significant spatial correlation of native and exotic species could only be detected with finer resolution sampling. We found exotic plant species invading areas with high native plant richness and cover, and in fertile soils high in nitrogen, silt, and clay. Spatial auto- and cross-correlation statistics, along with the integration of remotely sensed data and geographic information systems, are powerful new tools for evaluating the patterns and distribution of native and exotic plant species in relation to landscape structure.
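    A simple way to compute a bivariate spatial cross-correlation of the kind used here is sketched below, with standardized variables and binary distance-based neighbour weights (a Moran-style statistic; not necessarily the exact IYZ formulation of the study). The plot coordinates and variables are synthetic.

```python
import numpy as np

def spatial_cross_correlation(coords, y, z, max_dist):
    """Moran-style bivariate spatial cross-correlation between variables y and
    z measured at the same plot locations (illustrative formulation)."""
    coords = np.asarray(coords, float)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    z = (np.asarray(z, float) - np.mean(z)) / np.std(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= max_dist)).astype(float)   # binary neighbour weights
    w_sum = w.sum()
    # with z-scored variables, sum_i y_i^2 = n, so the usual normalisation
    # (n / W) * sum_ij w_ij y_i z_j / sum_i y_i^2 reduces to:
    return float(y @ (w @ z)) / w_sum

# toy example: 25 plots on a grid; z follows the same north-south trend as y
rng = np.random.default_rng(0)
xy = np.array([(i, j) for i in range(5) for j in range(5)], float)
y = xy[:, 1] + rng.normal(0, 0.3, 25)          # e.g. elevation
z = 0.8 * xy[:, 1] + rng.normal(0, 0.5, 25)    # e.g. soil nitrogen
print(round(spatial_cross_correlation(xy, y, z, max_dist=1.5), 3))
```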

  13. The functional micro-organization of grid cells revealed by cellular-resolution imaging

    PubMed Central

    Heys, James G.; Rangarajan, Krsna V.; Dombeck, Daniel A.

    2015-01-01

    Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater micro-circuit level understanding of the brain’s representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to non-grid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: The similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a “Mexican Hat” shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart. PMID:25467986

  14. Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts

    USGS Publications Warehouse

    Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.

    2009-01-01

    We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, which provides for a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.

  15. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    NASA Astrophysics Data System (ADS)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching soil grid unit resolution with polygon unit map scale is important for minimizing the uncertainty of regional soil organic carbon (SOC) pool simulation, given their strong influence on that uncertainty. A series of soil grid units at varying cell sizes was derived from soil polygon units at six map scales, 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai lake region of China. Soil units in both formats were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices, namely soil type number (STN), area (AREA), average SOC density (ASOCD), and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from all these soil polygon and grid units, respectively. Relative to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess the accuracy and redundancy of the grid dataset, which reflects uncertainty in the simulation of SOC. Optimal soil grid unit resolutions were generated and suggested for DNDC simulation of the regional SOC pool, matched with the soil polygon unit map scales, respectively. With the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is taken as the assessment criterion. A quadratic regression model, y = -8.0 × 10^-6 x^2 + 0.228x + 0.211 (R^2 = 0.9994, p < 0.05), was obtained, describing the relationship between the optimal soil grid unit resolution (y, km) and the soil polygon unit map scale (1:x). This knowledge may serve grid partitioning of regions for the investigation and simulation of SOC pool dynamics at a given map scale.
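    Using the regression quoted above, a short sketch can evaluate the suggested optimal grid resolution for a given map scale denominator; note that the unit convention for x is taken on faith from the abstract, so the printed values only demonstrate the shape of the curve.

```python
import numpy as np

def optimal_grid_resolution_km(x):
    """Optimal soil-grid resolution y (km) versus polygon map scale 1:x,
    using the quadratic regression quoted in the abstract. How the scale
    denominator is scaled before entering the fit follows the paper and is
    not re-derived here, so treat the output as a shape demonstration only."""
    x = np.asarray(x, dtype=float)
    return -8.0e-6 * x**2 + 0.228 * x + 0.211

# nearly linear at fine scales, flattening as the quadratic term grows
for x in (10.0, 100.0, 1000.0, 10000.0):
    print(f"x = {x:8.0f}  ->  y = {optimal_grid_resolution_km(x):10.2f} km")
```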

  16. Simulation of the Atmospheric Boundary Layer for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Marjanovic, Nikola

    Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale while local forcing manifests itself on a much smaller scale and could result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components, respectively. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally-driven weather events. The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. 
Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk resulting from the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m) as it does not distribute the forces from the blades over annular elements, but applies them along lines representing individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization from two field campaigns that measured the inflow and near-wake regions of a single turbine. The data-sets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both GAD and GAL simulations (saving on computational cost), while the GAL parameterization enables resolution of near wake physics (such as vorticity shedding and wake expansion) for high fidelity applications. (Abstract shortened by ProQuest.).

  17. Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.

    2017-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar Projections. GMRT consists of four independently curated elevation components: (1) quality controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and the GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at ~100 m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both the south and north polar regions, grids can be exported from GMRT in a variety of formats including ASCII, GeoTIFF and NetCDF to support use in common mapping software applications such as ArcGIS, GMT, Matlab, and Python. New web services have also been developed to enable programmatic access to grids and images in north and south polar projections.

  18. Probing the Inelastic Interactions in Molecular Junctions by Scanning Tunneling Microscope

    NASA Astrophysics Data System (ADS)

    Xu, Chen

    With a sub-Kelvin scanning tunneling microscope, the energy resolution of spectroscopy is improved dramatically. Detailed studies of finer features of spectrum become possible. The asymmetry in the line shape of carbon monoxide vibrational spectra is observed to correlate with the couplings of the molecule to the tip and substrates. The spin-vibronic coupling in the molecular junctions is revisited with two metal phthalocyanine molecules, unveiling sharp spin-vibronic peaks. Finally, thanks to the improved spectrum resolution, the bonding structure of the acyclic compounds molecules is surveyed with STM inelastic tunneling probe, expanding the capability of the innovative high resolution imaging technique.

  19. Prospects for improving the representation of coastal and shelf seas in global ocean models

    NASA Astrophysics Data System (ADS)

    Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard

    2017-02-01

    Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
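    The flavour of the cost argument can be reproduced with a back-of-envelope sketch: assume cost grows with the cube of the horizontal refinement factor and that available compute grows geometrically each year. The cubic scaling and the growth rate are assumptions chosen for illustration; the paper's actual cost model and facility projections are not reproduced here.

```python
import math

def year_when_affordable(refinement, ref_year=2011, annual_growth=1.8):
    """Back-of-envelope cost-scaling sketch (not the paper's cost model).

    Assumes the cost of a global model grows with the cube of its horizontal
    refinement factor (two horizontal dimensions plus a CFL-limited time
    step) and that the compute available for the same facility share grows
    by 'annual_growth' per year, a value picked near the historical growth
    rate of large HPC systems (an assumption)."""
    relative_cost = refinement ** 3
    return ref_year + math.log(relative_cost) / math.log(annual_growth)

# a 1/72-degree global model is an 18x horizontal refinement of the
# 1/4-degree reference configuration
print(round(year_when_affordable(72 / 4), 1))
```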

  20. Meteorology, Emissions, and Grid Resolution: Effects on Discrete and Probabilistic Model Performance

    EPA Science Inventory

    In this study, we analyze the impacts of perturbations in meteorology and emissions and variations in grid resolution on air quality forecast simulations. The meteorological perturbations considered in this study introduce a typical variability of ~1°C, 250 - 500 m, 1 m/s, and 1...

  1. The Separate Physics and Dynamics Experiment (SPADE) framework for determining resolution awareness: A case study of microphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Ma, Po-Lun; Xiao, Heng

    2013-08-29

    The ability to use multi-resolution dynamical cores for weather and climate modeling is pushing the atmospheric community towards developing scale aware or, more specifically, resolution aware parameterizations that will function properly across a range of grid spacings. Determining the resolution dependence of specific model parameterizations is difficult due to strong resolution dependencies in many pieces of the model. This study presents the Separate Physics and Dynamics Experiment (SPADE) framework that can be used to isolate the resolution dependent behavior of specific parameterizations without conflating resolution dependencies from other portions of the model. To demonstrate the SPADE framework, the resolution dependence of the Morrison microphysics from the Weather Research and Forecasting model and the Morrison-Gettelman microphysics from the Community Atmosphere Model are compared for grid spacings spanning the cloud modeling gray zone. It is shown that the Morrison scheme has stronger resolution dependence than Morrison-Gettelman, and that the ability of Morrison-Gettelman to use partial cloud fractions is not the primary reason for this difference. This study also discusses how to frame the issue of resolution dependence, the meaning of which has often been assumed, but not clearly expressed in the atmospheric modeling community. It is proposed that parameterization resolution dependence can be expressed in terms of "resolution dependence of the first type," RA1, which implies that the parameterization behavior converges towards observations with increasing resolution, or as "resolution dependence of the second type," RA2, which requires that the parameterization reproduces the same behavior across a range of grid spacings when compared at a given coarser resolution. RA2 behavior is considered the ideal, but brings with it serious implications due to limitations of parameterizations to accurately estimate reality with coarse grid spacing. The type of resolution awareness developers should target in their development depends upon the particular modeler's application.

  2. Charge Sharing and Charge Loss in a Cadmium-Zinc-Telluride Fine-Pixel Detector Array

    NASA Technical Reports Server (NTRS)

    Gaskin, J. A.; Sharma, D. P.; Ramsey, B. D.; Six, N. Frank (Technical Monitor)

    2002-01-01

    Because of its high atomic number, room temperature operation, low noise, and high spatial resolution, a Cadmium-Zinc-Telluride (CZT) multi-pixel detector is ideal for hard X-ray astrophysical observation. As part of on-going research at MSFC (Marshall Space Flight Center) to develop multi-pixel CdZnTe detectors for this purpose, we have measured charge sharing and charge loss for a 4x4 (750 micron pitch), 1 mm thick pixel array and modeled these results using a Monte-Carlo simulation. This model was then used to predict the amount of charge sharing for a much finer pixel array (with a 300 micron pitch). Future work will enable us to compare the simulated results for the finer array to measured values.
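    A stripped-down Monte Carlo of the charge-sharing effect is sketched below: interaction points are drawn uniformly inside a pixel and a fixed-width Gaussian charge cloud determines how much charge spills across the nearest pixel boundary. The cloud width, threshold, and 1-D geometry are assumptions for illustration, not the MSFC simulation.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
_erf = np.vectorize(erf)

def shared_event_fraction(pitch_um, cloud_sigma_um=50.0, n_events=100_000,
                          share_threshold=0.05):
    """Monte Carlo estimate of the fraction of events that share charge with a
    neighbouring pixel. Simplifications (all assumptions): 1-D geometry,
    Gaussian charge cloud of fixed width, uniform interaction positions,
    and no trapping, depth dependence, or electronic noise."""
    # interaction position relative to the centre of the hit pixel
    x = rng.uniform(-pitch_um / 2.0, pitch_um / 2.0, n_events)
    # charge fraction spilling across the nearest pixel boundary
    edge_dist = pitch_um / 2.0 - np.abs(x)
    spilled = 0.5 * (1.0 - _erf(edge_dist / (np.sqrt(2.0) * cloud_sigma_um)))
    return float(np.mean(spilled > share_threshold))

for pitch in (750.0, 300.0):   # the measured array vs the finer array
    print(f"{pitch:5.0f} um pitch -> shared-event fraction "
          f"{shared_event_fraction(pitch):.3f}")
```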

  3. The Marine Geoscience Data System and the Global Multi-Resolution Topography Synthesis: Online Resources for Exploring Ocean Mapping Data

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.

    2016-02-01

    The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy to navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well-documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolutional elevation data from a variety of sources including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically ~100 m resolution in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data. New data are added to GMRT on an ongoing basis, with two scheduled releases per year. GMRT is available as both gridded data and images that can be viewed and downloaded directly through the Java application GeoMapApp (www.geomapapp.org) and the web-based GMRT MapTool. In addition, the GMRT GridServer API provides programmatic access to grids, imagery, profiles, and single point elevation values.

  4. Clouds Optically Gridded by Stereo (COGS) product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oktem, Rusen; Romps, David

    The COGS product is a 4D grid of cloudiness covering a 6 km × 6 km × 6 km cube centered at the central facility of the SGP site, at a spatial resolution of 50 meters and a temporal resolution of 20 seconds. The dimensions are X, Y, Z, and time, where X, Y, and Z correspond to the east-west, north-south, and altitude coordinates of a grid point, respectively. COGS takes on values 0, 1, and -1, denoting "cloud", "no cloud", and "not available".

  5. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  6. Calibrated, Enhanced-Resolution Brightness Temperature Earth System Data Record: A New Era for Gridded Passive Microwave Data

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.

    2017-12-01

    Since 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Until recently, the available global gridded passive microwave data sets have not been produced consistently. Various projections (equal-area, polar stereographic) and a number of different gridding techniques were used, along with varying temporal sampling and a mix of Level 2 source data versions. In addition, not all data from all sensors have been processed completely and they have not been processed in any one consistent way. Furthermore, the original gridding techniques were relatively primitive and were produced on 25 km grids using the original EASE-Grid definition that is not easily accommodated in modern software packages. As part of NASA MEaSUREs, we have re-processed all data from the SMMR, SSM/I-SSMIS, and AMSR-E instruments, using the most mature Level 2 data. The Calibrated, Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) gridded data are now available from the NSIDC DAAC. The data are distributed as netCDF files that comply with CF-1.6 and ACDD-1.3 conventions. The data have been produced on EASE 2.0 projections at smoothed, 25 kilometer resolution and spatially-enhanced resolutions, up to 3.125 km depending on channel frequency, using the radiometer version of the Scatterometer Image Reconstruction (rSIR) method. We expect this newly produced data set to enable scientists to better analyze trends in coastal regions, marginal ice zones, and mountainous terrain that were not resolvable with the previous gridded passive microwave data. The use of the EASE-Grid 2.0 definition and netCDF-CF formatting allows users to extract compliant geotiff images and provides for easy importing and correct reprojection interoperability in many standard packages. As a consistently-processed, high-quality satellite passive microwave ESDR, we expect this data set to replace earlier gridded passive microwave data sets, and to pave the way for new insights from higher-resolution derived geophysical products.

  7. Playing the Scales: Regional Transformations and the Differentiation of Rural Space in the Chilean Wine Industry

    ERIC Educational Resources Information Center

    Overton, John; Murray, Warwick E.

    2011-01-01

    Globalization and industrial restructuring transform rural places in complex and often contradictory ways. These involve both quantitative changes, increasing the size and scope of operation to achieve economies of scale, and qualitative shifts, sometimes leading to a shift up the quality/price scale, towards finer spatial resolution and…

  8. Atmospheric and Fundamental Parameters of Stars in Hubble's Next Generation Spectral Library

    NASA Technical Reports Server (NTRS)

    Heap, Sally

    2010-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R approximately 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. We are presently working to determine the atmospheric and fundamental parameters of the stars from the NGSL spectra themselves via full-spectrum fitting of model spectra to the observed (extinction-corrected) spectrum over the full wavelength range, 0.2-1.0 micron. We use two grids of model spectra for this purpose: the very low-resolution spectral grid from Castelli-Kurucz (2004), and the grid from MARCS (2008). Both the observed spectrum and the MARCS spectra are first degraded in resolution to match the very low resolution of the Castelli-Kurucz models, so that our fitting technique is the same for both model grids. We will present our preliminary results along with a comparison to those from the Sloan/Segue Stellar Parameter Pipeline, ELODIE, MILES, etc.

  9. Influence of Terraced area DEM Resolution on RUSLE LS Factor

    NASA Astrophysics Data System (ADS)

    Zhang, Hongming; Baartman, Jantiene E. M.; Yang, Xiaomei; Gai, Lingtong; Geissen, Viollette

    2017-04-01

    Topography has a large impact on the erosion of soil by water. Slope steepness and slope length are combined (the LS factor) in the universal soil-loss equation (USLE) and its revised version (RUSLE) for predicting soil erosion. The LS factor is usually extracted from a digital elevation model (DEM), so the grid size of the DEM influences the LS factor and the subsequent calculation of soil loss. Terracing is considered a support practice factor (P) in the USLE/RUSLE equations, which is multiplied with the other USLE/RUSLE factors. However, as terraces change the slope length and steepness, they also affect the LS factor. The effect of DEM grid size on the LS factor has not been investigated for a terraced area. We obtained a high-resolution DEM by unmanned aerial vehicle (UAV) photogrammetry, from which the slope steepness, slope length, and LS factor were extracted. The changes in these parameters at various DEM resolutions were then analysed. The DEM produced detailed LS-factor maps, particularly for low LS factors. High (small valleys, gullies, and terrace ridges) and low (flats and terrace fields) spatial frequencies were both sensitive to changes in resolution, so the areas of higher and lower slope steepness both decreased with increasing grid size. Average slope steepness decreased and average slope length increased with grid size. Slope length, however, had a larger effect than slope steepness on the LS factor as the grid size varied. The LS factor increased as the grid size increased from 0.5 to 30 m, and increased significantly at grid sizes >5 m. The LS factor was increasingly overestimated as grid size increased. The LS factor decreased between grid sizes of 30 and 100 m, because the details of the terraced terrain were gradually lost, but it was still overestimated.
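    The resolution dependence of slope steepness can be illustrated with a small sketch that coarsens a synthetic rough hillslope by block averaging and recomputes the mean slope at each grid size; local roughness is smoothed out on coarse grids, so mean slope drops toward the regional gradient. The synthetic terrain, smoothing, and grid sizes are invented for illustration and do not reproduce the study's terraced DEM.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_mean(z, f):
    """Coarsen a DEM by averaging f x f blocks of cells."""
    ny, nx = (z.shape[0] // f) * f, (z.shape[1] // f) * f
    return z[:ny, :nx].reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))

def mean_slope_deg(z, cellsize):
    """Mean slope steepness (degrees) from finite differences of the DEM."""
    gy, gx = np.gradient(z, cellsize)
    return float(np.degrees(np.arctan(np.hypot(gx, gy))).mean())

# synthetic hillslope: a regional ramp plus metre-scale correlated roughness
n, dx = 600, 0.5                                   # 0.5 m base resolution
x = np.arange(n) * dx
roughness = rng.normal(0.0, 1.0, (n, n))
kernel = np.ones(9) / 9.0                          # crude smoothing filter
roughness = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 0, roughness)
roughness = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, roughness)
z = 0.2 * x[None, :] + roughness                   # 20% regional slope

for f in (1, 2, 10, 60):                           # 0.5, 1, 5, 30 m grids
    print(f"{dx * f:5.1f} m grid -> mean slope "
          f"{mean_slope_deg(block_mean(z, f), dx * f):5.2f} deg")
```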

  10. Improved microgrid arrangement for integrated imaging polarimeters.

    PubMed

    LeMaster, Daniel A; Hirakawa, Keigo

    2014-04-01

    For almost 20 years, microgrid polarimetric imaging systems have been built using a 2×2 repeating pattern of polarization analyzers. In this Letter, we show that superior spatial resolution is achieved over this 2×2 case when the analyzers are arranged in a 2×4 repeating pattern. This unconventional result, in which a more distributed sampling pattern results in finer spatial resolution, is also achieved without affecting the conditioning of the polarimetric data-reduction matrix. Proof is provided theoretically and through Stokes image reconstruction of synthesized data.
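    One way to check the Letter's claim that the rearrangement leaves the polarimetric data reduction well conditioned is to build the per-superpixel analyzer matrix for each layout and compare condition numbers, as sketched below. The specific ordering chosen for the 2×4 angles is an assumption; only the radiometric conditioning, not the spatial-sampling benefit, is examined here.

```python
import numpy as np

def analyzer_matrix(angles_deg):
    """Rows map linear Stokes parameters (S0, S1, S2) to the intensity
    measured behind an ideal linear polarizer at each analyzer angle."""
    th = np.radians(angles_deg)
    return 0.5 * np.stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)], axis=1)

# one super-pixel's worth of analyzers: the conventional 2x2 layout and a
# hypothetical 2x4 layout that repeats the same four orientations twice
angles_2x2 = [0, 45, 90, 135]
angles_2x4 = [0, 45, 90, 135, 90, 135, 0, 45]

for name, ang in [("2x2", angles_2x2), ("2x4", angles_2x4)]:
    A = analyzer_matrix(ang)
    print(name, "condition number of data-reduction matrix:",
          round(np.linalg.cond(A), 3))
```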

  11. A Simple Algebraic Grid Adaptation Scheme with Applications to Two- and Three-dimensional Flow Problems

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.; Lytle, John K.

    1989-01-01

    An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
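
    The core idea of arc equidistribution can be illustrated in one dimension: redistribute nodes so that equal increments of a gradient-weighted arc length fall between neighbouring points. The sketch below is a generic equidistribution example only; it does not reproduce the paper's weight function, user-prescribed stretching, or smoothing step.

```python
import numpy as np

def adapt_grid_1d(x, f, alpha=1.0):
    """Redistribute nodes x so that a gradient-based weight is equidistributed.

    Weight w = sqrt(1 + (alpha * df/dx)^2) is the arc-length element of the
    scaled solution curve; nodes cluster where the gradient is steep.
    """
    dfdx = np.gradient(f, x)
    w = np.sqrt(1.0 + (alpha * dfdx) ** 2)
    # Cumulative weighted arc length (monotone), normalized to [0, 1]
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s /= s[-1]
    # New nodes: equal increments of s mapped back to physical coordinates
    s_target = np.linspace(0.0, 1.0, len(x))
    return np.interp(s_target, s, x)

# Example: cluster points around a steep tanh "shock" at x = 0.5
x = np.linspace(0.0, 1.0, 41)
f = np.tanh(50.0 * (x - 0.5))
x_new = adapt_grid_1d(x, f, alpha=0.2)
print(np.round(np.diff(x_new), 4))  # spacing shrinks near x = 0.5
```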

  12. Investigating the Effects of Grid Resolution of WRF Model for Simulating the Atmosphere for use in the Study of Wake Turbulence

    NASA Astrophysics Data System (ADS)

    Prince, Alyssa; Trout, Joseph; di Mercurio, Alexis

    2017-01-01

    The Weather Research and Forecasting (WRF) Model is a nested-grid, mesoscale numerical weather prediction system maintained by the Developmental Testbed Center. The model simulates the atmosphere by integrating partial differential equations, which use the conservation of horizontal momentum, conservation of thermal energy, and conservation of mass along with the ideal gas law. This research investigated the possible use of WRF in investigating the effects of weather on wing tip wake turbulence. This poster shows the results of an investigation into the accuracy of WRF using different grid resolutions. Several atmospheric conditions were modeled using different grid resolutions. In general, the higher the grid resolution, the better the simulation, but the longer the model run time. This research was supported by Dr. Manuel A. Rios, Ph.D. (FAA) and the grant ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA'' (13-G-006).

  13. The functional micro-organization of grid cells revealed by cellular-resolution imaging.

    PubMed

    Heys, James G; Rangarajan, Krsna V; Dombeck, Daniel A

    2014-12-03

    Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater microcircuit-level understanding of the brain's representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to nongrid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a "Mexican hat"-shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    PubMed Central

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach dramatically improves the spatial resolution and reveals, within a region of interest of a sample larger than the field of view, finer details than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrated that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials. PMID:24763649

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jiali; Swati, F. N. U.; Stein, Michael L.

    Regional climate models (RCMs) are a standard tool for downscaling climate forecasts to finer spatial scales. The evaluation of RCMs against observational data is an important step in building confidence in the use of RCMs for future prediction. In addition to model performance in climatological means and marginal distributions, a model’s ability to capture spatio-temporal relationships is important. This study develops two approaches: (1) spatial correlation/variogram for a range of spatial lags, with total monthly precipitation and non-seasonal precipitation components used to assess the spatial variations of precipitation; and (2) spatio-temporal correlation for a wide range of distances, directions, and time lags, with daily precipitation occurrence used to detect the dynamic features of precipitation. These measures of spatial and spatio-temporal dependence are applied to a high-resolution RCM run and to the National Centers for Environmental Prediction (NCEP)-U.S. Department of Energy (DOE) AMIP II reanalysis data (NCEP-R2), which provides initial and lateral boundary conditions for the RCM. The RCM performs better than NCEP-R2 in capturing both the spatial variations of total and non-seasonal precipitation components and the spatio-temporal correlations of daily precipitation occurrences, which are related to dynamic behaviors of precipitating systems. The improvements are apparent not just at resolutions finer than that of NCEP-R2, but also when the RCM and observational data are aggregated to the resolution of NCEP-R2.
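
    A minimal sketch of the first approach is shown below: an empirical spatial correlation of a gridded precipitation field as a function of east-west grid-cell separation. The study's full method also includes variograms, directional lags, and temporal lags, which are omitted here, and the data below are synthetic.

```python
import numpy as np

def correlation_by_lag(field, max_lag):
    """Empirical spatial correlation of a gridded (time, y, x) field as a
    function of east-west grid-cell separation."""
    anom = field - field.mean(axis=0)          # remove the time mean at each cell
    corrs = []
    for lag in range(1, max_lag + 1):
        a = anom[:, :, :-lag].ravel()
        b = anom[:, :, lag:].ravel()
        corrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(corrs)

# Example with synthetic monthly precipitation (120 months on a 40 x 60 grid)
rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 30.0, size=(120, 40, 60))
print(np.round(correlation_by_lag(precip, 5), 3))
```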

  16. Interannual rainfall variability over China in the MetUM GA6 and GC2 configurations

    NASA Astrophysics Data System (ADS)

    Stephan, Claudia Christine; Klingaman, Nicholas P.; Vidale, Pier Luigi; Turner, Andrew G.; Demory, Marie-Estelle; Guo, Liang

    2018-05-01

    Six climate simulations of the Met Office Unified Model Global Atmosphere 6.0 and Global Coupled 2.0 configurations are evaluated against observations and reanalysis data for their ability to simulate the mean state and year-to-year variability of precipitation over China. To analyse the sensitivity to air-sea coupling and horizontal resolution, atmosphere-only and coupled integrations at atmospheric horizontal resolutions of N96, N216 and N512 (corresponding to ~200, 90 and 40 km in the zonal direction at the equator, respectively) are analysed. The mean and interannual variance of seasonal precipitation are too high in all simulations over China but improve with finer resolution and coupling. Empirical orthogonal teleconnection (EOT) analysis is applied to simulated and observed precipitation to identify spatial patterns of temporally coherent interannual variability in seasonal precipitation. To connect these patterns to large-scale atmospheric and coupled air-sea processes, atmospheric and oceanic fields are regressed onto the corresponding seasonal mean time series. All simulations reproduce the observed leading pattern of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are the four leading patterns associated with the observed physical mechanisms. Coupled simulations capture more observed patterns of variability and associate more of them with the correct physical mechanism, compared to atmosphere-only simulations at the same resolution. However, finer resolution does not improve the fidelity of these patterns or their associated mechanisms. This shows that evaluating climate models by only geographical distribution of mean precipitation and its interannual variance is insufficient. The EOT analysis adds knowledge about coherent variability and associated mechanisms.

  17. Evolution of precipitation extremes in two large ensembles of climate simulations

    NASA Astrophysics Data System (ADS)

    Martel, Jean-Luc; Mailhot, Alain; Talbot, Guillaume; Brissette, François; Ludwig, Ralf; Frigon, Anne; Leduc, Martin; Turcotte, Richard

    2017-04-01

    Recent studies project significant changes in the future distribution of precipitation extremes due to global warming. It is likely that extreme precipitation intensity will increase in a future climate and that extreme events will be more frequent. In this work, annual maximum daily precipitation series from the Canadian Earth System Model (CanESM2) 50-member large ensemble (spatial resolution of 2.8°x2.8°) and the Community Earth System Model (CESM1) 40-member large ensemble (spatial resolution of 1°x1°) are used to investigate extreme precipitation over the historical (1980-2010) and future (2070-2100) periods. The use of these ensembles results in 1500 (30 years x 50 members) and 1200 (30 years x 40 members) simulated years, respectively, over both the historical and future periods. These large datasets allow the computation of empirical daily extreme precipitation quantiles for large return periods. Using the CanESM2 and CESM1 large ensembles, extreme daily precipitation with return periods ranging from 2 to 100 years is computed in historical and future periods to assess the impact of climate change. Results indicate that daily precipitation extremes generally increase in the future over most land grid points and that these increases will also impact the 100-year extreme daily precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for service levels of water infrastructures and public safety. Estimated increases in precipitation associated with very extreme precipitation events (e.g., 100-year events) will drastically change the likelihood of flooding and its extent in a future climate. These results, although interesting, need to be extended to sub-daily durations, relevant for urban flooding protection and urban infrastructure design (e.g. sewer networks, culverts). Models and simulations at finer spatial and temporal resolution are therefore needed.
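
    The pooled-ensemble quantile calculation is straightforward: with 1500 (or 1200) simulated annual maxima, a T-year return level is the empirical (1 - 1/T) quantile. The sketch below assumes synthetic Gumbel-distributed maxima purely for illustration.

```python
import numpy as np

def empirical_return_levels(annual_maxima, return_periods):
    """Empirical return levels from pooled annual maxima.

    A T-year return level corresponds to the (1 - 1/T) quantile of the
    annual-maximum distribution.
    """
    probs = 1.0 - 1.0 / np.asarray(return_periods, dtype=float)
    return np.quantile(np.asarray(annual_maxima), probs)

# Example: 50 members x 30 years = 1500 pooled annual maxima (synthetic, mm/day)
rng = np.random.default_rng(2)
pooled = rng.gumbel(loc=60.0, scale=15.0, size=50 * 30)
for T, level in zip([2, 10, 50, 100],
                    empirical_return_levels(pooled, [2, 10, 50, 100])):
    print(f"{T:>3}-year return level: {level:6.1f} mm/day")
```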

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Q; Xie, S

    This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variations at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interest. The upcoming megasite at SGP with its enhanced spatial density will facilitate the studies at even finer scales. Currently, however, data are reported only at individual site locations at different time resolutions for different datastreams. It is difficult for users to locate all the data they need, and extra effort is required to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate the data application.

  19. Computation of Turbulent Heat Transfer on the Walls of a 180 Degree Turn Channel With a Low Reynolds Number Reynolds Stress Model

    NASA Technical Reports Server (NTRS)

    Ameri, A. A.; Rigby, D. L.; Steinthorsson, E.; Gaugler, Raymond (Technical Monitor)

    2002-01-01

    The Low Reynolds number version of the Stress-omega model and the two-equation k-omega model of Wilcox were used for the calculation of turbulent heat transfer in a 180 degree turn simulating an internal coolant passage. The Stress-omega model was chosen for its robustness. The turbulent thermal fluxes were calculated by modifying and using the Generalized Gradient Diffusion Hypothesis. The results showed that using this Reynolds Stress model allowed better prediction of heat transfer compared to the two-equation k-omega model. This improvement, however, required a finer grid and commensurately more CPU time.

  20. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevailing summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during KORUS-AQ 2016, including aircraft measurements. In general, horizontal grid resolutions of Eulerian photochemical models have profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed within the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  1. A novel hybrid approach with multidimensional-like effects for compressible flow computations

    NASA Astrophysics Data System (ADS)

    Kalita, Paragmoni; Dass, Anoop K.

    2017-07-01

    A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, these schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely, AUSM and DRLLF and coupling them through a novel shock switch that operates - unlike existing switches - on the gradient of the Mach number across the cell-interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but it is not so good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks, but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired effect of crisp resolution of grid-aligned and non-grid-aligned discontinuities is not obtained. To circumvent this problem we design a shock switch based - for the first time - on the gradient of the cell-interface Mach number with very impressive results. Thus the strategy of hybridizing two carefully selected schemes, together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme at a lower computational cost. It is further seen that hybridization of the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution for both shocks and boundary layers. Merits of the scheme are established through a carefully selected set of numerical experiments.
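
    The following sketch is a hypothetical illustration of the hybridization idea only: a blending weight driven by the jump in Mach number across the cell interface selects between two precomputed interface fluxes. The functional form of the switch and the AUSM/DRLLF flux formulas themselves are not given in the abstract and are not reproduced here; the flux vectors below are placeholders.

```python
import numpy as np

def shock_switch(mach_L, mach_R, k=5.0):
    """Hypothetical blending weight based on the cell-interface Mach-number jump.

    A large |dM| (a strong, grid-aligned shock) drives the weight toward 1 so an
    AUSM-type flux dominates; small jumps favor a DRLLF-type flux.  The actual
    switch used in the paper is not reproduced here.
    """
    dM = abs(mach_R - mach_L)
    return 1.0 - np.exp(-k * dM)

def hybrid_flux(flux_ausm, flux_drllf, mach_L, mach_R):
    """Blend two interface fluxes (arrays of conserved-variable fluxes)."""
    w = shock_switch(mach_L, mach_R)
    return w * flux_ausm + (1.0 - w) * flux_drllf

# Example with placeholder interface fluxes for [rho, rho*u, E]
f_ausm = np.array([1.2, 3.4, 9.8])
f_drllf = np.array([1.1, 3.1, 9.2])
print(hybrid_flux(f_ausm, f_drllf, mach_L=0.9, mach_R=1.6))    # shock-dominated blend
print(hybrid_flux(f_ausm, f_drllf, mach_L=1.02, mach_R=1.03))  # smooth-flow blend
```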

  2. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotation invariant technique (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain; the off-grid problem refers to the common case in which the angles do not lie on the quantization grid, which results in power leakage and a severe reduction of CE performance. A new model is first proposed to formulate the off-grid problem. The new model divides the continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to alternately refine the CE estimates of the integral grid part and the fractional off-grid part under the Turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS). It iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.

  3. Science Enabling Applications of Gridded Radiances and Products

    NASA Astrophysics Data System (ADS)

    Goldberg, M.; Wolf, W.; Zhou, L.

    2005-12-01

    New generations of hyperspectral sounders and imagers are not only providing vastly improved information to monitor, assess and predict the Earth's environment, they also provide tremendous volumes of data to manage. Key management challenges must include data processing, distribution, archive and utilization. At the NOAA/NESDIS Office of Research and Applications, we have started to address the challenge of utilizing high-volume satellite data by thinning observations and developing gridded datasets from the observations made by the NASA AIRS, AMSU and MODIS instruments. We have developed techniques for intelligent thinning of AIRS data for numerical weather prediction, by selecting the clearest AIRS 14 km field of view within a 3 x 3 array. The selection uses high spatial resolution 1 km MODIS data which are spatially convolved to the AIRS field of view. The MODIS cloud masks and AIRS cloud tests are used to select the clearest field of view. During the real-time processing the data are thinned and gridded to support monitoring, validation and scientific studies. Products from AIRS, which include profiles of temperature, water vapor and ozone and cloud-corrected infrared radiances for more than 2000 channels, are derived from a single AIRS/AMSU field of regard, which is a 3 x 3 array of AIRS footprints (each with a 14 km spatial resolution) collocated with a single AMSU footprint (42 km). One of our key gridded datasets is a daily 3 x 3 latitude/longitude projection which contains the nearest AIRS/AMSU field of regard with respect to the center of the 3 x 3 lat/lon grid. This particular gridded dataset is 1/40 the size of the full resolution data. This gridded dataset is the type of product that can be used to support algorithm validation and improvement. It also provides for a very economical approach for reprocessing, testing and improving algorithms for climate studies without having to reprocess the full resolution data stored at the DAAC. For example, on a single CPU workstation, all the AIRS derived products can be derived from a single year of gridded data in 5 days. This relatively short turnaround time, which can be reduced considerably to 3 hours by using a cluster of 40 G5 processors, allows for repeated reprocessing at the PI's home institution before substantial investments are made to reprocess the full resolution data sets archived at the DAAC. In other words, do not reprocess the full resolution data until the science community has tested and selected the optimal algorithm on the gridded data. Development and applications of gridded radiances and products will be discussed. The applications can be provided as part of a web-based service.
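
    A minimal sketch of the nearest-to-grid-center selection used for the daily gridded dataset: for each lat/lon cell, keep the single observation whose footprint center lies closest to the cell center. The 3-degree cell size, the planar distance proxy, and the data below are placeholder assumptions, not the operational processing.

```python
import numpy as np

def grid_nearest_to_center(lats, lons, values, cell_deg=3.0):
    """Keep, for each cell of a regular lat/lon grid, the observation whose
    footprint center is nearest to the cell center (one value per cell)."""
    nlat, nlon = int(180 / cell_deg), int(360 / cell_deg)
    grid = np.full((nlat, nlon), np.nan)
    best = np.full((nlat, nlon), np.inf)
    i = ((lats + 90.0) // cell_deg).astype(int).clip(0, nlat - 1)
    j = ((lons + 180.0) // cell_deg).astype(int).clip(0, nlon - 1)
    clat = -90.0 + (i + 0.5) * cell_deg
    clon = -180.0 + (j + 0.5) * cell_deg
    d2 = (lats - clat) ** 2 + (lons - clon) ** 2   # simple planar distance proxy
    for k in range(len(values)):
        if d2[k] < best[i[k], j[k]]:
            best[i[k], j[k]] = d2[k]
            grid[i[k], j[k]] = values[k]
    return grid

# Example with synthetic footprint centers and brightness temperatures
rng = np.random.default_rng(3)
lat = rng.uniform(-90, 90, 10000)
lon = rng.uniform(-180, 180, 10000)
tb = rng.normal(250.0, 15.0, 10000)
print(np.nanmean(grid_nearest_to_center(lat, lon, tb)))
```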

  4. A Semi-Structured MODFLOW-USG Model to Evaluate Local Water Sources to Wells for Decision Support.

    PubMed

    Feinstein, Daniel T; Fienen, Michael N; Reeves, Howard W; Langevin, Christian D

    2016-07-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A "semi-structured" approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area, see Fienen et al. (2015a). Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  5. A semi-structured MODFLOW-USG model to evaluate local water sources to wells for decision support

    USGS Publications Warehouse

    Feinstein, Daniel T.; Fienen, Michael N.; Reeves, Howard W.; Langevin, Christian D.

    2016-01-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A “semi-structured” approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area, see Fienen et al. (2015a).

  6. A Lagrangian particle model to predict the airborne spread of foot-and-mouth disease virus

    NASA Astrophysics Data System (ADS)

    Mayer, D.; Reiczigel, J.; Rubel, F.

    Airborne spread of bioaerosols in the boundary layer over complex terrain is simulated using a Lagrangian particle model, and applied to modelling the airborne spread of foot-and-mouth disease (FMD) virus. Two case studies are made with study domains located in a hilly region in the northwest of the Styrian capital Graz, the second largest town in Austria. Mountainous terrain as well as inhomogeneous and time-varying meteorological conditions prevent the application of the Gaussian dispersion models used so far, while the proposed model can handle these conditions realistically. In the model, trajectories of several thousands of particles are computed and the distribution of virus concentration near the ground is calculated. This allows risk-of-infection areas to be assessed with respect to animal species of interest, such as cattle, swine or sheep. Meteorological input data like wind field and other variables necessary to compute turbulence were taken from the new pre-operational version of the non-hydrostatic numerical weather prediction model LMK (Lokal-Modell-Kürzestfrist) running at the German weather service DWD (Deutscher Wetterdienst). The LMK model provides meteorological parameters with a spatial resolution of about 2.8 km. To account for the spatial resolution of 400 m used by the Lagrangian particle model, the initial wind field is interpolated onto the finer grid by a mass-consistent interpolation method. Case studies depict a significant influence of local wind systems on the spread of virus. Higher virus concentrations on the upwind side of the hills, marginal concentrations in the lee, and channelling effects in valleys are clearly observable. The study demonstrates that the Lagrangian particle model is an appropriate tool for risk assessment of airborne spread of virus by taking into account the realistic orographic and meteorological conditions.
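
    As an illustration of the grid-refinement step, the sketch below bilinearly interpolates one wind component from a 2.8 km grid onto a 400 m grid; the mass-consistent (divergence-reducing) adjustment used in the study is not included, and the field is synthetic.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def refine_wind_component(u_coarse, dx_coarse=2800.0, dx_fine=400.0):
    """Bilinearly interpolate one wind component from a coarse regular grid to a
    finer regular grid covering the same domain (no mass-consistent adjustment)."""
    ny, nx = u_coarse.shape
    y = np.arange(ny) * dx_coarse
    x = np.arange(nx) * dx_coarse
    interp = RegularGridInterpolator((y, x), u_coarse, method="linear")
    yf = np.arange(0.0, y[-1] + 1e-6, dx_fine)
    xf = np.arange(0.0, x[-1] + 1e-6, dx_fine)
    YF, XF = np.meshgrid(yf, xf, indexing="ij")
    return interp(np.column_stack([YF.ravel(), XF.ravel()])).reshape(YF.shape)

# Example: a 20 x 20 coarse u-wind field refined from 2.8 km to 400 m spacing
u = np.random.default_rng(4).normal(3.0, 1.0, size=(20, 20))
u_fine = refine_wind_component(u)
print(u.shape, "->", u_fine.shape)
```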

  7. Utilizing Machine Learning to Downscale SMAP L3_SM_P Brightness Temperatures in Iowa for Agricultural Applications

    NASA Astrophysics Data System (ADS)

    Chakrabarti, S.; Judge, J.; Bindlish, R.; Bongiovanni, T.; Jackson, T. J.

    2016-12-01

    The NASA Soil Moisture Active Passive (SMAP) mission provides global observations of brightness temperatures (TB) at 36 km. For these observations to be relevant to studies in agricultural regions, the TB values need to be downscaled to finer resolutions. In this study, a machine learning algorithm is introduced for downscaling of TB from 36 km to 9 km. The algorithm uses image segmentation to cluster the study region based on meteorological and land cover similarity, followed by a support vector machine based regression that computes the value of the disaggregated TB at all pixels. High resolution remote sensing products such as land surface temperature, normalized difference vegetation index, enhanced vegetation index, precipitation, soil texture, and land-cover were used for downscaling. The algorithm was implemented in Iowa, United States, during the growing season from April to July 2015, when the SMAP L3_SM_AP TB product at 9 km was available for comparison. In addition, the downscaled estimates from the algorithm are compared with 9 km TB obtained by resampling the SMAP L1B_TB product at 36 km. It was found that the downscaled TB were very similar to the SMAP L3_SM_AP TB product, even for vegetated areas, with a mean difference ≤ 5 K. However, the standard deviation of the downscaled TB was 7 K lower than that of the AP product. The probability density functions of the downscaled TB were similar to those of the SMAP TB. The results indicate that these downscaling algorithms may be used for downscaling TB using complex non-linear correlations on a grid without using active microwave observations.
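
    The regression step can be sketched as follows, with a hypothetical predictor set and synthetic data; the image-segmentation clustering that precedes the regression in the study is omitted, and none of the values below come from the SMAP products themselves.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training set: each row is a 9 km pixel with fine-scale predictors
# (LST, NDVI, EVI, precipitation, sand fraction) plus the coarse 36 km TB of the
# enclosing cell; the target is a reference 9 km brightness temperature.
rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([
    rng.normal(300, 8, n),     # land surface temperature (K)
    rng.uniform(0.2, 0.9, n),  # NDVI
    rng.uniform(0.1, 0.8, n),  # EVI
    rng.gamma(2.0, 3.0, n),    # precipitation (mm)
    rng.uniform(0.1, 0.6, n),  # sand fraction
    rng.normal(260, 10, n),    # coarse 36 km TB (K)
])
y = X[:, 5] + 0.5 * (X[:, 0] - 300) - 10 * (X[:, 1] - 0.5) + rng.normal(0, 1, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:1500], y[:1500])
tb_9km = model.predict(X[1500:])          # downscaled 9 km estimates
print("RMSE (K):", np.sqrt(np.mean((tb_9km - y[1500:]) ** 2)))
```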

  8. High-resolution computer-aided moire

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of the displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.

  9. SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS III: BOUNDARY AND INITIAL CONDITIONS, MODEL GRID RESOLUTION, AND HG(II) REDUCTION MECHANISMS

    EPA Science Inventory

    In this study we investigate the CMAQ model response in terms of simulated mercury concentration and deposition to boundary/initial conditions (BC/IC), model grid resolution (12- versus 36-km), and two alternative Hg(II) reduction mechanisms. The model response to the change of g...

  10. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also faces this issue. The resulting long run times limit applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.

  11. Proof of Concept for an Approach to a Finer Resolution Inventory

    Treesearch

    Chris J. Cieszewski; Kim Iles; Roger C. Lowe; Michal Zasada

    2005-01-01

    This report presents a proof of concept for a statistical framework to develop a timely, accurate, and unbiased fiber supply assessment in the State of Georgia, U.S.A. The proposed approach is based on using various data sources and modeling techniques to calibrate satellite image-based statewide stand lists, which provide initial estimates for a State inventory on a...

  12. Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.

    1977-01-01

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.

  13. Focusing X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen; Brissenden, Roger; Davis, William; Elsner, Ronald; Elvis, Martin; Freeman, Mark; Gaetz, Terrance; Gorenstein, Paul; Gubarev, Mikhail; Jerius, Diab; et al.

    2010-01-01

    During the half-century history of x-ray astronomy, focusing x-ray telescopes, through increased effective area and finer angular resolution, have improved sensitivity by 8 orders of magnitude. Here, we review previous and current x-ray-telescope missions. Next, we describe the planned next-generation x-ray-astronomy facility, the International X-ray Observatory (IXO). We conclude with an overview of a concept for the next next-generation facility, Generation X. Its scientific objectives will require very large areas (about 10,000 sq m) of highly-nested, lightweight grazing-incidence mirrors, with exceptional (about 0.1-arcsec) resolution. Achieving this angular resolution with lightweight mirrors will likely require on-orbit adjustment of alignment and figure.

  14. Improving Spectroscopic Performance of a Coplanar-Anode High-Pressure Xenon Gamma-Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Kiff, Scott Douglas; He, Zhong; Tepper, Gary C.

    2007-08-01

    High-pressure xenon (HPXe) gas is a desirable radiation detection medium for homeland security applications because of its good inherent room-temperature energy resolution, potential for large, efficient devices, and stability over a broad temperature range. Past work in HPXe has produced large-diameter gridded ionization chambers with energy resolution at 662 keV between 3.5 and 4% FWHM. However, one major limitation of these detectors is resolution degradation due to Frisch grid microphonics. A coplanar-anode HPXe detector has been developed as an alternative to gridded chambers. An investigation of this detector's energy resolution is reported in this submission. A simulation package is used to investigate the contributions of important physical processes to the measured photopeak broadening. Experimental data is presented for pure Xe and Xe + 0.2%H2 mixtures, including an analysis of interaction location effects on the energy spectrum.

  15. On precise phase difference measurement approach using border stability of detection resolution.

    PubMed

    Bai, Lina; Su, Xin; Zhou, Wei; Ou, Xiaojuan

    2015-01-01

    For precise phase difference measurement, this paper develops an improved dual phase coincidence detection method. The measurement resolution of digital phase coincidence detection circuits is always limited, for example, only at the nanosecond level. This paper reveals a new way to improve the phase difference measurement precision by using the border stability of the circuit detection fuzzy areas. When a common oscillator signal is used to detect the phase coincidence with the two comparison signals, there will be two detection fuzzy areas surrounding the strict phase coincidence, owing to the finite detection resolution. The border stability of the fuzzy areas and the fluctuation difference of the two fuzzy areas can be even finer than the picosecond level. It is shown that the system resolution obtained depends only on the stability of the circuit measurement resolution, which is much better than the measurement device resolution itself.

  16. Coarsening of physics for biogeochemical model in NEMO

    NASA Astrophysics Data System (ADS)

    Bricaud, Clement; Le Sommer, Julien; Madec, Gurvan; Deshayes, Julie; Chanut, Jerome; Perruche, Coralie

    2017-04-01

    Ocean mesoscale and submesoscale turbulence contribute to ocean tracer transport and to shaping the distribution of ocean biogeochemical tracers. Adequately representing tracer transport in ocean models therefore requires increasing model resolution so that the impact of ocean turbulence is properly accounted for. But due to supercomputer power and storage limitations, global biogeochemical models are not yet run routinely at eddying resolution. Still, because the "effective resolution" of eddying ocean models is much coarser than the physical model grid resolution, tracer transport can be reconstructed to a large extent by computing tracer transport and diffusion with a model grid resolution close to the effective resolution of the physical model. This observation has motivated the implementation of a new capability in the NEMO ocean model (http://www.nemo-ocean.eu/) that allows the physical model and the tracer transport model to run at different grid resolutions. First, we present results obtained with this new capability applied to a synthetic age tracer in a global eddying model configuration. In this configuration, ocean dynamics are computed at ¼° resolution but tracer transport is computed at 3/4° resolution. The solution obtained is compared with two reference setups, one at ¼° resolution and one at 3/4° resolution for both the physics and the passive tracer model. We discuss possible options for defining the vertical diffusivity coefficient for the tracer transport model based on information from the high-resolution grid, and we describe the impact of this choice on the distribution and the penetration of the age tracer. Second, we present results obtained by coupling the physics with the biogeochemical model PISCES and examine the impact of this methodology on the distribution and dynamics of selected tracers. The method described here can find applications in ocean forecasting, such as the Copernicus Marine service operated by Mercator-Ocean, and in Earth System Models for climate applications.
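
    A minimal sketch of the coarsening idea, assuming a simple rectangular grid: an area-weighted block average maps a fine-resolution field onto a tracer grid three times coarser. NEMO's actual operators additionally handle land masks, curvilinear scale factors, and flux coarsening, none of which appear here.

```python
import numpy as np

def coarsen_weighted(field, area, factor=3):
    """Area-weighted block average onto a grid `factor` times coarser, so that
    the coarse value preserves the area integral of the fine field."""
    def blocks(a):
        ny_c, nx_c = a.shape[0] // factor, a.shape[1] // factor
        return a[:ny_c * factor, :nx_c * factor].reshape(ny_c, factor, nx_c, factor)
    f, w = blocks(field), blocks(area)
    return (f * w).sum(axis=(1, 3)) / w.sum(axis=(1, 3))

# Example: a quarter-degree tracer field and cell areas mapped to a 3x coarser grid
rng = np.random.default_rng(6)
tracer = rng.normal(size=(120, 180))
cell_area = rng.uniform(0.9, 1.1, size=(120, 180))
coarse = coarsen_weighted(tracer, cell_area, factor=3)
print(tracer.shape, "->", coarse.shape)   # (120, 180) -> (40, 60)
```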

  17. Reprocessing the Historical Satellite Passive Microwave Record at Enhanced Spatial Resolutions using Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.; Paget, A. C.; Armstrong, R. L.

    2015-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Currently available global gridded passive microwave data sets serve a diverse community of hundreds of data users, but do not meet many requirements of modern Earth System Data Records (ESDRs) or Climate Data Records (CDRs), most notably in the areas of intersensor calibration, quality-control, provenance and consistent processing methods. The original gridding techniques were relatively primitive and were produced on 25 km grids using the original EASE-Grid definition that is not easily accommodated in modern software packages. Further, since the first Level 3 data sets were produced, the Level 2 passive microwave data on which they were based have been reprocessed as Fundamental CDRs (FCDRs) with improved calibration and documentation. We are funded by NASA MEaSUREs to reprocess the historical gridded data sets as EASE-Grid 2.0 ESDRs, using the most mature available Level 2 satellite passive microwave (SMMR, SSM/I-SSMIS, AMSR-E) records from 1978 to the present. We have produced prototype data from SSM/I and AMSR-E for the year 2003, for review and feedback from our Early Adopter user community. The prototype data set includes conventional, low-resolution ("drop-in-the-bucket" 25 km) grids and enhanced-resolution grids derived from the two candidate image reconstruction techniques we are evaluating: 1) Backus-Gilbert (BG) interpolation and 2) a radiometer version of Scatterometer Image Reconstruction (SIR). We summarize our temporal subsetting technique, algorithm tuning parameters and computational costs, and include sample SSM/I images at enhanced resolutions of up to 3 km. We are actively working with our Early Adopters to finalize content and format of this new, consistently-processed high-quality satellite passive microwave ESDR.

  18. Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.

    2017-12-01

    Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.

  19. Downscaling hydrodynamics features to depict causes of major productivity of Sicilian-Maltese area and implications for resource management.

    PubMed

    Capodici, Fulvio; Ciraolo, Giuseppe; Cosoli, Simone; Maltese, Antonino; Mangano, M Cristina; Sarà, Gianluca

    2018-07-01

    Chlorophyll-a (CHL-a) and sea surface temperature (SST) are generally accepted as proxies for water quality. They can be easily retrieved in a quasi-near real time mode through satellite remote sensing and, as such, they provide an overview of the water quality on a synoptic scale in open waters. Their distributions evolve in space and time in response to local and remote forcing, such as winds and currents, which, however, have much finer temporal and spatial scales than those resolvable by satellites, in spite of recent advances in satellite remote-sensing techniques. Satellite data often have only a moderate temporal resolution, which is insufficient to adequately capture the actual sub-grid physical processes. Conventional pointwise measurements can resolve high-frequency motions such as tides or high-frequency wind-driven currents; however, they are inadequate for resolving their spatial variability over wide areas. We show in this paper that the combined use of near-surface currents, available through High-Frequency (HF) radars, and satellite data (e.g., TERRA and AQUA/MODIS) can properly resolve the main oceanographic features in both coastal and open-sea regions, particularly at the coastal boundaries where satellite imagery fails, and that the two are complementary tools for interpreting ocean productivity and resource management in the Sicily Channel. Copyright © 2018. Published by Elsevier B.V.

  20. The impact of mesoscale convective systems on global precipitation: A modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo

    2017-04-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) is conducted and results in both reduced surface rainfall and evaporation.

  1. Comparison of alternative spatial resolutions in the application of a spatially distributed biogeochemical model over complex terrain

    USGS Publications Warehouse

    Turner, D.P.; Dodson, R.; Marks, D.

    1996-01-01

    Spatially distributed biogeochemical models may be applied over grids at a range of spatial resolutions, however, evaluation of potential errors and loss of information at relatively coarse resolutions is rare. In this study, a georeferenced database at the 1-km spatial resolution was developed to initialize and drive a process-based model (Forest-BGC) of water and carbon balance over a gridded 54976 km2 area covering two river basins in mountainous western Oregon. Corresponding data sets were also prepared at 10-km and 50-km spatial resolutions using commonly employed aggregation schemes. Estimates were made at each grid cell for climate variables including daily solar radiation, air temperature, humidity, and precipitation. The topographic structure, water holding capacity, vegetation type and leaf area index were likewise estimated for initial conditions. The daily time series for the climatic drivers was developed from interpolations of meteorological station data for the water year 1990 (1 October 1989-30 September 1990). Model outputs at the 1-km resolution showed good agreement with observed patterns in runoff and productivity. The ranges for model inputs at the 10-km and 50-km resolutions tended to contract because of the smoothed topography. Estimates for mean evapotranspiration and runoff were relatively insensitive to changing the spatial resolution of the grid whereas estimates of mean annual net primary production varied by 11%. The designation of a vegetation type and leaf area at the 50-km resolution often subsumed significant heterogeneity in vegetation, and this factor accounted for much of the difference in the mean values for the carbon flux variables. Although area wide means for model outputs were generally similar across resolutions, difference maps often revealed large areas of disagreement. Relatively high spatial resolution analyses of biogeochemical cycling are desirable from several perspectives and may be particularly important in the study of the potential impacts of climate change.

  2. Scaling effects on spring phenology detections from MODIS data at multiple spatial resolutions over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin

    2017-10-01

    Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected enhanced vegetation index (EVI, 250 m) from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple spatial resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of spring season (SOS) at various spatial resolutions. Further, the SOS variation across scales was examined for each coarse-resolution grid cell (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influences of satellite spatial resolutions on SOS retrievals and the related impact factors. Specifically, SOS significantly varied linearly or logarithmically across scales although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m at large ecosystem regions were generally similar with a difference less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in northeastern than in southern and western regions. The SOS scaling effect was significantly associated with heterogeneity of vegetation properties characterized using landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, but the latter was the most important impact factor.

  3. Ensuring Safety of Navigation: A Three-Tiered Approach

    NASA Astrophysics Data System (ADS)

    Johnson, S. D.; Thompson, M.; Brazier, D.

    2014-12-01

    The primary responsibility of the Hydrographic Department at the Naval Oceanographic Office (NAVOCEANO) is to support US Navy surface and sub-surface Safety of Navigation (SoN) requirements. These requirements are interpreted, surveys are conducted, and accurate products are compiled and archived for future exploitation. For a number of years NAVOCEANO has employed a two-tiered data-basing structure to support SoN. The first tier (Data Warehouse, or DWH) provides access to the full-resolution sonar and lidar data. DWH preserves the original data such that any scale product can be built. The second tier (Digital Bathymetric Database - Variable resolution, or DBDB-V) served as the final archive for SoN chart scale, gridded products compiled from source bathymetry. DBDB-V has been incorporated into numerous DoD tactical decision aids and serves as the foundation bathymetry for ocean modeling. With the evolution of higher density survey systems and the addition of high-resolution gridded bathymetry product requirements, a two-tiered model did not provide an efficient solution for SoN. The two-tiered approach required scientists to exploit full-resolution data in order to build any higher resolution product. A new perspective on the archival and exploitation of source data was required. This new perspective has taken the form of a third tier, the Navigation Surface Database (NSDB). NSDB is an SQLite relational database populated with International Hydrographic Organization (IHO), S-102 compliant Bathymetric Attributed Grids (BAGs). BAGs archived within NSDB are developed at the highest resolution that the collection sensor system can support and contain nodal estimates for depth, uncertainty, separation values and metadata. Gridded surface analysis efforts culminate in the generation of the source resolution BAG files and their storage within NSDB. Exploitation of these resources eliminates the time and effort needed to re-grid and re-analyze native source file formats.

  4. NAM Products

    Science.gov Websites

    Available grids include the NAM 218 AWIPS Grid - CONUS (12-km resolution; full complement of pressure-level fields and some surface-based fields), filename inventory nam.tccz.awip12fh.tm00.grib2 (FH00, FH01) and fh.xxxx_tl.press_gr.grbgrd, and the NAM 242 AWIPS Grid - Over Alaska (11.25-km resolution; full complement of pressure-level fields).

  5. Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2010-01-01

    This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
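
    For context, one common (though not the only) Cauchy-type formulation sets the streambed conductance of a cell to C = Kv L W / b; the toy example below shows that when a reach is split across refined cells, the reach total is preserved but the per-cell values change, which is why coarse-grid conductances cannot simply be reused on a finer grid. The formula and parameter values are illustrative assumptions, not the three methods compared in the paper.

```python
def streambed_conductance(k_v, length, width, thickness):
    """Cauchy-type streambed conductance C = Kv * L * W / b for one model cell
    (a common formulation; the paper's three alternatives are not shown here)."""
    return k_v * length * width / thickness

# A 1000 m stream reach crossing one coarse cell versus ten refined cells
k_v, width, thickness = 0.5, 10.0, 1.0           # m/d, m, m (illustrative values)
coarse = streambed_conductance(k_v, 1000.0, width, thickness)
refined = [streambed_conductance(k_v, 100.0, width, thickness) for _ in range(10)]
print("coarse-cell conductance:", coarse)        # 5000 m^2/d in one cell
print("per refined cell:       ", refined[0])    # 500 m^2/d in each of ten cells
print("sum over refined cells: ", sum(refined))  # reach total is unchanged
```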

  6. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristic. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
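    The abstract casts MV topology synthesis as a capacitated vehicle routing problem. The toy sketch below shows the general CVRP idea - growing feeder routes from a substation until a capacity limit is reached - using a simple nearest-neighbour construction heuristic. It is a generic illustration, not the routing and local-search procedure implemented in DINGO, and all names and values are invented.

```python
import math

def nearest_neighbour_cvrp(depot, nodes, demands, capacity):
    """Toy CVRP construction heuristic: grow each route greedily from the depot
    (here, an MV substation) until the next demand would exceed capacity.
    A generic illustration only, not the routing used in DINGO."""
    unserved = set(nodes)
    routes = []
    while unserved:
        route, load, current = [], 0.0, depot
        while True:
            candidates = [n for n in unserved if load + demands[n] <= capacity]
            if not candidates:
                break
            nxt = min(candidates, key=lambda n: math.dist(current, n))
            route.append(nxt)
            load += demands[nxt]
            unserved.discard(nxt)
            current = nxt
        if not route:  # a single demand exceeds capacity; avoid an infinite loop
            raise ValueError("demand exceeds feeder capacity")
        routes.append(route)
    return routes

substation = (0.0, 0.0)
loads = {(1.0, 2.0): 0.4, (2.0, 1.0): 0.3, (5.0, 5.0): 0.8, (6.0, 4.0): 0.5}
feeders = nearest_neighbour_cvrp(substation, loads.keys(), loads, capacity=1.0)
```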

  7. Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014.

    PubMed

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-05-23

    We describe a gridded daily meteorology dataset consisting of precipitation, minimum and maximum temperature over Java Island, Indonesia at 0.125°×0.125° (~14 km) resolution spanning 30 years from 1985-2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite- or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters), restricted to stations with at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in cross-validation, which shows that the gridded rainfall presented here produces the most reasonable performance. Visual inspection reveals increasing performance of the gridded precipitation from grid to watershed to island scale. The data set, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
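    A minimal sketch of the gap-filling and gridding step described above, assuming the quoted parameters (search radius r = 25 km, power of influence α = 3). This is an illustrative re-implementation of generic IDW, not the authors' processing code, and the station coordinates and values are invented.

```python
import numpy as np

def idw(grid_xy, station_xy, station_vals, radius_km=25.0, power=3.0):
    """Inverse Distance Weighting onto grid points using only stations within
    `radius_km`; mirrors the parameter choices quoted in the abstract
    (r = 25 km, alpha = 3) but is a generic illustration."""
    out = np.full(len(grid_xy), np.nan)
    for i, g in enumerate(grid_xy):
        d = np.linalg.norm(station_xy - g, axis=1)
        mask = d <= radius_km
        if not mask.any():
            continue
        # Exact hits get the station value; otherwise use 1/d**power weights.
        if np.any(d[mask] == 0.0):
            out[i] = station_vals[mask][d[mask] == 0.0][0]
        else:
            w = 1.0 / d[mask] ** power
            out[i] = np.sum(w * station_vals[mask]) / np.sum(w)
    return out

stations = np.array([[0.0, 0.0], [10.0, 5.0], [30.0, 20.0]])   # km coordinates
rain = np.array([12.0, 8.0, 0.0])                              # mm/day
grid = np.array([[5.0, 2.0], [28.0, 18.0]])
print(idw(grid, stations, rain))
```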

  8. X-ray photon correlation spectroscopy using a fast pixel array detector with a grid mask resolution enhancer.

    PubMed

    Hoshino, Taiki; Kikuchi, Moriya; Murakami, Daiki; Harada, Yoshiko; Mitamura, Koji; Ito, Kiminori; Tanaka, Yoshihito; Sasaki, Sono; Takata, Masaki; Jinnai, Hiroshi; Takahara, Atsushi

    2012-11-01

    The performance of a fast pixel array detector with a grid mask resolution enhancer has been demonstrated for X-ray photon correlation spectroscopy (XPCS) measurements to investigate fast dynamics on a microscopic scale. A detecting system, in which each pixel of a single-photon-counting pixel array detector, PILATUS, is covered by grid mask apertures, was constructed for XPCS measurements of silica nanoparticles in polymer melts. The experimental results are confirmed to be consistent by comparison with other independent experiments. By applying this method, XPCS measurements can be carried out by customizing the hole size of the grid mask to suit the experimental conditions, such as beam size, detector size and sample-to-detector distance.

  9. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  10. An analysis of MM5 sensitivity to different parameterizations for high-resolution climate simulations

    NASA Astrophysics Data System (ADS)

    Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.

    2009-04-01

    An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (South of Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30-km spacing, and a nested domain of 48 by 72 grid points with 10-km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields, so that the simulation is divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data; however, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its strongly convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region or for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia project P06-RNM-01622, have financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, has provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.

  11. Fast ultra-wideband microwave spectral scanning utilizing photonic wavelength- and time-division multiplexing.

    PubMed

    Li, Yihan; Kuse, Naoya; Fermann, Martin

    2017-08-07

    A high-speed ultra-wideband microwave spectral scanning system is proposed and experimentally demonstrated. Utilizing coherent dual electro-optical frequency combs and a recirculating optical frequency shifter, the proposed system realizes wavelength- and time-division multiplexing at the same time, offering flexibility between scan speed and size, weight and power requirements (SWaP). High-speed spectral scanning spanning from ~1 to 8 GHz with ~1.2 MHz spectral resolution is achieved experimentally within 14 µs. The system can be easily scaled to higher bandwidth coverage, faster scanning speed or finer spectral resolution with suitable hardware.

  12. Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems

    NASA Technical Reports Server (NTRS)

    Morris, Chris

    2013-01-01

    Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.
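    As a minimal illustration of the CD-2 versus CD-4 distinction mentioned above, the sketch below compares second- and fourth-order central differences on a periodic test function. It shows only the generic stencils, not the conservative skew-symmetric flux form or the upwind reconstructions studied in the paper.

```python
import numpy as np

def cd2(f, dx):
    """Second-order central difference of a periodic field."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def cd4(f, dx):
    """Fourth-order central difference of a periodic field."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

# Error of each stencil on a resolved sine wave; the fourth-order stencil's
# advantage grows as the wave becomes better resolved, consistent with the
# CD-2 -> CD-4 improvement noted in the abstract.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = np.sin(3.0 * x)
exact = 3.0 * np.cos(3.0 * x)
print(np.max(np.abs(cd2(f, dx) - exact)), np.max(np.abs(cd4(f, dx) - exact)))
```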

  13. GRID: a high-resolution protein structure refinement algorithm.

    PubMed

    Chitsaz, Mohsen; Mayo, Stephen L

    2013-03-05

    The energy-based refinement of protein structures generated by fold prediction algorithms to atomic-level accuracy remains a major challenge in structural biology. Energy-based refinement is mainly dependent on two components: (1) sufficiently accurate force fields, and (2) efficient conformational space search algorithms. Focusing on the latter, we developed a high-resolution refinement algorithm called GRID. It takes a three-dimensional protein structure as input and, using an all-atom force field, attempts to improve the energy of the structure by systematically perturbing backbone dihedrals and side-chain rotamer conformations. We compare GRID to Backrub, a stochastic algorithm that has been shown to predict a significant fraction of the conformational changes that occur with point mutations. We applied GRID and Backrub to 10 high-resolution (≤ 2.8 Å) crystal structures from the Protein Data Bank and measured the energy improvements obtained and the computation times required to achieve them. GRID resulted in energy improvements that were significantly better than those attained by Backrub while expending about the same amount of computational resources. GRID resulted in relaxed structures that had slightly higher backbone RMSDs compared to Backrub relative to the starting crystal structures. The average RMSD was 0.25 ± 0.02 Å for GRID versus 0.14 ± 0.04 Å for Backrub. These relatively minor deviations indicate that both algorithms generate structures that retain their original topologies, as expected given the nature of the algorithms. Copyright © 2012 Wiley Periodicals, Inc.
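    A rough sketch of the systematic perturb-and-accept loop the abstract describes: try small changes to each dihedral and keep those that lower an all-atom energy. The move set, sweep count and the toy quadratic "energy" below are placeholders standing in for a real force field; this is not the published GRID algorithm.

```python
def systematic_refine(structure, dihedrals, energy, perturbations, n_sweeps=3):
    """Greedy refinement loop in the spirit of the abstract: systematically try
    small perturbations of each dihedral and keep any that lower the energy.
    `structure` maps dihedral name -> angle (degrees); `energy` is a placeholder
    for a real all-atom force-field evaluation."""
    best = dict(structure)
    best_e = energy(best)
    for _ in range(n_sweeps):
        for phi in dihedrals:
            for delta in perturbations:
                trial = dict(best)
                trial[phi] = (trial[phi] + delta) % 360.0
                e = energy(trial)
                if e < best_e:
                    best, best_e = trial, e
    return best, best_e

# Toy example with a quadratic "energy" centred on arbitrary target angles.
target = {"phi_10": 120.0, "psi_10": 300.0}
toy_energy = lambda s: sum((s[k] - target[k]) ** 2 for k in s)
start = {"phi_10": 100.0, "psi_10": 310.0}
refined, e = systematic_refine(start, list(start), toy_energy, [-5.0, -1.0, 1.0, 5.0])
```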

  14. Gridless, pattern-driven point cloud completion and extension

    NASA Astrophysics Data System (ADS)

    Gravey, Mathieu; Mariethoz, Gregoire

    2016-04-01

    While satellites offer Earth observation with wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data on an area that is limited in extent and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data over a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data are available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data on a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing because a significant part of the information is lost in the point-to-grid resampling and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need for a grid. On-the-fly interpolation involves a heavy computational load, which is met by using a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and the coverage is extended to a larger area (data extension).

  15. A study of overflow simulations using MPAS-Ocean: Vertical grids, resolution, and viscosity

    NASA Astrophysics Data System (ADS)

    Reckinger, Shanon M.; Petersen, Mark R.; Reckinger, Scott J.

    2015-12-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are carried out using three of the vertical coordinate types available in MPAS-Ocean, including z-star with partial bottom cells, z-star with full cells, and sigma coordinates. The results are first benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model, which are used to set the base case used for this work. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Lastly, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that go further down slope. Sigma coordinates are less sensitive to changes in resolution but as sensitive to vertical viscosity compared to z-coordinates.

  16. Improvement of sub-20nm pattern quality with dose modulation technique for NIL template production

    NASA Astrophysics Data System (ADS)

    Yagawa, Keisuke; Ugajin, Kunihiro; Suenaga, Machiko; Kanamitsu, Shingo; Motokawa, Takeharu; Hagihara, Kazuki; Arisawa, Yukiyasu; Kobayashi, Sachiko; Saito, Masato; Ito, Masamitsu

    2016-04-01

    Nanoimprint lithography (NIL) technology is in the spotlight as a next-generation semiconductor manufacturing technique for integrated circuits at 22 nm and beyond. NIL is an unmagnified lithography technique that uses templates replicated from master templates, while master templates are currently fabricated by electron-beam (EB) lithography[1]. In the near future, patterns finer than 15 nm will be required on master templates, and EB data volume will increase exponentially, so we face a difficult challenge: a higher-resolution EB mask writer and a high-performance fabrication process will be required. In our previous study, we investigated the potential of the photomask fabrication process for finer patterning and achieved a 15.5 nm line and space (L/S) pattern on a template by using a VSB (Variable Shaped Beam) type EB mask writer and chemically amplified resist. In contrast, we found that contrast loss caused by backscattering degrades the performance of finer patterning. For semiconductor device manufacturing, we must fabricate complicated patterns that include high- and low-density features simultaneously, not only consecutive L/S patterns, so it is quite important to develop a technique to form patterns of various sizes or coverages all at once. In this study, a small feature pattern was experimentally formed on a master template with a dose modulation technique. This technique makes it possible to apply the appropriate exposure dose for each pattern size. As a result, we succeeded in improving the performance of finer patterning in the bright field area. These results show that the current EB lithography process has the potential to fabricate NIL templates.

  17. Scenario-Based Tsunami Hazard Assessment from Earthquake and Landslide Sources for Eastern Sicily, Italy

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Armigliato, A.; Pagnoni, G.; Paparo, M. A.; Zaniboni, F.

    2016-12-01

    Eastern Sicily was the theatre of the most damaging tsunamis that ever struck Italy, such as the 11 January 1693 and the 28 December 1908 tsunamis. Tectonic studies and paleotsunami investigations have extended historical records of tsunami occurrence back several thousand years. Tsunami sources relevant for eastern Sicily are both local and remote, the latter being located in Ionian Greece and in the Western Hellenic Arc, where in 365 A.D. a large earthquake generated a tsunami that was observed in the whole eastern and central Mediterranean, including the Sicilian coasts. The objective of this study is the evaluation of tsunami hazard along the coast of eastern Sicily, central Mediterranean, Italy via a scenario-based technique, which has been preferred to the PTHA approach because, when dealing with tsunamis induced by landslides, uncertainties are usually so large as to undermine the PTHA results. Tsunamis of earthquake and landslide origin are taken into account for the entire coast of Sicily, from the Messina to the Siracusa provinces. Landslides are essentially local sources and can occur underwater along the unstable flanks of the Messina Straits or along the steep slopes of the Hyblaean-Malta escarpment. The method is based on a two-step procedure: after a preliminary step where a large number of earthquake and landslide sources are taken into account and tsunamis are computed on a low-resolution grid, the worst-case scenarios are selected and tsunamis are simulated on a finer-resolution grid, allowing for a better calculation of coastal wave height and tsunami penetration. The final result of our study is given in the form of aggregate fields computed from individual scenarios. Also of interest is the contribution of the various tsunami sources at different localities along the coast. It is found that the places with the highest level of hazard are the low lands of La Playa south of Catania and of the Bay of Augusta, which is also in agreement with historical observations. It is further found that remote seismic sources from the Hellenic Arc are the dominant factor of hazard in several places, and that, although in general earthquakes contribute to hazard more than landslides, in some places the opposite is true.

  18. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  19. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
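    The Jacobi-preconditioned conjugate gradient solver with a relative-residual stopping test, which the abstract pairs with the extrapolated initial guess, can be sketched as follows. This is a generic NumPy implementation under the assumption of a symmetric positive definite system, not the authors' code, and the 1-D Poisson matrix is only a toy test.

```python
import numpy as np

def jacobi_pcg(A, b, x0, rtol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradients with a relative residual
    tolerance, used (per the abstract) in place of a fixed iteration count.
    A must be symmetric positive definite; x0 is the (extrapolated) initial guess."""
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse diagonal
    x = x0.copy()
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= rtol * b_norm:
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson test matrix; the better the initial guess handed down from the
# coarser grid, the fewer iterations are needed to reach the tolerance.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b, x0=np.zeros(n))
```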

  20. Experimental and analytical study of close-coupled ventral nozzles for ASTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Smith, C. Frederic

    1990-01-01

    Flow in a generic ventral nozzle system was studied experimentally and analytically with a block version of the PARC3D computational fluid dynamics program (a full Navier-Stokes equation solver) in order to evaluate the program's ability to predict system performance and internal flow patterns. For the experimental work a one-third-size model tailpipe with a single large rectangular ventral nozzle mounted normal to the tailpipe axis was tested with unheated air at steady-state pressure ratios up to 4.0. The end of the tailpipe was closed to simulate a blocked exhaust nozzle. Measurements showed about 5 1/2 percent flow-turning loss, reasonable nozzle performance coefficients, and a significant aftward axial component of thrust due to flow turning more than 90 deg. Flow behavior into and through the ventral duct is discussed and illustrated with paint streak flow visualization photographs. For the analytical work the same ventral system configuration was modeled with two computational grids to evaluate the effect of grid density. Both grids gave good results. The finer-grid solution produced more detailed flow patterns and predicted performance parameters, such as thrust and discharge coefficient, within 1 percent of the measured values. PARC3D flow visualization images are shown for comparison with the paint streak photographs. Modeling and computational issues encountered in the analytical work are discussed.

  1. Ground Boundary Conditions for Thermal Convection Over Horizontal Surfaces at High Rayleigh Numbers

    NASA Astrophysics Data System (ADS)

    Hanjalić, K.; Hrebtov, M.

    2016-07-01

    We present "wall functions" for treating the ground boundary conditions in the computation of thermal convection over horizontal surfaces at high Rayleigh numbers using coarse numerical grids. The functions are formulated for an algebraic-flux model closed by transport equations for the turbulence kinetic energy, its dissipation rate and scalar variance, but could also be applied to other turbulence models. The three-equation algebraic-flux model, solved in a T-RANS mode ("Transient" Reynolds-averaged Navier-Stokes, based on triple decomposition), was shown earlier to reproduce well a number of generic buoyancy-driven flows over heated surfaces, albeit by integrating equations up to the wall. Here we show that by using a set of wall functions satisfactory results are found for the ensemble-averaged properties even on a very coarse computational grid. This is illustrated by the computations of the time evolution of a penetrative mixed layer and Rayleigh-Bénard (open-ended, 4:4:1 domain) convection, using 10 × 10 × 100 and 10 × 10 × 20 grids, compared also with finer grids (e.g. 60 × 60 × 100), as well as with one-dimensional treatment using 1 × 1 × 100 and 1 × 1 × 20 nodes. The approach is deemed functional for simulations of a convective boundary layer and mesoscale atmospheric flows, and pollutant transport over realistic complex hilly terrain with heat islands, urban and natural canopies, for diurnal cycles, or subjected to other time and space variations in ground conditions and stratification.

  2. Grid of Supergiant B[e] Models from HDUST Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Carciofi, A. C.

    2012-12-01

    By using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman) we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters of the calculated models make the grid well suited to interpreting high angular resolution and high spectral resolution observations, in particular spectro-interferometric data from the ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures and a gas (hydrogen) and silicate dust circumstellar envelope with a bimodal mass loss, presenting dust in the denser equatorial regions. The HDUST grid models were pre-calculated using the high-performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.

  3. Potential Technologies for Assessing Risk Associated with a Mesoscale Forecast

    DTIC Science & Technology

    2015-10-01

    American GFS models, and informally applied on the Weather Research and Forecasting ( WRF ) model. The current CI equation is as follows...Reen B, Penc R. Investigating surface bias errors in the Weather Research and Forecasting ( WRF ) model using a Geographic Information System (GIS). J...Forecast model ( WRF -ARW) with extensions that might include finer terrain resolutions and more detailed representations of the underlying atmospheric

  4. Experimental and Numerical Investigation of Controlled, Small-Scale Motions in a Turbulent Shear Layer

    DTIC Science & Technology

    2007-06-01

    cross flow are taken at finer resolution, down to 6.5 μm/pixel. For the flow mapping, both the CCD camera and part of the laser -sheet optics are...Control of Supersonic Impinging Jet Flows using Microjets . AIAA Journal. 41(7):1347-1355, 2001. [9] M.J. Stanek, G. Raman, V. Kibens, J.A. Ross, J. Odedra

  5. Computed Tomography Status

    DOE R&D Accomplishments Database

    Hansche, B. D.

    1983-01-01

    Computed tomography (CT) is a relatively new radiographic technique which has become widely used in the medical field, where it is better known as computerized axial tomographic (CAT) scanning. This technique is also being adopted by the industrial radiographic community, although the greater range of densities, the variation in sample sizes, and the possible requirement for finer resolution make it difficult to duplicate the excellent results that the medical scanners have achieved.

  6. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This will strongly affect the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark matter only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  7. Fibonacci Grids

    NASA Technical Reports Server (NTRS)

    Swinbank, Richard; Purser, James

    2006-01-01

    Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
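    A common way to realize the golden-angle construction behind such grids is sketched below: points placed along a spiral whose longitude advances by the golden angle while the polar coordinate is sampled uniformly, giving approximately equal area per point. This is a simplified, generic construction and not necessarily identical to the authors' grid generation.

```python
import numpy as np

def fibonacci_sphere(n_points):
    """Generate near-uniform points on the unit sphere with the golden-angle
    spiral; a simple stand-in for the Fibonacci grids discussed in the abstract."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    k = np.arange(n_points)
    z = 1.0 - 2.0 * (k + 0.5) / n_points        # uniform sampling in the polar direction
    lon = golden_angle * k                       # longitude advances by ~137.5 degrees
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(lon), r * np.sin(lon), z])

pts = fibonacci_sphere(4000)
# Each point represents an approximately equal area: 4*pi / n_points steradians.
```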

  8. On the use of high-resolution topographic data as a proxy for seismic site conditions (VS30)

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.

    2009-01-01

    An alternative method has recently been proposed for evaluating global seismic site conditions, or the average shear velocity to 30 m depth (VS30), from the Shuttle Radar Topography Mission (SRTM) 30 arcsec digital elevation models (DEMs). The basic premise of the method is that the topographic slope can be used as a reliable proxy for VS30 in the absence of geologically and geotechnically based site-condition maps through correlations between VS30 measurements and topographic gradient. Here we evaluate the use of higher-resolution (3 and 9 arcsec) DEMs to examine whether we are able to resolve VS30 in more detail than can be achieved using the lower-resolution SRTM data. High-quality DEMs at resolutions greater than 30 arcsec are not uniformly available at the global scale. However, in many regions where such data exist, they may be employed to resolve finer-scale variations in topographic gradient, and consequently, VS30. We use the U.S. Geological Survey Earth Resources Observation and Science (EROS) Data Center's National Elevation Dataset (NED) to investigate the use of high-resolution DEMs for estimating VS30 in several regions across the United States, including the San Francisco Bay area in California, Los Angeles, California, and St. Louis, Missouri. We compare these results with an example from Taipei, Taiwan, that uses 9 arcsec SRTM data, which are globally available. The use of higher-resolution NED data recovers finer-scale variations in topographic gradient, which better correlate to geological and geomorphic features, in particular, at the transition between hills and basins, warranting their use over 30 arcsec SRTM data where available. However, statistical analyses indicate little to no improvement over lower-resolution topography when compared to VS30 measurements, suggesting that some topographic smoothing may provide more stable VS30 estimates. Furthermore, we find that elevation variability in canopy-based SRTM measurements at resolutions greater than 30 arcsec are too large to resolve reliable slopes, particularly in low-gradient sedimentary basins.
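    The slope-proxy idea above can be sketched in two steps: compute the topographic gradient from a DEM, then map slope classes to representative VS30 values. In the sketch below the gradient computation is standard, but the bin edges and VS30 values are illustrative placeholders rather than the published slope-VS30 correlation coefficients.

```python
import numpy as np

def slope_from_dem(elev, cell_size_m):
    """Topographic gradient magnitude (rise/run) from a gridded DEM."""
    dzdy, dzdx = np.gradient(elev, cell_size_m)
    return np.hypot(dzdx, dzdy)

def vs30_from_slope(slope, bins, vs30_values):
    """Assign a representative VS30 (m/s) to each cell from its slope class.
    The bin edges and VS30 values passed below are illustrative placeholders,
    not the published slope-VS30 correlation coefficients."""
    idx = np.digitize(slope, bins)
    return np.asarray(vs30_values)[idx]

dem = np.array([[10.0, 12.0, 15.0],
                [11.0, 14.0, 18.0],
                [13.0, 17.0, 22.0]])          # elevations in metres
slope = slope_from_dem(dem, cell_size_m=90.0)
vs30 = vs30_from_slope(slope, bins=[0.001, 0.01, 0.1], vs30_values=[180, 300, 500, 760])
```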

  9. A framework for evaluating statistical downscaling performance under changing climatic conditions (Invited)

    NASA Astrophysics Data System (ADS)

    Dixon, K. W.; Balaji, V.; Lanzante, J.; Radhakrishnan, A.; Hayhoe, K.; Stoner, A. K.; Gaitan, C. F.

    2013-12-01

    Statistical downscaling (SD) methods may be viewed as generating a value-added product - a refinement of global climate model (GCM) output designed to add finer scale detail and to address GCM shortcomings via a process that gleans information from a combination of observations and GCM-simulated climate change responses. Making use of observational data sets and GCM simulations representing the same historical period, cross-validation techniques allow one to assess how well an SD method meets this goal. However, lacking observations of the future, the extent to which a particular SD method's skill might degrade when applied to future climate projections cannot be assessed in the same manner. Here we illustrate and describe extensions to a 'perfect model' experimental design that seeks to quantify aspects of SD method performance both for a historical period (1979-2008) and for late 21st century climate projections. Examples highlighting cases in which downscaling performance deteriorates in future climate projections will be discussed. Also, results will be presented showing how synthetic datasets having known statistical properties may be used to further isolate factors responsible for degradations in SD method skill under changing climatic conditions. We will describe a set of input files used to conduct these analyses that are being made available to researchers who wish to utilize this experimental framework to evaluate SD methods they have developed. The gridded data sets cover a region centered on the contiguous 48 United States with a grid spacing of approximately 25 km, have daily time resolution (e.g., maximum and minimum near-surface temperature and precipitation), and represent a total of 120 years of model simulations. This effort is consistent with the 2013 National Climate Predictions and Projections Platform Quantitative Evaluation of Downscaling Workshop goal of supporting a community approach to promote the informed use of downscaled climate projections.

  10. Altimetric lagrangian advection to reconstruct Pacific Ocean fine scale surface tracer fields

    NASA Astrophysics Data System (ADS)

    Rogé, Marine; Morrow, Rosemary; Dencausse, Guillaume

    2015-04-01

    In past studies, lagrangian stirring of surface tracer fields by altimetric surface geostrophic currents has been performed in different mid- to high-latitude regions, showing good results in reconstructing finer-scale tracer patterns. Here we apply the technique to three different regions in the eastern and western tropical Pacific, and in the subtropical southwest Pacific. Initial conditions are derived from weekly gridded temperature and salinity fields, based on hydrographic data and Argo. Validation of the improved fine-scale surface tracer fields is performed using satellite AMSRE SST data and high-resolution ship thermosalinograph data. We test two kinds of lagrangian advection. The standard one-way advection is shown to introduce an increased tracer bias as the advection time increases. Indeed, since we only use passive stirring, a bias is introduced from the missing physics, such as air-sea fluxes or mixing. A second "backward-forward" advection technique is shown to reduce the seasonal bias, but more data are lost around coasts and islands, a strong handicap in the tropical Pacific with its many small islands. In the subtropical Pacific Ocean, the mesoscale temperature and salinity fronts are well represented by the one-way advection over a 10-day advection time, including westward propagating features not apparent in the initial fields. In the tropics, the results are less clear. The validation is hampered by the complex vertical stratification, and the technique is limited by the lack of accurate surface currents for the stirring - the gridded altimetric fields poorly represent the meridional currents, and do not detect the fast tropical instability waves or the wind-driven circulation. We suggest that the passive lateral stirring technique is efficient in regions with moderate to high mesoscale energy and correlated mesoscale surface temperature and surface height. In other regions, more complex dynamical processes may need to be included.

  11. Dynamical Downscaling of Seasonal Climate Prediction over Nordeste Brazil with ECHAM3 and NCEP's Regional Spectral Models at IRI.

    NASA Astrophysics Data System (ADS)

    Nobre, Paulo; Moura, Antonio D.; Sun, Liqiang

    2001-12-01

    This study presents an evaluation of a seasonal climate forecast done with the International Research Institute for Climate Prediction (IRI) dynamical forecast system (regional model nested into a general circulation model) over northern South America for January-April 1999, encompassing the rainy season over Brazil's Nordeste. The one-way nesting is done in two tiers: first the NCEP's Regional Spectral Model (RSM) runs with an 80-km grid mesh forced by the ECHAM3 atmospheric general circulation model (AGCM) outputs; then the RSM runs with a finer grid mesh (20 km) forced by the forecasts generated by the RSM-80. An ensemble of three realizations is done. Lower boundary conditions over the oceans for both ECHAM and RSM model runs are sea surface temperature forecasts over the tropical oceans. Soil moisture is initialized by ECHAM's inputs. The rainfall forecasts generated by the regional model are compared with those of the AGCM and observations. It is shown that the regional model at 80-km resolution improves upon the AGCM rainfall forecast, reducing both seasonal bias and root-mean-square error. On the other hand, the RSM-20 forecasts presented larger errors, with spatial patterns that resemble those of local topography. The better forecast of the position and width of the intertropical convergence zone (ITCZ) over the tropical Atlantic by the RSM-80 model is one of the principal reasons for the better forecast scores of the RSM-80 relative to the AGCM. The regional model improved the spatial as well as the temporal details of rainfall distribution, and also presented the minimum spread among the ensemble members. The statistics of synoptic-scale weather variability on seasonal timescales were best forecast with the regional 80-km model over the Nordeste. The possibility of forecasting the frequency distribution of dry and wet spells within the rainy season is encouraging.

  12. Development of a Tsunami Scenario Database for Marmara Sea

    NASA Astrophysics Data System (ADS)

    Ozer Sozdinler, Ceren; Necmioglu, Ocal; Meral Ozel, Nurcan

    2016-04-01

    Due to the very short travel times in Marmara Sea, a Tsunami Early Warning System (TEWS) has to be strongly coupled with the earthquake early warning system and should be supported with a pre-computed tsunami scenario database to be queried in near real-time based on the initial earthquake parameters. To address this problem, 30 different composite earthquake scenarios with maximum credible Mw values based on 32 fault segments have been identified to produce a detailed scenario database for all possible earthquakes in the Marmara Sea with a tsunamigenic potential. The bathy/topo data of Marmara Sea was prepared using GEBCO and ASTER data, bathymetric measurements along Bosphorus, Istanbul and Dardanelle, Canakkale and the coastline digitized from satellite images. The coarser domain in 90m-grid size was divided into 11 sub-regions having 30m-grid size in order to increase the data resolution and precision of the calculation results. The analyses were performed in nested domains with numerical model NAMIDANCE using non-linear shallow water equations. In order to cover all the residential areas, industrial facilities and touristic locations, more than 1000 numerical gauge points were selected along the coasts of Marmara Sea, which are located at water depth of 5 to 10m in finer domain. The distributions of tsunami hydrodynamic parameters were investigated together with the change of water surface elevations, current velocities, momentum fluxes and other important parameters at the gauge points. This work is funded by the project MARsite - New Directions in Seismic Hazard assessment through Focused Earth Observation in the Marmara Supersite (FP7-ENV.2012 6.4-2, Grant 308417 - see NH2.3/GMPV7.4/SM7.7) and supported by SATREPS-MarDim Project (Earthquake and Tsunami Disaster Mitigation in the Marmara Region and Disaster Education in Turkey) and JICA (Japan International Cooperation Agency). The authors would like to acknowledge Ms. Basak Firat for her assistance in preparation of the database.

  13. A Novel Multi-Scale Domain Overlapping CFD/STH Coupling Methodology for Multi-Dimensional Flows Relevant to Nuclear Applications

    NASA Astrophysics Data System (ADS)

    Grunloh, Timothy P.

    The objective of this dissertation is to develop a 3-D domain-overlapping coupling method that leverages the superior flow field resolution of the Computational Fluid Dynamics (CFD) code STAR-CCM+ and the fast execution of the System Thermal Hydraulic (STH) code TRACE to efficiently and accurately model thermal hydraulic transport properties in nuclear power plants under complex conditions of regulatory and economic importance. The primary contribution is the novel Stabilized Inertial Domain Overlapping (SIDO) coupling method, which allows for on-the-fly correction of TRACE solutions for local pressures and velocity profiles inside multi-dimensional regions based on the results of the CFD simulation. The method is found to outperform the more frequently-used domain decomposition coupling methods. An STH code such as TRACE is designed to simulate large, diverse component networks, requiring simplifications to the fluid flow equations for reasonable execution times. Empirical correlations are therefore required for many sub-grid processes. The coarse grids used by TRACE diminish sensitivity to small scale geometric details such as Reactor Pressure Vessel (RPV) internals. A CFD code such as STAR-CCM+ uses much finer computational meshes that are sensitive to the geometric details of reactor internals. In turbulent flows, it is infeasible to fully resolve the flow solution, but the correlations used to model turbulence are at a low level. The CFD code can therefore resolve smaller scale flow processes. The development of a 3-D coupling method was carried out with the intention of improving predictive capabilities of transport properties in the downcomer and lower plenum regions of an RPV in reactor safety calculations. These regions are responsible for the multi-dimensional mixing effects that determine the distribution at the core inlet of quantities with reactivity implications, such as fluid temperature and dissolved neutron absorber concentration.

  14. Spotlight-Mode Synthetic Aperture Radar Processing for High-Resolution Lunar Mapping

    NASA Technical Reports Server (NTRS)

    Harcke, Leif; Weintraub, Lawrence; Yun, Sang-Ho; Dickinson, Richard; Gurrola, Eric; Hensley, Scott; Marechal, Nicholas

    2010-01-01

    During the 2008-2009 year, the Goldstone Solar System Radar was upgraded to support radar mapping of the lunar poles at 4 m resolution. The finer resolution of the new system and the accompanying migration through resolution cells called for spotlight, rather than delay-Doppler, imaging techniques. A new pre-processing system supports fast-time Doppler removal and motion compensation to a point. Two spotlight imaging techniques which compensate for phase errors due to i) out of focus-plane motion of the radar and ii) local topography, have been implemented and tested. One is based on the polar format algorithm followed by a unique autofocus technique, the other is a full bistatic time-domain backprojection technique. The processing system yields imagery of the specified resolution. Products enabled by this new system include topographic mapping through radar interferometry, and change detection techniques (amplitude and coherent change) for geolocation of the NASA LCROSS mission impact site.

  15. A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration

    2017-11-01

    An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.

  16. High Resolution Wind Direction and Speed Information for Support of Fire Operations

    Treesearch

    B.W. Butler; J.M. Forthofer; M.A. Finney; L.S. Bradshaw; R. Stratton

    2006-01-01

    Computational Fluid Dynamics (CFD) technology has been used to model wind speed and direction in mountainous terrain at a relatively high resolution compared to other readily available technologies. The process termed “gridded wind” is not a forecast, but rather represents a method for calculating the influence of terrain on general wind flows. Gridded wind simulations...

  17. Development of high-resolution (250 m) historical daily gridded air temperature data using reanalysis and distributed sensor networks for the US northern Rocky Mountains

    Treesearch

    Zachary A. Holden; Alan Swanson; Anna E. Klene; John T. Abatzoglou; Solomon Z. Dobrowski; Samuel A. Cushman; John Squires; Gretchen G. Moisen; Jared W. Oyler

    2016-01-01

    Gridded temperature data sets are typically produced at spatial resolutions that cannot fully resolve fine-scale variation in surface air temperature in regions of complex topography. These data limitations have become increasingly important as scientists and managers attempt to understand and plan for potential climate change impacts. Here, we describe the...

  18. The R package 'icosa' for coarse resolution global triangular and penta-hexagonal gridding

    NASA Astrophysics Data System (ADS)

    Kocsis, Adam T.

    2017-04-01

    With the development of the internet and the computational power of personal computers, open source programming environments have become indispensable for science in the past decade. This includes the increase of the GIS capacity of the free R environment, which was originally developed for statistical analyses. The flexibility of R made it a preferred programming tool in a multitude of disciplines from the area of the biological and geological sciences. Many of these subdisciplines operate with incidence (occurrence) data that are in a large number of cases to be grained before further analyses can be conducted. This graining is executed mostly by gridding data to cells of a Gaussian grid of various resolutions to increase the density of data in a single unit of the analyses. This method has obvious shortcomings despite the ease of its application: well-known systematic biases are induced to cell sizes and shapes that can interfere with the results of statistical procedures, especially if the number of incidence points influences the metrics in question. The 'icosa' package employs a common method to overcome this obstacle by implementing grids with roughly equal cell sizes and shapes that are based on tessellated icosahedra. These grid objects are essentially polyhedra with xyz Cartesian vertex data that are linked to tables of faces and edges. At its current developmental stage, the package uses a single method of tessellation which balances grid cell size and shape distortions, but its structure allows the implementation of various other types of tessellation algorithms. The resolution of the grids can be set by the number of breakpoints inserted into a segment forming an edge of the original icosahedron. Both the triangular and their inverted penta-hexagonal grids are available for creation with the package. The package also incorporates functions to look up coordinates in the grid very effectively and data containers to link data to the grid structure. The classes defined in the package are communicating with classes of the 'sp' and 'raster' packages and functions are supplied that allow resolution change and type conversions. Three-dimensional rendering is made available with the 'rgl' package and two-dimensional projections can be calculated using 'sp' and 'rgdal'. The package was developed as part of a project funded by the Deutsche Forschungsgemeinschaft (KO - 5382/1-1).

  19. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) tape IJ grid pixel extraction processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACREG, extracted all pixels that are contained in a specific IJ grid section. The pixels, along with a header record are stored in a disk file defined by the user. The program will extract up to 99 IJ grid sections.

  20. Quantum interpolation for high-resolution sensing

    PubMed Central

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-01-01

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy. PMID:28196889

  1. Quantum interpolation for high-resolution sensing.

    PubMed

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-02-28

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy.

  2. Propagation-based phase-contrast tomography for high-resolution lung imaging with laboratory sources

    NASA Astrophysics Data System (ADS)

    Krenkel, Martin; Töpperwien, Mareike; Dullin, Christian; Alves, Frauke; Salditt, Tim

    2016-03-01

    We have performed high-resolution phase-contrast tomography on whole mice with a laboratory setup. Enabled by a high-brilliance liquid-metal-jet source, we show the feasibility of propagation-based phase contrast in local tomography even in the presence of strongly absorbing surrounding tissue as it is the case in small animal imaging of the lung. We demonstrate the technique by reconstructions of the mouse lung for two different fields of view, covering the whole organ, and a zoom to the local finer structure of terminal airways and alveoli. With a resolution of a few micrometers and the wide availability of the technique, studies of larger biological samples at the cellular level become possible.

  3. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    NASA Astrophysics Data System (ADS)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

    A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with Legg (2014) numerical experiments. We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.

  4. Evaluating MODIS snow products for modelling snowmelt runoff: Case study of the Rio Grande headwaters

    NASA Astrophysics Data System (ADS)

    Steele, Caitriana; Dialesandro, John; James, Darren; Elias, Emile; Rango, Albert; Bleiweiss, Max

    2017-12-01

    Snow-covered area (SCA) is a key variable in the Snowmelt-Runoff Model (SRM) and in other models for simulating discharge from snowmelt. Landsat Thematic Mapper (TM), Enhanced Thematic Mapper (ETM+) or Operational Land Imager (OLI) provide remotely sensed data at an appropriate spatial resolution for mapping SCA in small headwater basins, but the temporal resolution of the data is low and may not always provide sufficient cloud-free dates. The coarser spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) offers better temporal resolution, and in cloudy years MODIS data offer the best alternative for mapping snow cover when finer spatial resolution data are unavailable. However, MODIS' coarse spatial resolution (500 m) can obscure fine spatial patterning in snow cover and some MODIS products are not sensitive to end-of-season snow cover. In this study, we aimed to test MODIS snow products for use in simulating snowmelt runoff from smaller headwater basins by a) comparing maps of TM and MODIS-based SCA and b) determining how SRM streamflow simulations are changed by the different estimates of seasonal snow depletion. We compared gridded MODIS snow products (Collection 5 MOD10A1 fractional and binary SCA; SCA derived from Collection 6 MOD10A1 Normalised Difference Snow Index (NDSI) Snow Cover), and the MODIS Snow Covered-Area and Grain size retrieval (MODSCAG) canopy-corrected fractional SCA (SCAMG), with reference SCA maps (SCAREF) generated from binary classification of TM imagery. SCAMG showed strong agreement with SCAREF; excluding true negatives (where both methods agreed no snow was present), the median percent difference between SCAREF and SCAMG ranged between -2.4% and 4.7%. We simulated runoff for each of the four study years using SRM populated with and calibrated for snow depletion curves derived from SCAREF. We then substituted in each of the MODIS-derived depletion curves. With efficiency coefficients ranging between 0.73 and 0.93, SRM simulation results from the SCAMG runs yielded the best results of all the MODIS products and only slightly underestimated discharge volume (between 7 and 11% of measured annual discharge). SRM simulations that used SCA derived from Collection 6 NDSI Snow Cover also yielded promising results, with efficiency coefficients ranging between 0.73 and 0.91. In conclusion, when simulating snowmelt runoff from small basins (<4000 km2) with SRM, we recommend that users select either canopy-corrected MODSCAG or create their own site-specific products from the Collection 6 MOD10A1 NDSI.
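
    The agreement statistic quoted above (median percent difference excluding true negatives) is straightforward to reproduce; the sketch below is a minimal version assuming both maps are co-registered fractional SCA arrays, with synthetic data standing in for the TM reference and MODIS-derived products.

```python
# Sketch: median percent difference between a reference SCA map (e.g. from TM)
# and a MODIS-derived SCA map, excluding true negatives where both maps agree
# that essentially no snow is present. Arrays hold fractional SCA in [0, 1].
import numpy as np

def median_percent_difference(sca_ref, sca_mod, eps=1e-6):
    ref = np.asarray(sca_ref, dtype=float).ravel()
    mod = np.asarray(sca_mod, dtype=float).ravel()
    keep = ~((ref < eps) & (mod < eps))      # drop true negatives
    ref, mod = ref[keep], mod[keep]
    pct_diff = 100.0 * (mod - ref) / np.maximum(ref, eps)   # relative to reference
    return np.median(pct_diff)

# Example with synthetic, co-registered maps
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, size=(100, 100))
mod = np.clip(ref + rng.normal(0, 0.05, size=ref.shape), 0, 1)
print(median_percent_difference(ref, mod))
```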

  5. Regional Climate Simulation and Data Assimilation with Variable-Resolution GCMs

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.

    2002-01-01

    Variable-resolution GCMs using a global stretched grid (SG) with enhanced regional resolution over one or multiple areas of interest represent a viable new approach to regional climate/climate change and data assimilation studies and applications. The multiple areas of interest, at least one within each global quadrant, include the major global mountains and major global monsoonal circulations over North America, South America, India-China, and Australia. They also can include the polar domains, and the European and African regions. The SG-approach provides an efficient regional downscaling to mesoscales, and it is an ideal tool for representing consistent interactions of global/large- and regional/meso-scales while preserving the high quality of global circulation. Basically, the SG-GCM simulations are no different from those of the traditional uniform-grid GCM simulations besides using a variable-resolution grid. Several existing SG-GCMs developed by major centers and groups are briefly described. The major discussion is based on the GEOS (Goddard Earth Observing System) SG-GCM regional climate simulations.

  6. Evaluation of tropical channel refinement using MPAS-A aquaplanet simulations

    DOE PAGES

    Martini, Matus N.; Gustafson, Jr., William I.; O'Brien, Travis A.; ...

    2015-09-13

    Climate models with variable-resolution grids offer a computationally less expensive way to provide more detailed information at regional scales and increased accuracy for processes that cannot be resolved by a coarser grid. This study uses the Model for Prediction Across Scales-Atmosphere (MPAS-A), consisting of a nonhydrostatic dynamical core and a subset of Advanced Research Weather Research and Forecasting (ARW-WRF) model atmospheric physics that have been modified to include the Community Atmosphere Model version 5 (CAM5) cloud fraction parameterization, to investigate the potential benefits of using increased resolution in a tropical channel. The simulations are performed with an idealized aquaplanet configuration using two quasi-uniform grids, with 30 km and 240 km grid spacing, and two variable-resolution grids spanning the same grid spacing range: one with a narrow (20°S–20°N) and one with a wide (30°S–30°N) tropical channel refinement. Results show that increasing resolution in the tropics impacts both the tropical and extratropical circulation. Compared to the quasi-uniform coarse grid, the narrow-channel simulation exhibits stronger updrafts in the Ferrel cell as well as in the middle of the upward branch of the Hadley cell. The wider tropical channel has a closer correspondence to the 30 km quasi-uniform simulation. However, the total atmospheric poleward energy transports are similar in all simulations. The largest differences are in the low-level cloudiness. The refined channel simulations show improved tropical and extratropical precipitation relative to the global 240 km simulation when compared to the global 30 km simulation. All simulations have a single ITCZ. Furthermore, the relatively small differences in mean global and tropical precipitation rates among the simulations are a promising result, and the evidence points to the tropical channel being an effective method for avoiding the extraneous numerical artifacts seen in earlier studies that refined only a portion of the tropics.

  7. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surfzone to the shoreline incurs frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best-estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and in turn increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can vary in temporal and spatial resolution in ways that do not match the desired model parameters and therefore could lead to a reduction in the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements using sonic altimeters.
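
    For readers unfamiliar with the inversion machinery, the sketch below shows a generic stochastic Ensemble Kalman Filter analysis step of the kind such frameworks build on; the state vector, observation operator, and error levels are hypothetical, and it is not the specific implementation used at the Field Research Facility.

```python
# Sketch: one stochastic Ensemble Kalman Filter analysis step. Each ensemble
# member is a bathymetry vector; observations are mapped to observation space
# through a (here linear) observation operator H.
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
    n_obs, n_mem = len(obs), ensemble.shape[1]
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                                   # state anomalies
    HA = H @ A                                            # obs-space anomalies
    R = np.eye(n_obs) * obs_err_var
    # Kalman gain from ensemble covariances: K = P H^T (H P H^T + R)^-1
    P_HT = A @ HA.T / (n_mem - 1)
    S = HA @ HA.T / (n_mem - 1) + R
    K = P_HT @ np.linalg.solve(S, np.eye(n_obs))
    # Perturbed observations for each member (stochastic EnKF)
    obs_pert = obs[:, None] + np.random.normal(0, np.sqrt(obs_err_var), (n_obs, n_mem))
    return ensemble + K @ (obs_pert - H @ ensemble)

# Tiny example: 5-point bathymetry, 2 observation locations, 20 members
rng = np.random.default_rng(1)
ens = rng.normal(5.0, 0.5, size=(5, 20))
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0
updated = enkf_update(ens, np.array([4.2, 5.8]), 0.1**2, H)
print(updated.mean(axis=1))
```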

  8. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    NASA Astrophysics Data System (ADS)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications.

    Program summary:
    Program title: SWsolver
    Catalogue identifier: AEGY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL v3
    No. of lines in distributed program, including test data, etc.: 59 168
    No. of bytes in distributed program, including test data, etc.: 453 409
    Distribution format: tar.gz
    Programming language: C, CUDA
    Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
    Operating system: Linux
    Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
    RAM: Tested on problems requiring up to 4 GB per compute node.
    Classification: 12
    External routines: MPI, CUDA, IBM Cell SDK
    Nature of problem: MPI-parallel simulation of shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
    Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
    Additional comments: Sub-program numdiff is used for the test run.

  9. Effects of Stencil Width on Surface Ocean Geostrophic Velocity and Vorticity Estimation from Gridded Satellite Altimeter Data

    DTIC Science & Technology

    2012-03-17

    ... grid points are used in the calculation, so that the grid spacing is 8 times larger than on the original grid. The 3-point stencil differences are sig... that the difference between narrow and wide stencil estimates increases over that found on the original higher resolution grid. Interpolation of the ...

  10. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
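
    The Jacobian-free Newton-Krylov approach mentioned above replaces explicit Jacobian assembly with a finite-difference directional derivative of the residual inside a Krylov solver. The sketch below shows that core step on a toy nonlinear system; it omits the physics-based preconditioning and SAMR machinery described in the record.

```python
# Sketch: Jacobian-free Newton-Krylov. The Jacobian-vector product J(u)v is
# approximated by a finite difference of the residual, so the Jacobian is
# never formed; GMRES solves each Newton correction.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # Toy nonlinear system: F(u) = u^3 - 1, applied elementwise.
    return u**3 - 1.0

def jfnk_solve(u0, tol=1e-10, max_newton=20, eps=1e-7):
    u = u0.copy()
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        def jv(v):
            # Finite-difference approximation of J(u) v
            return (residual(u + eps * v) - F) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -F, atol=1e-12)   # Krylov solve for the Newton step
        u = u + du
    return u

print(jfnk_solve(np.full(5, 2.0)))   # converges to a vector of ones
```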

  11. A Two-Stage Procedure Toward the Efficient Implementation of PANS and Other Hybrid Turbulence Models

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Girimaji, Sharath S.

    2004-01-01

    The main objective of this article is to introduce and to show the implementation of a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for Partially Averaged Navier-Stokes (PANS) and other hybrid models. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The first step is to solve the unsteady or steady Reynolds Averaged Navier-Stokes (URANS/RANS) equations. From this preprocessing step, the turbulence length-scale field is obtained. This is then used to compute the characteristic length-scale ratio between the turbulence scale and the grid spacing. Based on this ratio, we can assess the finest scale resolution that a given grid for a given flow can support. Along with other additional criteria, we are able to analytically identify the appropriate hybrid solver resolution for different regions of the flow. This procedure removes the grid dependency issue that affects the results produced by different hybrid procedures in solving unsteady flows. The formulation, implementation methodology, and validation example are presented. We implemented this capability in a production Computational Fluid Dynamics (CFD) code, PAB3D, for the simulation of unsteady flows.
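
    A minimal sketch of the length-scale comparison described above: the turbulence length scale is formed from the precursor RANS k and ε fields and compared with the local grid spacing to bound the resolvable scale content. The 2/3-power form and c_μ-based prefactor below follow the commonly quoted PANS estimate and are used here as assumptions, not as the paper's exact formulation.

```python
# Sketch: estimate a PANS-style resolution-control parameter f_k from a
# precursor RANS solution. Lambda = k^(3/2)/eps is the turbulence length
# scale; Delta is the local grid spacing. The exponent and prefactor are
# the commonly quoted estimate, treated here as assumptions.
import numpy as np

def pans_fk(k, eps, delta, c_mu=0.09, fk_min=0.1):
    lam = np.maximum(k, 1e-12)**1.5 / np.maximum(eps, 1e-12)
    fk = (1.0 / np.sqrt(c_mu)) * (delta / lam)**(2.0 / 3.0)
    return np.clip(fk, fk_min, 1.0)

# Example: with k = 1 and eps = 10, the turbulence length scale is 0.1
k, eps = 1.0, 10.0
print(pans_fk(k, eps, delta=0.001))  # fine grid   -> smaller f_k (~0.15)
print(pans_fk(k, eps, delta=0.5))    # coarse grid -> clipped to 1.0 (RANS limit)
```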

  12. Development of a gridded meteorological dataset over Java island, Indonesia 1985–2014

    PubMed Central

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-01-01

    We describe a gridded daily meteorology dataset consisting of precipitation, minimum and maximum temperature over Java Island, Indonesia at 0.125°×0.125° (~14 km) resolution spanning 30 years from 1985–2014. Importantly, this data set represents a marked improvement from existing gridded data sets over Java with higher spatial resolution, derived exclusively from ground-based observations unlike existing satellite or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters) restricted to only those stations including at least 3,650 days (~10 years) of valid data. We employed MSWEP and CHIRPS rainfall products in the cross-validation. It shows that the gridded rainfall presented here produces the most reasonable performance. Visual inspection reveals an increasing performance of gridded precipitation from grid, watershed to island scale. The data set, stored in a network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology. PMID:28534871
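
    The gridding step described above reduces to inverse-distance weighting within a search radius. The sketch below evaluates one grid point with the quoted parameters (r = 25 km, α = 3); station coordinates are treated as planar kilometres for brevity, whereas a full implementation over Java would use great-circle distances.

```python
# Sketch: Inverse Distance Weighting of station values to one grid point,
# with a 25 km search radius and power alpha = 3 as in the record above.
import numpy as np

def idw_at_point(xy_grid, xy_stations, values, radius_km=25.0, alpha=3.0):
    d = np.hypot(*(np.asarray(xy_stations) - np.asarray(xy_grid)).T)
    inside = d <= radius_km
    if not np.any(inside):
        return np.nan                       # no station within the search radius
    d = np.maximum(d[inside], 1e-6)         # avoid division by zero
    w = 1.0 / d**alpha
    return np.sum(w * np.asarray(values)[inside]) / np.sum(w)

# Example: three stations (km coordinates) around a grid point at the origin;
# the last station lies outside the 25 km radius and is ignored.
stations = [(5.0, 0.0), (0.0, 10.0), (30.0, 0.0)]
rain_mm = [12.0, 8.0, 50.0]
print(idw_at_point((0.0, 0.0), stations, rain_mm))
```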

  13. Aggregating pixel-level basal area predictions derived from LiDAR data to industrial forest stands in North-Central Idaho

    Treesearch

    Andrew T. Hudak; Jeffrey S. Evans; Nicholas L. Crookston; Michael J. Falkowski; Brant K. Steigers; Rob Taylor; Halli Hemingway

    2008-01-01

    Stand exams are the principal means by which timber companies monitor and manage their forested lands. Airborne LiDAR surveys sample forest stands at much finer spatial resolution and broader spatial extent than is practical on the ground. In this paper, we developed models that leverage spatially intensive and extensive LiDAR data and a stratified random sample of...

  14. Designing an Error Resolution Checklist for a Shared Manned-Unmanned Environment

    DTIC Science & Technology

    2010-06-01

  15. EMAG2: A 2-arc min resolution Earth Magnetic Anomaly Grid compiled from satellite, airborne, and marine magnetic measurements

    USGS Publications Warehouse

    Maus, S.; Barckhausen, U.; Berkenbosch, H.; Bournas, N.; Brozena, J.; Childers, V.; Dostaler, F.; Fairhead, J.D.; Finn, C.; von Frese, R.R.B; Gaina, C.; Golynsky, S.; Kucks, R.; Lu, Hai; Milligan, P.; Mogren, S.; Muller, R.D.; Olesen, O.; Pilkington, M.; Saltus, R.; Schreckenberger, B.; Thebault, E.; Tontini, F.C.

    2009-01-01

    A global Earth Magnetic Anomaly Grid (EMAG2) has been compiled from satellite, ship, and airborne magnetic measurements. EMAG2 is a significant update of our previous candidate grid for the World Digital Magnetic Anomaly Map. The resolution has been improved from 3 arc min to 2 arc min, and the altitude has been reduced from 5 km to 4 km above the geoid. Additional grid and track line data have been included, both over land and the oceans. Wherever available, the original shipborne and airborne data were used instead of precompiled oceanic magnetic grids. Interpolation between sparse track lines in the oceans was improved by directional gridding and extrapolation, based on an oceanic crustal age model. The longest wavelengths (>330 km) were replaced with the latest CHAMP satellite magnetic field model MF6. EMAG2 is available at http://geomag.org/models/EMAG2 and for permanent archive at http://earthref.org/cgi-bin/er.cgi?s=erda.cgi?n=970. © 2009 by the American Geophysical Union.

  16. The Effects of Dissipation and Coarse Grid Resolution for Multigrid in Flow Problems

    NASA Technical Reports Server (NTRS)

    Eliasson, Peter; Engquist, Bjoern

    1996-01-01

    The objective of this paper is to investigate the effects of the numerical dissipation and the resolution of the solution on coarser grids for multigrid with the Euler equation approximations. The convergence is accomplished by multi-stage explicit time-stepping to steady state accelerated by FAS multigrid. A theoretical investigation is carried out for linear hyperbolic equations in one and two dimensions. The spectra reveal that, for stability and hence robustness of spatial discretizations with a small amount of numerical dissipation, the grid transfer operators have to be accurate enough and the smoother of low temporal accuracy. Numerical results give grid-independent convergence in one dimension. For two-dimensional problems with a small amount of numerical dissipation, however, only a few grid levels contribute to an increased speed of convergence. This is explained by the small numerical dissipation leading to dispersion. Increasing the mesh density, and hence making the problem over-resolved, increases the number of mesh levels contributing to an increased speed of convergence. If the steady state equations are elliptic, all grid levels contribute to the convergence regardless of the mesh density.

  17. Analysis of retarding field energy analyzer transmission by simulation of ion trajectories

    NASA Astrophysics Data System (ADS)

    van de Ven, T. H. M.; de Meijere, C. A.; van der Horst, R. M.; van Kampen, M.; Banine, V. Y.; Beckers, J.

    2018-04-01

    Retarding field energy analyzers (RFEAs) are used routinely for the measurement of ion energy distribution functions. By contrast, their ability to measure ion flux densities has been considered unreliable because of lack of knowledge about the effective transmission of the RFEA grids. In this work, we simulate the ion trajectories through a three-gridded RFEA using the simulation software SIMION. Using idealized test cases, it is shown that at high ion energy (i.e., >100 eV) the transmission is equal to the optical transmission rather than the product of the individual grid transparencies. Below 20 eV, ion trajectories are strongly influenced by the electric fields in between the grids. In this region, grid alignment and ion focusing effects contribute to fluctuations in transmission with ion energy. Subsequently the model has been used to simulate the transmission and energy resolution of an experimental RFEA probe. Grid misalignments reduce the transmission fluctuations at low energy. The model predicts the minimum energy resolution, which has been confirmed experimentally by irradiating the probe with a beam of ions with a small energy bandwidth.
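
    The distinction drawn above between optical transmission and the product of individual grid transparencies amounts to simple arithmetic, illustrated below with made-up numbers.

```python
# Sketch: why the high-energy transmission of a multi-grid RFEA can equal the
# optical (line-of-sight) transmission rather than the product of the single
# grid transparencies. Numbers are illustrative, not from the paper.
open_fraction = 0.9     # transparency of one grid
n_grids = 3

product_transmission = open_fraction ** n_grids   # 0.729: grids treated as uncorrelated
aligned_transmission = open_fraction              # 0.9: aligned wires shadow each other

print(product_transmission, aligned_transmission)
```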

  18. Influence of Scale Effect and Model Performance in Downscaling ASTER Land Surface Temperatures to a Very High Spatial Resolution in an Agricultural Area

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Li, G.; Liu, S.; Zhan, W.; Zhang, X.

    2015-12-01

    At present, land surface temperatures (LSTs) can be generated from thermal infrared remote sensing with spatial resolutions from ~100 m to tens of kilometers. However, LSTs with high spatial resolution, e.g. tens of meters, are still lacking. The purpose of LST downscaling is to generate LSTs with finer spatial resolutions than their native spatial resolutions. Statistical linear or nonlinear regression models are most frequently used for LST downscaling. The basic assumption of these models is a scale-invariant relationship between LST and its descriptors, an assumption that has been questioned but rarely investigated. In addition, few studies have addressed downscaling satellite LST or TIR data to a high spatial resolution, i.e. better than 100 m or even finer. This lack of high-spatial-resolution LSTs cannot satisfy the requirements of applications such as evapotranspiration mapping at the field scale. By selecting a dynamically developing agricultural oasis as the study area, the aim of this study is to downscale the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) LSTs to 15 m, to satisfy the requirement of evapotranspiration mapping at the field scale. Twelve ASTER images from May to September in 2012, covering the entire growth stage of maize, were selected. Four statistical models were evaluated, including one global model, one piecewise model, and two local models. The influence of the scale effect in downscaling LST was quantified. The downscaled LSTs were evaluated in terms of accuracy and image quality. Results demonstrate that the influence of the scale effect varies with the model and the maize growth stage. A significant influence of about -4 K to 6 K existed at the early stage, and a weaker influence existed in the middle stage. When compared with the ground-measured LSTs, the downscaled LSTs resulting from the global and piecewise models yielded higher accuracies and better image quality than those from the local models. In addition to the vegetation indices, the surface albedo is an important descriptor for downscaling LST through explaining its spatial variation induced by soil moisture.

  19. A Meteorological Model's Dependence on Radiation Update Frequency

    NASA Technical Reports Server (NTRS)

    Eastman, Joseph L.; Peters-Lidard, Christa; Tao, Wei-Kuo; Kumar, Sujay; Tian, Yudong; Lang, Stephen E.; Zeng, Xiping

    2004-01-01

    Numerical weather models are used to simulate circulations in the atmosphere, including clouds and precipitation, by applying a set of mathematical equations over a three-dimensional grid. The grid is composed of discrete points at which the meteorological variables are defined. As computing power continues to rise these models are being used at finer grid spacing, but they must still cover a wide range of scales. Some of the physics that must be accounted for in the model cannot be explicitly resolved, and their effects, therefore, must be estimated or "parameterized". Some of these parameterizations are computationally expensive. To alleviate the problem, they are not always updated at the time resolution of the model, with the assumption being that the impact will be small. In this study, a coupled land-atmosphere model is used to assess the impact of less frequent updates of the computationally expensive radiation physics for a case on June 6, 2002, that occurred during a field experiment over the central plains known as the International H2O Project (IHOP). The model was tested using both the original conditions, which were dry, and with modified conditions wherein moisture was added to the lower part of the atmosphere to produce clouds and precipitation (i.e., a wet case). For each of the conditions (i.e., dry and wet), four sets of experiments were conducted wherein the model was run for a period of 24 hours and the radiation fields (including both incoming solar and outgoing longwave) were updated every 1, 3, 10, and 100 time steps. Statistical tests indicated that average quantities of surface variables for both the dry and wet cases were the same for the various update frequencies. However, spatially the results could be quite different, especially in the wet case after it began to rain. The near-surface wind field was found to be different most of the time, even for the dry case. In the wet case, rain intensities and average vertical profiles of heating associated with cloudy areas were found to differ for the various radiation update frequencies. The latter implies that the mean state of the model could be different as a result of not updating the radiation fields every time step and has important implications for longer-term climate studies.
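
    The update-frequency experiment described above amounts to calling the radiation scheme only every N dynamics steps and reusing the cached tendencies in between. The sketch below shows that control flow with placeholder physics; it is not the coupled model's actual interface.

```python
# Sketch: call an expensive radiation parameterization only every
# `rad_update_every` time steps and reuse the cached tendencies in between.
# The physics routines are placeholders, not a real model interface.
def expensive_radiation(state):
    return {"sw_heating": 0.1, "lw_cooling": -0.05}   # placeholder tendencies

def cheap_dynamics_step(state, rad_tendency):
    state["T"] += rad_tendency["sw_heating"] + rad_tendency["lw_cooling"]
    return state

def integrate(state, n_steps, rad_update_every=10):
    rad = None
    for step in range(n_steps):
        if step % rad_update_every == 0:       # 1 = every step; 3, 10, 100, ...
            rad = expensive_radiation(state)   # refresh cached radiation fields
        state = cheap_dynamics_step(state, rad)
    return state

print(integrate({"T": 280.0}, n_steps=100, rad_update_every=10))
```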

  20. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    NASA Astrophysics Data System (ADS)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

    A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, as compared either to an exact solution or, when an analytic solution is not available, to a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO for the fixed computation cost on the test problems considered here.
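
    The self-convergence rates reported in studies like this one are usually computed from error norms on successively refined grids via p = log(e_coarse/e_fine)/log(r). A minimal sketch, with illustrative error values rather than numbers from the paper:

```python
# Sketch: observed order of accuracy from error norms on two grids with
# refinement ratio r, p = log(e_coarse / e_fine) / log(r).
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# First-order behaviour near discontinuities: halving h roughly halves the error.
print(observed_order(2.0e-2, 1.0e-2))          # ~1.0
# Fifth-order behaviour on a smooth problem: error drops by ~2^5 per refinement.
print(observed_order(3.2e-4, 1.0e-5))          # ~5.0
```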

  1. JIGSAW-GEO (1.0): Locally Orthogonal Staggered Unstructured Grid Generation for General Circulation Modelling on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2017-01-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  2. JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren

    2017-06-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.; Cole, C.R.; Arnett, R.C.

    The Hanford Pathline Calculational Program (HPCP) is a numerical model developed to predict the movement of fluid particles from one location to another within the Hanford or similar groundwater systems. As such it can be considered a simple transport model wherein only advective changes are considered. Application of the numerical HPCP to test cases for which semianalytical results are obtainable showed that, with reasonable time steps and grid spacing, HPCP gives good agreement with the semianalytical solution. The accuracy of the HPCP results is most sensitive in areas near steep or rapidly changing potential gradients and may require finer grid spacing in those areas than for the groundwater system as a whole. Initial applications of HPCP to the Hanford groundwater flow regime show that significant differences (improvements) in the predictions of fluid particle movement are obtainable with the pathline approach (changing groundwater potential or water table surface) as opposed to the streamline approach (unchanging potential or water table surface) used in past Hanford groundwater analyses. This report documents the capability developed for estimating groundwater travel times from the Hanford high-level waste areas to the Columbia River at different water table levels.
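
    The pathline-versus-streamline distinction above comes down to whether the velocity field is allowed to evolve while a particle is advected. The sketch below contrasts the two with a synthetic velocity field and simple forward-Euler stepping; it is not the HPCP algorithm itself.

```python
# Sketch: advect a particle through a time-varying velocity field (pathline)
# or through the field frozen at t = 0 (streamline). Forward Euler is used
# for brevity; the velocity field is synthetic.
def velocity(x, y, t):
    return (1.0 + 0.5 * t, 0.2)          # u grows with time, v constant

def track(x0, y0, dt, n_steps, frozen=False):
    x, y, t = x0, y0, 0.0
    for _ in range(n_steps):
        u, v = velocity(x, y, 0.0 if frozen else t)
        x, y, t = x + u * dt, y + v * dt, t + dt
    return x, y

print("pathline  :", track(0.0, 0.0, 0.1, 50, frozen=False))
print("streamline:", track(0.0, 0.0, 0.1, 50, frozen=True))
```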

  4. Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.

  5. A Structured-Grid Quality Measure for Simulated Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A structured-grid quality measure is proposed, combining three traditional measurements: intersection angles, stretching, and curvature. Quality assesses whether the grid generated provides the best possible tradeoffs in grid stretching and skewness that enable accurate flow predictions, whereas the grid density is assumed to be a constraint imposed by the available computational resources and the desired resolution of the flow field. The usefulness of this quality measure is assessed by comparing heat transfer predictions from grid convergence studies for grids of varying quality in the range of [0.6-0.8] on an 8° half-angle sphere-cone, at laminar, perfect gas, Mach 10 wind tunnel conditions.

  6. Efficacy of Electrocuting Devices to Catch Tsetse Flies (Glossinidae) and Other Diptera

    PubMed Central

    Vale, Glyn A.; Hargrove, John W.; Cullis, N. Alan; Chamisa, Andrew; Torr, Stephen J.

    2015-01-01

    Background The behaviour of insect vectors has an important bearing on the epidemiology of the diseases they transmit, and on the opportunities for vector control. Two sorts of electrocuting device have been particularly useful for studying the behaviour of tsetse flies (Glossina spp), the vectors of the trypanosomes that cause sleeping sickness in humans and nagana in livestock. Such devices consist of grids on netting (E-net) to catch tsetse in flight, or on cloth (E-cloth) to catch alighting flies. Catches are most meaningful when the devices catch as many as possible of the flies potentially available to them, and when the proportion caught is known. There have been conflicting indications for the catching efficiency, depending on whether the assessments were made by the naked eye or assisted by video recordings. Methodology/Principal Findings Using grids of 0.5m2 in Zimbabwe, we developed catch methods of studying the efficiency of E-nets and E-cloth for tsetse, using improved transformers to supply the grids with electrical pulses of ~40kV. At energies per pulse of 35–215mJ, the efficiency was enhanced by reducing the pulse interval from 3200 to 1ms. Efficiency was low at 35mJ per pulse, but there seemed no benefit of increasing the energy beyond 70mJ. Catches at E-nets declined when the fine netting normally used became either coarser or much finer, and increased when the grid frame was moved from 2.5cm to 27.5cm from the grid. Data for muscoids and tabanids were roughly comparable to those for tsetse. Conclusion/Significance The catch method of studying efficiency is useful for supplementing and extending video methods. Specifications are suggested for E-nets and E-cloth that are ~95% efficient and suitable for estimating the absolute numbers of available flies. Grids that are less efficient, but more economical, are recommended for studies of relative numbers available to various baits. PMID:26505202

  7. Time-marching multi-grid seismic tomography

    NASA Astrophysics Data System (ADS)

    Tong, P.; Yang, D.; Liu, Q.

    2016-12-01

    From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.

  8. Efficacy of Electrocuting Devices to Catch Tsetse Flies (Glossinidae) and Other Diptera.

    PubMed

    Vale, Glyn A; Hargrove, John W; Cullis, N Alan; Chamisa, Andrew; Torr, Stephen J

    2015-10-01

    The behaviour of insect vectors has an important bearing on the epidemiology of the diseases they transmit, and on the opportunities for vector control. Two sorts of electrocuting device have been particularly useful for studying the behaviour of tsetse flies (Glossina spp), the vectors of the trypanosomes that cause sleeping sickness in humans and nagana in livestock. Such devices consist of grids on netting (E-net) to catch tsetse in flight, or on cloth (E-cloth) to catch alighting flies. Catches are most meaningful when the devices catch as many as possible of the flies potentially available to them, and when the proportion caught is known. There have been conflicting indications for the catching efficiency, depending on whether the assessments were made by the naked eye or assisted by video recordings. Using grids of 0.5m2 in Zimbabwe, we developed catch methods of studying the efficiency of E-nets and E-cloth for tsetse, using improved transformers to supply the grids with electrical pulses of ~40kV. At energies per pulse of 35-215mJ, the efficiency was enhanced by reducing the pulse interval from 3200 to 1ms. Efficiency was low at 35mJ per pulse, but there seemed no benefit of increasing the energy beyond 70mJ. Catches at E-nets declined when the fine netting normally used became either coarser or much finer, and increased when the grid frame was moved from 2.5cm to 27.5cm from the grid. Data for muscoids and tabanids were roughly comparable to those for tsetse. The catch method of studying efficiency is useful for supplementing and extending video methods. Specifications are suggested for E-nets and E-cloth that are ~95% efficient and suitable for estimating the absolute numbers of available flies. Grids that are less efficient, but more economical, are recommended for studies of relative numbers available to various baits.

  9. Selective Capture of Histidine-tagged Proteins from Cell Lysates Using TEM grids Modified with NTA-Graphene Oxide

    NASA Astrophysics Data System (ADS)

    Benjamin, Christopher J.; Wright, Kyle J.; Bolton, Scott C.; Hyun, Seok-Hee; Krynski, Kyle; Grover, Mahima; Yu, Guimei; Guo, Fei; Kinzer-Ursem, Tamara L.; Jiang, Wen; Thompson, David H.

    2016-10-01

    We report the fabrication of transmission electron microscopy (TEM) grids bearing graphene oxide (GO) sheets that have been modified with Nα, Nα-dicarboxymethyllysine (NTA) and deactivating agents to block non-selective binding between GO-NTA sheets and non-target proteins. The resulting GO-NTA-coated grids with these improved antifouling properties were then used to isolate His6-T7 bacteriophage and His6-GroEL directly from cell lysates. To demonstrate the utility and simplified workflow enabled by these grids, we performed cryo-electron microscopy (cryo-EM) of His6-GroEL obtained from clarified E. coli lysates. Single particle analysis produced a 3D map with a gold standard resolution of 8.1 Å. We infer from these findings that TEM grids modified with GO-NTA are a useful tool that reduces background and improves both the speed and simplicity of biological sample preparation for high-resolution structure elucidation by cryo-EM.

  10. Selective Capture of Histidine-tagged Proteins from Cell Lysates Using TEM grids Modified with NTA-Graphene Oxide.

    PubMed

    Benjamin, Christopher J; Wright, Kyle J; Bolton, Scott C; Hyun, Seok-Hee; Krynski, Kyle; Grover, Mahima; Yu, Guimei; Guo, Fei; Kinzer-Ursem, Tamara L; Jiang, Wen; Thompson, David H

    2016-10-17

    We report the fabrication of transmission electron microscopy (TEM) grids bearing graphene oxide (GO) sheets that have been modified with Nα, Nα-dicarboxymethyllysine (NTA) and deactivating agents to block non-selective binding between GO-NTA sheets and non-target proteins. The resulting GO-NTA-coated grids with these improved antifouling properties were then used to isolate His6-T7 bacteriophage and His6-GroEL directly from cell lysates. To demonstrate the utility and simplified workflow enabled by these grids, we performed cryo-electron microscopy (cryo-EM) of His6-GroEL obtained from clarified E. coli lysates. Single particle analysis produced a 3D map with a gold standard resolution of 8.1 Å. We infer from these findings that TEM grids modified with GO-NTA are a useful tool that reduces background and improves both the speed and simplicity of biological sample preparation for high-resolution structure elucidation by cryo-EM.

  11. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  12. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
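
    The Cauchy-Kowalewski step referred to in both records replaces time derivatives with spatial ones through the governing equation, so a Taylor series in time can be summed from spatial derivatives alone. A minimal sketch for linear advection (u_t = -a u_x), checked against the exact solution; the high-order Hermite interpolation that would supply the spatial derivatives in practice is not shown.

```python
# Sketch: Cauchy-Kowalewski time advancement for linear advection u_t = -a u_x.
# Time derivatives are replaced by spatial ones, d^n u/dt^n = (-a)^n d^n u/dx^n,
# so a point value can be advanced by a Taylor series using only spatial
# derivatives (supplied analytically here for a sine wave as a check).
import math

def ck_advance(u_derivs_x, a, dt):
    """u_derivs_x[n] = n-th spatial derivative of u at the point, n = 0..N."""
    return sum(((-a * dt) ** n / math.factorial(n)) * d
               for n, d in enumerate(u_derivs_x))

# Check against the exact solution u(x, t) = sin(x - a*t) at x = 0
a, dt, N = 1.0, 0.3, 8
derivs = [math.sin(n * math.pi / 2) for n in range(N + 1)]   # d^n/dx^n sin(x) at x = 0
print(ck_advance(derivs, a, dt), math.sin(-a * dt))          # nearly equal
```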

  13. Optically sectioned fluorescence endomicroscopy with hybrid-illumination imaging through a flexible fiber bundle.

    PubMed

    Santos, Silvia; Chu, Kengyeh K; Lim, Daryl; Bozinovic, Nenad; Ford, Tim N; Hourtoule, Claire; Bartoo, Aaron C; Singh, Satish K; Mertz, Jerome

    2009-01-01

    We present an endomicroscope apparatus that exhibits out-of-focus background rejection based on wide-field illumination through a flexible imaging fiber bundle. Our technique, called HiLo microscopy, involves acquiring two images, one with grid-pattern illumination and another with standard uniform illumination. An evaluation of the image contrast with grid-pattern illumination provides an optically sectioned image with low resolution. This is complemented with high-resolution information from the uniform illumination image, leading to a full-resolution image that is optically sectioned. HiLo endomicroscope movies are presented of fluorescently labeled rat colonic mucosa.

  14. Optically sectioned fluorescence endomicroscopy with hybrid-illumination imaging through a flexible fiber bundle

    NASA Astrophysics Data System (ADS)

    Santos, Silvia; Chu, Kengyeh K.; Lim, Daryl; Bozinovic, Nenad; Ford, Tim N.; Hourtoule, Claire; Bartoo, Aaron C.; Singh, Satish K.; Mertz, Jerome

    2009-05-01

    We present an endomicroscope apparatus that exhibits out-of-focus background rejection based on wide-field illumination through a flexible imaging fiber bundle. Our technique, called HiLo microscopy, involves acquiring two images, one with grid-pattern illumination and another with standard uniform illumination. An evaluation of the image contrast with grid-pattern illumination provides an optically sectioned image with low resolution. This is complemented with high-resolution information from the uniform illumination image, leading to a full-resolution image that is optically sectioned. HiLo endomicroscope movies are presented of fluorescently labeled rat colonic mucosa.
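
    A heavily simplified sketch of the HiLo fusion idea described in these two records: low spatial frequencies are taken from a sectioned estimate built on the local contrast of the grid-illuminated image, and high frequencies from the uniform-illumination image. The contrast measure, filter width, and the omitted scaling factor that seamlessly merges the two bands are illustrative assumptions, not the published algorithm parameters.

```python
# Sketch of the HiLo fusion idea: low-pass content from a contrast-weighted,
# optically sectioned estimate plus high-pass content from the uniform image.
# The band-merging scale factor used in the published method is omitted.
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo_fuse(uniform_img, grid_img, sigma=4.0):
    uniform = np.asarray(uniform_img, dtype=float)
    grid = np.asarray(grid_img, dtype=float)
    # Crude sectioning weight: local contrast of the grid-illuminated image
    local_mean = gaussian_filter(grid, sigma)
    local_std = np.sqrt(np.maximum(gaussian_filter(grid**2, sigma) - local_mean**2, 0))
    lo = gaussian_filter(local_std * uniform, sigma)   # low-pass, sectioned
    hi = uniform - gaussian_filter(uniform, sigma)     # high-pass, full resolution
    return lo + hi

# Example on synthetic images
rng = np.random.default_rng(0)
img_uniform = rng.random((64, 64))
img_grid = img_uniform * (0.5 + 0.5 * rng.random((64, 64)))
fused = hilo_fuse(img_uniform, img_grid)
```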

  15. A Composite Medium Approximation for Moisture Tension-Dependent Anisotropy in Unsaturated Layered Sediments

    NASA Astrophysics Data System (ADS)

    Pruess, K.

    2001-12-01

    Sedimentary formations often have a layered structure in which hydrogeologic properties have substantially larger correlation length in the bedding plane than perpendicular to it. Laboratory and field experiments and observations have shown that even small-scale layering, down to millimeter-size laminations, can substantially alter and impede the downward migration of infiltrating liquids, while enhancing lateral flow. The fundamental mechanism is that of a capillary barrier: at increasingly negative moisture tension (capillary suction pressure), coarse-grained layers with large pores desaturate more quickly than finer-grained media. This strongly reduces the hydraulic conductivity of the coarser (higher saturated hydraulic conductivity) layers, which then act as barriers to downward flow, forcing water to accumulate and spread near the bottom of the overlying finer-grained material. We present a "composite medium approximation" (COMA) for anisotropic flow behavior on a typical grid block scale (0.1 - 1 m or larger) in finite-difference models. On this scale the medium is conceptualized as consisting of homogeneous horizontal layers with uniform thickness, and capillary equilibrium is assumed to prevail locally. Directionally-dependent relative permeabilities are obtained by considering horizontal flow to proceed via "conductors in parallel," while vertical flow involves "resistors in series." The model is formulated for the general case of N layers, and implementation of a simplified two-layer (fine-coarse) approximation in the multiphase flow simulator TOUGH2 is described. The accuracy of COMA is evaluated by comparing numerical simulations of plume migration in 1-D and 2-D unsaturated flow with results of fine-grid simulations in which all layers are discretized explicitly. Applications to water seepage and solute transport at the Hanford site are also described. This work was supported by the U.S. Department of Energy under Contract No. DE-AC03-76SF00098 through Memorandum Purchase Order 248861-A-B2 between Pacific Northwest National Laboratory and Lawrence Berkeley National Laboratory.
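
    The "conductors in parallel" and "resistors in series" averaging described above reduces to thickness-weighted arithmetic and harmonic means of the layer conductivities (applied, in COMA itself, to the tension-dependent relative permeabilities at local capillary equilibrium). The sketch below shows the two means for a hypothetical fine/coarse lamination.

```python
# Sketch: directional upscaling of layered conductivities. Horizontal flow
# (layers in parallel) gives a thickness-weighted arithmetic mean; vertical
# flow (layers in series) gives a thickness-weighted harmonic mean.
# Layer thicknesses and conductivities below are hypothetical.
import numpy as np

def composite_conductivity(thickness, k_layers):
    t = np.asarray(thickness, dtype=float)
    k = np.asarray(k_layers, dtype=float)
    k_horizontal = np.sum(t * k) / np.sum(t)      # conductors in parallel
    k_vertical = np.sum(t) / np.sum(t / k)        # resistors in series
    return k_horizontal, k_vertical

# Two laminations: a fine-grained (low K) and a coarse-grained (high K) layer
kh, kv = composite_conductivity([0.01, 0.01], [1e-7, 1e-4])
print(f"horizontal K = {kh:.2e} m/s, vertical K = {kv:.2e} m/s")  # kh >> kv
```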

  16. Simulating the Agulhas system in global ocean models - nesting vs. multi-resolution unstructured meshes

    NASA Astrophysics Data System (ADS)

    Biastoch, Arne; Sein, Dmitry; Durgadoo, Jonathan V.; Wang, Qiang; Danilov, Sergey

    2018-01-01

    Many questions in ocean and climate modelling require the combined use of high resolution, global coverage and multi-decadal integration length. For this combination, even modern resources limit the use of traditional structured-mesh grids. Here we compare two approaches: a high-resolution grid nested into a global model at coarser resolution (NEMO with AGRIF), and an unstructured-mesh grid (FESOM) that allows resolution to be variably enhanced where desired. The Agulhas system around South Africa is used as a test case, providing an energetic interplay of a strong western boundary current and mesoscale dynamics. Its open setting into the horizontal and global overturning circulations also requires global coverage. Both model configurations simulate a reasonable large-scale circulation. Distribution and temporal variability of the wind-driven circulation are quite comparable due to the same atmospheric forcing. However, the overturning circulation differs, owing to each model's ability to represent formation and spreading of deep water masses. In terms of regional, high-resolution dynamics, all elements of the Agulhas system are well represented. Owing to the strong nonlinearity in the system, Agulhas Current transports of the two configurations differ in strength and temporal variability, both from each other and in comparison with observations. Similar decadal trends in Agulhas Current transport and Agulhas leakage are linked to the trends in wind forcing.

  17. Implications of sensor design for coral reef detection: Upscaling ground hyperspectral imagery in spatial and spectral scales

    NASA Astrophysics Data System (ADS)

    Caras, Tamir; Hedley, John; Karnieli, Arnon

    2017-12-01

    Remote sensing offers a potential tool for large-scale environmental surveying and monitoring. However, remote observations of coral reefs are difficult, especially because of the spatial and spectral complexity of the target relative to sensor specifications, as well as the effects of the overlying water medium. The development of sensors is driven by technological advances and the desired products. Currently, spaceborne systems are technologically limited to a choice between high spectral resolution and high spatial resolution, but not both. The current study explores the dilemma of whether future sensor design for marine monitoring should prioritise improving spatial or spectral resolution. To address this question, a spatially and spectrally resampled ground-level hyperspectral image was used to test two classification elements: (1) how the tradeoff between spatial and spectral resolutions affects classification; and (2) how noise reduction by a majority filter might improve classification accuracy. The studied reef, in the Gulf of Aqaba (Eilat), Israel, is heterogeneous and complex, so the local substrate patches are generally finer than currently available imagery. Therefore, the tested spatial resolution was broadly divided into four scale categories from five millimeters to one meter. Spectral resolution resampling aimed to mimic currently available and forthcoming spaceborne sensors such as (1) the Environmental Mapping and Analysis Program (EnMAP), characterized by 25 bands of 6.5 nm width; (2) VENμS, with 12 narrow bands; and (3) the WorldView series, with broadband multispectral resolution. Results suggest that spatial resolution should generally be prioritized for coral reef classification because the finest spatial scale tested (pixel size < 0.1 m) may compensate for some low spectral resolution drawbacks. In this regard, it is shown that post-classification majority filtering substantially improves the accuracy of all pixel sizes up to the point where the kernel size reaches the average unit size (pixel < 0.25 m). However, careful investigation of the effect of band distribution and choice could improve sensor suitability for marine environment tasks. With this in mind, while the focus of this study was on technologically limited spaceborne design, aerial sensors may presently provide an opportunity to implement the suggested setup.
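
    The post-classification noise reduction described above can be illustrated with a generic majority (modal) filter; the kernel size and class labels below are placeholders, not the study's actual parameters.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def majority_filter(class_map, size=3):
        """Replace each pixel's class with the most frequent class in its window."""
        def local_mode(values):
            values = values.astype(int)          # class labels as non-negative ints
            return np.bincount(values).argmax()  # most frequent label in the window
        return generic_filter(class_map, local_mode, size=size, mode="nearest")

    # Example: a tiny classified map with 4 substrate classes (0-3)
    classified = np.random.randint(0, 4, size=(50, 50))
    smoothed = majority_filter(classified, size=3)
    ```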

  18. Globally Gridded Satellite (GridSat) Observations for Climate Studies

    NASA Technical Reports Server (NTRS)

    Knapp, Kenneth R.; Ansari, Steve; Bain, Caroline L.; Bourassa, Mark A.; Dickinson, Michael J.; Funk, Chris; Helms, Chip N.; Hennon, Christopher C.; Holmes, Christopher D.; Huffman, George J.

    2012-01-01

    Geostationary satellites have provided routine, high temporal resolution Earth observations since the 1970s. Despite the long period of record, use of these data in climate studies has been limited for numerous reasons, among them: there is no central archive of geostationary data for all international satellites, full temporal and spatial resolution data are voluminous, and diverse calibration and navigation formats encumber the uniform processing needed for multi-satellite climate studies. The International Satellite Cloud Climatology Project set the stage for overcoming these issues by archiving a subset of the full resolution geostationary data at approximately 10 km resolution at 3-hourly intervals since 1983. Recent efforts at NOAA's National Climatic Data Center to provide convenient access to these data include remapping the data to a standard map projection, recalibrating the data to optimize temporal homogeneity, extending the record of observations back to 1980, and reformatting the data for broad public distribution. The Gridded Satellite (GridSat) dataset includes observations from the visible, infrared window, and infrared water vapor channels. Data are stored in the netCDF format using standards that permit a wide variety of tools and libraries to quickly and easily process the data. A novel data layering approach, together with appropriate satellite and file metadata, allows users to access GridSat data at varying levels of complexity based on their needs. The result is a climate data record already in use by the meteorological community. Examples include reanalysis of tropical cyclones, studies of global precipitation, and detection and tracking of the intertropical convergence zone.
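
    Because the data are distributed as standard netCDF, they can be inspected with common tools such as xarray. The sketch below is hedged: the file name, coordinate names, and variable layout are assumptions for illustration and may not match the actual GridSat product.

    ```python
    import xarray as xr

    # Hypothetical file name; real GridSat file naming may differ
    ds = xr.open_dataset("GRIDSAT-B1.1983.01.01.00.nc")
    print(ds.data_vars)                           # inspect available channels and metadata

    # Assuming latitude/longitude/time coordinates exist with these names,
    # take the first variable, subset a tropical region, and form a time mean.
    first_var = ds[list(ds.data_vars)[0]]
    subset = first_var.sel(lat=slice(-10, 10), lon=slice(100, 160))
    mean_map = subset.mean(dim="time")
    ```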

  19. Influence of model grid size on the simulation of PM2.5 and the related excess mortality in Japan

    NASA Astrophysics Data System (ADS)

    Goto, D.; Ueda, K.; Ng, C. F.; Takami, A.; Ariga, T.; Matsuhashi, K.; Nakajima, T.

    2016-12-01

    Aerosols, especially PM2.5, can affect air pollution, climate change, and human health. The estimation of health impacts due to PM2.5 is often performed using global and regional aerosol transport models with various horizontal resolutions. To investigate the dependence of the simulated PM2.5 on model grid size, we executed two simulations using a high-resolution model (approximately 10 km; HRM) and a low-resolution model (approximately 100 km; LRM, a typical value for general circulation models). In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan, with a stretched grid system in the HRM and a uniform grid system in the LRM, for the present (the year 2000) and the future (the year 2030, following the Representative Concentration Pathway 4.5, RCP4.5). These calculations were performed by nudging meteorological fields obtained from an atmosphere-ocean coupled model and using the emission inventories of that coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 for the elderly. Compared to the HRM, the LRM underestimated PM2.5 concentrations in 2000 and 2030 by approximately 30%, excess mortality in 2000 by approximately 60%, and excess mortality in 2030 by approximately 90%. The estimation of excess mortality therefore performed better at high-resolution grid sizes. In addition, we found that our nesting method could be a useful tool for obtaining better estimates.
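
    The excess-mortality step can be illustrated with a commonly used log-linear concentration-response form; the study's actual exposure-response function is not given here, so the coefficient, baseline mortality rate, threshold, and population below are placeholders only.

    ```python
    import numpy as np

    def excess_mortality(pm25, pm25_threshold, baseline_rate, population, beta):
        """Excess deaths attributable to PM2.5 above a threshold concentration."""
        delta_c = np.maximum(pm25 - pm25_threshold, 0.0)       # exposure above threshold
        attributable_fraction = 1.0 - np.exp(-beta * delta_c)  # log-linear response
        return baseline_rate * population * attributable_fraction

    # Illustrative numbers only (annual baseline mortality rate per person)
    deaths = excess_mortality(pm25=18.0, pm25_threshold=5.8,
                              baseline_rate=0.01, population=1.0e6, beta=0.0058)
    print(deaths)
    ```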

  20. Some lessons and thoughts from development of an old-fashioned high-resolution atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Ohfuchi, Wataru; Enomoto, Takeshi; Yoshioka, Mayumi K.; Takaya, Koutarou

    2014-05-01

    Some high-resolution simulations with a conventional atmospheric general circulation model (AGCM) were conducted right after the first Earth Simulator started operating in the spring of 2002. More simulations with various resolutions followed. The AGCM in this study, AFES (AGCM for the Earth Simulator), is a primitive-equation spectral transform model with a cumulus convection parameterization. In this presentation, some findings from comparisons between high- and low-resolution simulations, and some future perspectives for old-fashioned AGCMs, will be discussed. One obvious advantage of increasing resolution is the capability to resolve the fine structures of topography and atmospheric flow. By increasing resolution from T39 (about 320 km horizontal grid interval) to T79 (160 km), to T159 (80 km), to T319 (40 km), topographic precipitation over Japan becomes increasingly realistic. This feature is necessary for climate and weather studies involving both global and local aspects. In order to resolve submesoscale (about 100 km horizontal scale) atmospheric circulation, a grid interval of about 10 km is necessary. Comparing T1279 (10 km) simulations with T319 ones, it is found that, for example, the intensity of heavy rain associated with the Baiu front and the central pressure of typhoons become more realistic. These realistic submesoscale phenomena should have an impact on larger-scale flow through dynamics and thermodynamics. An interesting finding from increasing the horizontal resolution of a conventional AGCM is that some cumulus convection parameterizations, such as Arakawa-Schubert-type schemes, gradually stop producing precipitation, while others, such as Emanuel-type schemes, do not. With the former, grid-scale condensation increases with model resolution to compensate. Which behaviour is more desirable is arguable, but it is an important feature one has to consider when developing a high-resolution conventional AGCM. Many may think that conventional primitive-equation spectral transform AGCMs, such as AFES, have no future. Developing globally homogeneous nonhydrostatic cloud-resolving grid AGCMs is obviously a straightforward direction for the future. However, these models will be very expensive for many users for a while, perhaps for the next few decades. On the other hand, old-fashioned AGCMs with a grid interval of 20-100 km will remain accurate and efficient tools for many users for many years to come. Also, by coupling with a fine-resolution regional nonhydrostatic model, a conventional AGCM may overcome its limitations for use in climate and weather studies in the future.
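
    As a rough rule of thumb, the horizontal grid interval associated with a triangular spectral truncation TN on a quadratic Gaussian grid can be estimated from the number of longitudes (about 3N + 1). The sketch below reproduces the approximate spacings quoted above; it is a generic back-of-the-envelope relation, not taken from AFES itself, and the exact values depend on the grid convention.

    ```python
    EARTH_CIRCUMFERENCE_KM = 40030.0

    def grid_interval_km(truncation):
        """Approximate grid interval for triangular truncation TN (quadratic grid)."""
        n_lon = 3 * truncation + 1
        return EARTH_CIRCUMFERENCE_KM / n_lon

    for t in (39, 79, 159, 319, 1279):
        print(f"T{t}: ~{grid_interval_km(t):.0f} km")
    # Roughly: T39 ~340 km, T79 ~170 km, T159 ~84 km, T319 ~42 km, T1279 ~10 km
    ```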

  1. Grid-based mapping: A method for rapidly determining the spatial distributions of small features over very large areas

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas

    2017-06-01

    The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre- and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle, and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, the simplicity of the approach, and the divisibility of tasks open up the possibility of citizen science, in which crowdsourcing could be applied to large grid-based mapping areas.
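
    A minimal sketch of this presence/absence bookkeeping is given below; the grid dimensions, landform names, and coordinates are illustrative only, not taken from the actual mapping project.

    ```python
    import numpy as np

    landforms = ["viscous_flow_features", "latitude_dependent_mantle", "polygonised_ground"]
    n_rows, n_cols = 50, 80                      # e.g. 20 km x 20 km squares over a study area

    # One boolean presence/absence raster per landform
    grids = {name: np.zeros((n_rows, n_cols), dtype=bool) for name in landforms}

    def record(landform, row, col, present=True):
        """Tick (or untick) a landform for one grid square."""
        grids[landform][row, col] = present

    record("polygonised_ground", 12, 34)          # mapper ticks a box for this square

    # Summarise spatial distribution: fraction of squares where each landform occurs
    for name, grid in grids.items():
        print(name, grid.mean())
    ```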

  2. Signal to Noise Ratio for Different Gridded Rainfall Products of Indian Monsoon

    NASA Astrophysics Data System (ADS)

    Nehra, P.; Shastri, H. K.; Ghosh, S.; Mishra, V.; Murtugudde, R. G.

    2014-12-01

    Gridded rainfall datasets provide useful information on the spatial and temporal distribution of precipitation over a region. For India, there are three gridded rainfall data products available, from the India Meteorological Department (IMD), the Tropical Rainfall Measuring Mission (TRMM), and the Asian Precipitation - Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) project; these compile precipitation information obtained through satellite-based measurements and ground-station data. The gridded rainfall data from IMD are available at spatial resolutions of 1°, 0.5°, and 0.25°, whereas TRMM and APHRODITE are available at 0.25°. Here, we use the seven-year period (1998-2004) common to the three data products for the south-west monsoon season, i.e., the months June to September. We examine the temporal mean and standard deviation of the three products and observe substantial variation amongst them at 1° resolution, whereas at 0.25° resolution all the products are nearly identical. We determine the Signal to Noise Ratio (SNR) of the three products at 1° and 0.25° resolution using a noise separation technique that adopts a horizontal separation of the power spectrum generated with the Fast Fourier Transform (FFT). A methodology is developed for threshold-based separation of signal and noise from the power spectrum, treating the noise as white. The variance of the signal relative to that of the noise is computed to obtain the SNR. Determination of the SNR for different regions over the country shows the highest SNR with APHRODITE at 0.25° resolution. It is observed that the eastern part of India has the highest SNR in all cases considered, whereas the northernmost and southernmost Indian regions have the lowest SNR. An increasing linear relationship is observed between the SNR values and the spatial variance of the corresponding region. The relationship between the computed SNR values and the interpolation method used for each dataset is analyzed. The SNR analysis provides an effective tool to evaluate gridded precipitation data products. However, detailed analysis is needed to determine the processes that lead to these SNR distributions, so that the quality of the gridded rainfall data products can be further improved and the transferability of the gridding algorithms can be explored to produce a unified high-quality rainfall dataset.
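
    The following is a hedged sketch of a threshold-based signal/noise separation of a rainfall series using the FFT power spectrum, treating the noise floor as white; the specific threshold choice below is a placeholder and not necessarily the rule used in the study.

    ```python
    import numpy as np

    def spectral_snr(series, noise_threshold_quantile=0.5):
        """Split the power spectrum at a horizontal threshold and return a signal-to-noise ratio."""
        series = np.asarray(series, dtype=float)
        power = np.abs(np.fft.rfft(series - series.mean()))**2
        threshold = np.quantile(power, noise_threshold_quantile)    # estimate of the white-noise floor
        noise_power = np.minimum(power, threshold).sum()            # power at or below the floor
        signal_power = np.maximum(power - threshold, 0.0).sum()     # power above the floor
        return signal_power / noise_power

    rainfall = np.random.gamma(shape=0.8, scale=10.0, size=122)     # one monsoon season, daily (synthetic)
    print(spectral_snr(rainfall))
    ```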

  3. Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0

    NASA Astrophysics Data System (ADS)

    Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.

    2010-12-01

    The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in Fall 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of public-domain sonar data that is gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google's default ocean base map. NOAA coastal grids and numerous grids contributed by the international science community are also integrated into the GMRT Synthesis. Finally, terrestrial elevation data from NASA's ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) global DEM and the USGS National Elevation Dataset have been included in the synthesis, providing resolution of up to 10 m in some areas of the US.

  4. Simulation of the Summer Monsoon Rainfall over East Asia using the NCEP GFS Cumulus Parameterization at Different Horizontal Resolutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Kyo-Sun; Hong, Song You; Yoon, Jin-Ho

    2014-10-01

    The most recent version of the Simplified Arakawa-Schubert (SAS) cumulus scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), hereafter GFS SAS, has been implemented into the Weather Research and Forecasting (WRF) model, with modifications that make the triggering condition and convective mass flux depend on the model's horizontal grid spacing. The East Asian Summer Monsoon of 2006, from June to August, is selected to evaluate the performance of the modified GFS SAS scheme. Simulated monsoon rainfall with the modified GFS SAS scheme shows better agreement with observations than the original GFS SAS scheme. The original GFS SAS scheme simulates a similar ratio of subgrid-scale precipitation (calculated by the cumulus scheme) to total precipitation regardless of the model's horizontal grid spacing. This is counter-intuitive because the portion of resolved clouds in a grid box should increase as the model grid spacing decreases. This counter-intuitive behavior of the original GFS SAS scheme is alleviated by the modified scheme. Further, three different cumulus schemes (Grell and Freitas, Kain and Fritsch, and Betts-Miller-Janjic) are chosen to investigate the role of horizontal resolution in simulated monsoon rainfall. The performance of high-resolution modeling is not always enhanced as the spatial resolution becomes higher. Even though the improvement in the probability density function of rain rate and in longwave fluxes with higher-resolution simulation is robust regardless of the choice of cumulus parameterization scheme, the overall skill score for surface rainfall does not increase monotonically with spatial resolution.

  5. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2014-08-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases comprise 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using the Global Forecast System with 0.5°-resolution data from the National Centers for Environmental Prediction, which is also used every 3 h as the lateral boundary condition. Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, and topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and wind speed fields, and clearly indicate that the use of high-resolution databases significantly improves our ability to predict the local atmospheric circulation.

  6. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina R.; Georgiadis, Nicholas J.; DeBonis, James R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing Large-Eddy Simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.

  7. The impact of the resolution of meteorological datasets on catchment-scale drought studies

    NASA Astrophysics Data System (ADS)

    Hellwig, Jost; Stahl, Kerstin

    2017-04-01

    Gridded meteorological datasets provide the basis for studying drought at a range of scales, including catchment-scale drought studies in hydrology. They are readily available for studying past weather conditions and often serve real-time monitoring as well. As these datasets differ in spatial/temporal coverage and spatial/temporal resolution, for most studies there is a tradeoff between these features. Our investigation examines whether biases occur when studying drought at the catchment scale with low-resolution input data. For that, a comparison among the datasets HYRAS (covering Central Europe, 1x1 km grid, daily data, 1951-2005), E-OBS (Europe, 0.25° grid, daily data, 1950-2015), and GPCC (global, 0.5° grid, monthly data, 1901-2013) is carried out. Generally, biases in precipitation increase with decreasing resolution. The most important variations are found during summer. In the low mountain ranges of Central Europe, the coarser-resolution datasets (E-OBS, GPCC) overestimate dry days and underestimate total precipitation, since they are not able to describe high spatial variability. However, relative measures like the correlation coefficient reveal good consistency of dry and wet periods, both for absolute precipitation values and for standardized indices like the Standardized Precipitation Index (SPI) or the Standardized Precipitation Evapotranspiration Index (SPEI). In particular, the most severe droughts derived from the different datasets match very well. These results indicate that absolute values from coarse-resolution datasets should be used with caution for assessing hydrological drought at the catchment scale, whereas relative measures for identifying drought periods are more trustworthy. Therefore, drought studies that downscale meteorological data should carefully consider their data needs and focus on relative measures for dry periods if these are sufficient for the task.
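
    For reference, a Standardized Precipitation Index can be computed by fitting a gamma distribution to accumulated precipitation and mapping its cumulative probability to a standard normal variate. The sketch below simplifies zero-precipitation handling and fitting choices relative to operational SPI implementations, and the data are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    def spi(precip):
        """Standardized Precipitation Index from a series of accumulated precipitation."""
        precip = np.asarray(precip, dtype=float)
        shape, loc, scale = stats.gamma.fit(precip[precip > 0], floc=0)   # gamma fit to wet values
        prob_zero = np.mean(precip == 0)
        cdf = prob_zero + (1 - prob_zero) * stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
        cdf = np.clip(cdf, 1e-6, 1 - 1e-6)        # avoid infinities in the normal quantile
        return stats.norm.ppf(cdf)

    monthly_precip = np.random.gamma(2.0, 40.0, size=360)   # 30 years of monthly totals (mm), synthetic
    print(spi(monthly_precip)[:12])
    ```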

  8. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, M. R.; Georgiadis, N. J.; DeBonis, J. R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing large-eddy simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.

  9. A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)

    NASA Astrophysics Data System (ADS)

    Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni

    2016-11-01

    We present a numerical platform named the Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded in any region of the large-scale Mediterranean Forecasting System (MFS) via downscaling. We employ CTD data collected during a campaign around the island of Elba to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model over the coarse-resolution MFS model.

  10. A LES-Langevin model for turbulence

    NASA Astrophysics Data System (ADS)

    Dolganov, Rostislav; Dubrulle, Bérengère; Laval, Jean-Philippe

    2006-11-01

    The rationale for Large Eddy Simulation is rooted in our inability to handle all degrees of freedom (N ~ 10^16 for Re ~ 10^7). "Deterministic" models based on eddy viscosity seek to reproduce the intensification of the energy transport. However, they fail to reproduce backward energy transfer (backscatter) from small to large scales, which is an essential feature of turbulence near walls or in boundary layers. To capture this backscatter, "stochastic" strategies have been developed. In the present talk, we shall discuss such a strategy, based on Rapid Distortion Theory (RDT). Specifically, we first divide the small-scale contribution to the Reynolds stress tensor into two parts: a turbulent viscosity and the pseudo-Lamb vector, representing the nonlinear cross terms of resolved and sub-grid scales. We then estimate the dynamics of the small-scale motion by RDT applied to the Navier-Stokes equations. We use this to model the cross-term evolution by a Langevin equation, in which the random force is provided by sub-grid pressure terms. Our LES model is thus made of a truncated Navier-Stokes equation including the turbulent force and a generalized Langevin equation for the latter, integrated on a twice-finer grid. The backscatter is automatically included in our stochastic model of the pseudo-Lamb vector. We apply this model to the case of homogeneous isotropic turbulence and turbulent channel flow.
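
    A minimal Ornstein-Uhlenbeck-type update illustrates the kind of Langevin integration described above for a stochastic subgrid forcing term; the relaxation time, noise amplitude, and grid size are purely illustrative and not taken from the presented model.

    ```python
    import numpy as np

    def langevin_step(force, dt, tau, sigma, rng):
        """Advance a stochastic forcing field by one time step (drift + white-noise kick)."""
        drift = -force / tau * dt                               # relaxation toward zero
        noise = sigma * np.sqrt(dt) * rng.standard_normal(force.shape)
        return force + drift + noise

    rng = np.random.default_rng(0)
    force = np.zeros((128, 128))                                # forcing on a finer auxiliary grid
    for _ in range(100):
        force = langevin_step(force, dt=1e-3, tau=0.05, sigma=1.0, rng=rng)
    ```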

  11. Prediction of Root Zone Soil Moisture using Remote Sensing Products and In-Situ Observation under Climate Change Scenario

    NASA Astrophysics Data System (ADS)

    Singh, G.; Panda, R. K.; Mohanty, B.

    2015-12-01

    Prediction of root zone soil moisture status at the field level is vital for developing efficient agricultural water management schemes. In this study, root zone soil moisture was estimated across the Rana watershed in Eastern India by assimilating near-surface soil moisture estimates from the SMOS satellite into a physically based Soil-Water-Atmosphere-Plant (SWAP) model. An ensemble Kalman filter (EnKF) technique coupled with the SWAP model was used to assimilate the satellite soil moisture observations at different spatial scales. The universal triangle concept and artificial intelligence techniques were applied to disaggregate the SMOS near-surface soil moisture from its 40 km resolution to a finer scale (1 km resolution), using higher-spatial-resolution MODIS-derived vegetation indices (NDVI) and land surface temperature (Ts). The disaggregated surface soil moisture was compared to ground-based measurements across a diverse landscape using a portable impedance probe and gravimetric samples. Simulated root zone soil moisture was compared with continuous soil moisture profile measurements at three monitoring stations. In addition, the impact of projected climate change on root zone soil moisture was also evaluated. Climate change projections of rainfall were analyzed for the Rana watershed from statistically downscaled Global Circulation Models (GCMs). The long-term root zone soil moisture dynamics were estimated by including a rainfall generator for likely scenarios. The predicted long-term root zone soil moisture status at the finer scale can help in developing efficient agricultural water management schemes to increase crop production, which in turn enhances water use efficiency.
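
    A textbook ensemble Kalman filter analysis step is sketched below to illustrate the kind of update used when assimilating a surface soil moisture observation into a soil column; the state dimensions, observation operator, and error values are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_error_var, H):
        """EnKF analysis. ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
        n_obs, n_members = len(obs), ensemble.shape[1]
        anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = anomalies @ anomalies.T / (n_members - 1)            # ensemble background covariance
        R = obs_error_var * np.eye(n_obs)                        # observation error covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)             # Kalman gain
        rng = np.random.default_rng(0)
        perturbed_obs = obs[:, None] + rng.normal(0, np.sqrt(obs_error_var), (n_obs, n_members))
        return ensemble + K @ (perturbed_obs - H @ ensemble)

    # Example: 10-layer soil moisture state, 32 members, one surface observation
    ens = np.clip(np.random.default_rng(1).normal(0.25, 0.05, (10, 32)), 0.02, 0.45)
    H = np.zeros((1, 10)); H[0, 0] = 1.0                         # observe the top layer only
    analysis = enkf_update(ens, obs=np.array([0.30]), obs_error_var=0.02**2, H=H)
    ```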

  12. Synthesis of Road Networks by Data Conflation

    DTIC Science & Technology

    2014-04-01

    Transform requires basic trigonometric properties. Suppose we have a line oriented as shown in Figure 9 then by defining the parameters, ρ, and θ we...location. Rather than searching for the remaining three parameters, the major and minor axes and the orientation angle, the axes ratio is utilized to...axes ratio and orientation angle are searched on a coarse quantization level and then the local maxima are obtained and a finer resolution area is

  13. Mapping near-surface air temperature, pressure, relative humidity and wind speed over Mainland China with high spatiotemporal resolution

    NASA Astrophysics Data System (ADS)

    Li, Tao; Zheng, Xiaogu; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Zhang, Shupeng; Wu, Guocan; Wang, Zhonglei; Huang, Chengcheng; Shen, Yan; Liao, Rongwei

    2014-09-01

    As part of a joint effort to construct an atmospheric forcing dataset for mainland China with high spatiotemporal resolution, a new approach is proposed to construct gridded near-surface temperature, relative humidity, wind speed and surface pressure with a resolution of 1 km × 1 km. The approach comprises two steps: (1) fit a partial thin-plate smoothing spline, with orography and reanalysis data as explanatory variables, to ground-based observations to estimate a trend surface; (2) apply a simple kriging procedure to the residuals to correct the trend surface. The proposed approach is applied to observations collected at approximately 700 stations over mainland China. The generated forcing fields are compared with the corresponding components of the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis dataset and the Princeton meteorological forcing dataset. The comparison shows that, both within the station network and at the resolutions of the two gridded datasets, the interpolation errors of the proposed approach are markedly smaller than those of the two gridded datasets.
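
    The two-step structure of the approach can be sketched as follows: fit a smooth trend surface using elevation as an explanatory variable, then interpolate the residuals back onto the target points. In this hedged sketch an RBF thin-plate spline stands in for the partial thin-plate smoothing spline and a second RBF pass stands in for simple kriging; all data are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    stations = rng.uniform(0, 100, size=(300, 2))                  # station x, y (km), illustrative
    elevation = rng.uniform(0, 3, size=300)                        # station elevation (km)
    temperature = 25 - 6.5 * elevation + rng.normal(0, 0.5, 300)   # synthetic observations (deg C)

    # Step 1: trend surface with (x, y, elevation) as explanatory variables
    trend = RBFInterpolator(np.column_stack([stations, elevation]), temperature,
                            kernel="thin_plate_spline", smoothing=1.0)
    residuals = temperature - trend(np.column_stack([stations, elevation]))

    # Step 2: correct the trend by interpolating residuals in (x, y) only
    residual_interp = RBFInterpolator(stations, residuals, kernel="linear", smoothing=0.0)

    def predict(xy, elev):
        """Trend estimate plus residual correction at target points."""
        return trend(np.column_stack([xy, elev])) + residual_interp(xy)
    ```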

  14. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.

  15. Derivation and Error Analysis of the Earth Magnetic Anomaly Grid at 2 arc min Resolution Version 3 (EMAG2v3)

    NASA Astrophysics Data System (ADS)

    Meyer, B.; Chulliat, A.; Saltus, R.

    2017-12-01

    The Earth Magnetic Anomaly Grid at 2 arc min resolution version 3, EMAG2v3, combines marine and airborne trackline observations, satellite data, and magnetic observatory data to map the location, intensity, and extent of lithospheric magnetic anomalies. EMAG2v3 includes over 50 million new data points added to NCEI's Geophysical Database System (GEODAS) in recent years. The new grid relies only on observed data and does not utilize a priori geologic structure or ocean-age information. Comparing this grid to other global magnetic anomaly compilations (e.g., EMAG2 and WDMAM) shows that the inclusion of a priori ocean-age patterns forces an artificial linear pattern onto the grid; the data-only approach allows for greater complexity in representing the evolution along oceanic spreading ridges and continental margins. EMAG2v3 also makes use of the satellite-derived lithospheric field model MF7 in order to accurately represent anomalies with wavelengths greater than 300 km and to create smooth grid merging boundaries. The heterogeneous distribution of errors in the observations used in compiling EMAG2v3 was explored, and is reported in the final distributed grid. The grid is delivered both at 4 km continuous altitude above the WGS84 ellipsoid and at sea level for all oceanic and coastal regions.

  16. LandScan 2016 High-Resolution Global Population Data Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bright, Edward A; Rose, Amy N; Urban, Marie L

    The LandScan data set is a worldwide population database compiled on a 30" x 30" latitude/longitude grid. Census counts (at the sub-national level) were apportioned to each grid cell based on likelihood coefficients derived from land cover, slope, road proximity, high-resolution imagery, and other data sets. The LandScan data set was developed as part of the Oak Ridge National Laboratory (ORNL) Global Population Project for estimating ambient populations at risk.
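
    An illustrative dasymetric apportionment in the spirit of the description above: a census total for an administrative unit is distributed among its grid cells in proportion to likelihood weights. The weights and counts below are made up, not LandScan coefficients.

    ```python
    import numpy as np

    def apportion(census_count, likelihood):
        """Distribute one census total over cells proportionally to likelihood weights."""
        likelihood = np.asarray(likelihood, dtype=float)
        if likelihood.sum() == 0:
            return np.zeros_like(likelihood)
        return census_count * likelihood / likelihood.sum()

    # 5 cells in a district: weights derived from land cover, slope, roads, imagery, etc.
    weights = np.array([0.0, 0.2, 0.5, 0.9, 0.4])
    cell_population = apportion(census_count=12000, likelihood=weights)
    print(cell_population, cell_population.sum())   # sums back to the census total
    ```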

  17. Spectral Topography Generation for Arbitrary Grids

    NASA Astrophysics Data System (ADS)

    Oh, T. J.

    2015-12-01

    A new topography generation tool utilizing a spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filled by Jonathan de Ferranti) is used, and for the land/water mask, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data are coarsened to an intermediate global 2-minute lat-lon mesh. Then, spectral transformation to wave space and inverse transformation with wavenumber truncation are performed for isotropic control of topography smoothness. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2-minute lat-lon topography. Gibbs phenomena in water regions can be removed by overwriting ocean-masked target grid points with interpolated values from the intermediate 2-minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally derived, smooth topography with isotropic resolution and minimal damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested in the KIAPS Integrated Model (KIM).

  18. Computation of Flow Over a Drag Prediction Workshop Wing/Body Transport Configuration Using CFL3D

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Biedron, Robert T.

    2001-01-01

    A Drag Prediction Workshop was held in conjunction with the 19th AIAA Applied Aerodynamics Conference in June 2001. The purpose of the workshop was to assess the prediction of drag by computational methods for a wing/body configuration (DLR-F4) representative of subsonic transport aircraft. This report details computed results submitted to this workshop using the Reynolds-averaged Navier-Stokes code CFL3D. Two supplied grids were used: a point-matched 1-to-1 multi-block grid, and an overset multi-block grid. The 1-to-1 grid, generally of much poorer quality and with less streamwise resolution than the overset grid, is found to be too coarse to adequately resolve the surface pressures. However, the global forces and moments are nonetheless similar to those computed using the overset grid. The effect of three different turbulence models is assessed using the 1-to-1 grid. Surface pressures are very similar overall, and the drag variation due to turbulence model is 18 drag counts. Most of this drag variation is in the friction component, and is attributed in part to insufficient grid resolution of the 1-to-1 grid. The misnomer of 'fully turbulent' computations is discussed; comparisons are made using different transition locations and their effects on the global forces and moments are quantified. Finally, the effect of two different versions of a widely used one-equation turbulence model is explored.

  19. Monthly fractional green vegetation cover associated with land cover classes of the conterminous USA

    USGS Publications Warehouse

    Gallo, Kevin P.; Tarpley, Dan; Mitchell, Ken; Csiszar, Ivan; Owen, Timothy W.; Reed, Bradley C.

    2001-01-01

    The land cover classes developed under the coordination of the International Geosphere-Biosphere Programme Data and Information System (IGBP-DIS) have been analyzed for a study area that includes the Conterminous United States and portions of Mexico and Canada. The 1-km resolution data have been analyzed to produce a gridded data set that includes within each 20-km grid cell: 1) the three most dominant land cover classes, 2) the fractional area associated with each of the three dominant classes, and 3) the fractional area covered by water. Additionally, the monthly fraction of green vegetation cover (fgreen) associated with each of the three dominant land cover classes per grid cell was derived from a 5-year climatology of 1-km resolution NOAA-AVHRR data. The variables derived in this study provide a potential improvement over the use of monthly fgreen linked to a single land cover class per model grid cell.

  20. A grid-enabled web service for low-resolution crystal structure refinement.

    PubMed

    O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr

    2012-03-01

    Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.

  1. A Comparative Study of Simulated and Measured Main Landing Gear Noise for Large Civil Transports

    NASA Technical Reports Server (NTRS)

    Konig, Benedikt; Fares, Ehab; Ravetta, Patricio; Khorrami, Mehdi R.

    2017-01-01

    Computational results for the NASA 26%-scale model of a six-wheel main landing gear with and without a toboggan-shaped noise reduction fairing are presented. The model is a high-fidelity representation of a Boeing 777-200 aircraft main landing gear. A lattice Boltzmann method was used to simulate the unsteady flow around the model in isolation. The computations were conducted in free-air at a Mach number of 0.17, matching a recent acoustic test of the same gear model in the Virginia Tech Stability Wind Tunnel in its anechoic configuration. Results obtained on a set of grids with successively finer spatial resolution demonstrate the challenge in resolving/capturing the flow field for the smaller components of the gear and their associated interactions, and the resulting effects on the high-frequency segment of the farfield noise spectrum. Farfield noise spectra were computed based on an FWH integral approach, with simulated pressures on the model solid surfaces or flow-field data extracted on a set of permeable surfaces enclosing the model as input. Comparison of these spectra with microphone array measurements obtained in the tunnel indicated that, for the present complex gear model, the permeable surfaces provide a more accurate representation of farfield noise, suggesting that volumetric effects are not negligible. The present study also demonstrates that good agreement between simulated and measured farfield noise can be achieved if consistent post-processing is applied to both physical and synthetic pressure records at array microphone locations.

  2. Development of an Unsaturated Region Below a Perennial River

    NASA Astrophysics Data System (ADS)

    Su, G. W.; Zhou, Q.; Constantz, J.; Hatch, C.

    2004-12-01

    Field observations at the Russian River Bank Filtration Facility in Sonoma County, California indicate that an unsaturated region exists below the streambed near two adjacent groundwater pumping wells located along the riverbank. Understanding the conditions that give rise to unsaturated flow below the streambed is critical for improving and optimizing riverbank well pumping operations. To investigate the development of an unsaturated region below a perennial river near pumping wells, a three-dimensional model was developed using the multi-phase subsurface flow model, TOUGH2. The model is based on the region around the two pumping wells in the Russian River Bank Filtration Facility. The pumping wells consist of 9 perforated pipes that are projected horizontally into the aquifer at a depth of approximately 20 m below the land surface. A grid was developed for the TOUGH2 model with finer resolution near the wells to represent individual pipes. The effect of varying the pumping operation and the streambed permeability on the extent of the unsaturated region was investigated with the TOUGH2 model. The formation remained saturated below the streambed when only one of the wells was pumped at a rate of 1600 m3/hr, but an unsaturated region developed below the streambed when the two wells each pumped at a rate of 1600 m3/hr. This unsaturated region was deeper when the permeability of the streambed was lower than the aquifer material compared to when the streambed and aquifer permeabilities were the same.

  3. Simulation of all-scale atmospheric dynamics on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng

    2016-10-01

    The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.

  4. An application of a two-equation model of turbulence to three-dimensional chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Lee, J.

    1994-01-01

    A numerical study of three dimensional chemically reacting and non-reacting flowfields is conducted using a two-equation model of turbulence. A generalized flow solver using an implicit Lower-Upper (LU) diagonal decomposition numerical technique and finite-rate chemistry has been coupled with a low-Reynolds number two-equation model of turbulence. This flow solver is then used to study chemically reacting turbulent supersonic flows inside combustors with synergetic fuel injectors. The reacting and non-reacting turbulent combustor solutions obtained are compared with zero-equation turbulence model solutions and with available experimental data. The hydrogen-air chemistry is modeled using a nine-species/eighteen reaction model. A low-Reynolds number k-epsilon model was used to model the effect of turbulence because, in general, the low-Reynolds number k-epsilon models are easier to implement numerically and are far more general than algebraic models. However, low-Reynolds number k-epsilon models require a much finer near-wall grid resolution than high-Reynolds number models to resolve accurately the near-wall physics. This is especially true in complex flowfields, where the stiff nature of the near-wall turbulence must be resolved. Therefore, the limitations imposed by the near-wall characteristics and compressible model corrections need to be evaluated further. The gradient-diffusion hypothesis is used to model the effects of turbulence on the mass diffusion process. The influence of this low-Reynolds number turbulence model on the reacting flowfield predictions was studied parametrically.

  5. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    NASA Astrophysics Data System (ADS)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter, WW3) on a nested system of traditional structured grids, (2) an unstructured grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured grid finite element wind wave model denoted as "WWM" (for "wind wave model") which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as including fine-resolution nearshore regions of the southern Gulf of Maine with resolution on the scale of 25 m to simulate areas where inundation and coastal damage have occurred, due to the storms. Storm cases include three intense midlatitude cases: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall properties in terms of their performance and skill, it is found that there are differences. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  6. Comparison of SeaWinds Backscatter Imaging Algorithms

    PubMed Central

    Long, David G.

    2017-01-01

    This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, J.C.

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  8. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  9. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    NASA Astrophysics Data System (ADS)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large-area wire grid polarizers or MPAs alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.

  10. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  11. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  12. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, extending up to the mesopause. Four simulations are presented in this work: the first uses the regular grid with 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid with 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model was validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, with low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the released source. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model captured the radioactive contamination in most European regions (similar to the Atlas), as well as the arrival times of the radioactive fallout. Regarding the vertical resolution, the largest biases were obtained for the 39-level run, due to the increased number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, in good agreement with previously reported values and in the same range as estimates for the recent accident in Japan. The strong performance of the LMDzORINCA model for 137Cs reinforces the importance of atmospheric modeling in emergencies for gathering information to protect the population from the adverse effects of radiation.
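
    The ecological half-life quoted above can be estimated by fitting an exponential decline to an activity-concentration time series. A minimal sketch, with hypothetical concentrations rather than data from the study:

```python
import numpy as np

# Fit ln C(t) = ln C0 - lambda * t and convert the decay rate to a half-life.
# The time series below is hypothetical, chosen only to illustrate the method.
t_days = np.array([0., 2., 4., 6., 8., 10., 12.])
conc = np.array([100., 82., 68., 55., 45., 37., 30.])   # e.g. mBq/m3

slope, _ = np.polyfit(t_days, np.log(conc), 1)          # slope = -lambda
half_life_days = np.log(2) / -slope
print(f"ecological half-life ~ {half_life_days:.1f} days")
```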

  13. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. Two horizontal resolutions were deployed in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four simulations are presented in this work: the first uses the regular grid with 19 vertical levels, assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid with 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, with low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the released source. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model captured the radioactive contamination in most European regions (similar to De Cort et al., 1998), as well as the arrival times of the radioactive fallout. Regarding the vertical resolution, the largest biases were obtained for the 39-level run, due to the increased number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, in good agreement with previously reported values and in the same range as estimates for the recent accident in Japan. The strong performance of the LMDZORINCA model for 137Cs reinforces the importance of atmospheric modelling in emergencies for gathering information to protect the population from the adverse effects of radiation.

  14. Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product

    NASA Astrophysics Data System (ADS)

    Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.

    2014-05-01

    We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data was collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and is particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial pdf is a full resolution, small file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.
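
    The vertical stacking rule described above can be sketched as follows: sources are ranked by quality, and each output cell keeps the best-ranked source that has data there. The grids below are synthetic placeholders, not USAMBC data.

```python
import numpy as np

# Three hypothetical bathymetry grids, ordered best quality first; NaN marks
# cells with no coverage in that source.
rng = np.random.default_rng(1)
ny, nx = 200, 300
sources = [rng.normal(-3000.0, 50.0, (ny, nx)) for _ in range(3)]
sources[0][rng.random((ny, nx)) < 0.6] = np.nan     # best data set: sparse coverage
sources[1][rng.random((ny, nx)) < 0.3] = np.nan     # second-best: partial coverage

stacked = np.full((ny, nx), np.nan)
for grid in sources:                                # iterate from best to worst quality
    empty = np.isnan(stacked) & ~np.isnan(grid)
    stacked[empty] = grid[empty]                    # never overwrite better data
```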

  15. Modeling surface trapped river plumes: A sensitivity study

    USGS Publications Warehouse

    Hyatt, Jason; Signell, Richard P.

    2000-01-01

    To better understand the requirements for realistic regional simulation of river plumes in the Gulf of Maine, we test the sensitivity of the Blumberg-Mellor hydrodynamic model to choice of advection scheme, grid resolution, and wind, using idealized geometry and forcing. The test case discharges 1500 m³/s of fresh water into a uniform 32 psu ocean along a straight shelf at 43° north. The water depth is 15 m at the coast and increases linearly to 190 m at a distance 100 km offshore. Constant discharge runs are conducted in the presence of ambient alongshore current and with and without periodic alongshore wind forcing. Advection methods tested are CENTRAL, UPWIND, the standard Smolarkiewicz MPDATA and a recursive MPDATA scheme. For the no-wind runs, the UPWIND advection scheme performs poorly for grid resolutions typically used in regional simulations (grid spacing of 1-2 km, comparable to or slightly less than the internal Rossby radius, and vertical resolution of 10% of the water column), damping out much of the plume structure. The CENTRAL difference scheme also has problems when wind forcing is neglected, and generates too much structure, shedding eddies of numerical origin. When a weak 5 cm/s ambient current is present in the no-wind case, both the CENTRAL and standard MPDATA schemes produce a false fresh- and dense-water source just upstream of the river inflow due to a standing two-grid length oscillation in the salinity field. The recursive MPDATA scheme completely eliminates the false dense water source, and produces results closest to the grid-converged solution. The results are shown to be very sensitive to vertical grid resolution, and the presence of wind forcing dramatically changes the nature of the plume simulations. The implication of these idealized tests for realistic simulations is discussed, as well as ramifications on previous studies of idealized plume models.
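
    The contrast between the diffusive UPWIND scheme and the oscillatory CENTRAL scheme can be illustrated with a one-dimensional toy advection problem; this is a minimal sketch, not the Blumberg-Mellor implementation.

```python
import numpy as np

# One-dimensional, constant-speed advection on a periodic grid, stepped with
# simple UPWIND and forward-in-time CENTRAL differencing.
nx, dx, dt, u = 200, 1.0, 0.4, 1.0
x = np.arange(nx)
c0 = np.exp(-0.5 * ((x - 40) / 4.0) ** 2)            # initial tracer pulse

def step_upwind(c):
    return c - u * dt / dx * (c - np.roll(c, 1))      # first order, diffusive

def step_central(c):
    return c - u * dt / (2 * dx) * (np.roll(c, -1) - np.roll(c, 1))

c_up, c_ce = c0.copy(), c0.copy()
for _ in range(100):
    c_up, c_ce = step_upwind(c_up), step_central(c_ce)

# The upwind result is smeared by numerical diffusion; the forward-in-time
# central scheme keeps the peak but develops spurious grid-scale oscillations
# of numerical origin (it is in fact weakly unstable), echoing the behaviour
# described in the abstract.
print(c_up.max(), c_ce.min())
```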

  16. TopoSCALE v.1.0: downscaling gridded climate data in complex terrain

    NASA Astrophysics Data System (ADS)

    Fiddes, J.; Gruber, S.

    2014-02-01

    Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which it is required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse and optionally disaggregated using a climatology approach. We test the method in comparison with unscaled grid-level data and a set of reference methods, against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to lack of observations (i.e. remote areas or future periods).
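
    The core idea, interpolating a coarse-grid atmospheric column to subgrid elevations, can be sketched in a few lines; the column values and DEM elevations below are hypothetical.

```python
import numpy as np

# Interpolate a coarse-grid column (temperature on levels with known
# geopotential heights) to the elevation of each fine-scale DEM point.
level_height_m = np.array([200.0, 800.0, 1500.0, 3000.0, 5500.0])  # geopotential height
level_temp_k   = np.array([288.0, 284.0, 279.0, 270.0, 255.0])     # air temperature

dem_elev_m = np.array([350.0, 1200.0, 2400.0, 2900.0])             # subgrid elevations
t_subgrid = np.interp(dem_elev_m, level_height_m, level_temp_k)    # downscaled temperature
print(t_subgrid)
```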

  17. Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi

    2014-03-27

    This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., abilities to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5°, and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5°, and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications to surface heat fluxes in coupled land-atmosphere modeling.

  18. Final Report: Closeout of the Award NO. DE-FG02-98ER62618 (M.S. Fox-Rabinovitz, P.I.)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox-Rabinovitz, M. S.

    The final report describes a study aimed at exploring the variable-resolution stretched-grid (SG) approach to decadal regional climate modeling using advanced numerical techniques. The results show that variable-resolution SG-GCMs, using stretched grids with fine resolution over the area(s) of interest, are a viable, established approach to regional climate modeling. The developed SG-GCMs have been used extensively for regional climate experimentation. The SG-GCM simulations are aimed at studying U.S. regional climate variability, with an emphasis on anomalous summer climate events such as U.S. droughts and floods.

  19. Fabricating High-Resolution X-Ray Collimators

    NASA Technical Reports Server (NTRS)

    Appleby, Michael; Atkinson, James E.; Fraser, Iain; Klinger, Jill

    2008-01-01

    A process and method for fabricating multi-grid, high-resolution rotating modulation collimators for arcsecond and sub-arcsecond x-ray and gamma-ray imaging involves photochemical machining and precision stack lamination. The special fixturing and etching techniques that have been developed are used for the fabrication of multiple high-resolution grids on a single array substrate. This technology has application in solar and astrophysics and in a number of medical imaging applications including mammography, computed tomography (CT), single photon emission computed tomography (SPECT), and gamma cameras used in nuclear medicine. This collimator improvement can also be used in non-destructive testing, hydrodynamic weapons testing, and microbeam radiation therapy.

  20. High-quality weather data for grid integration studies

    NASA Astrophysics Data System (ADS)

    Draxl, C.

    2016-12-01

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. In this talk we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets will be presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution is now publicly available for more than 126,000 land-based and offshore wind power production sites. The Solar Integration National Dataset (SIND) is available as time synchronized with the WIND Toolkit, and will allow for combined wind-solar grid integration studies. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial resolution database of 18 years of solar resource data for North America and India. Grid integration studies are also carried out in various countries, which aim at increasing their wind and solar penetration through combined wind and solar integration data sets. We will present a multi-year effort to directly support India's 24x7 energy access goal through a suite of activities aimed at enabling large-scale deployment of clean energy and energy efficiency. Another current effort is the North-American-Renewable-Integration-Study, with the aim of providing a seamless data set across borders for a whole continent, to simulate and analyze the impacts of potential future large wind and solar power penetrations on bulk power system operations.

  1. A gain-loss framework based on ensemble flow forecasts to switch the urban drainage-wastewater system management towards energy optimization during dry periods

    NASA Astrophysics Data System (ADS)

    Courdent, Vianney; Grum, Morten; Munk-Nielsen, Thomas; Mikkelsen, Peter S.

    2017-05-01

    Precipitation is the cause of major perturbation to the flow in urban drainage and wastewater systems. Flow forecasts, generated by coupling rainfall predictions with a hydrologic runoff model, can potentially be used to optimize the operation of integrated urban drainage-wastewater systems (IUDWSs) during both wet and dry weather periods. Numerical weather prediction (NWP) models have significantly improved in recent years, having increased their spatial and temporal resolution. Finer resolution NWP are suitable for urban-catchment-scale applications, providing longer lead time than radar extrapolation. However, forecasts are inevitably uncertain, and fine resolution is especially challenging for NWP. This uncertainty is commonly addressed in meteorology with ensemble prediction systems (EPSs). Handling uncertainty is challenging for decision makers and hence tools are necessary to provide insight on ensemble forecast usage and to support the rationality of decisions (i.e. forecasts are uncertain and therefore errors will be made; decision makers need tools to justify their choices, demonstrating that these choices are beneficial in the long run). This study presents an economic framework to support the decision-making process by providing information on when acting on the forecast is beneficial and how to handle the EPS. The relative economic value (REV) approach associates economic values with the potential outcomes and determines the preferential use of the EPS forecast. The envelope curve of the REV diagram combines the results from each probability forecast to provide the highest relative economic value for a given gain-loss ratio. This approach is traditionally used at larger scales to assess mitigation measures for adverse events (i.e. the actions are taken when events are forecast). The specificity of this study is to optimize the energy consumption in IUDWS during low-flow periods by exploiting the electrical smart grid market (i.e. the actions are taken when no events are forecast). Furthermore, the results demonstrate the benefit of NWP neighbourhood post-processing methods to enhance the forecast skill and increase the range of beneficial uses.
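
    A minimal sketch of relative economic value in the standard cost-loss framing may help fix ideas; the contingency-table rates and cost/loss ratio below are hypothetical, and the envelope curve mentioned above corresponds to taking the maximum REV over the ensemble probability thresholds.

```python
def relative_economic_value(alpha, s, hit_rate, false_alarm_rate):
    """Relative economic value (REV) in a simple cost-loss model.

    alpha            -- cost/loss ratio C/L of taking the protective action
    s                -- climatological frequency of the event
    hit_rate         -- fraction of events correctly forecast
    false_alarm_rate -- fraction of non-events incorrectly forecast
    """
    e_climate  = min(alpha, s)                           # best of "always act"/"never act"
    e_perfect  = alpha * s                               # act only when events occur
    e_forecast = (alpha * hit_rate * s                   # hits: action taken
                  + (1 - hit_rate) * s                   # misses: full loss
                  + alpha * false_alarm_rate * (1 - s))  # false alarms: action cost
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Hypothetical numbers: a cheap action (alpha = 0.2) for a fairly common event.
print(relative_economic_value(alpha=0.2, s=0.3, hit_rate=0.8, false_alarm_rate=0.1))
```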

  2. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE PAGES

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...

    2016-10-22

    Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  3. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby

    Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  4. Computation of UH-60A Airloads Using CFD/CSD Coupling on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Lee-Rausch, Elizabeth M.

    2011-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids is used to compute the rotor airloads on the UH-60A helicopter at high-speed and high thrust conditions. The flow solver is coupled to a rotorcraft comprehensive code in order to account for trim and aeroelastic deflections. Simulations are performed both with and without the fuselage, and the effects of grid resolution, temporal resolution and turbulence model are examined. Computed airloads are compared to flight data.

  5. Global Swath and Gridded Data Tiling

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.

    2012-01-01

    This software generates cylindrically projected tiles, called "tiles," of swath-based or gridded satellite data for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
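
    A minimal sketch of the kind of lat/lon-to-tile indexing such a cylindrical tiling implies is shown below; the 10° tile size is a hypothetical configuration value, not the software's actual setting.

```python
# Map a latitude/longitude to a (row, col) tile index on a simple cylindrical
# (equirectangular) tiling of the globe. TILE_DEG is a hypothetical setting.
TILE_DEG = 10.0

def tile_index(lat, lon):
    row = int((90.0 - lat) // TILE_DEG)        # row 0 at the northern edge
    col = int((lon + 180.0) // TILE_DEG)       # col 0 at 180 degrees west
    n_rows, n_cols = int(180 / TILE_DEG), int(360 / TILE_DEG)
    return min(row, n_rows - 1), min(col, n_cols - 1)   # clamp the far edges

print(tile_index(34.2, -118.2))   # e.g. a point near Los Angeles -> (5, 6)
```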

  6. Regional analysis of ground-water recharge: Chapter B in Ground-water recharge in the arid and semiarid southwestern United States (Professional Paper 1703)

    USGS Publications Warehouse

    Flint, Lorraine E.; Flint, Alan L.; Stonestrom, David A.; Constantz, Jim; Ferré, Ty P.A.; Leake, Stanley A.

    2007-01-01

    A modeling analysis of runoff and ground-water recharge for the arid and semiarid southwestern United States was performed to investigate the interactions of climate and other controlling factors and to place the eight study-site investigations into a regional context. A distributed-parameter water-balance model (the Basin Characterization Model, or BCM) was used in the analysis. Data requirements of the BCM included digital representations of topography, soils, geology, and vegetation, together with monthly time-series of precipitation and air-temperature data. Time-series of potential evapotranspiration were generated by using a submodel for solar radiation, taking into account topographic shading, cloudiness, and vegetation density. Snowpack accumulation and melting were modeled using precipitation and air-temperature data. Amounts of water available for runoff and ground-water recharge were calculated on the basis of water-budget considerations by using measured- and generated-meteorologic time series together with estimates of soil-water storage and saturated hydraulic conductivity of subsoil geologic units. Calculations were made on a computational grid with a horizontal resolution of about 270 meters for the entire 1,033,840 square-kilometer study area. The modeling analysis was composed of 194 basins, including the eight basins containing ground-water recharge-site investigations. For each grid cell, the BCM computed monthly values of potential evapotranspiration, soil-water storage, in-place ground-water recharge, and runoff (potential stream flow). A fixed percentage of runoff was assumed to become recharge beneath channels operating at a finer resolution than the computational grid of the BCM. Monthly precipitation and temperature data from 1941 to 2004 were used to explore climatic variability in runoff and ground-water recharge.The selected approach provided a framework for classifying study-site basins with respect to climate and dominant recharge processes. The average climate for all 194 basins ranged from hyperarid to humid, with arid and semiarid basins predominating (fig. 6, chapter A, this volume). Four of the 194 basins had an aridity index of dry subhumid; two of the basins were humid. Of the eight recharge-study sites, six were in semiarid basins, and two were in arid basins. Average-annual potential evapotranspiration showed a regional gradient from less than 1 m/yr in the northeastern part of the study area to more than 2 m/yr in the southwestern part of the study area. Average-annual precipitation was lowest in the two arid-site basins and highest in the two study-site basins in southern Arizona. The relative amount of runoff to in-place recharge varied throughout the study area, reflecting differences primarily in soil water-holding capacity, saturated hydraulic conductivity of subsoil materials, and snowpack dynamics. Climatic forcing expressed in El Niño and Pacific Decadal Oscillation indices strongly influenced the generation of precipitation throughout the study area. Positive values of both indices correlated with the highest amounts of runoff and ground-water recharge.
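
    The per-cell monthly water budget can be sketched in a highly simplified form; the function below is illustrative only and omits the BCM's snowpack, potential-evapotranspiration and vegetation submodels.

```python
# Highly simplified monthly grid-cell water balance in the spirit of the text:
# water in excess of soil storage becomes in-place recharge up to the subsoil
# conductivity, and the remainder becomes runoff. All names/values illustrative.
def monthly_water_balance(precip, pet, storage, storage_max, k_subsoil):
    """Return (recharge, runoff, new_storage) for one cell and month, in mm."""
    available = max(storage + precip - pet, 0.0)   # water left after evaporative demand
    excess = max(available - storage_max, 0.0)     # water the soil cannot hold
    recharge = min(excess, k_subsoil)              # limited by subsoil conductivity
    runoff = excess - recharge                     # leaves as potential streamflow
    new_storage = min(available, storage_max)
    return recharge, runoff, new_storage

print(monthly_water_balance(precip=120.0, pet=40.0, storage=60.0,
                            storage_max=100.0, k_subsoil=25.0))
```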

  7. Grid Quality and Resolution Issues from the Drag Prediction Workshop Series

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Brodersen, Olaf P.; Eisfeld, Bernhard; Wahls, Richard A.; Morrison, Joseph H.; Zickuhr, Tom; Levy, David; hide

    2008-01-01

    The drag prediction workshop series (DPW), held over the last six years, and sponsored by the AIAA Applied Aerodynamics Committee, has been extremely useful in providing an assessment of the state-of-the-art in computationally based aerodynamic drag prediction. An emerging consensus from the three workshop series has been the identification of spatial discretization errors as a dominant error source in absolute as well as incremental drag prediction. This paper provides an overview of the collective experience from the workshop series regarding the effect of grid-related issues on overall drag prediction accuracy. Examples based on workshop results are used to illustrate the effect of grid resolution and grid quality on drag prediction, and grid convergence behavior is examined in detail. For fully attached flows, various accurate and successful workshop results are demonstrated, while anomalous behavior is identified for a number of cases involving substantial regions of separated flow. Based on collective workshop experiences, recommendations for improvements in mesh generation technology which have the potential to impact the state-of-the-art of aerodynamic drag prediction are given.

  8. A principle of economy predicts the functional architecture of grid cells.

    PubMed

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-09-03

    Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.
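
    The bookkeeping behind the module-count prediction can be illustrated with a back-of-envelope calculation; the range and resolution values below are illustrative, not the paper's.

```python
import math

# With a fixed ratio r between adjacent module scales, covering a range R at
# spatial resolution d needs on the order of log(R/d)/log(r) modules.
r = math.sqrt(math.e)                 # predicted optimal scale ratio, ~1.65
R, d = 10.0, 0.05                     # largest period and target resolution (m), illustrative
n_modules = math.ceil(math.log(R / d) / math.log(r))
print(f"scale ratio {r:.2f}, ~{n_modules} modules")
```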

  9. Documentation and analysis of a geographic information system application for combining data layers, using nonpoint-source pollution as an example

    USGS Publications Warehouse

    Kiesler, James L.

    2002-01-01

    An analysis of the application indicates that the selected data layers to be combined should be at the greatest spatial resolution possible; however, all data layers do not have to be at the same spatial resolution. The spatial variation of the data layers should be adequately defined. The size of each grid cell should be small enough to maintain the spatial definition of smaller features within the data layers. The most accurate results are shown to occur when the values for the grid cells representing the individual data layers are summed and the mean of the summed grid-cell values is used to describe the watershed of interest.
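
    The combination rule described above (sum the co-registered layers cell by cell, then average over the watershed) can be sketched as follows, with hypothetical rasters standing in for the GIS data layers.

```python
import numpy as np

# Sum the co-registered layers cell by cell, then average the summed values
# over the cells inside the watershed of interest. Data are hypothetical.
rng = np.random.default_rng(3)
layers = [rng.random((50, 50)) for _ in range(3)]    # e.g. scored soils, slope, land use
combined = np.sum(layers, axis=0)                    # cell-by-cell sum across layers

watershed = np.zeros((50, 50), dtype=bool)
watershed[10:35, 5:30] = True                        # hypothetical watershed footprint
print(combined[watershed].mean())                    # mean summed value for the watershed
```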

  10. Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains

    NASA Astrophysics Data System (ADS)

    Reckinger, S.; Petersen, M. R.; Reckinger, S. J.

    2016-02-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Also, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that go further down slope. Sigma coordinates are less sensitive to changes in resolution but as sensitive to vertical viscosity compared to z-coordinates. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.

  11. Airborne laser scanning for forest health status assessment and radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Novotny, Jan; Zemek, Frantisek; Pikl, Miroslav; Janoutova, Ruzena

    2013-04-01

    Structural parameters of forest stands/ecosystems are an important complementary source of information to spectral signatures obtained from airborne imaging spectroscopy when quantitative assessment of forest stands is the focus, such as estimation of forest biomass, biochemical properties (e.g. chlorophyll/water content), etc. The parameterization of radiative transfer (RT) models used in the latter case requires the three-dimensional spatial distribution of green foliage and woody biomass. Airborne LiDAR data acquired over forest sites carry this kind of 3-D information. The main objective of the study was to compare the results from several approaches to the interpolation of digital elevation models (DEMs) and digital surface models (DSMs). We worked with airborne LiDAR data of different density (TopEye Mk II 1,064 nm instrument, 1-5 points/m²) acquired over the Norway spruce forests situated in the Beskydy Mountains, the Czech Republic. Three different interpolation algorithms with increasing complexity were tested: (i) the nearest neighbour approach implemented in the BCAL software package (Idaho Univ.); (ii) averaging and linear interpolation techniques used in the OPALS software (Vienna Univ. of Technology); (iii) the active contour technique implemented in the TreeVis software (Univ. of Freiburg). We defined two spatial resolutions for the resulting coupled raster DEM and DSM outputs, 0.4 m and 1 m, calculated by each algorithm. The grids correspond to the same spatial resolutions of the hyperspectral imagery data for which the DEMs were used in (a) geometric correction and (b) building complex tree models for radiative transfer modelling. We applied two types of analyses when comparing results from the different interpolations and raster resolutions: (1) comparison of the calculated DEMs or DSMs with each other; (2) comparison with field data: the DEM with measurements from a reference GPS, and the DSM with field tree allometric measurements, where tree height was calculated as DSM - DEM. The results of the analyses show that: (1) averaging techniques tend to underestimate tree height, and the generated surface does not follow the first LiDAR echoes for either the 1 m or the 0.4 m pixel size; (2) we did not find any significant difference between tree heights calculated by the nearest neighbour algorithm and the active contour technique for the 1 m pixel output, but the difference increased with finer resolution (0.4 m); (3) the accuracy of the DEMs calculated by the tested algorithms is similar.
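
    The tree-height calculation used in the field comparison reduces to a raster difference; a minimal sketch with synthetic surfaces standing in for the interpolated 0.4 m or 1 m grids:

```python
import numpy as np

# Canopy height is simply the difference of the two interpolated surfaces.
rng = np.random.default_rng(2)
dem = rng.uniform(400.0, 450.0, (100, 100))      # synthetic terrain elevation, metres
dsm = dem + rng.uniform(0.0, 30.0, (100, 100))   # synthetic first-echo canopy surface

canopy_height = dsm - dem                        # tree height per grid cell (DSM - DEM)
print(canopy_height.mean(), canopy_height.max())
```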

  12. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.

  13. Evaluation of MODIS Albedo Product (MCD43A) over Grassland, Agriculture and Forest Surface Types During Dormant and Snow-Covered Periods

    NASA Technical Reports Server (NTRS)

    Wang, Zhousen; Schaaf, Crystal B.; Strahler, Alan H.; Chopping, Mark J.; Roman, Miguel O.; Shuai, Yanmin; Woodcock, Curtis E.; Hollinger, David Y.; Fitzjarrald, David R.

    2013-01-01

    This study assesses the Moderate-resolution Imaging Spectroradiometer (MODIS) BRDF/albedo 8 day standard product and products from the daily Direct Broadcast BRDF/albedo algorithm, and shows that these products agree well with ground-based albedo measurements during the more difficult periods of vegetation dormancy and snow cover. Cropland, grassland, deciduous and coniferous forests are considered. Using an integrated validation strategy, analyses of the representativeness of the surface heterogeneity under both dormant and snow-covered situations are performed to decide whether direct comparisons between ground measurements and 500-m satellite observations can be made or whether finer spatial resolution airborne or spaceborne data are required to scale the results at each location. Landsat Enhanced Thematic Mapper Plus (ETM +) data are used to generate finer scale representations of albedo at each location to fully link ground data with satellite data. In general, results indicate the root mean square errors (RMSEs) are less than 0.030 over spatially representative sites of agriculture/grassland during the dormant periods and less than 0.050 during the snow-covered periods for MCD43A albedo products. For forest, the RMSEs are less than 0.020 during the dormant period and 0.025 during the snow-covered periods. However, a daily retrieval strategy is necessary to capture ephemeral snow events or rapidly changing situations such as the spring snow melt.

  14. Mesoscale data assimilation for a local severe rainfall event with the NHM-LETKF system

    NASA Astrophysics Data System (ADS)

    Kunii, M.

    2013-12-01

    This study aims to improve forecasts of local severe weather events through data assimilation and ensemble forecasting approaches. Here, the local ensemble transform Kalman filter (LETKF) is implemented with the Japan Meteorological Agency's nonhydrostatic model (NHM). The newly developed NHM-LETKF contains an adaptive inflation scheme and a spatial covariance localization scheme based on physical distance. One-way nested analysis, in which a finer-resolution LETKF is conducted using the outputs of an outer model, also becomes feasible. These new capabilities should enhance the potential of the LETKF for convective-scale events. The NHM-LETKF is applied to a local severe rainfall event in Japan in 2012. Comparison of the root mean square errors between the model first guess and the analysis reveals that the system assimilates observations appropriately. Analysis ensemble spreads indicate a significant increase around the time torrential rainfall occurred, which would imply an increase in the uncertainty of the environmental fields. Forecasts initialized with LETKF analyses successfully capture intense rainfall, suggesting that the system can work effectively for local severe weather. Investigation of probabilistic forecasts by ensemble forecasting indicates that this could become a reliable data source for decision making in the future. A one-way nested data assimilation scheme is also tested. The experimental results demonstrate that assimilation with a finer-resolution model provides an advantage in the quantitative precipitation forecasting of local severe weather conditions.
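
    The abstract mentions distance-based covariance localization; one common compactly supported taper is the Gaspari-Cohn function, sketched below. This is an illustrative choice and not necessarily the exact form used in the NHM-LETKF.

```python
def gaspari_cohn(dist, c):
    """Gaspari-Cohn compactly supported correlation function, a common choice
    for distance-based covariance localization (weight is zero beyond 2c)."""
    r = dist / c
    if r <= 1.0:
        return -0.25*r**5 + 0.5*r**4 + 0.625*r**3 - 5.0/3.0*r**2 + 1.0
    if r <= 2.0:
        return (r**5/12.0 - 0.5*r**4 + 0.625*r**3 + 5.0/3.0*r**2
                - 5.0*r + 4.0 - 2.0/(3.0*r))
    return 0.0

# Hypothetical use: weight an observation 60 km away with a 100 km length scale.
print(gaspari_cohn(60.0, 100.0))
```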

  15. Mission Concept for the Single Aperture Far-Infrared (SAFIR) Observatory

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Amato, Michael J.; Mather, John C.; Moseley, S. Harvey, Jr.

    2004-01-01

    We have developed a preliminary but comprehensive mission concept for SAFIR, as a 10 m-class far-infrared and submillimeter observatory that would begin development later in this decade to meet the needs outlined above. Its operating temperature (≤ 4 K) and instrument complement would be optimized to reach the natural sky confusion limit in the far-infrared with diffraction-limited performance down to at least the atmospheric cutoff, λ ≳ 40 μm. This would provide a point source sensitivity improvement of several orders of magnitude over that of the Spitzer Space Telescope (previously SIRTF) or the Herschel Space Observatory. Additionally, it would have an angular resolution 12 times finer than that of Spitzer and three times finer than Herschel. This sensitivity and angular resolution are necessary to perform imaging and spectroscopic studies of individual galaxies in the early universe. We have considered many aspects of the SAFIR mission, including the telescope technology (optical design, materials, and packaging), detector needs and technologies, cooling method and required technology developments, attitude and pointing, power systems, launch vehicle, and mission operations. The most challenging requirements for this mission are operating temperature and aperture size of the telescope, and the development of detector arrays. SAFIR can take advantage of much of the technology under development for JWST, but with much less stringent requirements on optical accuracy.
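
    The quoted resolution gains are consistent with the diffraction limit θ ≈ 1.22 λ/D; the sketch below assumes nominal apertures of roughly 10 m (SAFIR), 0.85 m (Spitzer) and 3.5 m (Herschel), the latter two being assumptions not stated in the abstract.

```python
# Rough check of the quoted angular-resolution factors via the diffraction
# limit theta ~ 1.22 * wavelength / D (the wavelength cancels in the ratios).
d_safir, d_spitzer, d_herschel = 10.0, 0.85, 3.5      # apertures in metres (assumed)
print(f"vs Spitzer:  {d_safir / d_spitzer:.1f}x finer")    # ~11.8x, close to the quoted 12x
print(f"vs Herschel: {d_safir / d_herschel:.1f}x finer")   # ~2.9x, close to the quoted 3x
```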

  16. Impact of different convection permitting resolutions on the representation of heavy rainfall over the UK

    NASA Astrophysics Data System (ADS)

    Fosser, Giorgia; Kendon, Elizabeth; Chan, Steven

    2017-04-01

    Previous studies (e.g. Ban et al., 2015; Fosser et al., 2015 and 2016; Kendon et al., 2015) have shown that convection permitting models are able to give a much more realistic representation of convection, and are needed to provide reliable projections of future changes in hourly precipitation extremes. In this context, the UKCP18 project aims to provide policy makers with new UK climate change projections at hourly and local scales, thanks to the first ensemble of runs at convection permitting resolution. As a first step, we need to identify a suitable UK domain, resolution and experimental design for the convective-scale ensemble. Thus, a set of 12-year-long simulations driven by ERA-Interim reanalysis data has been carried out over the UK using the Met Office Unified Model (UM) at different convection permitting resolutions, namely 1.5 km, 2.2 km and 4 km. Different nesting strategies and physical adjustments are also tested. Two observational gridded datasets, based on rain gauges and radar, are used for validation. The analysis aims to identify the impacts of the different convection permitting resolutions (as well as domain size and physical settings) on the representation of precipitation, especially when convection is a predominant feature. Moreover, this study tries to determine the physical reasons behind the differences found and hence to determine if there are any benefits of increasing the horizontal resolution within the convection permitting regime in a climatological context. First results show that the 4 km model realises many of the benefits of convection-permitting resolution, namely the rainfall fields are much more realistic and the daily timing of rainfall is better captured compared to convection-parameterised models. For mean precipitation metrics, including precipitation conditioned on circulation type, there is little benefit in moving to resolutions finer than 4 km. However, there are some key deficiencies at convection-permitting resolution which are notably worse at 4 km, namely the tendency for the heaviest events to be too intense and convective showers to be too "blobby". The use of different nesting strategies seems to have a big impact on the results. Bibliography: Ban N, Schmidli J, Schär C (2015) Heavy precipitation in a changing climate: Does short-term summer precipitation increase faster? Geophys Res Lett 42:1165-1172. doi: 10.1002/2014GL062588. Fosser G, Khodayar S, Berg P (2015) Benefit of convection permitting climate model simulations in the representation of convective precipitation. Clim Dyn. doi: 10.1007/s00382-014-2242-1. Fosser G, Khodayar S, Berg P (2015) Climate change in the next 30 years: what can a convection-permitting model tell us that we did not already know? Clim Dyn, accepted. Kendon EJ, Roberts NM, Fowler HJ, et al. (2014) Heavier summer downpours with climate change revealed by weather forecast resolution model. Nat Clim Chang 4:570-576. doi: 10.1038/nclimate2258

  17. Sensitivities of the hydrologic cycle to model physics, grid resolution, and ocean type in the aquaplanet Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Benedict, James J.; Medeiros, Brian; Clement, Amy C.; Pendergrass, Angeline G.

    2017-06-01

    Precipitation distributions and extremes play a fundamental role in shaping Earth's climate and yet are poorly represented in many global climate models. Here, a suite of idealized Community Atmosphere Model (CAM) aquaplanet simulations is examined to assess the aquaplanet's ability to reproduce hydroclimate statistics of real-Earth configurations and to investigate sensitivities of precipitation distributions and extremes to model physics, horizontal grid resolution, and ocean type. Little difference in precipitation statistics is found between aquaplanets using time-constant sea-surface temperatures and those implementing a slab ocean model with a 50 m mixed-layer depth. In contrast, CAM version 5.3 (CAM5.3) produces more time mean, zonally averaged precipitation than CAM version 4 (CAM4), while CAM4 generates significantly larger precipitation variance and frequencies of extremely intense precipitation events. The largest model configuration-based precipitation sensitivities relate to choice of horizontal grid resolution in the selected range 1-2°. Refining grid resolution has significant physics-dependent effects on tropical precipitation: for CAM4, time mean zonal mean precipitation increases along the Equator and the intertropical convergence zone (ITCZ) narrows, while for CAM5.3 precipitation decreases along the Equator and the twin branches of the ITCZ shift poleward. Increased grid resolution also reduces light precipitation frequencies and enhances extreme precipitation for both CAM4 and CAM5.3 resulting in better alignment with observational estimates. A discussion of the potential implications these hydrologic cycle sensitivities have on the interpretation of precipitation statistics in future climate projections is also presented. Plain Language Summary: Precipitation plays a fundamental role in shaping Earth's climate. Global climate models predict the average precipitation reasonably well but often struggle to accurately represent how often it precipitates and at what intensity. Model precipitation errors are closely tied to imperfect representations of physical processes too small to be resolved on the model grid. The problem is compounded by the complexity of contemporary climate models and the many model configuration options available. In this study, we use an aquaplanet, a simplified global climate model entirely devoid of land masses, to explore the response of precipitation to several aspects of model configuration in a present-day climate state. Our results suggest that critical precipitation patterns, including extreme precipitation events that have large socio-economic impacts, are strongly sensitive to horizontal grid resolution and the representation of unresolved physical processes.
Identification and understanding of such model configuration-related precipitation responses in the present-day climate will provide a more accurate estimate of model uncertainty necessary for an improved interpretation of precipitation changes in global warming projections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5462465','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5462465"><span>Segmented Separable Footprint Projector for Digital Breast Tomosynthesis and Its application for Subpixel Reconstruction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zheng, Jiabei; Fessler, Jeffrey A; Chan, Heang-Ping</p> <p>2017-01-01</p> <p>Purpose Digital forward and back projectors play a significant role in iterative image reconstruction. The accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector that ignores finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that calculates efficiently the Radon transform mean value over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. Methods We quantitatively compared the projection error of the RT and the SG projector at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays. Subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and subpixel reconstruction without other image degrading factors. The phantom contained an array of horizontal and vertical line pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast-tissue-equivalent uniform volume. The images were reconstructed with regular simultaneous algebraic reconstruction technique (SART) and subpixel SART using different projectors. The resolution and contrast of the test objects in the reconstructed images and the computation times were compared under different reconstruction conditions. Results The SG projector reduced the projector error by 1 to 2 orders of magnitude at most locations. In the worst case, the SG projector still reduced the projection error by about 50%. 
In the DBT reconstructed slices parallel to the detector plane, the SG projector not only increased the contrast of the line pairs and spheres, but also produced smoother and more continuous reconstructed images, whereas the discrete and sparse nature of the RT projector caused artifacts appearing as patterned noise. For subpixel reconstruction, the SG projector significantly increased object contrast and computation speed, especially for high subpixel ratios, compared with the RT projector implemented with the accelerated Siddon's algorithm. The difference in depth resolution among the projectors is negligible under the conditions studied. Our results also demonstrated that subpixel reconstruction can improve the spatial resolution of the reconstructed images, and can exceed the Nyquist limit of the detector under some conditions. Conclusions: The SG projector was more accurate and faster than the RT projector. The SG projector also substantially reduced computation time and improved the image quality for the tomosynthesized images with and without subpixel reconstruction. PMID:28058719
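The central idea of the SG projector, taking the mean of the projection over each finite detector element instead of sampling a single ray through its centre, can be illustrated in one dimension. The sketch below is a hedged toy example, not the SF/SG algorithm itself: the Gaussian "projection profile", the 0.1-unit element pitch and the 11 sub-samples per element are all assumptions chosen for illustration.

    import numpy as np

    # A smooth "true" projection profile along the detector coordinate (arbitrary units).
    def profile(x):
        return np.exp(-x**2 / 2.0)

    pitch = 0.1                                   # detector element width
    centers = np.arange(-3, 3, pitch) + pitch / 2

    # Ray-tracing style: a single sample at each element centre.
    single_ray = profile(centers)

    # Footprint style: average many sub-samples across each element,
    # approximating the mean of the projection over the element area.
    sub = np.linspace(-pitch / 2, pitch / 2, 11)
    element_mean = profile(centers[:, None] + sub[None, :]).mean(axis=1)

    # Discrepancy between the single-ray model and the element-averaged model.
    print(np.max(np.abs(single_ray - element_mean)))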
MRO High Resolution Imaging Science Experiment (HiRISE): Instrument Development

NASA Technical Reports Server (NTRS)

Delamere, Alan; Becker, Ira; Bergstrom, Jim; Burkepile, Jon; Day, Joe; Dorn, David; Gallagher, Dennis; Hamp, Charlie; Lasco, Jeffrey; Meiers, Bill

2003-01-01

The primary functional requirement of the HiRISE imager is to allow identification of both predicted and unknown features on the surface of Mars to a much finer resolution and contrast than previously possible. This results in a camera with a very wide swath width, 6 km at 300 km altitude, and a high signal-to-noise ratio, >100:1. Generation of terrain maps, 30 cm vertical resolution, from stereo images requires very accurate geometric calibration. The project limitations of mass, cost and schedule make the development challenging. In addition, the spacecraft stability must not be a major limitation to image quality. The nominal orbit for the science phase of the mission is a 3 pm orbit of 255 by 320 km with periapsis locked to the south pole. The track velocity is approximately 3,400 m/s.

Land use change detection based on multi-date imagery from different satellite sensor systems

NASA Technical Reports Server (NTRS)

Stow, Douglas A.; Collins, Doretta; Mckinsey, David

1990-01-01

An empirical study is conducted to assess the accuracy of land use change detection using satellite image data acquired ten years apart by sensors with differing spatial resolutions. The primary goals of the investigation were to (1) compare standard change detection methods applied to image data of varying spatial resolution, (2) assess whether to transform the raster grid of the higher resolution image data to that of the lower resolution raster grid or vice versa in the registration process, and (3) determine whether Landsat/Thematic Mapper or SPOT/High Resolution Visible multispectral data provide more accurate detection of land use changes when registered to historical Landsat/MSS data. It is concluded that image ratioing of multisensor, multidate satellite data produced higher change detection accuracies than did principal components analysis, and that it is useful as a land use change enhancement method.
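Image ratioing, the change-detection method that performed best in the record above, amounts to dividing co-registered bands from two dates and flagging pixels whose ratio departs strongly from the scene-wide distribution. The minimal sketch below uses synthetic imagery; the 2nd/98th-percentile flagging thresholds and the simulated change patch are illustrative assumptions, not values from the study.

    import numpy as np

    # Two co-registered, radiometrically comparable bands from dates t1 and t2.
    rng = np.random.default_rng(1)
    band_t1 = rng.uniform(50, 200, size=(512, 512))
    band_t2 = band_t1 * rng.normal(1.0, 0.05, size=(512, 512))
    band_t2[100:150, 200:260] *= 1.8          # a patch of simulated land-use change

    ratio = band_t2 / np.maximum(band_t1, 1e-6)

    # Flag pixels whose ratio departs strongly from its scene-wide distribution.
    lo, hi = np.percentile(ratio, [2, 98])
    change_mask = (ratio < lo) | (ratio > hi)
    print(change_mask.sum(), "candidate change pixels")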
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

NASA Technical Reports Server (NTRS)

Rumsey, C. L.; Lee-Rausch, E. M.

2012-01-01

Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

NASA Technical Reports Server (NTRS)

Chaderjian, Neal M.

2017-01-01

High-resolution simulations of rotor/vortex-wake interaction for a UH-60A rotor under BVI and dynamic stall conditions were carried out with the OVERFLOW Navier-Stokes code. (a) The normal force and pitching moment variation with azimuth angle were in good overall agreement with flight-test data, similar to other CFD results reported in the literature. (b) The wake-grid resolution did not have a significant effect on the rotor-blade airloads. This surprising result indicates that a wake grid spacing of ΔS = 10% c_tip is sufficient for engineering airloads prediction for hover and forward flight. This assumes high-resolution body grids, high-order spatial accuracy, and a hybrid RANS/DDES turbulence model. (c) Three-dimensional dynamic stall was found to occur due to the presence of blade-tip vortices passing over a rotor blade on the retreating side. This changed the local airfoil angle of attack, causing stall, unlike the 2D perspective of pure pitch oscillation of the local airfoil section.

Sensitivity studies of high-resolution RegCM3 simulations of precipitation over the European Alps: the effect of lateral boundary conditions and domain size

NASA Astrophysics Data System (ADS)

Nadeem, Imran; Formayer, Herbert

2016-11-01

A suite of high-resolution (10 km) simulations was performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region.
The boundary conditions used were the ECMWF ERA-Interim Reanalysis with grid spacing 0.75°, the ECMWF ERA-40 Reanalysis with grid spacing 1.125° and 2.5°, and finally the 2.5° NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) with the driving reanalysis, with one intermediate resolution nest (Δx = 30 km) between the high-resolution RCM and the reanalysis forcings, and also with two intermediate resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBCs of 2.5° resolution. Additionally, the impact of domain size was investigated. The results of multiple simulations were evaluated using different analysis techniques, e.g., the Taylor diagram and a newly defined statistical parameter, called Skill-Score, for the evaluation of daily precipitation simulated by the model. It has been found that domain size has the major impact on the results, while different resolutions and versions of LBCs, e.g., 1.125° ERA-40 and 0.75° ERA-Interim, do not produce significantly different results. It is also noticed that direct nesting with a reasonable domain size seems to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate resolution nests seems to deteriorate the results.

Accelerated High-Resolution Differential Ion Mobility Separations Using Hydrogen

PubMed Central

Shvartsburg, Alexandre A.; Smith, Richard D.

2011-01-01

The resolving power of differential ion mobility spectrometry (FAIMS) was dramatically increased recently by carrier gases comprising up to 75% He or various vapors, enabling many new applications. However, the need for resolution of complex mixtures is virtually open-ended and many topical analyses demand yet finer separations. Also, the resolving power gains are often at the expense of speed, in particular making high-resolution FAIMS incompatible with online liquid-phase separations. Here, we report FAIMS employing hydrogen, specifically in mixtures with N2 containing up to 90% H2. Such compositions raise the mobilities of all ions and thus the resolving power beyond that previously feasible, while avoiding the electrical breakdown inevitable in He-rich mixtures. The increases in resolving power and ensuing peak resolution are especially significant at H2 fractions above ~50%. Higher resolution can be exchanged for acceleration of the analyses by up to ~4 times, at least. For more mobile species such as multiply-charged peptides, this exchange is presently forced by the constraints of existing FAIMS devices, but future designs optimized for H2 should consistently improve resolution for all analytes.
PMID:22074292

Temporal Resolution Needed for Auditory Communication: Measurement With Mosaic Speech

PubMed Central

Nakajima, Yoshitaka; Matsuda, Mizuki; Ueda, Kazuo; Remijn, Gerard B.

2018-01-01

Temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners' intelligibility scores of Japanese speech with systematically degraded temporal resolution, so-called "mosaic speech": speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible with a temporal resolution of 40 ms or finer. Intelligibility dropped for a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results showing that speech signals separated into short temporal segments of <100 ms can be remarkably robust in terms of linguistic-content perception against drastic manipulations in each segment, such as partial signal omission or temporal reversal. The human perceptual system thus can extract meaning from unexpectedly rough temporal information in speech. The process resembles that of the visual system stringing together static movie frames of ~40 ms into vivid motion. PMID:29740295
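Mosaicizing a speech signal in time and frequency, as described above, amounts to replacing each time-frequency tile of a spectrogram with its mean power. The following toy sketch operates on a random "spectrogram" with 10 ms frames; the tile sizes (4 and 8 frames, i.e. nominal 40 ms and 80 ms) mirror the conditions mentioned in the abstract, but everything else is an illustrative assumption.

    import numpy as np

    # Toy spectrogram: (frequency bands, time frames), one frame every 10 ms.
    rng = np.random.default_rng(2)
    spec = rng.random((20, 400))

    def mosaicize(s, t_frames, f_bands):
        """Replace each time-frequency tile by its mean power."""
        out = s.copy()
        nf, nt = out.shape
        for i in range(0, nf, f_bands):
            for j in range(0, nt, t_frames):
                tile = out[i:i + f_bands, j:j + t_frames]
                tile[:] = tile.mean()
        return out

    mosaic_40ms = mosaicize(spec, t_frames=4, f_bands=4)   # 4 frames x 10 ms = 40 ms tiles
    mosaic_80ms = mosaicize(spec, t_frames=8, f_bands=4)
    print(mosaic_40ms.shape, mosaic_80ms.shape)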
Application of WRF/Chem-MADRID and WRF/Polyphemus in Europe - Part 1: Model description, evaluation of meteorological predictions, and aerosol-meteorology interactions

NASA Astrophysics Data System (ADS)

Zhang, Y.; Sartelet, K.; Wu, S.-Y.; Seigneur, C.

2013-07-01

Comprehensive model evaluation and comparison of two 3-D air quality modeling systems (i.e., the Weather Research and Forecast model (WRF)/Polyphemus and WRF with chemistry and the Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) (WRF/Chem-MADRID)) are conducted over Western Europe. Part 1 describes the background information for the model comparison and simulation design, the application of WRF for January and July 2001 over triple-nested domains in Western Europe at three horizontal grid resolutions (0.5°, 0.125°, and 0.025°), and the effect of aerosol/meteorology interactions on meteorological predictions. Nine simulated meteorological variables (i.e., downward shortwave and longwave radiation fluxes (SWDOWN and LWDOWN), outgoing longwave radiation flux (OLR), temperature at 2 m (T2), specific humidity at 2 m (Q2), relative humidity at 2 m (RH2), wind speed at 10 m (WS10), wind direction at 10 m (WD10), and precipitation (Precip)) are evaluated using available observations in terms of spatial distribution, domainwide daily and site-specific hourly variations, and domainwide performance statistics. The vertical profiles of temperature, dew point, and wind speed/direction are also evaluated using sounding data. WRF demonstrates its capability in capturing diurnal/seasonal variations, spatial gradients, and vertical profiles of major meteorological variables. While the domainwide performance of LWDOWN, OLR, T2, Q2, and RH2 at all three grid resolutions is satisfactory overall, large positive or negative biases occur in SWDOWN, WS10, and Precip even at 0.125° or 0.025° in both months, and in WD10 in January. In addition, discrepancies between simulations and observations exist in T2, Q2, WS10, and Precip at mountain/high-altitude sites and large urban center sites in both months, in particular during snow events or thunderstorms. These results indicate the model's difficulty in capturing meteorological variables in complex terrain and subgrid-scale meteorological phenomena, due to inaccuracies in model initialization (e.g., lack of soil temperature and moisture nudging), limitations in the physical parameterizations (e.g., shortwave radiation, cloud microphysics, cumulus parameterizations, and ice nucleation treatments), as well as limitations in surface heat and moisture budget parameterizations (e.g., snow-related processes, subgrid-scale surface roughness elements, and urban canopy/heat island treatments and CO2 domes). While the use of finer grid resolutions of 0.125° and 0.025° shows some improvements for WS10, WD10, Precip, and some mesoscale events (e.g., strong forced convection and heavy precipitation), it does not significantly improve the overall statistical performance for all meteorological variables except for Precip. The WRF/Chem simulations with and without aerosols show that aerosols lead to reduced net shortwave radiation fluxes, 2 m temperature, 10 m wind speed, planetary boundary layer (PBL) height, and precipitation, and increase aerosol optical depth, cloud condensation nuclei, cloud optical depth, and cloud droplet number concentrations over most of the domain. These results indicate a need to further improve the model representations of the above parameterizations as well as aerosol-meteorology interactions at all scales.

Gridded precipitation fields at high temporal and spatial resolution for operational flood forecasting in the Rhine basin

NASA Astrophysics Data System (ADS)

van Osnabrugge, Bart; Weerts, Albrecht; Uijlenhoet, Remko

2017-04-01

Gridded areal precipitation, as one of the most important hydrometeorological input variables for initial state estimation in operational hydrological forecasting, is available in the form of raster data sets (e.g.
HYRAS and EOBS) for the River Rhine basin. These datasets are compiled off-line on a daily time step using station data with the highest possible spatial density. However, such a product is not available operationally and at an hourly discretisation. Therefore, we constructed an hourly gridded precipitation dataset at 1.44 km² resolution for the Rhine basin for the period from 1998 to present using a REGNIE-like interpolation procedure (Weerts et al., 2008) with a low- and a high-density rain gauge network. The datasets were validated against daily HYRAS (Rauthe, 2013) and EOBS (Haylock, 2008) data. The main goal of the operational procedure is to emulate the HYRAS dataset as closely as possible, as the daily HYRAS dataset is used in the off-line calibration of the hydrological model. Our main findings are that even with low station density, the spatial patterns found in the HYRAS dataset are well reproduced. With low station density (years 1999-2006) our dataset underestimates precipitation compared to HYRAS and EOBS, notably during the winter. However, interpolation based on the same set of stations overestimates precipitation compared to EOBS for the years 2006-2014. This discrepancy disappears when switching to the high station density. We also analyze the robustness of the hourly precipitation fields by comparing with stations not used during interpolation. Specific issues regarding the data when creating the gridded precipitation fields will be highlighted. Finally, the datasets are used to drive an hourly and daily gridded WFLOW_HBV model of the Rhine at the same spatial resolution. References: Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New, 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119, doi:10.1029/2008JD10201. Rauthe, M., Steiner, H., Riediger, U., Mazurkiewicz, A., Gratzki, A., 2013: A Central European precipitation climatology - Part 1: Generation and validation of a high-resolution gridded daily data set (HYRAS). Meteorologische Zeitschrift, 22(3), 235-256. Weerts, A.H., D. Meißner, and S. Rademacher, 2008: Input data rainfall-runoff model operational system FEWS-NL & FEWS-DE. Technical report, Deltares.
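A simple way to appreciate what an operational gauge-to-grid interpolation involves is an inverse-distance-weighting pass over station values, sketched below. This is only a stand-in for the REGNIE-like procedure cited in the record, which also uses background fields and elevation information; the station count, the roughly 1.2 km grid spacing and the power-2 weighting are assumptions made for the example.

    import numpy as np

    def idw_grid(st_x, st_y, st_val, grid_x, grid_y, power=2.0):
        """Inverse-distance-weighted interpolation of station values to a regular grid."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        d = np.hypot(gx[..., None] - st_x, gy[..., None] - st_y)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        return (w * st_val).sum(axis=-1) / w.sum(axis=-1)

    rng = np.random.default_rng(3)
    st_x, st_y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)   # station coordinates (km)
    st_p = rng.gamma(2.0, 3.0, 50)                                  # hourly precipitation (mm)

    field = idw_grid(st_x, st_y, st_p,
                     np.arange(0, 100, 1.2), np.arange(0, 100, 1.2))
    print(field.shape)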
A global map of mangrove forest soil carbon at 30 m spatial resolution

NASA Astrophysics Data System (ADS)

Sanderman, Jonathan; Hengl, Tomislav; Fiske, Greg; Solvik, Kylen; Adame, Maria Fernanda; Benson, Lisa; Bukoski, Jacob J.; Carnell, Paul; Cifuentes-Jara, Miguel; Donato, Daniel; Duncan, Clare; Eid, Ebrahem M.; Ermgassen, Philine zu; Ewers Lewis, Carolyn J.; Macreadie, Peter I.; Glass, Leah; Gress, Selena; Jardine, Sunny L.; Jones, Trevor G.; Ndemem Nsombo, Eugéne; Mizanur Rahman, Md; Sanders, Christian J.; Spalding, Mark; Landis, Emily

2018-05-01

With the growing recognition that effective action on climate change will require a combination of emissions reductions and carbon sequestration, protecting, enhancing and restoring natural carbon sinks have become political priorities. Mangrove forests are considered some of the most carbon-dense ecosystems in the world, with most of the carbon stored in the soil. For mangrove forests to be included in climate mitigation efforts, knowledge of the spatial distribution of mangrove soil carbon stocks is critical. Current global estimates do not capture enough of the finer-scale variability that would be required to inform local decisions on siting protection and restoration projects. To close this knowledge gap, we have compiled a large georeferenced database of mangrove soil carbon measurements and developed a novel machine-learning based statistical model of the distribution of carbon density using spatially comprehensive data at a 30 m resolution. This model, which included a prior estimate of soil carbon from the global SoilGrids 250 m model, was able to capture 63% of the vertical and horizontal variability in soil organic carbon density (RMSE of 10.9 kg m⁻³). Of the local variables, total suspended sediment load and Landsat imagery were the most important variables explaining soil carbon density. Projecting this model across the global mangrove forest distribution for the year 2000 yielded an estimate of 6.4 Pg C for the top meter of soil, with an 86-729 Mg C ha⁻¹ range across all pixels. By utilizing remotely sensed mangrove forest cover change data, loss of soil carbon due to mangrove habitat loss between 2000 and 2015 was 30-122 Tg C, with >75% of this loss attributable to Indonesia, Malaysia and Myanmar. The resulting map products from this work are intended to serve nations seeking to include mangrove habitats in payment-for-ecosystem-services projects and in designing effective mangrove conservation strategies.

The Ensemble Space Weather Modeling System (eSWMS): Status, Capabilities and Challenges

NASA Astrophysics Data System (ADS)

Fry, C. D.; Eccles, J. V.; Reich, J. P.

2010-12-01

Marking a milestone in space weather forecasting, the Space Weather Modeling System (SWMS) successfully completed validation testing in advance of operational testing at the Air Force Weather Agency's primary space weather production center. This is the first coupling of stand-alone, physics-based space weather models that are currently in operations at AFWA supporting the warfighter. Significant development effort went into ensuring the component models were portable and scalable while maintaining consistent results across diverse high-performance computing platforms. Coupling was accomplished under the Earth System Modeling Framework (ESMF). The coupled space weather models are the Hakamada-Akasofu-Fry version 2 (HAFv2) solar wind model and GAIM1, the ionospheric forecast component of the Global Assimilation of Ionospheric Measurements (GAIM) model. The SWMS was developed by team members from AFWA, Explorations Physics International, Inc. (EXPI) and Space Environment Corporation (SEC). The successful development of the SWMS provides new capabilities beyond enabling extended lead-time, data-driven ionospheric forecasts.
These include ingesting diverse data sets at higher resolution, incorporating denser computational grids at finer time steps, and performing probability-based ensemble forecasts. Work of the SWMS development team now focuses on implementing the ensemble-based probability forecast capability by feeding multiple scenarios of 5 days of solar wind forecasts to the GAIM1 model based on the variation of the input fields to the HAFv2 model. The ensemble SWMS (eSWMS) will provide the most likely space weather scenario with uncertainty estimates for important forecast fields. The eSWMS will allow DoD mission planners to consider the effects of space weather on their systems with more advance warning than is currently possible. The payoff is enhanced, tailored support to the warfighter with improved capabilities, such as point-to-point HF propagation forecasts, single-frequency GPS error corrections, and high-cadence, high-resolution Space Situational Awareness (SSA) products. We present the current status of the eSWMS, its capabilities, limitations and path of transition to operational use.

Simulating the propagation of sulphur dioxide emissions from the fissure eruption in the Holuhraun lava field (Iceland) with the EURAD-IM

NASA Astrophysics Data System (ADS)

Fröhlich, Luise; Franke, Philipp; Friese, Elmar; Haas, Sarah; Lange, Anne Caroline; Elbern, Hendrik

2015-04-01

In the emergency case of a volcanic eruption, accurate forecasts of the transport of ash and gas emissions are crucial for health protection and aviation safety. In the framework of the Earth System Knowledge Platform (ESKP), near real-time forecasts of ash and SO2 dispersion emitted by active volcanoes are simulated by the European Air pollution Dispersion Inverse Model (EURAD-IM). The model is driven by the Weather Research and Forecasting Model (WRF) and includes detailed gas-phase and particle dynamics modules, which allow for quantitative estimates of measured volcano releases. Earlier simulations, for example related to the Eyjafjallajökull outbreak in 2010, were in good agreement with measurement records of particle number and SO2 at several European stations. At the end of August 2014, a fissure eruption began in Iceland in the Holuhraun lava field to the north-east of the Bardarbunga volcanic system. In contrast to the explosive eruption of Eyjafjallajökull in 2010, the Holuhraun eruption is rather effusive, with a large and continuous flow of lava and a significant release of sulphur dioxide (SO2) in the lower troposphere, while ash emissions are insignificant. Since the Holuhraun fissure eruption started, daily forecasts of SO2 dispersion have been produced for the European region (15 km horizontal resolution grid) and published on our website (http://apps.fz-juelich.de/iek-8/RIU/vorhersage_node.php). To simulate the transport of volcanic emissions, realistic source terms like mass release rates of ash and SO2 or plume heights are required. Since no representative measurements are currently available for the simulations, rough qualitative assumptions, based on reports from the Icelandic Met Office (IMO), are used.
However, frequent comparisons with satellite observations show that the actual propagation of the volcanic emissions is generally well reflected by the model. In the middle of September 2014, several European measurement sites recorded extremely high SO2 concentrations at ground level, which were predicted quite accurately in advance by the EURAD-IM. Furthermore, the simulations indicate that the unusually high SO2 values are due to the transport of sulphur-dioxide-rich air from the Bardarbunga towards continental Europe. Presently, SO2 dispersion forecasts are also conducted on a finer spatial resolution grid (1 km) for the Icelandic region. These simulations will be validated against measurements from different observation sites in Iceland.

EXAMINATION OF MODEL PREDICTIONS AT DIFFERENT HORIZONTAL GRID RESOLUTIONS

EPA Science Inventory

While fluctuations in meteorological and air quality variables occur on a continuum of spatial scales, the horizontal grid spacing of coupled meteorological and photochemical models sets a lower limit on the spatial scales that they can resolve. However, both computational costs ...

MODIS Snow-Cover Products

NASA Technical Reports Server (NTRS)

Hall, Dorothy K.; Riggs, George A.; Salomonson, Vincent V.; DiGirolamo, Nicolo; Bayr, Klaus J.; Houser, Paul (Technical Monitor)

2001-01-01

On December 18, 1999, the Terra satellite was launched with a complement of five instruments including the Moderate Resolution Imaging Spectroradiometer (MODIS). Many geophysical products are derived from MODIS data, including global snow-cover products. These products have been available through the National Snow and Ice Data Center (NSIDC) Distributed Active Archive Center (DAAC) since September 13, 2000. MODIS snow-cover products represent a potential improvement over the currently available operational products, mainly because the MODIS products are global and 500-m resolution, and have the capability to separate most snow and clouds.
Also, the snow-mapping algorithms are automated, which means that a consistent data set is generated for long-term climate studies that require snow-cover information. Extensive quality assurance (QA) information is stored with the product. The snow product suite starts with a 500-m resolution swath snow-cover map, which is gridded to the Integerized Sinusoidal Grid to produce daily and eight-day composite tile products. The sequence then proceeds to a climate-modeling grid product at 5-km spatial resolution, with both daily and eight-day composite products. A case study from March 6, 2000, involving MODIS data and field and aircraft measurements, is presented. Near-term enhancements include daily snow albedo and fractional snow cover.

Nested mesoscale-to-LES modeling of the atmospheric boundary layer in the presence of under-resolved convective structures

DOE PAGES

Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.; ...

2017-07-06

Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s⁻¹ in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size.
Furthermore, based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.

Grid-Independent Large-Eddy Simulation in Turbulent Channel Flow using Three-Dimensional Explicit Filtering

NASA Technical Reports Server (NTRS)

Gullbrand, Jessica

2003-01-01

In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations.
The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.
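The distinction between implicit and explicit filtering that underpins the 'true' LES approach can be demonstrated with a one-dimensional top-hat filter whose physical width is held fixed while the grid is refined. The sketch below is a hedged illustration of that idea only; the sine test field, the periodic extension and the filter width of 0.5 are arbitrary choices, not details of the cited channel-flow simulations.

    import numpy as np

    def box_filter(u, dx, width):
        """Top-hat filter of fixed physical width applied to a periodic 1-D field."""
        n = max(1, int(round(width / dx)))
        kernel = np.ones(n) / n
        # Tile the field to emulate periodicity, filter, then keep the middle copy.
        return np.convolve(np.tile(u, 3), kernel, mode="same")[len(u):2 * len(u)]

    L, width = 2 * np.pi, 0.5                 # domain length and (fixed) filter width
    for npts in (128, 256, 512):              # successively refined grids
        x = np.linspace(0.0, L, npts, endpoint=False)
        u = np.sin(x) + 0.3 * np.sin(8 * x)
        ubar = box_filter(u, L / npts, width)
        # The filtered amplitude stabilises as the grid is refined,
        # because the filter width (not the grid spacing) sets the cutoff.
        print(npts, ubar.max())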
Navigating Earthquake Physics with High-Resolution Array Back-Projection

NASA Astrophysics Data System (ADS)

Meng, Lingsen

Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method taking advantage of higher-order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The high-frequency subevents are located at the edges of geodetic slip regions, which are correlated to the stopping phases associated with rupture speed reduction when the earthquake arrests.
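A generic narrowband MUSIC direction-of-arrival estimate conveys the core of the technique referenced in this thesis abstract: project steering vectors onto the noise subspace of the sample covariance and locate pseudospectrum peaks. The sketch below is the textbook uniform-linear-array version, not the multitaper, reference-window adaptation developed for seismic back-projection; the array geometry, source angles and noise level are assumptions.

    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(4)
    m, snapshots = 16, 200                 # sensors in a uniform linear array, time snapshots
    spacing = 0.5                          # element spacing in wavelengths
    true_doas = np.deg2rad([-20.0, 25.0])  # directions of arrival of two sources

    def steering(theta):
        return np.exp(2j * np.pi * spacing * np.arange(m)[:, None] * np.sin(theta))

    signals = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
    noise = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
    X = steering(true_doas) @ signals + noise

    R = X @ X.conj().T / snapshots         # sample covariance matrix
    _, V = np.linalg.eigh(R)               # eigenvalues ascending, eigenvectors in columns
    En = V[:, : m - 2]                     # noise subspace (assumes the source count, 2, is known)

    scan = np.deg2rad(np.linspace(-90.0, 90.0, 1441))
    A = steering(scan)
    pseudo = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)   # MUSIC pseudospectrum

    peaks, _ = find_peaks(pseudo)
    best = peaks[np.argsort(pseudo[peaks])[-2:]]
    print(np.sort(np.rad2deg(scan[best])))  # should recover angles near -20 and 25 degrees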
Improved human observer performance in digital reconstructed radiograph verification in head and neck cancer radiotherapy.

PubMed

Sturgeon, Jared D; Cox, John A; Mayo, Lauren L; Gunn, G Brandon; Zhang, Lifei; Balter, Peter A; Dong, Lei; Awan, Musaddiq; Kocak-Uzel, Esengul; Mohamed, Abdallah Sherif Radwan; Rosenthal, David I; Fuller, Clifton David

2015-10-01

Digitally reconstructed radiographs (DRRs) are routinely used as an a priori reference for setup correction in radiotherapy. The spatial resolution of DRRs may be improved to reduce setup error in fractionated radiotherapy treatment protocols. The influence of finer CT slice thickness reconstruction (STR) and the resultant higher-resolution DRRs on physician setup accuracy was prospectively evaluated. Four head and neck patient CT-simulation images were acquired and used to create DRR cohorts by varying STRs at 0.5, 1, 2, 2.5, and 3 mm. DRRs were displaced relative to a fixed isocenter using 0-5 mm random shifts in the three cardinal axes. Physician observers reviewed DRRs of varying STRs and displacements and then aligned reference and test DRRs, replicating the daily kV imaging workflow. A total of 1,064 images were reviewed by four blinded physicians. Observer errors were analyzed using nonparametric statistics (Friedman's test) to determine whether STR cohorts had detectably different displacement profiles. Post hoc bootstrap resampling was applied to evaluate potential generalizability. The observer-based trial revealed a statistically significant difference between cohort means for observer displacement vector error ([Formula: see text]) and for [Formula: see text]-axis [Formula: see text]. Bootstrap analysis suggests a 15% gain in isocenter translational setup error with reduction of STR from 3 mm to [Formula: see text]2 mm, though interobserver variance was a larger feature than STR-associated measurement variance. Higher-resolution DRRs generated using finer CT scan STR resulted in improved observer performance at shift detection and could decrease operator-dependent geometric error. Ideally, CT STRs [Formula: see text]2 mm should be utilized for DRR generation in the head and neck.

SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

PubMed

Yuan, Y; Duan, J; Popple, R; Brezovich, I

2012-06-01

To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between the calculation and the measurement were evaluated by the percentage error for ion chamber dose and by the γ > 1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm is closer to the ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses are within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ > 1 failure rate is significantly reduced (within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ > 1 failure rates are typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements of dose calculation accuracy with finer dose grids were observed in film dosimetry of 11 patients and in ion chamber measurements for 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ > 1 failure rate within 5% can be achieved for AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
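The γ > 1 failure-rate metric used in the record above combines a dose-difference tolerance and a distance-to-agreement tolerance. The one-dimensional sketch below implements a global gamma test with the usual 3%/3 mm criteria on synthetic profiles; the profile shapes and the brute-force search over all points are illustrative simplifications of clinical gamma analysis.

    import numpy as np

    def gamma_index(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
        """Global 1-D gamma (3%/3 mm by default): for each reference point, search all
        measured points for the minimum combined dose/distance metric."""
        dmax = ref.max()
        gam = np.empty_like(ref)
        for i, (xi, di) in enumerate(zip(x, ref)):
            dd = (meas - di) / (dose_tol * dmax)   # dose difference, normalised
            dx = (x - xi) / dist_tol               # distance to agreement, normalised
            gam[i] = np.sqrt(dx**2 + dd**2).min()
        return gam

    x = np.linspace(-50, 50, 201)                      # positions in mm
    ref = np.exp(-(x / 20.0) ** 2)                     # "planned" profile
    meas = np.exp(-((x - 1.0) / 20.5) ** 2) * 1.01     # "measured" profile, slightly shifted and scaled

    g = gamma_index(ref, meas, x)
    print("gamma>1 failure rate: %.1f%%" % (100 * (g > 1).mean()))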
Proposal for grid computing for nuclear applications

DOE Office of Scientific and Technical Information (OSTI.GOV)

Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

2014-02-12

The use of computer clusters for computational sciences, including computational physics, is vital as it provides computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to run the application and speed up the computing process.

Integrating bathymetric and topographic data

NASA Astrophysics Data System (ADS)

Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

2017-11-01

The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high-resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to analyze the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP).
Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
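A hold-out RMSE comparison of interpolation methods, in the spirit of the assessment above, can be sketched with SciPy's radial basis functions (which include multiquadric and thin-plate variants) plus a hand-rolled inverse-distance interpolator; Kriging is omitted here because it needs an extra package. The synthetic surface, the 250/50 train/validation split and the power-2 IDW exponent are assumptions for the example, not the study's configuration.

    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(5)
    x, y = rng.uniform(0, 10, 300), rng.uniform(0, 10, 300)
    z = np.sin(x) * np.cos(y) - 0.002 * (x**2 + y**2)      # synthetic depth/elevation surface

    train, test = slice(0, 250), slice(250, None)           # hold out 50 points for validation

    def rmse(pred, obs):
        return float(np.sqrt(np.mean((pred - obs) ** 2)))

    # Radial-basis-function interpolants: multiquadric and thin-plate spline.
    for name in ("multiquadric", "thin_plate"):
        f = Rbf(x[train], y[train], z[train], function=name)
        print(name, rmse(f(x[test], y[test]), z[test]))

    # Simple inverse-distance-to-a-power interpolator for comparison.
    d = np.hypot(x[test][:, None] - x[train][None, :],
                 y[test][:, None] - y[train][None, :])
    w = 1.0 / np.maximum(d, 1e-9) ** 2
    print("idw", rmse((w * z[train]).sum(axis=1) / w.sum(axis=1), z[test]))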
Global convergence of inexact Newton methods for transonic flow

NASA Technical Reports Server (NTRS)

Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

1990-01-01

In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.
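The local damping strategy mentioned in this record, backtracking on the Newton step until the residual norm decreases, combined with an inexact (Krylov-based) linear solve, can be shown on a tiny nonlinear system. The sketch below stands in for the full-potential solver with a two-equation toy problem; the system, the halving line search and the tolerances are assumptions, and a coarse-grid continuation step would simply replace the poor initial guess with an interpolated coarse-grid solution.

    import numpy as np
    from scipy.sparse.linalg import gmres

    def F(u):
        # A small nonlinear test system standing in for the discretised flow equations.
        return np.array([u[0]**2 + u[1] - 3.0,
                         u[0] + u[1]**2 - 5.0])

    def J(u):
        # Jacobian of F.
        return np.array([[2 * u[0], 1.0],
                         [1.0, 2 * u[1]]])

    u = np.array([0.1, 0.1])                       # a deliberately poor initial guess
    for _ in range(30):
        step, _ = gmres(J(u), -F(u))               # inexact Newton step via GMRES
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(u + lam * step)) >= np.linalg.norm(F(u)):
            lam *= 0.5                             # local damping: backtrack until the residual drops
        u = u + lam * step
        if np.linalg.norm(F(u)) < 1e-10:
            break
    print(u, np.linalg.norm(F(u)))                 # converges to a root near (1, 2)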
  484. New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica

    NASA Technical Reports Server (NTRS)

    Scheinert, M.; Ferraccioli, F.; Schwabe, J.; Bell, R.; Studinger, M.; Damaske, D.; Jokat, W.; Aleshkova, N.; Jordan, T.; Leitchenkov, G.

    2016-01-01

    Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, air-borne and ship-borne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million sq km, which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated leveling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica.

  485. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag correlation based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected-component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but with a large increase in computational cost.
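    Bubble detection by connected-component labeling, as described in this record, can be illustrated with a short, generic sketch. The voidage threshold, minimum bubble size and field name below are assumptions chosen for illustration, not the authors' implementation.

        import numpy as np
        from scipy import ndimage

        def detect_bubbles(void_fraction, threshold=0.8, min_cells=4):
            # Mark cells that are nearly pure gas as candidate bubble cells.
            mask = void_fraction > threshold
            # Connected-component labeling groups touching bubble cells into bubbles.
            labels, n_labels = ndimage.label(mask)
            bubbles = []
            for lab in range(1, n_labels + 1):
                size = int(np.sum(labels == lab))
                if size >= min_cells:  # discard tiny spurious regions
                    centroid = ndimage.center_of_mass(mask, labels, lab)
                    bubbles.append({"label": lab, "cells": size, "centroid": centroid})
            return bubbles

    An equivalent-diameter bubble size can then be derived from each region's cell count and the grid spacing.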
  486. New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica

    PubMed

    Scheinert, M.; Ferraccioli, F.; Schwabe, J.; Bell, R.; Studinger, M.; Damaske, D.; Jokat, W.; Aleshkova, N.; Jordan, T.; Leitchenkov, G.; Blankenship, D. D.; Damiani, T. M.; Young, D.; Cochran, J. R.; Richter, T. D.

    2016-01-28

    Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, airborne and shipborne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million km2, which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated levelling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica.

  487. The Solomon Sea eddy activity from a 1/36° regional model

    NASA Astrophysics Data System (ADS)

    Djath, Bughsin; Babonneix, Antoine; Gourdeau, Lionel; Marin, Frédéric; Verron, Jacques

    2013-04-01

    In the South West Pacific, the Solomon Sea exhibits the highest levels of eddy kinetic energy, but relatively little is known about the eddy activity in this region. This sea is directly influenced by a monsoonal regime and ENSO variability, and occupies a strategic location, as the Western Boundary Currents exiting it are known to feed the warm pool and to be the principal sources of the Equatorial Undercurrent. During their transit through the Solomon Sea, mesoscale eddies are suspected to interact with and influence these water masses. The goal of this study is to give an exhaustive description of this eddy activity. A dual approach, based on both altimetric data and high-resolution modeling, has been chosen for this purpose. First, an algorithm is applied to nearly 20 years of 1/3° x 1/3° gridded SLA maps (provided by the AVISO project). This allows eddies to be automatically detected and tracked, providing some basic eddy properties. The preliminary results show that two main, distinct types of eddies are detected. Eddies in the north-eastern part show variability associated with the mean structure, while those in the southern part are associated with generation/propagation processes. However, the resolution of the AVISO dataset is not very well suited to observing fine structures and to dealing with the numerous islands bordering the Solomon Sea. For this reason, we confront these observations with the outputs of a 1/36° resolution realistic model of the Solomon Sea. The high-resolution (1/36°) numerical model permits very fine-scale features, such as eddies and filaments, to be reproduced. The model is two-way embedded in a 1/12° regional model, which is itself one-way embedded in the DRAKKAR 1/12° global model. The NEMO code is used, together with the AGRIF software for model nesting. Validation is carried out by comparison with AVISO observations and available in situ data. In preparation for the future wide-swath altimetric SWOT mission, which is expected to provide observations of small-scale sea level variability, spectral analysis is performed on the 1/36° resolution realistic model in order to characterize the finer-scale signals in the Solomon Sea region. The preliminary SSH spectral analysis shows a k-4 slope, in good agreement with surface quasigeostrophic (SQG) turbulence theory. Keywords: Solomon Sea; mesoscale activity; eddy detection, tracking and properties; wavenumber spectrum.
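    The SSH wavenumber-spectrum analysis mentioned in this record amounts to Fourier-transforming sea surface height along a transect and fitting a power-law slope in log-log space; a k-4 spectrum gives a fitted slope of about -4. The following sketch is generic: the transect array, grid spacing and fitting band are illustrative assumptions, not the study's processing chain.

        import numpy as np

        def ssh_spectral_slope(ssh, dx_km, k_band=(1e-2, 1e-1)):
            # Remove the mean and taper the transect to reduce spectral leakage.
            ssh = (ssh - np.mean(ssh)) * np.hanning(ssh.size)
            power = np.abs(np.fft.rfft(ssh)) ** 2          # power spectrum
            k = np.fft.rfftfreq(ssh.size, d=dx_km)         # wavenumber [cycles/km]
            band = (k >= k_band[0]) & (k <= k_band[1])
            # Least-squares fit of log10(power) against log10(k) over the chosen band.
            slope, _ = np.polyfit(np.log10(k[band]), np.log10(power[band]), 1)
            return slope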
  488. Exploring a multi-resolution approach using AMIP simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using a variable-resolution (VR) mesh with local refinement at 30 km over North America and South America, embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes developed only near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR configuration to replicate the large-scale moisture transport simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through the large-scale circulation, but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
  489. A Study of Grid Resolution, Transition and Turbulence Model Using the Transonic Simple Straked Delta Wing

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2001-01-01

    Three-dimensional transonic flow over a delta wing is investigated using several turbulence models. The performance of linear eddy viscosity models and an explicit algebraic stress model is assessed at the start of vortex flow, and the results are compared with experimental data. To assess the effect of transition location, computations that either fix transition aft of the leading edge or are fully turbulent are performed. These computations show that grid resolution, transition location and turbulence model significantly affect the 3D flowfield.

  490. Optical instruments synergy in determination of optical depth of thin clouds

    NASA Astrophysics Data System (ADS)

    Viviana Vlăduţescu, Daniela; Schwartz, Stephen E.; Huang, Dong

    2018-04-01

    Optically thin clouds have a strong radiative effect and need to be represented accurately in climate models. Cloud optical depth of thin clouds was retrieved using high-resolution digital photography, lidar, and a radiative transfer model. The Doppler lidar was operated at 1.5 μm, minimizing return from Rayleigh scattering and emphasizing return from aerosols and clouds. This approach examined cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opening new avenues for examination of cloud structure and evolution.

  491. Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments

    DTIC Science & Technology

    2015-01-01

    ...each method on a 2.53 GHz Intel i5 laptop.
    All our algorithms are hand-optimized, implemented in Java and single threaded. To determine which algorithm... approach would be to label all the pixels in the image with an x, y, z point. However, the angular resolution of the camera is finer than that of the... edge criterion. That is, each edge is either present or absent. In [42], edge existence is further screened by a fixed threshold for angular...

  492. DEPSCOR06: A Dispersed Monopropellant Microslug Approach for Discrete Satellite Micropropulsion

    DTIC Science & Technology

    2010-08-01

    ...microfluidics, a controlled slug formation process represents a virtual 'self-valving' mechanism which affords finer resolution than a micro-valve for a... microfluidic flow system to study the effects of geometry and material properties on the microslug formation phenomena. The inspiration for this work is derived... the-shelf microfluidic chip, manufactured by Micralyne, Inc. was used as shown in Figure A-1.1. Figure 1.A.1: Geometry of the Micralyne 50 µm x 20 µm...

  493. Optical Instruments Synergy in Determination of Optical Depth of Thin Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladutescu, Daniela V.; Schwartz, Stephen E.

    Optically thin clouds have a strong radiative effect and need to be represented accurately in climate models. Cloud optical depth of thin clouds was retrieved using high-resolution digital photography, lidar, and a radiative transfer model. The Doppler lidar was operated at 1.5 μm, minimizing return from Rayleigh scattering and emphasizing return from aerosols and clouds. This approach examined cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opening new avenues for examination of cloud structure and evolution.

  494. Scalar excursions in large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matheou, Georgios; Dimotakis, Paul E.

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions.
    The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics provides support for the main hypothesis of the current study: that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.

  495. Scalar excursions in large-eddy simulations

    DOE PAGES

    Matheou, Georgios; Dimotakis, Paul E.

    2016-08-31

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics provides support for the main hypothesis of the current study: that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy.
    Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursions increase as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
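    The two records above quantify excursions through two statistics: the maximum excursion outside the boundary values and the volume fraction of out-of-bounds grid points. A minimal, generic Python sketch of such a diagnostic (the field name and the bounds are assumptions, not the authors' code) is:

        import numpy as np

        def scalar_excursion_stats(scalar, lower=0.0, upper=1.0):
            # Magnitude by which values undershoot or overshoot the physical bounds.
            undershoot = np.clip(lower - scalar, 0.0, None)
            overshoot = np.clip(scalar - upper, 0.0, None)
            max_excursion = float(max(undershoot.max(), overshoot.max()))
            # Fraction of grid points lying outside [lower, upper].
            volume_fraction = float(np.mean((scalar < lower) | (scalar > upper)))
            return max_excursion, volume_fraction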
  496. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration

    2016-10-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and the ATLAS High Level Trigger farm between data-taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  497. Analysis of Long Wave Infrared (LWIR) Soil Data to Predict Reflectance Response

    DTIC Science & Technology

    2009-08-01

    [The indexed excerpt consists of fragments of a soil characterization table listing Aridisol and Entisol soil types (red-orange sandy soil, grey calcareous silty soil, sandy loam/alluvium, and others) together with their smectite content and the fraction of grains finer than 20 micrometers and 5 micrometers.]

  498. Middle atmosphere simulated with high vertical and horizontal resolution versions of a GCM: Improvements in the cold pole bias and generation of a QBO-like oscillation in the tropics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, K.; Wilson, R. J.; Hemler, R. S.

    1999-11-15

    The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of approximately 35 km and another with approximately 100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from about 70°C in a version with approximately 300 km grid spacing to less than 10°C in the approximately 35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere of approximately 1.5 km, the lower-stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and formation of strong vertical shear layers.
    This appears to be a spontaneous, internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.

  499. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

    TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since a Geographical Information System (GIS) is designed to integrate and manipulate data covering a geographic region, using a GIS to implement simulations of complex, physically based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every cell to be computed. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of the flow dynamics, which requires elevation and texture, usually from a remote sensor image. Data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data. Of the several existing methods for estimating slope and curvature from elevation, the selected one is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, simulation results were compared to derivatives estimated by a simplified version of the method (using only 4 neighbor cells), and the full method proved to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures of varying resolutions for efficient display. The API supports different resolutions by applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original, and by using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest-neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity character of visualization allows use of the nearest-neighbor method for texture as well. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers, and its interpolation performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
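    Two of the numerical building blocks described in this record, the 8-neighbor third-order finite-difference slope estimate and the bilinear interpolation used when data are requested at a finer resolution, can be sketched in a few lines of Python. This is an illustrative, hypothetical implementation, not the TITAN2D/GRASS API itself.

        import numpy as np

        def slope_components(z, dx):
            # Third-order finite-difference derivatives at interior cells, computed as a
            # 1-2-1 weighted sum of the 8 neighboring elevation values (Horn's method).
            dzdx = np.zeros_like(z)
            dzdy = np.zeros_like(z)
            dzdx[1:-1, 1:-1] = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:])
                                - (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8 * dx)
            dzdy[1:-1, 1:-1] = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:])
                                - (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8 * dx)
            return dzdx, dzdy

        def bilinear(z, row, col):
            # Distance-weighted average of the four surrounding cell-center values
            # at fractional grid coordinates (row, col).
            i, j = int(np.floor(row)), int(np.floor(col))
            fr, fc = row - i, col - j
            return ((1 - fr) * (1 - fc) * z[i, j] + (1 - fr) * fc * z[i, j + 1]
                    + fr * (1 - fc) * z[i + 1, j] + fr * fc * z[i + 1, j + 1])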
  500. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints, that defined NASA's development of a 111-grid, 16-million-point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.