The FORBIO Climate data set for climate analyses
NASA Astrophysics Data System (ADS)
Delvaux, C.; Journée, M.; Bertrand, C.
2015-06-01
In the framework of the interdisciplinary FORBIO Climate research project, the Royal Meteorological Institute of Belgium is in charge of providing high-resolution gridded past climate data (i.e. temperature and precipitation). This climate data set will be linked to measurements on seedlings, saplings and mature trees to assess the effects of climate variation on tree performance. This paper explains how the gridded daily temperature (minimum and maximum) data set was generated from a consistent station network between 1980 and 2013. After station selection, data quality control procedures were developed and applied to the station records to ensure that only valid measurements are used in the gridding process. Thereafter, the set of unevenly distributed validated temperature data was interpolated onto a 4 km × 4 km regular grid over Belgium. The performance of different interpolation methods was assessed; kriging with external drift, using the correlation between temperature and altitude, gave the most relevant results.
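As a concrete illustration of the interpolation step described above, the sketch below implements a minimal kriging-with-external-drift estimator in Python/NumPy, with station altitude as the drift variable. The exponential covariance model and its parameters are illustrative assumptions, not the configuration used by the Royal Meteorological Institute.

```python
import numpy as np

def kriging_external_drift(xy_obs, t_obs, z_obs, xy_tgt, z_tgt,
                           sill=4.0, rng_km=80.0, nugget=0.1):
    """Minimal kriging-with-external-drift (KED) temperature estimator.

    xy_obs : (n, 2) station coordinates [km]
    t_obs  : (n,)   observed daily temperatures
    z_obs  : (n,)   station altitudes (the external drift variable)
    xy_tgt : (m, 2) grid-cell centre coordinates [km]
    z_tgt  : (m,)   grid-cell altitudes
    Covariance parameters are placeholders, not fitted values.
    """
    def cov(h):  # exponential covariance model with a nugget on the diagonal
        return sill * np.exp(-h / rng_km) + nugget * (h == 0)

    n = len(t_obs)
    d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1)
    # KED system: unbiasedness constraints on a constant and on altitude
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = cov(d_oo)
    A[:n, n] = A[n, :n] = 1.0
    A[:n, n + 1] = A[n + 1, :n] = z_obs
    est = np.empty(len(xy_tgt))
    for k in range(len(xy_tgt)):
        d0 = np.linalg.norm(xy_obs - xy_tgt[k], axis=1)
        b = np.concatenate([cov(d0), [1.0, z_tgt[k]]])
        w = np.linalg.solve(A, b)[:n]
        est[k] = w @ t_obs
    return est
```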
Paciorek, Christopher J; Goring, Simon J; Thurman, Andrew L; Cogbill, Charles V; Williams, John W; Mladenoff, David J; Peters, Jody A; Zhu, Jun; McLachlan, Jason S
2016-01-01
We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.
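The paragraph above describes a spatial Bayesian model fit to settlement-era counts; as a much simpler, hedged illustration of how taxon counts in a single grid cell translate into composition estimates with uncertainty, here is a per-cell Dirichlet-multinomial sketch in Python. It omits the spatial smoothing and the township-level aggregation that the actual product relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

def cell_composition(counts, alpha0=0.5, ndraw=2000):
    """Posterior composition for one grid cell under a symmetric Dirichlet
    prior (alpha0 is an assumed value). Returns posterior mean proportions
    and 95% credible intervals per taxon. Note: the published product uses
    a spatial Bayesian model that also borrows strength from neighbouring
    cells; this sketch is per-cell only.
    """
    draws = rng.dirichlet(counts + alpha0, size=ndraw)
    mean = draws.mean(axis=0)
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return mean, lo, hi

# Hypothetical surveyor counts for four taxa in one 8 km cell
mean, lo, hi = cell_composition(np.array([12, 7, 3, 1]))
```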
Saranya, K R L; Reddy, C Sudhakar; Rao, P V V Prasada; Jha, C S
2014-05-01
Analyzing the spatial extent and distribution of forest fires is essential for sustainable forest resource management. No comprehensive data exist on forest fires on a regular basis in the Biosphere Reserves of India. The present work has been carried out to locate and estimate the spatial extent of forest burnt areas using Resourcesat-1 data and fire frequency covering decadal fire events (2004-2013) in the Similipal Biosphere Reserve. An anomalously large forest burnt area of 1,014.7 km² was recorded in 2009. There was inconsistency in fire susceptibility across the different vegetation types. The spatial analysis of burnt area shows that 34.2% of the dry deciduous forest area was affected by fires in 2013, followed by tree savannah, shrub savannah, and grasslands. The analysis based on decadal time-scale satellite data reveals that an area of 2,175.9 km² (59.6% of total vegetation cover) has been affected by varying frequencies of forest fires. The fire density pattern indicates a low count of burnt-area patches in 2013, estimated at 1,017, and a high count of 1,916 in 2004. An estimate of fire risk area over the decade identifies 12.2 km² as experiencing fire damage every year. Summing the fire frequency data across the grid cells (each 1 km²) indicates that 1,211 cells (26%) have very high disturbance regimes due to repeated fires in all 10 years, followed by 711 cells burnt in 9 years, 418 in 8 years, and 382 in 7 years. The spatial database offers excellent opportunities to understand the ecological impact of fires on biodiversity and is helpful in formulating conservation action plans.
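A minimal sketch of the decadal fire-frequency summation described above: stack annual 1 km burnt-area masks, count how many years each cell burnt, and tally the cells in each frequency class. The input masks here are synthetic; the real analysis derives them from Resourcesat-1 imagery.

```python
import numpy as np

# Synthetic stack of annual burnt-area masks on a 1 km grid (True = burnt),
# one layer per year 2004-2013; values are random placeholders.
burnt = np.random.default_rng(1).random((10, 200, 200)) < 0.2

fire_frequency = burnt.sum(axis=0)                     # fires per cell, 0-10
area_ever_burnt_km2 = int((fire_frequency > 0).sum())  # each cell = 1 km2
cells_burnt_all_years = int((fire_frequency == 10).sum())
freq_class_counts = np.bincount(fire_frequency.ravel(), minlength=11)
```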
Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.
2014-01-01
Background: Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity-rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings: We used Shuttle Radar Topography Mission digital elevation values to evaluate whether the regular sampling arrangement in standard RAPELD (rapid assessments ("RAP") over the long term (LTER ["PELD" in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km² / 3,232,500 ha (1293 sample areas of 25 km² each) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation and root-mean-square error were used to measure sample representativeness, similarity and accuracy, respectively. Trends and thresholds of these responses in relation to sample size and standard deviation were modeled using Generalized Additive Models and conditional inference trees, respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance: Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km²) of the Brazilian Amazon. PMID:25170894
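The representativeness test described above can be sketched as follows: draw a regular arrangement of sample points from an elevation grid and compare the sampled distribution with the full distribution using a Kolmogorov-Smirnov test. The elevation field below is synthetic; the study used SRTM data and additional similarity and accuracy metrics.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# Synthetic stand-in for one 5 km x 5 km sample area of SRTM elevations (90 m cells)
elev = rng.normal(120.0, 35.0, size=(56, 56))

def regular_sample(grid, n_side):
    """Values at an n_side x n_side regular arrangement of sample points."""
    rows = np.linspace(0, grid.shape[0] - 1, n_side).round().astype(int)
    cols = np.linspace(0, grid.shape[1] - 1, n_side).round().astype(int)
    return grid[np.ix_(rows, cols)].ravel()

for n_side in (2, 4, 6, 8, 11):                 # 4 ... 121 sample points
    sample = regular_sample(elev, n_side)
    stat, p = ks_2samp(sample, elev.ravel())
    print(f"n = {sample.size:4d}   KS D = {stat:.3f}   p = {p:.3f}")
```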
Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core
NASA Astrophysics Data System (ADS)
Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey
2017-05-01
SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on the unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and the reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using the reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25% reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of the solution in the region of interest.
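To make the reduced latitude-longitude grid concrete, the sketch below builds a simple reduced grid in which the number of longitude points per row shrinks roughly with cos(latitude) and reports the saving relative to a regular grid. The cosine rule and the minimum row length are assumptions for illustration only, not the specific reduction rule used in SL-AV20.

```python
import numpy as np

def reduced_lonlat_grid(nlat=180, nlon_eq=360, nlon_min=16):
    """Latitude rows and the number of longitude points per row for a simple
    reduced lat-lon grid (thinning ~ cos(lat); assumed rule)."""
    lats = np.linspace(-90 + 90.0 / nlat, 90 - 90.0 / nlat, nlat)
    nlon = np.maximum(nlon_min,
                      (nlon_eq * np.cos(np.deg2rad(lats))).astype(int))
    return lats, nlon

lats, nlon = reduced_lonlat_grid()
saving = 1.0 - nlon.sum() / (len(lats) * 360)
print(f"grid points saved relative to the regular grid: {saving:.1%}")
```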
AWIPS grid 212, Regional - CONUS double resolution (Lambert Conformal, 40 km), NEMS Non-hydrostatic Multiscale Model on the B grid; AWIPS grid 132 - double resolution (Lambert Conformal, 16 km), NEMS Non-hydrostatic Multiscale Model on the B grid.
Schnek: A C++ library for the development of parallel simulation codes on regular grids
NASA Astrophysics Data System (ADS)
Schmitz, Holger
2018-05-01
A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
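Schnek itself is a C++ library, so the snippet below does not use its API; it is a language-agnostic illustration (written in Python with mpi4py) of the ghost-cell exchange idea described above: each process owns a chunk of a regular grid plus one ghost cell per side and swaps boundary values with its neighbours.

```python
import numpy as np
from mpi4py import MPI

# Conceptual 1-D ghost-cell exchange (not Schnek's interface).
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nlocal = 8
u = np.full(nlocal + 2, float(rank))            # interior cells are u[1:-1]

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the rightmost interior cell to the right neighbour and receive the
# left ghost cell, then do the mirror-image exchange for the other side.
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
```

Run with, for example, mpiexec -n 4 python ghost_exchange.py; after the two exchanges each rank's ghost cells hold its neighbours' boundary values.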
NASA Astrophysics Data System (ADS)
Kabas, T.; Leuprecht, A.; Bichler, C.; Kirchengast, G.
2010-12-01
South-eastern Austria is characterized by a rich variety of weather and climate patterns. For this reason, the county of Feldbach was selected by the Wegener Center as a focus area for a pioneering observation experiment at very high resolution: the WegenerNet climate station network (in brief WegenerNet) comprises 151 meteorological stations within an area of about 20 km × 15 km (~1.4 km × 1.4 km station grid). All stations measure the main parameters temperature, humidity and precipitation with 5-minute sampling. Selected further stations include measurements of wind speed and direction, complemented by soil parameters as well as air pressure and net radiation. The collected data are integrated into an automatic processing system including data transfer, quality control, product generation, and visualization. Each station is equipped with an internet-attached data logger and the measurements are transferred as binary files via GPRS to the WegenerNet server at 1-hour intervals. The incoming raw data files of measured parameters, as well as several operating values of the data logger, are stored in a relational database (PostgreSQL). Next, the raw data pass the Quality Control System (QCS), in which the data are checked for technical and physical plausibility (e.g., sensor specifications, temporal and spatial variability). Taking the data quality (quality flag) into account, the Data Product Generator (DPG) produces weather and climate data products on various temporal scales (from 5 min to annual) for single stations and regular grids. Gridded data are derived by vertical scaling and squared inverse distance interpolation (1 km × 1 km and 0.01° × 0.01° grids). Both subsystems (QCS and DPG) are implemented in the programming language Python. For application purposes the resulting data products are available via the bilingual (German, English) WegenerNet data portal (www.wegenernet.org). At this time, the main interface is still an online system in which MapServer is used to import spatial data through its database interface and to generate images in static geographic formats. However, a Java applet is additionally needed to display these images on the user's local host. Furthermore, station data are visualized as time series using the scripting language PHP. Since February 2010, the visualization of gridded data products has been a first step towards a new data portal based on OpenLayers. In this GIS framework, all geographic information (e.g., OpenStreetMap) is displayed with MapServer. Furthermore, the visualizations of all meteorological parameters are generated on the fly by a Python CGI script and transparently overlaid on the maps. Hence, station data and gridded data are visualized and further prepared for download in common data formats (csv, NetCDF). In conclusion, measured data and generated data products are provided with a data latency of less than 1-2 hours in standard operation (near real time). Following an introduction of the processing system along the lines above, the resulting data products are presented online at the WegenerNet data portal.
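A minimal counterpart to the squared-inverse-distance gridding step mentioned above, written in Python/NumPy. The vertical-scaling step (adjusting station values to the grid-cell altitude before interpolation) and the quality flags are omitted; coordinates, grid spacing and the power parameter are generic placeholders.

```python
import numpy as np

def idw_grid(xy_sta, values, grid_x, grid_y, power=2):
    """Inverse-distance-weighted interpolation of station values onto a
    regular grid (power=2 gives squared inverse distance weighting)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - xy_sta[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-6) ** power      # avoid division by zero
    return (w @ values / w.sum(axis=1)).reshape(gx.shape)

# Example: 151 hypothetical stations gridded onto a 1 km x 1 km raster
rng = np.random.default_rng(3)
xy = rng.uniform(0, [20, 15], size=(151, 2))    # station coordinates [km]
t = 15 + rng.normal(0, 1, 151)                  # e.g. 5-min temperatures
field = idw_grid(xy, t, np.arange(0.5, 20, 1.0), np.arange(0.5, 15, 1.0))
```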
NASA Astrophysics Data System (ADS)
Baker, Kirk R.; Hawkins, Andy; Kelly, James T.
2014-12-01
Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.
NPP-VIIRS DNB-based reallocating subpopulations to mercury in Urumqi city cluster, central Asia
NASA Astrophysics Data System (ADS)
Zhou, X.; Feng, X. B.; Dai, W.; Li, P.; Ju, C. Y.; Bao, Z. D.; Han, Y. L.
2017-02-01
Accurate and up-to-date assignment of population-related environmental matters onto fine grid cells in oasis cities of arid areas remains challenging. We present an approach based on the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) to reallocate population onto a regular finer surface. The population potentially exposed to mercury was reallocated onto a 0.1 km × 0.1 km reference grid in the Urumqi city cluster of China’s Xinjiang, central Asia. Monte Carlo modelling indicated that the range of 0.5 to 2.4 million people was reliable. The study highlights that the NPP-VIIRS DNB-based multi-layered, dasymetric, spatial method enhances our ability to remotely estimate the distribution and size of a target population at the street-level scale and has the potential to transform control strategies for epidemiology, public policy and other socioeconomic fields.
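The core dasymetric step, reallocating an administrative population total over grid cells in proportion to a weighting surface such as DNB radiance, can be sketched as below. The paper's multi-layered weighting and Monte Carlo uncertainty analysis are not reproduced; the radiance field and the total are synthetic.

```python
import numpy as np

def dasymetric_reallocate(total_population, weight_grid):
    """Distribute a population total over grid cells in proportion to a
    non-negative weighting surface (e.g. night-light radiance)."""
    w = np.clip(weight_grid, 0.0, None)
    return total_population * w / w.sum()

# Hypothetical 0.1 km x 0.1 km DNB radiance grid for one district
dnb = np.random.default_rng(4).gamma(2.0, 5.0, size=(300, 300))
pop_per_cell = dasymetric_reallocate(350_000, dnb)   # persons per grid cell
assert np.isclose(pop_per_cell.sum(), 350_000)
```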
PDF added value of a high resolution climate simulation for precipitation
NASA Astrophysics Data System (ADS)
Soares, Pedro M. M.; Cardoso, Rita M.
2015-04-01
General Circulation Models (GCMs) are suitable to study the global atmospheric system, its evolution and its response to changes in external forcing, namely to increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer scale features of the atmospheric flow related to complex topography, coastal processes and boundary layer processes, and higher resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs); they are widely used to downscale GCM results for many regions of the globe and are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on the comparison of their results with observations, either from weather station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties and often assuming their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs when compared with the results of the driving models is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies for different applications and processes to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, highlighting that RCMs may add value to GCM results, but improvements depend basically on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations. Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible manner to weight the PDF tails differently. In this study we apply the referred method to characterize the PDF added value of a high resolution simulation with the WRF model. Results are from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used cover rain gauge precipitation records and regular observational grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese precipitation dataset developed at 0.2° × 0.2° from observed daily rain gauge precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher resolution simulation, regarding both the full PDF and the extremes. This method shows high potential to be applied to other simulation exercises and to evaluate other variables.
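The kind of simple PDF skill score referred to above can be sketched as a weighted histogram-overlap measure: bin the daily precipitation from observations and from each simulation, reward overlap, and optionally put extra weight on the upper-tail bins. The weighting scheme below is an assumption for illustration and is not the exact score used in the study.

```python
import numpy as np

def pdf_skill(obs, mod, bins, tail_weight=1.0):
    """Weighted overlap between observed and modelled daily-precipitation
    histograms; 1 means identical PDFs, 0 means no overlap. Bins in the top
    10% of the range get weight `tail_weight` to emphasise extremes."""
    f_obs, _ = np.histogram(obs, bins=bins)
    f_mod, _ = np.histogram(mod, bins=bins)
    f_obs = f_obs / f_obs.sum()
    f_mod = f_mod / f_mod.sum()
    w = np.ones(len(bins) - 1)
    w[int(0.9 * len(w)):] = tail_weight
    return (np.sum(w * np.minimum(f_obs, f_mod)) /
            np.sum(w * np.maximum(f_obs, f_mod)))

# Compare two simulations against observations (all synthetic here)
rng = np.random.default_rng(5)
obs = rng.gamma(0.6, 8.0, 10_000)
bins = np.arange(0, 151, 1.0)
print(pdf_skill(obs, rng.gamma(0.6, 8.3, 10_000), bins, tail_weight=3.0),
      pdf_skill(obs, rng.gamma(0.9, 6.0, 10_000), bins, tail_weight=3.0))
```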
NASA Astrophysics Data System (ADS)
Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.
What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O (1-30) km near-shore and O (30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve well both types of currents. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a 1-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure has been applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and to solutions for only the inner region at high resolution with mean-seasonal boundary conditions.
Redistribution population data across a regular spatial grid according to buildings characteristics
NASA Astrophysics Data System (ADS)
Calka, Beata; Bielecka, Elzbieta; Zdunkiewicz, Katarzyna
2016-12-01
Population data are generally provided by state census organisations for predefined census enumeration units. However, these datasets are very often required for user-defined spatial units that differ from the census output levels. A number of population estimation techniques have been developed to address these problems. This article is one such attempt, aimed at improving county-level population estimates by using spatial disaggregation models supported by building characteristics, derived from the national topographic database, and the average area of a flat. The experimental gridded population surface was created for Opatów county, a sparsely populated rural region located in central Poland. The method relies on geolocating population counts in buildings, taking into account building volume and structural building type, and then aggregating the population totals to a 1 km quadrilateral grid. The overall quality of the population distribution surface, expressed by the RMSE, equals 9 persons, and the MAE equals 0.01. We also discovered that nearly 20% of the total county area is unpopulated and that 80% of the population lived on 33% of the county territory.
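A stripped-down version of the disaggregation and validation steps described above: the population of a census unit is spread over its buildings in proportion to building volume (restricted to residential structural types), summed to 1 km grid squares, and the gridded estimate is scored with RMSE and MAE against reference counts. Variable names and inputs are hypothetical.

```python
import numpy as np

def allocate_to_buildings(unit_population, volume, is_residential):
    """Population per building, proportional to the volume of residential
    buildings only (the average flat-area adjustment used in the paper is
    omitted here)."""
    w = volume * is_residential
    return unit_population * w / w.sum()

def grid_totals(pop_per_building, cell_id, n_cells):
    """Sum building populations into 1 km grid cells (one cell_id per building)."""
    return np.bincount(cell_id, weights=pop_per_building, minlength=n_cells)

def rmse_mae(estimate, reference):
    err = np.asarray(estimate, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))
```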
1985-10-01
grid points on a 1/2° lat/long mesh; c. SST global scale analysis (1° or 100 km lat/long grid); d. SST climatic scale analysis (5° or 500 km lat/long grid); e. SST monthly means (2 1/2° or 250 km lat/long grid). 3. Analog Sea Surface Temperature Product Set: a. GOSSTCOMP charts - weekly Mercator contour charts, each a 50° by 50° lat/long segment, 1°C contour interval; b. Regional charts - set of three charts covering the U.S.
NASA Technical Reports Server (NTRS)
Brucker, Ludovic; Dinnat, Emmanuel Phillippe; Koenig, Lora S.
2014-01-01
Passive and active observations at L band (frequency approximately 1.4 GHz) from the Aquarius/SAC-D mission offer new capabilities to study the polar regions. Due to the lack of polar-gridded products, however, applications over the cryosphere have been limited. We present three weekly polar-gridded products of Aquarius data to improve our understanding of L-band observations of ice sheets, sea ice, permafrost, and the polar oceans. Additionally, these products intend to facilitate access to L-band data, and can be used to assist in algorithm developments. Aquarius data at latitudes higher than 50° are averaged and gridded into weekly products of brightness temperature (TB), normalized radar cross section (NRCS), and sea surface salinity (SSS). Each grid cell also contains sea ice fraction, the standard deviation of TB, NRCS, and SSS, and the number of footprint observations collected during the seven-day cycle. The largest 3 dB footprint dimensions are 97 km × 156 km and 74 km × 122 km (along × across track) for the radiometers and scatterometer, respectively. The data are gridded to the Equal-Area Scalable Earth version 2.0 (EASE2.0) grid, with a grid cell resolution of 36 km. The data sets start in August 2011, with the first Aquarius observations, and will be updated on a monthly basis following the release schedule of the Aquarius Level 2 data sets. The weekly gridded products are distributed by the US National Snow and Ice Data Center at http://nsidc.org/data/aquarius/index.html
An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping
NASA Astrophysics Data System (ADS)
Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare
2017-04-01
Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in the areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and computationally efficient grid respectively. For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
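The distance-dependent cell size at the heart of the adaptive grid can be expressed as a simple lookup, sketched below. The break distances and cell sizes are illustrative assumptions; in the study they were derived from where propagation loss becomes approximately uniform across a cell.

```python
import numpy as np

def adaptive_cell_size_km(distance_km, sizes_km=(1, 2, 5), breaks_km=(25, 60)):
    """Grid-cell size that grows with distance from the receiver."""
    return np.select([distance_km < breaks_km[0],
                      distance_km < breaks_km[1]],
                     sizes_km[:2], default=sizes_km[2])

d = np.array([5.0, 30.0, 100.0])           # km from the receiver
print(adaptive_cell_size_km(d))            # -> [1 2 5]
```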
High performance computing (HPC) requirements for the new generation variable grid resolution (VGR) global climate models differ from that of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...
Clouds Optically Gridded by Stereo (COGS) product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oktem, Rusen; Romps, David
The COGS product is a 4D grid of cloudiness covering a 6 km × 6 km × 6 km cube centered at the central facility of the SGP site, at a spatial resolution of 50 meters and a temporal resolution of 20 seconds. The dimensions are X, Y, Z, and time, where X, Y, and Z correspond to the east-west, north-south, and altitude coordinates of the grid point, respectively. COGS takes on values 0, 1, and -1, denoting "cloud", "no cloud", and "not available".
Incompressible flow simulations on regularized moving meshfree grids
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2017-11-01
A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
Hellsten, S; Dragosits, U; Place, C J; Dore, A J; Tang, Y S; Sutton, M A
2018-05-09
Ammonia emissions vary greatly at a local scale, and effects (eutrophication, acidification) occur primarily close to sources. Therefore it is important that spatially distributed emission estimates are located as accurately as possible. The main source of ammonia emissions is agriculture, and therefore agricultural survey statistics are the most important input data to an ammonia emission inventory alongside per activity estimates of emission potential. In the UK, agricultural statistics are collected at farm level, but are aggregated to parish level, NUTS-3 level or regular grid resolution for distribution to users. In this study, the Modifiable Areal Unit Problem (MAUP), associated with such amalgamation, is investigated in the context of assessing the spatial distribution of ammonia sources for emission inventories. England was used as a test area to study the effects of the MAUP. Agricultural survey data at farm level (point data) were obtained under license and amalgamated to different areal units or zones: regular 1-km, 5-km, 10-km grids and parish level, before they were imported into the emission model. The results of using the survey data at different levels of amalgamation were assessed to estimate the effects of the MAUP on the spatial inventory. The analysis showed that the size and shape of aggregation zones applied to the farm-level agricultural statistics strongly affect the location of the emissions estimated by the model. If the zones are too small, this may result in false emission "hot spots", i.e., artificially high emission values that are in reality not confined to the zone to which they are allocated. Conversely, if the zones are too large, detail may be lost and emissions smoothed out, which may give a false impression of the spatial patterns and magnitude of emissions in those zones. The results of the study indicate that the MAUP has a significant effect on the location and local magnitude of emissions in spatial inventories where amalgamated, zonal data are used. Copyright © 2018 Elsevier Ltd. All rights reserved.
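The sensitivity test described above amounts to summing farm-level point emissions onto grids of different cell sizes and inspecting how the apparent hot spots change. The sketch below uses synthetic farm locations and emission totals rather than the licensed survey data.

```python
import numpy as np

def grid_emissions(x_km, y_km, emission, cell_km, extent_km=100.0):
    """Sum point-source emissions onto a regular grid with the given cell size."""
    nbins = int(np.ceil(extent_km / cell_km))
    grid, _, _ = np.histogram2d(x_km, y_km, bins=nbins,
                                range=[[0, extent_km], [0, extent_km]],
                                weights=emission)
    return grid

rng = np.random.default_rng(6)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)
e = rng.gamma(2.0, 50.0, 500)                     # emission per farm (arbitrary units)
for cell in (1, 5, 10):                           # cell size in km
    g = grid_emissions(x, y, e, cell)
    print(f"{cell:2d} km grid: max cell emission = {g.max():8.1f}")
```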
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
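The FFT-accelerated Green's-function convolution mentioned above can be illustrated with a generic, zero-padded FFT convolution on a regular grid. The kernel here is a placeholder; in the paper the convolutions use the acoustic Green's function tabulated on the finest-level grid.

```python
import numpy as np

def greens_convolution_fft(source, kernel):
    """Linear (zero-padded) convolution of a gridded source distribution with
    a Green's-function kernel via FFTs; returns the central, source-sized
    portion. Works for complex kernels such as exp(ikr)/(4*pi*r)."""
    ns, nk = np.array(source.shape), np.array(kernel.shape)
    shape = tuple(ns + nk - 1)                 # zero-padding avoids wrap-around
    out = np.fft.ifftn(np.fft.fftn(source, shape) * np.fft.fftn(kernel, shape),
                       shape)
    start = (nk - 1) // 2
    return out[tuple(slice(s, s + n) for s, n in zip(start, ns))]
```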
Stochastic dynamic modeling of regular and slow earthquakes
NASA Astrophysics Data System (ADS)
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only to explain real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces this stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.
A Detailed Examination of the GPM Core Satellite Gridded Text Product
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen A.; Kummerow, C.; Huffman, George; Olson, William S.; Kwiatowski, John M.
2015-01-01
The Global Precipitation Measurement (GPM) mission quarter-degree gridded-text product has a similar file format and a similar purpose as the Tropical Rainfall Measuring Mission (TRMM) 3G68 quarter-degree product. The GPM text-grid format is an hourly summary of surface precipitation retrievals from various GPM instruments and combinations of GPM instruments. The GMI Goddard Profiling (GPROF) retrieval provides the widest swath (800 km) and does the retrieval using the GPM Microwave Imager (GMI). The Ku radar provides the widest radar swath (250 km swath) and also provides continuity with the TRMM Ku Precipitation Radar. GPM's Ku+Ka band matched swath (125 km swath) provides a dual-frequency precipitation retrieval. The "combined" retrieval (125 km swath) provides a multi-instrument precipitation retrieval based on the GMI, the DPR Ku radar, and the DPR Ka radar. While the data are reported in hourly grids, all hours for a day are packaged into a single text file that is gzipped to reduce file size and to speed up downloading. The data are reported on a 0.25° × 0.25° grid.
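The hourly quarter-degree accumulation can be sketched as simple binning of footprint-level retrievals, as below. The field names, the exact contents of the text records, and the missing-data conventions of the real 3G-style product are not reproduced; the inputs are plain latitude, longitude and rain-rate arrays.

```python
import numpy as np

def hourly_quarter_degree_grid(lat, lon, rain_rate):
    """Counts and mean surface rain rate on a global 0.25 degree grid for
    one hour of footprint-level retrievals."""
    ilat = np.clip(((lat + 90.0) / 0.25).astype(int), 0, 719)
    ilon = np.clip(((lon + 180.0) / 0.25).astype(int), 0, 1439)
    count = np.zeros((720, 1440))
    total = np.zeros((720, 1440))
    np.add.at(count, (ilat, ilon), 1)
    np.add.at(total, (ilat, ilon), rain_rate)
    mean_rate = np.divide(total, count, out=np.full_like(total, np.nan),
                          where=count > 0)
    return count, mean_rate
```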
NASA Astrophysics Data System (ADS)
Gärdenäs, A.; Jarvis, N.; Alavi, G.
The spatial variability of soil characteristics was studied in a small agricultural catchment (Vemmenhög, 9 km²) at the field and catchment scales. This analysis serves as a basis for assumptions concerning upscaling approaches used to model pesticide leaching from the catchment with the MACRO model (Jarvis et al., this meeting). The work focused on the spatial variability of two key soil properties for pesticide fate in soil, organic carbon and clay content. The Vemmenhög catchment (9 km²) is formed in a glacial till deposit in southernmost Sweden. The landscape is undulating (30-65 m a.s.l.) and 95% of the area is used for crop production (winter rape, winter wheat, sugar beet and spring barley). The climate is warm temperate. Soil samples for organic C and texture were taken on a small regular grid at Näsby Farm (144 m × 144 m, sampling distance: 6-24 m, 77 points) and on an irregular large grid covering the whole catchment (sampling distance: 333 m, 46 points). At the field scale, it could be shown that the organic C content was strongly related to landscape position and height (R² = 73%, p < 0.001, n = 50). The organic C content of hollows in the landscape is so high that they contribute little to the total loss of pesticides (Jarvis et al., this meeting). Clay content is also related to landscape position, being larger at the hilltop locations, resulting in lower near-saturated hydraulic conductivity. Hence, macropore flow can be expected to be more pronounced (see also Roulier & Jarvis, this meeting). The variability in organic C was similar for the field and catchment grids, which made it possible to krige the organic C content of the whole catchment using data from both grids and an uneven lag distance.
Constructing a Climatology of Whistler Wave Energy from Lightning in Low Earth Orbit
2011-12-16
geocentric coordinates are not equal area) and a great-circle distance between the grid centers, an additional normalization is included to account for the... calculated at each corner of a 1° × 1° geocentric grid, as discussed for the apex latitude calculations; pseudopower was calculated within each grid... 1° × 1° geocentric grid at the altitude in question, along with the model at 660 km (Figure 15a and 15b) and the conjugate location at 660 km (Figure ...
Satellite gravity gradient grids for geophysics
Bouman, Johannes; Ebbing, Jörg; Fuchs, Martin; Sebera, Josef; Lieb, Verena; Szwillus, Wolfgang; Haagmans, Roger; Novak, Pavel
2016-01-01
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite aimed at determining the Earth’s mean gravity field. GOCE delivered gravity gradients containing directional information, which are complicated to use because of their error characteristics and because they are given in a rotating instrument frame indirectly related to the Earth. We compute gravity gradients in grids at 225 km and 255 km altitude above the reference ellipsoid corresponding to the GOCE nominal and lower orbit phases respectively, and find that the grids may contain additional high-frequency content compared with GOCE-based global models. We discuss the gradient sensitivity for crustal depth slices using a 3D lithospheric model of the North-East Atlantic region, which shows that the depth sensitivity differs from gradient to gradient. In addition, the relative signal power for the individual gradient component changes comparing the 225 km and 255 km grids, implying that using all components at different heights reduces parameter uncertainties in geophysical modelling. Furthermore, since gravity gradients contain complementary information to gravity, we foresee the use of the grids in a wide range of applications from lithospheric modelling to studies on dynamic topography, and glacial isostatic adjustment, to bedrock geometry determination under ice sheets. PMID:26864314
Maus, S.; Barckhausen, U.; Berkenbosch, H.; Bournas, N.; Brozena, J.; Childers, V.; Dostaler, F.; Fairhead, J.D.; Finn, C.; von Frese, R.R.B; Gaina, C.; Golynsky, S.; Kucks, R.; Lu, Hai; Milligan, P.; Mogren, S.; Muller, R.D.; Olesen, O.; Pilkington, M.; Saltus, R.; Schreckenberger, B.; Thebault, E.; Tontini, F.C.
2009-01-01
A global Earth Magnetic Anomaly Grid (EMAG2) has been compiled from satellite, ship, and airborne magnetic measurements. EMAG2 is a significant update of our previous candidate grid for the World Digital Magnetic Anomaly Map. The resolution has been improved from 3 arc min to 2 arc min, and the altitude has been reduced from 5 km to 4 km above the geoid. Additional grid and track line data have been included, both over land and the oceans. Wherever available, the original shipborne and airborne data were used instead of precompiled oceanic magnetic grids. Interpolation between sparse track lines in the oceans was improved by directional gridding and extrapolation, based on an oceanic crustal age model. The longest wavelengths (>330 km) were replaced with the latest CHAMP satellite magnetic field model MF6. EMAG2 is available at http://geomag.org/models/EMAG2 and for permanent archive at http://earthref.org/cgi-bin/er.cgi?s=erda.cgi?n=970. © 2009 by the American Geophysical Union.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Qian, Yun; Fast, Jerome D.
2011-07-13
Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability for the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that 1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and 2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% error in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.
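The essence of the neglected sub-grid effect can be demonstrated with a toy calculation: because forcing depends nonlinearly on aerosol amount, averaging the aerosol field over a 75 km cell first and then computing forcing differs from computing forcing at 3 km and then averaging. The forcing curve and the aerosol field below are entirely synthetic and are not the parameterizations used in WRF-Chem or the global models discussed.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic 3 km aerosol optical depth field within one 75 km grid cell (25 x 25)
aod_3km = rng.lognormal(mean=-2.0, sigma=0.6, size=(25, 25))

def toy_forcing(aod):
    """Toy, saturating shortwave forcing curve in W m-2 (illustrative only)."""
    return -25.0 * (1.0 - np.exp(-3.0 * aod))

fine = toy_forcing(aod_3km).mean()      # resolve sub-grid variability, then average
coarse = toy_forcing(aod_3km.mean())    # 75 km grid-cell view: average first
print(f"bias from neglecting sub-grid variability: {100 * (coarse - fine) / fine:.1f}%")
```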
Xu, Hui Qiu; Huang, Yin Hua; Wu, Zhi Feng; Cheng, Jiong; Li, Cheng
2016-10-01
Based on 641 agricultural top soil samples (0-20 cm) and land use map in 2005 of Guangzhou, we used single-factor pollution indices and Pearson/Spearman correlation and partial redundancy analyses and quantified the soil contamination with As and Cd and their relationships with landscape heterogeneity at three grid scales of 2 km×2 km, 5 km×5 km, and 10 km×10 km as well as the determinant landscape heterogeneity factors at a certain grid scale. 5.3% and 7.2% of soil samples were contaminated with As and Cd, respectively. At the three scales, the agricultural soil As and Cd contamination were generally significantly correlated with parent materials' composition, river/road density and landscape patterns of several land use types, indicating the parent materials, sewage irrigation and human activities (e.g., industrial and traffic activities, and the additions of pesticides and fertilizers) were possibly the main input pathways of trace metals. Three subsets of landscape heterogeneity variables (i.e., parent materials, distance-density variables, and landscape patterns) could explain 12.7%-42.9% of the variation of soil contamination with As and Cd, of which the explanatory power increased with the grid scale and the determinant factors varied with scales. Parent materials had higher contribution to the variations of soil contamination at the 2 and 10 km grid scales, while the contributions of landscape patterns and distance-density variables generally increased with the grid scale. Adjusting the distribution of cropland and optimizing the landscape pattern of land use types are important ways to reduce soil contamination at local scales, which urban planners and decision makers should pay more attention to.
NASA Astrophysics Data System (ADS)
Dore, A. J.; Kryza, M.; Hall, J. R.; Hallsworth, S.; Keller, V. J. D.; Vieno, M.; Sutton, M. A.
2012-05-01
The Fine Resolution Atmospheric Multi-pollutant Exchange model (FRAME) was applied to model the spatial distribution of reactive nitrogen deposition and air concentration over the United Kingdom at a 1 km spatial resolution. The modelled deposition and concentration data were gridded at resolutions of 1 km, 5 km and 50 km to test the sensitivity of calculations of the exceedance of critical loads for nitrogen deposition to the deposition data resolution. The modelled concentrations of NO2 were validated by comparison with measurements from the rural sites in the national monitoring network and were found to achieve better agreement with the high resolution 1 km data. High resolution plots were found to represent a more physically realistic distribution of reactive nitrogen air concentrations and deposition resulting from use of 1 km resolution precipitation and emissions data as compared to 5 km resolution data. Summary statistics for national scale exceedance of the critical load for nitrogen deposition were not highly sensitive to the grid resolution of the deposition data but did show greater area exceedance with coarser grid resolution due to spatial averaging of high nitrogen deposition hot spots. Local scale deposition at individual Sites of Special Scientific Interest and high precipitation upland sites was sensitive to choice of grid resolution of deposition data. Use of high resolution data tended to generate lower deposition values in sink areas for nitrogen dry deposition (Sites of Scientific Interest) and higher values in high precipitation upland areas. In areas with generally low exceedance (Scotland) and for certain vegetation types (montane), the exceedance statistics were more sensitive to model data resolution.
Coverage-maximization in networks under resource constraints.
Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy
2010-06-01
Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids shown to perform best in terms of the product metric of speed and efficiency.
Mu, Guangyu; Liu, Ying; Wang, Limin
2015-01-01
Spatial pooling methods such as spatial pyramid matching (SPM) are crucial in the bag-of-features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. However, in practice, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extension of the Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute the grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms the traditional spatial pooling methods and is competitive with the state-of-the-art classification accuracy on several challenging image datasets.
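For reference, the baseline that SDP modifies, spatial pyramid pooling over fixed regular grids, can be sketched as below: visual-word counts are histogrammed inside each cell of each pyramid level and concatenated. The Gaussian-mixture spatial-distribution model of SDP itself is not reproduced here; image coordinates are assumed normalized to [0, 1).

```python
import numpy as np

def spm_pool(word_ids, xy, n_words, levels=(1, 2, 4)):
    """Concatenated visual-word histograms over the regular grids of a
    spatial pyramid (the classic SPM pooling that SDP replaces).

    word_ids : (n,) visual-word index per descriptor
    xy       : (n, 2) descriptor coordinates, normalized to [0, 1)
    """
    feats = []
    for level in levels:
        cell = np.floor(xy * level).astype(int)        # cell index per point
        for i in range(level):
            for j in range(level):
                in_cell = (cell[:, 0] == i) & (cell[:, 1] == j)
                feats.append(np.bincount(word_ids[in_cell], minlength=n_words))
    return np.concatenate(feats).astype(float)
```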
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
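For readers unfamiliar with fixed-parameter regularization, the sketch below shows generic Tikhonov smoothing of an ill-posed reconstruction via the SVD; the propagation matrix `G`, the parameter `lam`, and the synthetic data are hypothetical stand-ins, not the holography formulation used in the paper.

```python
import numpy as np

def tikhonov_reconstruct(G, p_hologram, lam):
    """Reconstruct source amplitudes q from hologram pressures p = G q + noise
    using Tikhonov regularization with a fixed parameter lam:
        q_lam = argmin ||G q - p||^2 + lam^2 ||q||^2
    Implemented via the SVD of the (hypothetical) propagation matrix G.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    # Filter factors damp the small singular values that amplify noise.
    filt = s**2 / (s**2 + lam**2)
    coeffs = filt * (U.T @ p_hologram) / s
    return Vt.T @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.standard_normal((64, 32)) / 8.0        # stand-in propagator
    q_true = rng.standard_normal(32)
    p = G @ q_true + 0.05 * rng.standard_normal(64)
    for lam in (1e-4, 1e-2, 1e-1):
        err = np.linalg.norm(tikhonov_reconstruct(G, p, lam) - q_true)
        print(f"lam={lam:g}  reconstruction error={err:.3f}")
```

A manually chosen lam plays the role of the fixed regularization order discussed in the abstract: too small and noise dominates, too large and the reconstruction is over-smoothed.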
Yaski, Osnat; Portugali, Juval; Eilam, David
2012-04-01
The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout, and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that rats would favor travel in the area with regular order, but found that activity in the area with irregular object layout did not differ from activity in the area with grid layout, even when the irregular half included a salient landmark. Thus, the grid impact in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to the salient landmark. Altogether, three environmental properties: (i) regular and predictable structure; (ii) salience of landmarks; and (iii) accessibility, hierarchically shape the paths of progression in a dark environment. Copyright © 2012 Elsevier B.V. All rights reserved.
Effects of high-frequency damping on iterative convergence of implicit viscous solver
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4 / 3 is recommended for the diffusion equation, which achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-Free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.
NASA Astrophysics Data System (ADS)
Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.
2016-12-01
Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long term isolation. Characterization of the natural barrier system is an important component of this disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method by McKenna and Reeves (2005) has been updated to provide capabilities that enhance the representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended to include three different methods. (1) The Sequential Gaussian Simulation (SGSIM) method uses spatial correlation to generate fractures and define their properties for the FCM. (2) The ELLIPSIM method randomly generates a specified number of ellipses with properties defined by probability distributions; each ellipse represents a single fracture. (3) Direct conversion of discrete fracture network (DFN) output. Test simulations of flow and transport were conducted using the ELLIPSIM and direct DFN conversion methods. The simulations used a 1 km × 1 km × 1 km model domain and a structured grid with blocks of 10 m × 10 m × 10 m, resulting in a total of 10^6 grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN (Hammond et al., 2014) code was used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method to model fractured crystalline rocks. The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains. SAND2016-7509 A
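A much-simplified 2-D analogue of mapping stochastic fractures onto a regular permeability grid, in the spirit of the ELLIPSIM step but not the actual FCM code, could look like the following; all parameter names and values are illustrative.

```python
import numpy as np

def ellipsim_like_2d(n_frac=50, n=100, cell=10.0, k_matrix=1e-18, k_frac=1e-12, seed=1):
    """Toy 2-D analogue of mapping stochastic fractures onto a regular grid.

    Each 'fracture' is a random line segment; grid cells it crosses get the
    fracture permeability, all other cells keep the matrix value.
    """
    rng = np.random.default_rng(seed)
    k = np.full((n, n), k_matrix)
    L = n * cell
    for _ in range(n_frac):
        x0, y0 = rng.uniform(0, L, 2)            # fracture centre
        half = rng.uniform(50.0, 300.0)          # half length (m)
        theta = rng.uniform(0, np.pi)            # orientation
        for t in np.linspace(-half, half, 200):  # sample points along the trace
            x, y = x0 + t * np.cos(theta), y0 + t * np.sin(theta)
            i, j = int(x // cell), int(y // cell)
            if 0 <= i < n and 0 <= j < n:
                k[j, i] = k_frac
    return k

if __name__ == "__main__":
    k = ellipsim_like_2d()
    print("fractured-cell fraction:", (k > 1e-15).mean())
```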
NASA Astrophysics Data System (ADS)
MacDonald, I. R.; Garcia-Pineda, O. G.; Solow, A.; Daneshgar, S.; Beet, A.
2013-12-01
Oil discharged as a result of the Deepwater Horizon disaster was detected on the surface of the Gulf of Mexico by synthetic aperture radar (SAR) satellites from 25 April 2010 until 4 August 2010. SAR images were not restricted by daylight or cloud-cover. Distribution of this material is a tracer for potential environmental impacts and an indicator of impact mitigation due to response efforts and physical forcing factors. We used a texture classifying neural network algorithm for semi-supervised processing of 176 SAR images from the ENVISAT, RADARSAT I, and COSMO-SKYMED satellites. This yielded an estimate of the proportion of oil-covered water within the region sampled by each image with a nominal resolution of 10,000 sq m (100 m pixels), which was compiled as a 5-km equal area grid covering the northern Gulf of Mexico. Few images covered the entire impact area, so analysis was required to compile a regular time-series of the oil cover. A Gaussian kernel with a bandwidth of 2 d was used to estimate oil cover percent in each grid cell at noon and midnight throughout the interval. Variance and confidence intervals were calculated for each grid cell and for the global 12-h totals. Results animated across the impact region show the spread of oil under the influence of physical factors. Oil cover reached an early peak of 17,032.26 sq km (sd 460.077) on 18 May, decreased to 27% of this total on 4 June, and then increased sharply to an overall maximum of 18,424.56 sq km (sd 424.726) on 19 June. There was a significant negative correlation between average wind stress and the total area of oil cover throughout the time-series. Correlation with response efforts, including aerial and subsurface application of dispersants and burning of gathered oil, was negative, positive, or indeterminate at different time segments during the event. Daily totals for oil-covered surface waters of the Gulf of Mexico during 25 April - 9 August 2010 are reported with upper and lower 0.95 confidence limits on the estimates (no oil was visible after 4 August).
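The noon/midnight time series described above can be thought of as Gaussian-kernel smoothing of irregularly timed image-based estimates; a minimal sketch, with synthetic data and only the 2-day bandwidth taken from the abstract, is given below.

```python
import numpy as np

def kernel_estimate(t_obs, y_obs, t_eval, bandwidth=2.0):
    """Gaussian-kernel (Nadaraya-Watson style) estimate of oil cover at regular
    evaluation times from irregularly timed image-based estimates.
    bandwidth is in days; the 2-d value follows the abstract, the rest of the
    setup is illustrative only.
    """
    t_obs, y_obs = np.asarray(t_obs, float), np.asarray(y_obs, float)
    out = np.empty(len(t_eval))
    for k, t in enumerate(t_eval):
        w = np.exp(-0.5 * ((t - t_obs) / bandwidth) ** 2)
        out[k] = np.sum(w * y_obs) / np.sum(w)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t_obs = np.sort(rng.uniform(0, 100, 60))           # image acquisition days
    y_obs = 15000 * np.exp(-((t_obs - 55) / 25) ** 2)  # synthetic oil cover, sq km
    t_eval = np.arange(0, 100, 0.5)                    # noon / midnight grid
    print(kernel_estimate(t_obs, y_obs, t_eval)[:5])
```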
Monthly fractional green vegetation cover associated with land cover classes of the conterminous USA
Gallo, Kevin P.; Tarpley, Dan; Mitchell, Ken; Csiszar, Ivan; Owen, Timothy W.; Reed, Bradley C.
2001-01-01
The land cover classes developed under the coordination of the International Geosphere-Biosphere Programme Data and Information System (IGBP-DIS) have been analyzed for a study area that includes the Conterminous United States and portions of Mexico and Canada. The 1-km resolution data have been analyzed to produce a gridded data set that includes within each 20-km grid cell: 1) the three most dominant land cover classes, 2) the fractional area associated with each of the three dominant classes, and 3) the fractional area covered by water. Additionally, the monthly fraction of green vegetation cover (fgreen) associated with each of the three dominant land cover classes per grid cell was derived from a 5-year climatology of 1-km resolution NOAA-AVHRR data. The variables derived in this study provide a potential improvement over the use of monthly fgreen linked to a single land cover class per model grid cell.
Snow and Ice Products from the Moderate Resolution Imaging Spectroradiometer
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Salomonson, Vincent V.; Riggs, George A.; Klein, Andrew G.
2003-01-01
Snow and sea ice products, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument, flown on the Terra and Aqua satellites, are or will be available through the National Snow and Ice Data Center Distributed Active Archive Center (DAAC). The algorithms that produce the products are automated, thus providing a consistent global data set that is suitable for climate studies. The suite of MODIS snow products begins with a 500-m resolution, 2330-km swath snow-cover map that is then projected onto a sinusoidal grid to produce daily and 8-day composite tile products. The sequence proceeds to daily and 8-day composite climate-modeling grid (CMG) products at 0.05° resolution. A daily snow albedo product will be available in early 2003 as a beta test product. The sequence of sea ice products begins with a swath product at 1-km resolution that provides sea ice extent and ice-surface temperature (IST). The sea ice swath products are then mapped onto the Lambert azimuthal equal area or EASE-Grid projection to create daily and 8-day composite sea ice tile products, also at 1-km resolution. Climate-modeling grid (CMG) sea ice products in the EASE-Grid projection at 4-km resolution are planned for early 2003.
Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui
2009-01-01
The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes efficient updating of geo-referenced population data possible. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in the SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
A modified S-DIMM+: applying additional height grids for characterizing daytime seeing profiles
NASA Astrophysics Data System (ADS)
Wang, Zhiyong; Zhang, Lanqiang; Kong, Lin; Bao, Hua; Guo, Youming; Rao, Xuejun; Zhong, Libo; Zhu, Lei; Rao, Changhui
2018-07-01
Characterization of daytime atmospheric turbulence profiles is needed for the design of a multi-conjugate adaptive optical system. S-DIMM+ (solar differential image motion monitor+) is a technique to measure vertical seeing profiles. However, the number of height grids will be limited by the lenslet array of the wide-field Shack-Hartmann wavefront sensor (SHWFS). A small number of subaperture lenslet arrays will lead to a coarse height grid over the atmosphere, which can result in difficulty in finding the location of strong-turbulence layers and overestimates of the turbulence strength for the measured layers. To address this problem, we propose a modified S-DIMM+ method to measure seeing profiles iteratively with decreasing altitude range for a given number of height grids; finally they will be combined as a new seeing profile, with a denser and more uniform distribution of height grids. This method is tested with simulations and recovers the input height and contribution perfectly. Furthermore, this method is applied to the 102 data-sequences recorded from the 1-m New Vacuum Solar Telescope at Fuxian Solar Observatory, 55 of which were recorded at local time between 13:40 and 14:35 on 2016 October 6, and the other 47 between 12:50 and 13:40 on 2017 October 5. A 7x7 lenslet array of SHWFS is used to generate a 16-layer height grid to 15 km, each with 1 km height separation. The experimental results show that the turbulence has three origins in the lower (0-2 km) layers, the higher (3-6 km) layers and the uppermost (≥7 km) layers.
Climate change scenarios of heat waves in Central Europe and their uncertainties
NASA Astrophysics Data System (ADS)
Lhotka, Ondřej; Kyselý, Jan; Farda, Aleš
2018-02-01
The study examines climate change scenarios of Central European heat waves with a focus on related uncertainties in a large ensemble of regional climate model (RCM) simulations from the EURO-CORDEX and ENSEMBLES projects. Historical runs (1970-1999) driven by global climate models (GCMs) are evaluated against the E-OBS gridded data set in the first step. Although the RCMs are found to reproduce the frequency of heat waves quite well, those RCMs with the coarser grid (25 and 50 km) considerably overestimate the frequency of severe heat waves. This deficiency is improved in higher-resolution (12.5 km) EURO-CORDEX RCMs. In the near future (2020-2049), heat waves are projected to be nearly twice as frequent in comparison to the modelled historical period, and the increase is even larger for severe heat waves. Uncertainty originates mainly from the selection of RCMs and GCMs because the increase is similar for all concentration scenarios. For the late twenty-first century (2070-2099), a substantial increase in heat wave frequencies is projected, the magnitude of which depends mainly upon concentration scenario. Three to four heat waves per summer are projected in this period (compared to less than one in the recent climate), and severe heat waves are likely to become a regular phenomenon. This increment is primarily driven by a positive shift of temperature distribution, but changes in its scale and enhanced temporal autocorrelation of temperature also contribute to the projected increase in heat wave frequencies.
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution ( 200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
Evaluation of tropical channel refinement using MPAS-A aquaplanet simulations
Martini, Matus N.; Gustafson, Jr., William I.; O'Brien, Travis A.; ...
2015-09-13
Climate models with variable-resolution grids offer a computationally less expensive way to provide more detailed information at regional scales and increased accuracy for processes that cannot be resolved by a coarser grid. This study uses the Model for Prediction Across Scales-Atmosphere (MPAS-A), consisting of a nonhydrostatic dynamical core and a subset of Advanced Research Weather Research and Forecasting (ARW-WRF) model atmospheric physics that have been modified to include the Community Atmosphere Model version 5 (CAM5) cloud fraction parameterization, to investigate the potential benefits of using increased resolution in a tropical channel. The simulations are performed with an idealized aquaplanet configuration using two quasi-uniform grids, with 30 km and 240 km grid spacing, and two variable-resolution grids spanning the same grid spacing range; one with a narrow (20°S–20°N) and one with a wide (30°S–30°N) tropical channel refinement. Results show that increasing resolution in the tropics impacts both the tropical and extratropical circulation. Compared to the quasi-uniform coarse grid, the narrow-channel simulation exhibits stronger updrafts in the Ferrel cell as well as in the middle of the upward branch of the Hadley cell. The wider tropical channel has a closer correspondence to the 30 km quasi-uniform simulation. However, the total atmospheric poleward energy transports are similar in all simulations. The largest differences are in the low-level cloudiness. The refined channel simulations show improved tropical and extratropical precipitation relative to the global 240 km simulation when compared to the global 30 km simulation. All simulations have a single ITCZ. Furthermore, the relatively small differences in mean global and tropical precipitation rates among the simulations are a promising result, and the evidence points to the tropical channel being an effective method for avoiding the extraneous numerical artifacts seen in earlier studies that only refined a portion of the tropics.
Evaluation of acoustic telemetry grids for determining aquatic animal movement and survival
Kraus, Richard T.; Holbrook, Christopher; Vandergoot, Christopher; Stewart, Taylor R.; Faust, Matthew D.; Watkinson, Douglas A.; Charles, Colin; Pegg, Mark; Enders, Eva C.; Krueger, Charles C.
2018-01-01
Acoustic telemetry studies have frequently prioritized linear configurations of hydrophone receivers, such as perpendicular from shorelines or across rivers, to detect the presence of tagged aquatic animals. This approach introduces unknown bias when receivers are stationed for convenience at geographic bottlenecks (e.g., at the mouth of an embayment or between islands) as opposed to deployments following a statistical sampling design. We evaluated two-dimensional acoustic receiver arrays (grids: receivers spread uniformly across space) as an alternative approach to provide estimates of survival, movement, and habitat use. Performance of variably-spaced receiver grids (5–25 km spacing) was evaluated by simulating (1) animal tracks as correlated random walks (speed: 0.1–0.9 m/s; turning angle standard deviation: 5–30 degrees); (2) variable tag transmission intervals along each track (nominal delay: 15–300 seconds); and (3) probability of detection of each transmission based on logistic detection range curves (midpoint: 200–1500 m). From simulations, we quantified (i) time between successive detections on any receiver (detection time), (ii) time between successive detections on different receivers (transit time), and (iii) distance between successive detections on different receivers (transit distance). In the most restrictive detection range scenario (200 m), the 95th percentile of transit time was 3.2 days at 5 km grid spacing, 5.7 days at 7 km, and 15.2 days at 25 km; for the 1500 m detection range scenario, it was 0.1 days at 5 km, 0.5 days at 7 km, and 10.8 days at 25 km. These values represented upper bounds on the expected maximum time that an animal could go undetected. Comparison of the simulations with pilot studies on three fishes (walleye Sander vitreus, common carp Cyprinus carpio, and channel catfish Ictalurus punctatus) from two independent large lake ecosystems (lakes Erie and Winnipeg) revealed shorter detection and transit times than what simulations predicted. By spreading effort uniformly across space, grids can improve understanding of fish migration over the commonly employed receiver line approach, but at increased time cost for maintaining grids.
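A bare-bones version of such a simulation (correlated random walk, fixed nominal transmission delay, logistic detection-range curve) might look as follows; every parameter value here is a placeholder rather than one used in the study.

```python
import math, random

def simulate_detections(spacing_km=7.0, n_receivers_side=5, speed=0.5,
                        turn_sd_deg=15.0, delay_s=120.0, midpoint_m=800.0,
                        steepness=0.01, hours=24 * 30):
    """Toy grid-evaluation simulation: a correlated random walk emits tags at a
    fixed nominal delay, and each transmission is detected by a receiver with
    probability given by a logistic detection-range curve (p = 0.5 at the
    midpoint distance). Returns the list of (time_s, receiver_index) detections.
    """
    rec = [(i * spacing_km * 1000, j * spacing_km * 1000)
           for i in range(n_receivers_side) for j in range(n_receivers_side)]
    x = y = n_receivers_side * spacing_km * 500.0       # start mid-grid
    heading = random.uniform(0, 2 * math.pi)
    detections, t = [], 0.0
    while t < hours * 3600:
        t += delay_s                                     # next transmission
        heading += random.gauss(0, math.radians(turn_sd_deg))
        x += speed * delay_s * math.cos(heading)
        y += speed * delay_s * math.sin(heading)
        for idx, (rx, ry) in enumerate(rec):
            d = math.hypot(x - rx, y - ry)
            p = 1.0 / (1.0 + math.exp(steepness * (d - midpoint_m)))
            if random.random() < p:
                detections.append((t, idx))
    return detections

if __name__ == "__main__":
    print(len(simulate_detections()), "detections")
```

Detection time, transit time, and transit distance can then be computed from the gaps between successive entries of the returned detection list.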
NASA Astrophysics Data System (ADS)
Pan, Shuai; Choi, Yunsoo; Roy, Anirban; Jeon, Wonbae
2017-09-01
A WRF-SMOKE-CMAQ air quality modeling system was used to investigate the impact of horizontal spatial resolution on simulated nitrogen oxides (NOx) and ozone (O3) in the Greater Houston area (a non-attainment area for O3). We employed an approach recommended by the United States Environmental Protection Agency to allocate county-based emissions to model grid cells at 1 km and 4 km horizontal grid resolutions. The CMAQ Integrated Process Rate analyses showed a substantial difference in emissions contributions between the 1 and 4 km grids but similar NOx and O3 concentrations over urban and industrial locations. For example, the peak NOx emissions at an industrial and urban site differed by a factor of 20 for the 1 km and 8 for the 4 km grid, but simulated NOx concentrations changed only by a factor of 1.2 in both cases. Hence, due to the interplay of the atmospheric processes, we cannot expect a reduction of gas-phase air pollutants comparable to the reduction of emissions. Both simulations reproduced the variability of NASA P-3B aircraft measurements of NOy and O3 in the lower atmosphere (from 90 m to 4.5 km). Both simulations provided similarly reasonable predictions at the surface, while the 1 km case depicted more detailed features of emissions and concentrations in heavily polluted areas, such as highways, airports, and industrial regions, which are useful for understanding the major causes of O3 pollution in such regions and for quantifying transport of O3 to populated communities in urban areas. The Integrated Reaction Rate analyses indicated a distinctive difference in chemistry processes between the model surface layer and upper layers, implying that correcting the meteorological conditions at the surface may not help to enhance the O3 predictions. The model-observation O3 biases in our study (e.g., large over-prediction during the nighttime or along the Gulf of Mexico coastline) were due to uncertainties in meteorology, chemistry or other processes. Horizontal grid resolution is unlikely to be the major contributor to these biases.
Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
NASA Astrophysics Data System (ADS)
Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.
2017-12-01
Large point sources in the Chungnam area have received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevailing summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during the KORUS-AQ 2016 campaign, including its aircraft measurements. In general, horizontal grid resolutions of Eulerian photochemical models have profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models; that is, emissions in a grid cell are assumed to be well mixed within the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with extensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.
Real-time monitoring and short-term forecasting of drought in Norway
NASA Astrophysics Data System (ADS)
Kwok Wong, Wai; Hisdal, Hege
2013-04-01
Drought is considered to be one of the most costly natural disasters. Drought monitoring and forecasting are thus important for sound water management. In this study hydrological drought characteristics applicable for real-time monitoring and short-term forecasting of drought in Norway were developed. A spatially distributed hydrological model (HBV) implemented in a Web-based GIS framework provides a platform for drought analyses and visualizations. A number of national drought maps can be produced, which is a simple and effective way to communicate drought conditions to decision makers and the public. The HBV model is driven by precipitation and air temperature data. On a daily time step it calculates the water balance for 1 x 1 km2 grid cells characterized by their elevation and land use. Drought duration and areal drought coverage for runoff and subsurface storage (sum of soil moisture and groundwater) were derived. The threshold level method was used to specify drought conditions on a grid cell basis. The daily 10th percentile thresholds were derived from seven-day windows centered on that calendar day from the reference period 1981-2010 (threshold not exceeded 10% of the time). Each individual grid cell was examined to determine if it was below its respective threshold level. Daily drought-stricken areas can then be easily identified when visualized on a map. The drought duration can also be tracked and calculated by a retrospective analysis. Real-time observations from synoptic stations interpolated to a regular grid of 1 km resolution constituted the forcing data for the current situation. 9-day meteorological forecasts were used as input to the HBV model to obtain short-term hydrological drought forecasts. Downscaled precipitation and temperature fields from two different atmospheric models were applied. The first two days of the forecast period adopted the forecasts from Unified Model (UM4) while the following seven days were based on the 9-day forecasts from ECMWF. The approach has been tested and is now available on the Web for operational water management.
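A minimal sketch of the variable-threshold logic for a single grid cell, assuming a 365-day year and synthetic reference runoff, is shown below; it follows the 10th-percentile, 7-day-window definition from the abstract but is not the operational HBV/GIS implementation.

```python
import numpy as np

def drought_thresholds(runoff, percentile=10, window=7):
    """Daily variable thresholds for one grid cell.

    runoff: array of shape (n_years, 365) from the reference period.
    For each calendar day the threshold is the given percentile of all values
    falling in a centred `window`-day window across all reference years
    (the abstract uses the 10th percentile, 7-day windows, and 1981-2010).
    """
    n_years, n_days = runoff.shape
    half = window // 2
    thr = np.empty(n_days)
    for d in range(n_days):
        idx = [(d + k) % n_days for k in range(-half, half + 1)]
        thr[d] = np.percentile(runoff[:, idx], percentile)
    return thr

def drought_mask(current, thresholds):
    """True where the current day's value falls below its calendar-day threshold."""
    days = np.arange(len(current)) % len(thresholds)
    return current < thresholds[days]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    ref = rng.gamma(2.0, 2.0, size=(30, 365))       # synthetic reference runoff
    thr = drought_thresholds(ref)
    today = rng.gamma(2.0, 2.0, size=365)
    print("drought days this year:", int(drought_mask(today, thr).sum()))
```

Applying the mask to every 1 km grid cell and counting flagged cells gives the daily areal drought coverage; tracking consecutive flagged days per cell gives drought duration.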
Commerce Spectrum Management Advisory Committee (CSMAC) Working Group (WG) 3 Phase 2 Study Summary
2013-05-29
[Slide residue, CSMAC WG3 Phase 2 report: slides "HTS Power Contours" and "HTS LTE System Threshold Exceedance, 1755-1780 MHz", showing LTE base station received power (dBW) for a 1 kW transmitter with 20 dB attenuation on a 1 km grid spacing, relative to the 137.4 dBW LTE threshold, over Kauai and Niihau.]
Available NAM AWIPS grids: NAM 218 AWIPS Grid - CONUS (12-km resolution; full complement of pressure-level fields; full complement of surface-based fields), with filenames of the form nam.tccz.awip12fh.tm00.grib2 (FH00, FH01, ...) and fh.xxxx_tl.press_gr.grbgrd; NAM 242 AWIPS Grid - over Alaska (11.25-km resolution; full complement of pressure-level fields).
Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling
NASA Astrophysics Data System (ADS)
Reed, Seann M.
2003-09-01
The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT) has been developed to automatically, accurately, and efficiently assign flow directions to any coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency for producing diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
Sudhakar Reddy, C; Vazeed Pasha, S; Jha, C S; Dadhwal, V K
2015-07-01
Conservation of biodiversity has been given the highest priority throughout the world. The process of identifying threatened ecosystems examines the different drivers of biodiversity loss. The present study aimed to generate spatial information on deforestation and on the ecological degradation indicators of fragmentation and forest fires, using a systematic conceptual approach in Telangana state, India. Identification of ecosystems facing increasing vulnerability can help to safeguard against species extinctions and is useful for conservation planning. Technological advancements in satellite remote sensing and Geographical Information Systems have greatly improved the assessment and monitoring of ecosystem-level changes. The areas of threat were identified by creating grid cells (5 × 5 km) in a Geographical Information System (GIS). Deforestation was assessed using multi-source data of 1930, 1960, 1975, 1985, 1995, 2005 and 2013. The forest cover was estimated at 40,746 km(2), 29,299 km(2), 18,652 km(2), 18,368 km(2), 18,006 km(2), 17,556 km(2) and 17,520 km(2) in 1930, 1960, 1975, 1985, 1995, 2005 and 2013, respectively. Historical evaluation of deforestation revealed that major changes had occurred in the forests of Telangana and identified 1095 extinct, 397 critically endangered, 523 endangered and 311 vulnerable ecosystem grid cells. The fragmentation analysis identified 307 ecosystem grid cells under critically endangered status. Forest burnt area information was extracted using AWiFS data from 2005 to 2014. Spatial analysis indicates that 58.9% of the forest in Telangana was affected by fire over the decadal period. Conservation status was recorded based on the threat values for each grid, which forms the basis for conservation priority hotspots. Of the existing forest, 2.1% of grids showed severe ecosystem collapse and were included under the category of conservation priority hotspot-I, followed by 27.2% in conservation priority hotspot-II and 51.5% in conservation priority hotspot-III. This analysis complements the assessment of ecosystems undergoing multiple threats. An integrated approach involving deforestation and degradation indicators is useful in formulating strategies to take appropriate conservation measures.
CFD analysis of turbopump volutes
NASA Technical Reports Server (NTRS)
Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken
1993-01-01
An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a-priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.
Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.
Crăciun, Cora
2014-08-01
CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with similar homogeneity degree generate simulated spectra of different quality. This paper evaluates the grids from an EPR perspective, by defining two metrics depending on the spin system characteristics and the grid Voronoi tessellation. The first metric determines if the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies if the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour, for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, by using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method. Copyright © 2014 Elsevier Inc. All rights reserved.
SoilGrids1km — Global Soil Information Based on Automated Mapping
Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A.; Batjes, Niels H.; Heuvelink, Gerard B. M.; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Gonzalez, Maria Ruiperez
2014-01-01
Background Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. Methodology/Principal Findings We present SoilGrids1km — a global 3D soil information system at 1 km resolution — containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg−1), soil pH, sand, silt and clay fractions (%), bulk density (kg m−3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha−1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles), and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5–fold cross-validation were between 23–51%. Conclusions/Significance SoilGrids1km provide an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty to obtain covariates that capture soil forming factors, (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids system is highly automated and flexible, increasingly accurate predictions can be generated as new input data become available. SoilGrids1km are available for download via http://soilgrids.org under a Creative Commons Non Commercial license. PMID:25171179
Evaluation of the National Solar Radiation Database (NSRDB) Using Ground-Based Measurements
NASA Astrophysics Data System (ADS)
Xie, Y.; Sengupta, M.; Habte, A.; Lopez, A.
2017-12-01
Solar resource information is essential for a wide spectrum of applications including renewable energy, climate studies, and solar forecasting. Solar resource information can be obtained from ground-based measurement stations and/or from modeled data sets. While measurements provide data for the development and validation of solar resource models and other applications, modeled data expand the ability to address the needs for increased accuracy and spatial and temporal resolution. The National Renewable Energy Laboratory (NREL) has developed and regularly updates modeled solar resource data through the National Solar Radiation Database (NSRDB). The recent NSRDB dataset was developed using the physics-based Physical Solar Model (PSM) and provides gridded solar irradiance (global horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance) at a 4-km by 4-km spatial and half-hourly temporal resolution covering 18 years from 1998-2015. A comprehensive validation of the performance of the NSRDB (1998-2015) was conducted to quantify the accuracy of the spatial and temporal variability of the solar radiation data. Further, the study assessed the ability of the NSRDB (1998-2015) to accurately capture inter-annual variability, which is essential information for solar energy conversion projects and grid integration studies. Comparisons of the NSRDB (1998-2015) with nine selected ground-measured data sets were conducted under both clear- and cloudy-sky conditions. These locations provide high-quality data covering a variety of geographical settings and climates. The comparison of the NSRDB to the ground-based data demonstrated that biases were within +/-5% for GHI and +/-10% for DNI. A comprehensive uncertainty estimation methodology was established to analyze the performance of the gridded NSRDB, including all sources of uncertainty at various time-averaging periods, a method that is not often used in model evaluation. Further, the study analyzed the inter-annual variability and mean anomalies of the 18 years of solar radiation data. This presentation will outline the validation methodology and provide detailed results of the comparison.
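The bias statement above amounts to a simple relative-bias calculation between matched modeled and measured irradiance series; a hedged sketch (with synthetic half-hourly GHI, not NSRDB data) is given below.

```python
import numpy as np

def percent_bias(modeled, measured):
    """Relative bias (%) of gridded irradiance against ground measurements,
    the kind of summary behind statements such as GHI biases within +/-5%
    and DNI within +/-10%. Inputs are matched time series (e.g. half-hourly
    W/m2); only daytime samples (measured > 0) are used.
    """
    modeled, measured = np.asarray(modeled, float), np.asarray(measured, float)
    mask = measured > 0
    return 100.0 * (modeled[mask] - measured[mask]).mean() / measured[mask].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ghi_obs = np.clip(800 * np.sin(np.linspace(0, np.pi, 48)), 0, None)
    ghi_mod = ghi_obs * 1.03 + rng.normal(0, 20, 48)   # synthetic ~3% high bias
    print(f"GHI bias: {percent_bias(ghi_mod, ghi_obs):.1f}%")
```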
Assessing the prospective resource base for enhanced geothermal systems in Europe
NASA Astrophysics Data System (ADS)
Limberger, J.; Calcagno, P.; Manzella, A.; Trumpy, E.; Boxem, T.; Pluymaekers, M. P. D.; van Wees, J.-D.
2014-12-01
In this study the resource base for EGS (enhanced geothermal systems) in Europe was quantified and economically constrained, applying a discounted cash-flow model to different techno-economic scenarios for future EGS in 2020, 2030, and 2050. Temperature is a critical parameter that controls the amount of thermal energy available in the subsurface. Therefore, the first step in assessing the European resource base for EGS is the construction of a subsurface temperature model of onshore Europe. Subsurface temperatures were computed to a depth of 10 km below ground level for a regular 3-D hexahedral grid with a horizontal resolution of 10 km and a vertical resolution of 250 m. Vertical conductive heat transport was considered as the main heat transfer mechanism. Surface temperature and basal heat flow were used as boundary conditions for the top and bottom of the model, respectively. If publicly available, the most recent and comprehensive regional temperature models, based on data from wells, were incorporated. With the modeled subsurface temperatures and future technical and economic scenarios, the technical potential and minimum levelized cost of energy (LCOE) were calculated for each grid cell of the temperature model. Calculations for a typical EGS scenario yield costs of EUR 215 MWh-1 in 2020, EUR 127 MWh-1 in 2030, and EUR 70 MWh-1 in 2050. Cutoff values of EUR 200 MWh-1 in 2020, EUR 150 MWh-1 in 2030, and EUR 100 MWh-1 in 2050 are imposed on the calculated LCOE values in each grid cell to limit the technical potential, resulting in an economic potential for Europe of 19 GWe in 2020, 22 GWe in 2030, and 522 GWe in 2050. The results of our approach not only provide an indication of prospective areas for future EGS in Europe, but also show a more realistic, cost-determined and depth-dependent distribution of the technical potential, obtained by applying different well cost models for 2020, 2030, and 2050.
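A toy discounted-cash-flow LCOE calculation with an economic cutoff, in the spirit of the per-grid-cell screening described above but with invented cost numbers and a fixed assumed capacity per cell, could be written as follows.

```python
import numpy as np

def lcoe_eur_per_mwh(capex, opex_per_yr, annual_mwh, discount=0.07, lifetime=30):
    """Levelized cost of energy for one EGS plant / grid cell: discounted
    lifetime costs divided by discounted lifetime generation. All inputs are
    illustrative placeholders, not the study's cost model.
    """
    years = np.arange(1, lifetime + 1)
    disc = (1 + discount) ** -years
    costs = capex + np.sum(opex_per_yr * disc)
    energy = np.sum(annual_mwh * disc)
    return costs / energy

def economic_potential(capacity_mwe, lcoe_values, cutoff):
    """Total capacity (MWe) of grid cells whose LCOE is at or below the cutoff."""
    capacity_mwe, lcoe_values = np.asarray(capacity_mwe), np.asarray(lcoe_values)
    return capacity_mwe[lcoe_values <= cutoff].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_cells = 1000
    lcoe = np.array([lcoe_eur_per_mwh(capex=rng.uniform(3e7, 8e7),
                                      opex_per_yr=2e6,
                                      annual_mwh=rng.uniform(2e4, 8e4))
                     for _ in range(n_cells)])
    cap = np.full(n_cells, 10.0)                      # assumed 10 MWe per cell
    print("GWe below 100 EUR/MWh:", economic_potential(cap, lcoe, 100.0) / 1000)
```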
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-01-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-06-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE, a CRM) model and Goddard MMF that uses the GCEs as its embedded CRMs. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the Goddard MMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Technical Reports Server (NTRS)
Swinbank, Richard; Purser, James
2006-01-01
Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
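A standard golden-angle construction of such a Fibonacci grid on the sphere, which is the generic recipe rather than necessarily the exact variant the authors use, is sketched below.

```python
import numpy as np

def fibonacci_grid(n_points):
    """Generate an approximately uniform, equal-area set of points on the
    sphere using the golden-angle spiral often called a Fibonacci grid.
    Returns latitudes and longitudes in degrees.
    """
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))          # ~137.5 degrees
    i = np.arange(n_points)
    z = 1.0 - 2.0 * (i + 0.5) / n_points                 # equal-area bands in z
    lat = np.degrees(np.arcsin(z))
    lon = np.degrees((i * golden_angle) % (2.0 * np.pi)) - 180.0
    return lat, lon

if __name__ == "__main__":
    lat, lon = fibonacci_grid(10000)
    print(lat[:3], lon[:3])
```

Each point receives the same surface area by construction, which is the equal-area property highlighted in the abstract.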
Balloon-based interferometric techniques
NASA Technical Reports Server (NTRS)
Rees, David
1985-01-01
A balloon-borne triple-etalon Fabry-Perot Interferometer, observing the Doppler shifts of absorption lines caused by molecular oxygen and water vapor in the far red/near infrared spectrum of backscattered sunlight, has been used to evaluate a passive spaceborne remote sensing technique for measuring winds in the troposphere and stratosphere. There have been two successful high altitude balloon flights of the prototype UCL instrument from the National Scientific Balloon Facility at Palestine, TX (May 1980, Oct. 1983). The results from these flights have demonstrated that an interferometer with adequate resolution, stability and sensitivity can be built. The wind data are of comparable quality to those obtained from operational techniques (balloon and rocket sonde, cloud-top drift analysis, and from the gradient wind analysis of satellite radiance measurements). However, the interferometric data can provide a regular global grid, over a height range from 5 to 50 km in regions of clear air. Between the middle troposphere (5 km) and the upper stratosphere (40 to 50 km), an optimized instrument can make wind measurements over the daylit hemisphere with an accuracy of about 3 to 5 m/sec (2 sigma). It is possible to obtain full height profiles between altitudes of 5 and 50 km, with 4 km height resolution, and a spatial resolution of about 200 km along the orbit track. Below an altitude of about 10 km, Fraunhofer lines of solar origin are possible targets of the Doppler wind analysis. Above an altitude of 50 km, the weakness of the backscattered solar spectrum (decreasing air density) is coupled with the low absorption cross-section of all atmospheric species in the spectral region up to 800 nm (where imaging photon detectors can be used), causing the along-the-track resolution (or error) to increase beyond values useful for operational purposes. Within the region of optimum performance (5 to 50 km), however, the technique is a valuable potential complement to existing wind measuring systems and can provide a low cost addition to powerful active (LIDAR) wind measuring systems now under development.
If Pythagoras Had a Geoboard...
ERIC Educational Resources Information Center
Ewbank, William A.
1973-01-01
Finding areas on square grid and on isometric grid geoboards is explained, then the Pythagorean Theorem is investigated when regular n-gons and when similar figures are erected on the sides of a right triangle. (DT)
Air quality real-time forecast before and during the G-20 ...
The 2016 G-20 Hangzhou summit, the eleventh annual meeting of the G-20 heads of government, will be held during September 3-5, 2016 in Hangzhou, China. For a successful summit, it is important to ensure good air quality. To achieve this goal, governments of Hangzhou and its surrounding provinces will enforce a series of emission reductions, such as a forced closure of major highly-polluting industries and also limiting car and construction emissions in the cities and surroundings during the 2016 G-20 Hangzhou summit. Air quality forecast systems consisting of the two-way coupled WRF-CMAQ and online-coupled WRF-Chem have been applied to forecast air quality in Hangzhou regularly. This study will present the results of real-time forecasts of air quality over eastern China using 12-km grid spacing and for Hangzhou area using 4-km grid spacing with these two modeling systems using emission inventories for base and 2016 G-20 scenarios before and during the 2016 G-20 Hangzhou summit. Evaluations of models’ performance for both cases for PM2.5, PM10, O3, SO2, NO2, CO, air quality index (AQI), and aerosol optical depth (AOD) are carried out by comparing them with observations obtained from satellites, such as MODIS, and surface monitoring networks. The effects of the emission reduction efforts on expected air quality improvements during the2016 G-20 Hangzhou summit will be studied in depth. This study provides insights on how air quality will be improved by a plan
Method of assembly of molecular-sized nets and scaffolding
Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.
1999-01-01
The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.
Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014.
Yanto; Livneh, Ben; Rajagopalan, Balaji
2017-05-23
We describe a gridded daily meteorology dataset consisting of precipitation, minimum and maximum temperature over Java Island, Indonesia, at 0.125°×0.125° (~14 km) resolution spanning the 30 years from 1985-2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite- or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters), restricted to stations with at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in the cross-validation, which shows that the gridded rainfall presented here performs most reasonably. Visual inspection reveals an increasing performance of the gridded precipitation from the grid, to the watershed, to the island scale. The data set, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
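As an illustration of the gap-filling step described above, a minimal inverse-distance-weighting sketch using the stated radius (25 km) and power (3); the station coordinates, values and target grid point are hypothetical placeholders, not data from this product.

```python
import numpy as np

def idw(xy_sta, vals, xy_grid, radius_km=25.0, power=3.0):
    """Inverse Distance Weighting: interpolate station values to grid points.

    xy_sta  : (n, 2) station coordinates in km
    vals    : (n,)   station observations (e.g. daily precipitation)
    xy_grid : (m, 2) target grid-point coordinates in km
    Returns (m,) interpolated values; NaN where no station lies within radius.
    """
    out = np.full(len(xy_grid), np.nan)
    for i, p in enumerate(xy_grid):
        d = np.hypot(*(xy_sta - p).T)               # distances to all stations
        mask = d <= radius_km
        if not mask.any():
            continue
        w = 1.0 / np.maximum(d[mask], 1e-6) ** power  # avoid divide-by-zero
        out[i] = np.sum(w * vals[mask]) / np.sum(w)
    return out

# Hypothetical example: five stations, one target grid point
stations = np.array([[0, 0], [10, 5], [20, 0], [5, 15], [30, 30]], float)
rain_mm  = np.array([12.0, 8.0, 15.0, 10.0, 2.0])
grid_pts = np.array([[8.0, 6.0]])
print(idw(stations, rain_mm, grid_pts))
```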
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area-mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE for a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
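For readers unfamiliar with the decorrelation-length concept, a minimal sketch of how an e-folding distance could be estimated by fitting an exponential decay R²(d) = exp(-d/L) to pairwise squared correlations; the pairwise values below are made up for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical squared correlations versus station separation (km)
dist_km = np.array([25, 100, 200, 300, 400, 500], float)
r2      = np.array([0.85, 0.70, 0.55, 0.42, 0.35, 0.25])

# Linearize: ln(R^2) = -d / L, so a least-squares slope gives -1/L.
slope = np.polyfit(dist_km, np.log(r2), 1)[0]
L = -1.0 / slope
print(f"estimated decorrelation length ~ {L:.0f} km")   # roughly 400 km here
```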
Turner, D.P.; Dodson, R.; Marks, D.
1996-01-01
Spatially distributed biogeochemical models may be applied over grids at a range of spatial resolutions; however, evaluation of potential errors and loss of information at relatively coarse resolutions is rare. In this study, a georeferenced database at 1-km spatial resolution was developed to initialize and drive a process-based model (Forest-BGC) of water and carbon balance over a gridded 54,976 km2 area covering two river basins in mountainous western Oregon. Corresponding data sets were also prepared at 10-km and 50-km spatial resolutions using commonly employed aggregation schemes. Estimates were made at each grid cell for climate variables including daily solar radiation, air temperature, humidity, and precipitation. The topographic structure, water holding capacity, vegetation type and leaf area index were likewise estimated for initial conditions. The daily time series for the climatic drivers was developed from interpolations of meteorological station data for the water year 1990 (1 October 1989-30 September 1990). Model outputs at the 1-km resolution showed good agreement with observed patterns in runoff and productivity. The ranges of model inputs at the 10-km and 50-km resolutions tended to contract because of the smoothed topography. Estimates for mean evapotranspiration and runoff were relatively insensitive to changes in the spatial resolution of the grid, whereas estimates of mean annual net primary production varied by 11%. The designation of a vegetation type and leaf area at the 50-km resolution often subsumed significant heterogeneity in vegetation, and this factor accounted for much of the difference in the mean values of the carbon flux variables. Although area-wide means of model outputs were generally similar across resolutions, difference maps often revealed large areas of disagreement. Relatively high spatial resolution analyses of biogeochemical cycling are desirable from several perspectives and may be particularly important in the study of the potential impacts of climate change.
On Improving 4-km Mesoscale Model Simulations
NASA Astrophysics Data System (ADS)
Deng, Aijun; Stauffer, David R.
2006-03-01
A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6) Although use of current CPSs may violate their underlying assumptions related to the size of the convective element relative to the grid size, the gridpoint storm problem was greatly reduced by applying a CPS to the 4-km grid.
Variable Grid Traveltime Tomography for Near-surface Seismic Imaging
NASA Astrophysics Data System (ADS)
Cai, A.; Zhang, J.
2017-12-01
We present a new algorithm of traveltime tomography for imaging the subsurface with grids that vary automatically according to geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach for near-surface imaging. However, model regularization on a regular, evenly spaced grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, while the details along geological boundaries are difficult to resolve. Therefore, we solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates the grid cells within them for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures or the boundaries of large-scale structures. The approach is demonstrated with tests on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. Using the variable grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, while requiring fewer grid cells. The field data were collected in an oil field in China. The survey was performed in an area where the subsurface structures are predominantly layered. The data set includes 476 shots at a 10 m spacing and 1735 receivers at a 10 m spacing. The first-arrival traveltimes of the seismograms were picked for tomography. The reciprocal errors of most shots are between 2 ms and 6 ms. The conventional tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a resolved flat layer and fewer artifacts. In addition, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids in small structures. The variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence models in order to assess grid independence. Three grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, corresponding to nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that, among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular mesh structures. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
Exploration Gap Assessment (FY13 Update)
Dan Getman
2013-09-30
This submission contains an update to the previous Exploration Gap Assessment funded in 2012, which identified high-potential hydrothermal areas where critical data are needed (a gap analysis of exploration data). The uploaded data are contained in two files for each data category: a shapefile (SHP) containing the grid, and a data file (CSV) containing the individual layers that intersected with the grid. This CSV can be joined with the map to retrieve a list of datasets that are available at any given site. A grid of the contiguous U.S. was created with 88,000 10-km by 10-km grid cells, and each cell was populated with the status of data availability corresponding to five data types: (1) well data, (2) geologic maps, (3) fault maps, (4) geochemistry data, and (5) geophysical data.
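For orientation, a minimal sketch of the kind of SHP/CSV join described above using geopandas and pandas; the file names and the shared key column ("cell_id") are assumptions for illustration, not the actual field names of this submission.

```python
import geopandas as gpd
import pandas as pd

# Join the 10 km x 10 km grid polygons (SHP) with the per-cell layer listing (CSV).
grid   = gpd.read_file("exploration_grid.shp")      # hypothetical file name
layers = pd.read_csv("well_data_layers.csv")        # hypothetical file name

joined = grid.merge(layers, on="cell_id", how="left")

# Cells with no intersecting well data are candidate exploration gaps.
gaps = joined[joined["dataset_name"].isna()]
print(f"{len(gaps)} of {len(grid)} cells lack well data")
```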
NASA Astrophysics Data System (ADS)
Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.
2018-07-01
Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregularly gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence that depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregularly gridded sampling in time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
Procedure for locating 10 km UTM grid on Alabama County general highway maps
NASA Technical Reports Server (NTRS)
Paludan, C. T. N.
1975-01-01
Each county highway map has a geographic grid of degrees and tens of minutes in both longitude and latitude in the margins and within the map as intersection crosses. These will be used to locate the universal transverse mercator (UTM) grid at 10 km intervals. Since the maps used may have stretched or shrunk in height and/or width, interpolation should be done between the 10 min intersections when possible. A table of UTM coordinates of 10 min intersections is required and included. In Alabama, all eastings are referred to a false easting of 500,000 m at 87 deg W longitude (central meridian, CM).
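For modern context, a minimal pyproj sketch (an assumption of this note, not part of the original procedure, which relies on a printed table) of how UTM zone 16N coordinates of 10-minute graticule intersections over Alabama could be computed; grid-line positions on a stretched paper map would then be found by linear interpolation between adjacent intersections, as the report describes.

```python
import numpy as np
from pyproj import Transformer

# UTM zone 16N: central meridian 87 deg W, false easting 500,000 m.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

lons = np.arange(-88.0, -86.9, 1 / 6)   # 10-minute spacing in longitude
lats = np.arange(32.0, 33.01, 1 / 6)    # 10-minute spacing in latitude
for lat in lats[:2]:
    for lon in lons[:2]:
        e, n = to_utm.transform(lon, lat)
        print(f"{lat:.3f}N {abs(lon):.3f}W -> easting {e:9.0f} m, northing {n:10.0f} m")
```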
WRF-Cordex simulations for Europe: mean and extreme precipitation for present and future climates
NASA Astrophysics Data System (ADS)
Cardoso, Rita M.; Soares, Pedro M. M.; Miranda, Pedro M. A.
2013-04-01
The Weather Research and Forecasting (WRF-ARW) model, version 3.3.1, was used to perform the European-domain CORDEX simulations at 50 km resolution. A first simulation, forced by ERA-Interim (1989-2009), was carried out to evaluate the model's performance in representing mean and extreme precipitation in the present European climate. This evaluation is based on the comparison of WRF results against the ECA&D regular gridded dataset of daily precipitation. Results are comparable to recent studies with other models for the European region at this resolution. For the same domain, a control and a future scenario (RCP8.5) simulation were performed to assess the climate change impact on mean and extreme precipitation. These regional simulations were forced by EC-EARTH model results and encompass the periods 1960-2006 and 2006-2100, respectively.
The ARM Best Estimate 2-dimensional Gridded Surface
Xie, Shaocheng; Qi, Tang
2015-06-15
The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
Distributed Wavelet Transform for Irregular Sensor Network Grids
2005-01-01
implement it in a multi-hop, wireless sensor network; and illustrate with several simulations. The new transform performs on par with conventional wavelet methods in a head-to-head comparison on a regular grid of sensor nodes.
Detector shape in hexagonal sampling grids
NASA Astrophysics Data System (ADS)
Baronti, Stefano; Capanni, Annalisa; Romoli, Andrea; Santurri, Leonardo; Vitulli, Raffaele
2001-12-01
Recent improvements in CCD technology make hexagonal sampling attractive for practical applications and bring new interest to this topic. In the following, the performance of hexagonal sampling is analyzed under general assumptions and compared with the performance of conventional rectangular sampling. This analysis takes into account both the lattice form (square, rectangular, hexagonal, and regular hexagonal) and the pixel shape. The analyzed hexagonal grid is not based a priori on a regular hexagonal tessellation, i.e., no constraints are placed on the ratio between the sampling frequencies in the two spatial directions. By assuming an elliptic support for the spectrum of the signal being sampled, sampling conditions are expressed for a generic hexagonal sampling grid, and a comparison with the well-known sampling conditions for a comparable rectangular lattice is performed. Further, by considering for the sake of clarity a spectrum with circular support, the comparison is performed under the assumption of the same number of pixels per unit surface, and the particular case of a regular hexagonal sampling grid is also considered. A regular hexagonal lattice with a regular hexagonal sensitivity shape of the detector elements results as the best trade-off among the proposed sampling requirements. Concerning the detector shape, the hexagonal is more advantageous than the rectangular. To show this, a figure of merit is defined which takes into account that the MTF (modulation transfer function) of a hexagonal detector is not separable, unlike that of a rectangular detector. As a final result, octagonal-shape detectors are compared to those with rectangular and hexagonal shape under the two hypotheses of equal and ideal fill factor, respectively.
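A worked example may help quantify why hexagonal sampling is attractive here. The classical result for a circularly band-limited signal (a standard textbook figure, not a number taken from this paper) is that a regular hexagonal lattice needs about 13.4% fewer samples than a square lattice:

```python
import math

# Minimum sample densities for a spectrum with circular support of radius W.
W = 1.0                                    # spectral radius (arbitrary units)
density_rect = (2 * W) ** 2                # samples per unit area, square lattice
density_hex  = 2 * math.sqrt(3) * W ** 2   # optimal regular hexagonal lattice

saving = 1 - density_hex / density_rect
print(f"hexagonal sampling needs {saving:.1%} fewer samples")   # ~13.4%
```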
The impact of mesoscale convective systems on global precipitation: A modeling study
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo
2017-04-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and the Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) was conducted and results in both reduced surface rainfall and evaporation.
Image stretching on a curved surface to improve satellite gridding
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1975-01-01
A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.
New Global Bathymetry and Topography Model Grids
NASA Astrophysics Data System (ADS)
Smith, W. H.; Sandwell, D. T.; Marks, K. M.
2008-12-01
A new version of the "Smith and Sandwell" global marine topography model is available in two formats. A one-arc-minute Mercator projected grid covering latitudes to +/- 80.738 degrees is available in the "img" file format. Also available is a 30-arc-second version in latitude and longitude coordinates from pole to pole, supplied as tiles covering the same areas as the SRTM30 land topography data set. The new effort follows the Smith and Sandwell recipe, using publicly available and quality controlled single- and multi-beam echo soundings where possible and filling the gaps in the oceans with estimates derived from marine gravity anomalies observed by satellite altimetry. The altimeter data have been reprocessed to reduce the noise level and improve the spatial resolution [see Sandwell and Smith, this meeting]. The echo soundings database has grown enormously with new infusions of data from the U.S. Naval Oceanographic Office (NAVO), the National Geospatial-intelligence Agency (NGA), hydrographic offices around the world volunteering through the International Hydrographic Organization (IHO), and many other agencies and academic sources worldwide. These new data contributions have filled many holes: 50% of ocean grid points are within 8 km of a sounding point, 75% are within 24 km, and 90% are within 57 km. However, in the remote ocean basins some gaps still remain: 5% of the ocean grid points are more than 85 km from the nearest sounding control, and 1% are more than 173 km away. Both versions of the grid include a companion grid of source file numbers, so that control points may be mapped and traced to sources. We have compared the new model to multi-beam data not used in the compilation and find that 50% of differences are less than 25 m, 95% of differences are less than 130 m, but a few large differences remain in areas of poor sounding control and large-amplitude gravity anomalies. Land values in the solution are taken from SRTM30v2, GTOPO30 and ICESAT data. GEBCO has agreed to adopt this model and begin updating it in 2009. Ongoing tasks include building an uncertainty model and including information from the latest IBCAO map of the Arctic Ocean.
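The coverage statistics quoted above (percentiles of the distance from each ocean grid point to the nearest sounding) can be illustrated with a minimal nearest-neighbour sketch; the random points stand in for the real sounding and grid locations, and a real computation would use great-circle rather than planar distances.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
soundings = rng.uniform(0, 1000, size=(5000, 2))    # control points, km (synthetic)
grid_pts  = rng.uniform(0, 1000, size=(20000, 2))   # ocean grid points, km (synthetic)

tree = cKDTree(soundings)
dist, _ = tree.query(grid_pts)                      # distance to nearest sounding
for p in (50, 75, 90, 95, 99):
    print(f"{p}% of grid points are within {np.percentile(dist, p):.1f} km of a sounding")
```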
NASA Astrophysics Data System (ADS)
Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier
2018-02-01
This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the methodology used to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field whose spatial variability is in better agreement with observations than that of the standard model. Although the large-scale model biases are not corrected, for selected model parameters the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.
Kousa, Anne; Puustinen, Niina; Karvonen, Marjatta; Moltchanova, Elena
2012-01-01
The incidence of type 2 diabetes is increasing among Finnish young adults. A slightly increased risk in men was found in the north-east and western parts of the country. The higher risk areas in women were found in the western coastal area and in eastern Finland. The present register-based study aimed to evaluate the regional association of the incidence of type 2 diabetes among young adults with the concentration of magnesium in local ground water. The association was evaluated using Bayesian modeling of geo-referenced data aggregated into regular 10 km × 10 km grid cells. No marked association was found, although suggestive findings were detected for magnesium in well water and diabetes in young adult women. The results of this register-based study did not completely rule out an association of well water magnesium with the geographical variation of type 2 diabetes. The incidence of type 2 diabetes was much higher among individuals aged 40 or over. These suggestive findings indicate that the association between magnesium and type 2 diabetes would also be worth examining among individuals over 40 years of age.
Global Static Indexing for Real-Time Exploration of Very Large Regular Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pascucci, V; Frank, R
2001-07-23
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048³ grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192³ grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
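The paper defines its own hierarchical reordering; purely as a generic illustration of the idea of reordering a rectilinear grid so that spatially nearby nodes end up close together on disk (which is what reduces out-of-core I/O when extracting planar slices), a Morton (Z-order) index is sketched below. This is an assumed stand-in, not the scheme the paper uses.

```python
def morton3(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of (x, y, z) into a single Z-order index."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)
        code |= ((y >> b) & 1) << (3 * b + 1)
        code |= ((z >> b) & 1) << (3 * b + 2)
    return code

# Nodes of a k x k x k grid sorted by Morton index give the reordered layout.
k = 4
order = sorted(((morton3(x, y, z), (x, y, z))
                for x in range(k) for y in range(k) for z in range(k)))
print(order[:8])
```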
NASA Astrophysics Data System (ADS)
Strong, Courtenay; Khatri, Krishna B.; Kochanski, Adam K.; Lewis, Clayton S.; Allen, L. Niel
2017-05-01
The main objective of this study was to investigate whether dynamically downscaled high-resolution (4-km) climate data from the Weather Research and Forecasting (WRF) model provide physically meaningful additional information for reference evapotranspiration (E) calculation compared to the recently published GridET framework, which interpolates from coarser-scale simulations run at 32-km resolution. The analysis focuses on the complex terrain of Utah in the western United States for the years 1985-2010, and comparisons were made statewide with supplemental analyses specifically for regions with irrigated agriculture. E was calculated using the standardized equation and procedures proposed by the American Society of Civil Engineers from hourly data, and climate inputs from WRF and GridET were debiased relative to the same set of observations. For annual mean values, E from WRF (EW) and E from GridET (EG) both agreed well with E derived from observations (r² = 0.95, bias < 2 mm). Domain-wide, EW and EG were well correlated spatially (r² = 0.89); however, local differences ΔE = EW − EG were as large as +439 mm year⁻¹ (+26%) in some locations, and ΔE averaged +36 mm year⁻¹. After linearly removing the effects of contrasts in solar radiation and wind speed, which are characteristically less reliable under downscaling in complex terrain, approximately half the residual variance was accounted for by contrasts in temperature and humidity between GridET and WRF. These contrasts stemmed from GridET interpolating with an assumed lapse rate of Γ = 6.5 K km⁻¹, whereas WRF produced a thermodynamically driven lapse rate closer to 5 K km⁻¹, as observed in mountainous terrain. The primary conclusions are that observed lapse rates in complex terrain differ markedly from the commonly assumed Γ = 6.5 K km⁻¹, that these lapse rates can be realistically resolved via dynamical downscaling, and that the use of a constant Γ produces differences in E as large as order 10² mm year⁻¹.
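To make the lapse-rate sensitivity concrete, a minimal sketch (with made-up elevations and temperatures, not values from the study) of how the temperature assigned to a high-elevation grid cell changes between the 6.5 K km⁻¹ assumption and a 5 K km⁻¹ lapse rate:

```python
def downscale_temp(t_ref_c: float, z_ref_m: float, z_target_m: float,
                   lapse_k_per_km: float) -> float:
    """Shift a reference temperature to a target elevation with a fixed lapse rate."""
    return t_ref_c - lapse_k_per_km * (z_target_m - z_ref_m) / 1000.0

t_ref, z_ref, z_target = 20.0, 1500.0, 3000.0     # deg C, m, m (illustrative)
for gamma in (6.5, 5.0):                          # assumed vs. more realistic lapse rate
    print(f"lapse {gamma} K/km -> {downscale_temp(t_ref, z_ref, z_target, gamma):.1f} C")
```

A 1.5 K km⁻¹ difference over a 1.5 km elevation gap shifts the cell temperature by about 2.3 K, which propagates directly into the reference evapotranspiration estimate.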
NASA Astrophysics Data System (ADS)
Kim, Y.; Du, J.; Kimball, J. S.
2017-12-01
The landscape freeze-thaw (FT) status derived from satellite microwave remote sensing is closely linked to vegetation phenology and productivity, surface energy exchange, evapotranspiration, snow/ice melt dynamics, and trace gas fluxes over land areas affected by seasonally frozen temperatures. A long-term global satellite microwave Earth System Data Record of daily landscape freeze-thaw status (FT-ESDR) was developed using similarly calibrated 37 GHz, vertically polarized (V-pol) brightness temperatures (Tb) from the SMMR, SSM/I, and SSMIS sensors. The FT-ESDR shows mean annual spatial classification accuracies of 90.3 and 84.3 % for PM and AM overpass retrievals relative to surface air temperature (SAT) measurement-based FT estimates from global weather stations. However, the coarse FT-ESDR gridding (25 km) is insufficient to distinguish finer-scale FT heterogeneity. In this study, we tested alternative finer-scale FT estimates derived from two enhanced polar-grid (3.125-km and 6-km resolution), 36.5 GHz V-pol Tb records derived from calibrated AMSR-E and AMSR2 sensor observations. The daily FT estimates are derived using a modified seasonal threshold algorithm that classifies daily Tb variations in relation to grid-cell-wise FT thresholds calibrated using ERA-Interim reanalysis SAT, downscaled using a digital terrain map and estimated temperature lapse rates. The resulting polar-grid FT records for a selected study year (2004) show mean annual spatial classification accuracies of 90.1% (84.2%) and 93.1% (85.8%) for the respective PM (AM) 3.125-km and 6-km Tb retrievals relative to in situ SAT measurement-based FT estimates from regional weather stations. Areas with enhanced FT accuracy include water-land boundaries and mountainous terrain. Differences in FT patterns and relative accuracy obtained from the enhanced-grid Tb records were attributed to several factors, including different noise contributions from the underlying Tb processing and spatial mismatches between the Tb retrievals and SAT-calibrated FT thresholds.
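At its core, a seasonal-threshold classification compares each day's brightness temperature against a per-cell calibrated threshold. A minimal sketch follows; the threshold value, the daily Tb values, and the sign convention (frozen when Tb falls below the threshold) are illustrative assumptions, not parameters of the actual algorithm.

```python
import numpy as np

tb_series = np.array([258.0, 255.5, 252.0, 249.5, 247.0, 251.0, 256.5])  # K, daily (synthetic)
tb_threshold = 252.5   # hypothetical per-cell calibrated threshold, K

frozen = tb_series < tb_threshold          # True = classified frozen, False = thawed
print(frozen.astype(int))                  # -> [0 0 1 1 1 1 0]
```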
NASA Astrophysics Data System (ADS)
Megalingam, Mariammal; Hari Prakash, N.; Solomon, Infant; Sarma, Arun; Sarma, Bornali
2017-04-01
Experimental evidence of different kinds of oscillations in the floating potential fluctuations of a glow discharge magnetized plasma is reported. A spherical gridded cage is inserted into the ambient plasma volume for creating plasma bubbles. Plasma is produced between a spherical mesh grid and the chamber. The spherical mesh grid, of 80% optical transparency, is connected to the positive terminal of the power supply and acts as the anode. Two Langmuir probes are kept in the ambient plasma to measure the floating potential fluctuations at different positions within the system, viz., inside and outside the spherical mesh grid. At certain values of the discharge voltage (Vd) and magnetic field, a transition from irregular to regular modes appears, and the oscillations show successive changes as the magnetic field is varied. Further, various nonlinear analyses such as the Recurrence Plot, Hurst exponent, and Lyapunov exponent have been carried out to investigate the dynamics of the oscillations over a range of discharge voltages and external magnetic fields. Determinism, entropy, and Lmax, important measures of Recurrence Quantification Analysis, indicate an irregular-to-regular transition in the dynamics of the fluctuations. Furthermore, the behavior of the plasma oscillations is characterized by multifractal detrended fluctuation analysis to explore the nature of the fluctuations. It reveals that the fluctuations have a multifractal nature and behave as a long-range correlated process.
Wave Information Studies of US Coastlines: Hindcast Wave Information for the Great Lakes: Lake Erie
1991-10-01
total ice cover) for individual grid cells measuring 5 km square. The GLERL analyzed each half-month data set to provide the maximum, minimum...average, median, and modal ice concentrations for each 5-km cell. The median value, which represents an estimate of the 50-percent point of the ice...incorporating the progression and decay of the time-dependent ice cover was complicated by the fact that different grid cell sizes were used for mapping the ice
Hans T. Schreuder; Jin-Mann S. Lin; John Teply
2000-01-01
We estimate the number of tree species in National Forest populations using the nonparametric estimator. Data from the Current Vegetation Survey (CVS) of Region 6 of the USDA Forest Service were used to estimate the number of tree species with a plot close in size to the Forest Inventory and Analysis (FIA) plot and with the actual CVS plot, for the 5.5 km FIA grid and the 2.7 km...
Hydro and morphodynamic simulations for probabilistic estimates of munitions mobility
NASA Astrophysics Data System (ADS)
Palmsten, M.; Penko, A.
2017-12-01
Probabilistic estimates of waves, currents, and sediment transport at underwater munitions remediation sites are necessary to constrain probabilistic predictions of munitions exposure, burial, and migration. To address this need, we produced ensemble simulations of hydrodynamic flow and morphologic change with Delft3D, a coupled system of wave, circulation, and sediment transport models. We set up the Delft3D model simulations at the Army Corps of Engineers Field Research Facility (FRF) in Duck, NC, USA. The FRF is the prototype site for the near-field munitions mobility model, which integrates far-field and near-field munitions mobility simulations. An extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data is available at the FRF, as well as existing observations of munitions mobility for model testing. Here, we present results of ensemble Delft3D hydro- and morphodynamic simulations at Duck. A nested Delft3D simulation runs an outer grid that extends 12 km in the alongshore and 3.7 km in the cross-shore direction with 50-m resolution and a maximum depth of approximately 17 m. The inner nested grid extends 3.2 km in the alongshore and 1.2 km in the cross-shore direction with 5-m resolution and a maximum depth of approximately 11 m. The initial bathymetry of the inner nested grid is defined as the most recent survey or remotely sensed estimate of water depth. Delft3D-WAVE and -FLOW are driven with spectral wave measurements from a Waverider buoy in 17-m depth located on the offshore boundary of the outer grid. The spectral wave output and the water levels from the outer grid are used to define the boundary conditions for the inner nested high-resolution grid, in which the coupled Delft3D WAVE-FLOW-MORPHOLOGY model is run. The ensemble results are compared to the wave, current, and bathymetry observations collected at the FRF.
NASA Astrophysics Data System (ADS)
Im, Eun-Soon; Coppola, Erika; Giorgi, Filippo
2010-05-01
Since anthropogenic climate change is an important factor for future human life all over the planet and its effects are not globally uniform, climate information at regional or local scales becomes more and more important for an accurate assessment of the potential impact of climate change on societies and ecosystems. High-resolution information at scales fine enough to resolve complex geographical features can be a critical factor for a successful linkage between climate models and impact assessment studies. However, the scale mismatch between them remains a major problem. One method for overcoming the resolution limitations of global climate models and for adding regional detail to coarse-grid global projections is dynamical downscaling by means of a regional climate model. In this study, the ECHAM5/MPI-OM (1.875 degree) A1B scenario simulation has been dynamically downscaled using two different approaches within the framework of the RegCM3 modeling system. First, a mosaic-type parameterization of subgrid-scale topography and land use (Sub-BATS) is applied over the European Alpine region. The Sub-BATS system is composed of 15 km coarse-grid cells and 3 km sub-grid cells. Second, we developed a RegCM3 one-way double-nested system, with the mother domain encompassing the eastern regions of Asia at 60 km grid spacing and the nested domain covering the Korean Peninsula at 20 km grid spacing. By comparing the regional climate model output with the driving ECHAM5/MPI-OM output, it is possible to estimate the added value of physically based dynamical downscaling when, for example, impact studies at the hydrological scale are performed.
NCAR global model topography generation software for unstructured grids
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.
2015-06-01
It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
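The quantities this kind of software computes per grid box (land fraction, mean elevation, sub-grid elevation variance) can be illustrated with a minimal sketch; the random "DEM" and the single coarse box are assumptions for illustration only, whereas the actual software performs this aggregation per cell of an arbitrary unstructured grid.

```python
import numpy as np

rng = np.random.default_rng(1)
dem_elev = rng.normal(800.0, 250.0, size=(120, 120))   # m, high-res elevations (synthetic)
dem_land = rng.random((120, 120)) > 0.3                # True where land (synthetic)

landfrac  = dem_land.mean()                            # fraction of box covered by land
mean_elev = dem_elev[dem_land].mean()                  # grid-box mean elevation over land
sgh_var   = dem_elev[dem_land].var()                   # sub-grid elevation variance over land
print(f"land fraction {landfrac:.2f}, mean elevation {mean_elev:.0f} m, "
      f"sub-grid variance {sgh_var:.0f} m^2")
```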
netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data
NASA Astrophysics Data System (ADS)
Zender, C. S.
2015-12-01
Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research-oriented agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. The remote sensing, weather, and climate modeling and analysis communities face similar problems in handling SLD, including how to easily: (1) specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids; (2) bin, interpolate, average, or re-map SLD to regular grids; and (3) derive secondary data from given quality levels of SLD. These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO, users can (1) quickly project SLD onto the most useful regular grids for intercomparison, and (2) access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality control metrics. These capabilities improve interoperability and software reuse and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly.
NASA Astrophysics Data System (ADS)
Garay, Michael J.; Davis, Anthony B.; Diner, David J.
2016-12-01
We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.
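The core tomographic step can be pictured as a small linear system: each camera ray contributes one equation in which the path length through each grid cell multiplies that cell's extinction, and the sum equals the slant optical thickness seen along the ray. The sketch below is a toy illustration of that idea with a made-up three-cell slice and three rays; it is not the retrieval algorithm used with MISR.

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.1, 0.0, 0.9],      # path lengths (km) of ray 1 through cells 1-3
              [0.0, 1.3, 0.8],      # ray 2
              [1.0, 1.0, 0.0]])     # ray 3
aot = np.array([0.25, 0.30, 0.28])  # slant optical thickness seen by each ray (synthetic)

extinction, residual = nnls(A, aot)  # km^-1, constrained to be non-negative
print(extinction, residual)
```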
NASA Astrophysics Data System (ADS)
Ohfuchi, Wataru; Enomoto, Takeshi; Yoshioka, Mayumi K.; Takaya, Koutarou
2014-05-01
Some high-resolution simulations with a conventional atmospheric general circulation model (AGCM) were conducted right after the first Earth Simulator started operating in the spring of 2002. More simulations with various resolutions followed. The AGCM in this study, AFES (AGCM For the Earth Simulator), is a primitive-equation spectral transform model with a cumulus convection parameterization. In this presentation, some findings from comparisons between high- and low-resolution simulations, and some future perspectives for old-fashioned AGCMs, will be discussed. One obvious advantage of increasing resolution is the capability of resolving the fine structures of topography and atmospheric flow. By increasing resolution from T39 (about 320 km horizontal grid interval) to T79 (160 km), T159 (80 km) and T319 (40 km), topographic precipitation over Japan becomes increasingly realistic. This feature is necessary for climate and weather studies involving both global and local aspects. In order to resolve submesoscale (about 100 km horizontal scale) atmospheric circulation, about a 10-km grid interval is necessary. Comparing T1279 (10 km) simulations with T319 ones, it is found that, for example, the intensity of heavy rain associated with the Baiu front and the central pressure of typhoons become more realistic. These realistic submesoscale phenomena should have an impact on the larger-scale flow through dynamics and thermodynamics. An interesting finding on increasing the horizontal resolution of a conventional AGCM is that some cumulus convection parameterizations, such as Arakawa-Schubert-type schemes, gradually stop producing precipitation, while others, such as the Emanuel type, do not. With the former, grid-scale condensation increases with the model resolution to compensate. Which characteristic is more desirable is arguable, but it is an important feature one has to consider when developing a high-resolution conventional AGCM. Many may think that conventional primitive-equation spectral transform AGCMs, such as AFES, have no future. Developing globally homogeneous nonhydrostatic cloud-resolving grid AGCMs is obviously a straightforward direction for the future. However, these models will be very expensive for many users for a while, perhaps for the next few decades. On the other hand, old-fashioned AGCMs with a grid interval of 20-100 km will remain accurate and efficient tools for many users for many years to come. Also, by coupling with a fine-resolution regional nonhydrostatic model, a conventional AGCM may overcome its limitations for use in climate and weather studies in the future.
NASA Astrophysics Data System (ADS)
Singh, Gurjeet; Panda, Rabindra K.; Mohanty, Binayak P.; Jana, Raghavendra B.
2016-05-01
Strategic ground-based sampling of soil moisture across multiple scales is necessary to validate remotely sensed quantities such as NASA's Soil Moisture Active Passive (SMAP) product. In the present study, in-situ soil moisture data were collected at two nested scale extents (0.5 km and 3 km) to understand the trend of soil moisture variability across these scales. This ground-based soil moisture sampling was conducted in the 500 km2 Rana watershed situated in eastern India. The study area is characterized by a sub-humid, sub-tropical climate with average annual rainfall of about 1456 mm. Three 3 km × 3 km grids were sampled intensively once a day at 49 locations each, at a spacing of 0.5 km. These intensive sampling locations were selected on the basis of differences in topography, soil properties and vegetation characteristics. In addition, measurements were also made at 9 locations around each intensive sampling grid at 3 km spacing to cover a 9 km × 9 km grid. Both the intensive fine-scale soil moisture sampling and the coarser-scale sampling were made using impedance probes and gravimetric analyses in the study watershed. The ground-based soil moisture samplings were conducted during the day, concurrent with the SMAP descending overpass. Analysis of soil moisture spatial variability in terms of the areal mean soil moisture and the statistics of higher-order moments, i.e., the standard deviation and the coefficient of variation, is presented. Results showed that the standard deviation and coefficient of variation of measured soil moisture decreased with extent scale and with increasing mean soil moisture.
NASA Astrophysics Data System (ADS)
Shi, X.; Zhao, C.
2017-12-01
Haze aerosol pollution has been a major issue of concern in China, and its characterization is in high demand. With limited observation sites, aerosol properties obtained from a single site are frequently used to represent haze conditions over a large domain, such as tens of kilometers, which can introduce large uncertainties because of spatial variation. Using network observations with high spatial resolution over an urban city in North China from November 2015 to February 2016, this study examines the spatial representativeness of ground-site observations. A method is first developed to determine the representative area of measurements from a limited number of stations. The key idea of this method is to quantify the spatial variability of the concentration of particulate matter with diameters less than 2.5 μm (PM2.5) using a variance function on 2 km x 2 km grids. Based on the high-spatial-resolution (0.5 km x 0.5 km) measurements of PM2.5, the grid cells whose PM2.5 is highly correlated with, and differs little in value from, that of a given measurement are taken as the representative area of that measurement. Note that the representative area is not exactly a circular region. For the study region and period, the representative area ranges from 0.25 km2 to 16.25 km2 and varies with location. For the 20 km x 20 km study region, 10 station observations would represent well the PM2.5 observations obtained from the current 169 stations at the four-month time scale.
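A minimal sketch of the representativeness idea follows: a neighbouring 2 km cell is counted into a station's representative area when its PM2.5 time series is both highly correlated with, and close in magnitude to, the station's. The correlation and relative-difference thresholds and the synthetic series below are assumptions for illustration, not the study's criteria.

```python
import numpy as np

rng = np.random.default_rng(2)
station = rng.gamma(4.0, 20.0, size=120)                   # station PM2.5, ug/m3 (synthetic)
neighbours = station + rng.normal(0, 15, size=(25, 120))   # 25 nearby 2 km cells (synthetic)

r = np.array([np.corrcoef(station, nb)[0, 1] for nb in neighbours])
rel_diff = np.abs(neighbours.mean(axis=1) - station.mean()) / station.mean()

representative = (r > 0.8) & (rel_diff < 0.1)              # hypothetical thresholds
print(f"representative area ~ {representative.sum() * 4} km^2 "
      f"({representative.sum()} cells of 2 km x 2 km)")
```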
NASA Astrophysics Data System (ADS)
Götze, Hans-Jürgen; Schmidt, Sabine
2014-05-01
Modern geophysical interpretation requires an interdisciplinary approach, particularly when considering the available amount of 'state of the art' information. A combination of different geophysical surveys employing seismic, gravity and EM, together with geological and petrological studies, can provide new insights into the structures and tectonic evolution of the lithosphere, natural deposits and underground cavities. Interdisciplinary interpretation is essential for any numerical modelling of these structures and the processes acting on them. Interactive gravity and magnetic modeling can play an important role in the depth imaging workflow of complex projects. The integration of the workflow and the tools is important to meet the needs of today's more interactive and interpretative depth imaging workflows. For the integration of gravity and magnetic models, the software IGMAS+ can play an important role in this workflow. For simplicity the focus here is on gravity modeling, but all methods can be applied to the modeling of magnetic data as well. Currently there are three common ways to define a 3D gravity model. Grid-based models: grids define the different geological units, whose densities are constant; additional grids can be introduced to subdivide the geological units, making it possible to represent density-depth relations. Polyhedral models: the interfaces between different geological units are defined by polyhedra, typically triangles. Voxel models: each voxel in a regular cube has a density assigned. Spherical Earth modeling: geophysical investigations may cover huge areas of several thousand square kilometers, and the depression of the Earth's surface due to its curvature is 3 km at a distance of 200 km and 20 km at a distance of 500 km. Interactive inversion: inversion is typically done in batch mode, where constraints are defined beforehand and then, after a few minutes or hours, a model fitting the data and constraints is generated. As examples I show results from the Central Andes and the North Sea. Both the gravity and the geoid of the two areas were investigated with regard to their isostatic state, the crustal density structure and the rigidity of the lithosphere. Modern satellite measurements from the recent ESA campaigns are compared to ground observations in the region. Estimates of stress and GPE (gravitational potential energy) at the western South American margin have been derived from an existing 3D density model. Here, sensitivity studies of gravity and gravity gradients indicate that short-wavelength lithospheric structures are more pronounced in the gravity gradient tensor than in the gravity field. A medium-size example of the North Sea underground demonstrates how interdisciplinary data sets can support aerogravity investigations. At the micro scale, an example from the detection of a crypt (Alversdorf, Northern Germany) is shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment/Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterizations of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rates from large and deep convective cores. Sensitivity runs involving variation of parameters that affect raindrop or ice particle size distributions (e.g., a more aggressive break-up process) generally reduce the bias in the rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it is worsened when run at 4 km grid spacing, as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through the parameterization of turbulent mixing and hydrometeor break-up, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.
The Impact of Sika Deer on Vegetation in Japan: Setting Management Priorities on a National Scale
NASA Astrophysics Data System (ADS)
Ohashi, Haruka; Yoshikawa, Masato; Oono, Keiichi; Tanaka, Norihisa; Hatase, Yoriko; Murakami, Yuhide
2014-09-01
Irreversible shifts in ecosystems caused by large herbivores are becoming widespread around the world. We analyzed data derived from the 2009-2010 Sika Deer Impact Survey, which assessed the geographical distribution of deer impacts on vegetation through a questionnaire, on a scale of 5-km grid-cells. Our aim was to identify areas facing irreversible ecosystem shifts caused by deer overpopulation and in need of management prioritization. Our results demonstrated that the areas with heavy impacts on vegetation were widely distributed across Japan from north to south and from the coastal to the alpine areas. Grid-cells with heavy impacts are especially expanding in the southwestern part of the Pacific side of Japan. The intensity of deer impacts was explained by four factors: (1) the number of 5-km grid-cells with sika deer in neighboring 5 km-grid-cells in 1978 and 2003, (2) the year sika deer were first recorded in a grid-cell, (3) the number of months in which maximum snow depth exceeded 50 cm, and (4) the proportion of urban areas in a particular grid-cell. Based on our model, areas with long-persistent deer populations, short snow periods, and fewer urban areas were predicted to be the most vulnerable to deer impact. Although many areas matching these criteria already have heavy deer impact, there are some areas that remain only slightly impacted. These areas may need to be designated as having high management priority because of the possibility of a rapid intensification of deer impact.
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Long, D. G.; Paget, A. C.; Armstrong, R. L.
2015-12-01
Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Currently available global gridded passive microwave data sets serve a diverse community of hundreds of data users, but do not meet many requirements of modern Earth System Data Records (ESDRs) or Climate Data Records (CDRs), most notably in the areas of intersensor calibration, quality-control, provenance and consistent processing methods. The original gridding techniques were relatively primitive and were produced on 25 km grids using the original EASE-Grid definition that is not easily accommodated in modern software packages. Further, since the first Level 3 data sets were produced, the Level 2 passive microwave data on which they were based have been reprocessed as Fundamental CDRs (FCDRs) with improved calibration and documentation. We are funded by NASA MEaSUREs to reprocess the historical gridded data sets as EASE-Grid 2.0 ESDRs, using the most mature available Level 2 satellite passive microwave (SMMR, SSM/I-SSMIS, AMSR-E) records from 1978 to the present. We have produced prototype data from SSM/I and AMSR-E for the year 2003, for review and feedback from our Early Adopter user community. The prototype data set includes conventional, low-resolution ("drop-in-the-bucket" 25 km) grids and enhanced-resolution grids derived from the two candidate image reconstruction techniques we are evaluating: 1) Backus-Gilbert (BG) interpolation and 2) a radiometer version of Scatterometer Image Reconstruction (SIR). We summarize our temporal subsetting technique, algorithm tuning parameters and computational costs, and include sample SSM/I images at enhanced resolutions of up to 3 km. We are actively working with our Early Adopters to finalize content and format of this new, consistently-processed high-quality satellite passive microwave ESDR.
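The conventional "drop-in-the-bucket" gridding mentioned above amounts to averaging all Level 2 swath samples whose centers fall within each grid cell. A minimal Python sketch of that step (array names, grid origin and 25 km cell size are illustrative assumptions, not the MEaSUREs production code):

import numpy as np

def drop_in_the_bucket(x_m, y_m, tb, cell_m=25000.0, nx=200, ny=200, x0=0.0, y0=0.0):
    # Average swath brightness temperatures (tb) into a regular grid.
    # x_m, y_m: projected sample coordinates; x0, y0: assumed grid origin.
    x_m, y_m, tb = map(np.asarray, (x_m, y_m, tb))
    col = ((x_m - x0) // cell_m).astype(int)
    row = ((y_m - y0) // cell_m).astype(int)
    ok = (col >= 0) & (col < nx) & (row >= 0) & (row < ny)
    acc = np.zeros((ny, nx))
    cnt = np.zeros((ny, nx))
    np.add.at(acc, (row[ok], col[ok]), tb[ok])   # sum per cell
    np.add.at(cnt, (row[ok], col[ok]), 1.0)      # sample count per cell
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(cnt > 0, acc / cnt, np.nan)

The enhanced-resolution Backus-Gilbert and SIR reconstructions are considerably more involved and are not sketched here.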
NASA Astrophysics Data System (ADS)
Meyer, B.; Chulliat, A.; Saltus, R.
2017-12-01
The Earth Magnetic Anomaly Grid at 2 arc min resolution version 3, EMAG2v3, combines marine and airborne trackline observations, satellite data, and magnetic observatory data to map the location, intensity, and extent of lithospheric magnetic anomalies. EMAG2v3 includes over 50 million new data points added to NCEI's Geophysical Database System (GEODAS) in recent years. The new grid relies only on observed data and does not utilize a priori geologic structure or ocean-age information. Comparing this grid to other global magnetic anomaly compilations (e.g., EMAG2 and WDMAM), we can see that the inclusion of a priori ocean-age patterns imposes an artificial linear pattern on the grid; the data-only approach allows for greater complexity in representing the evolution along oceanic spreading ridges and continental margins. EMAG2v3 also makes use of the satellite-derived lithospheric field model MF7 in order to accurately represent anomalies with wavelengths greater than 300 km and to create smooth grid merging boundaries. The heterogeneous distribution of errors in the observations used in compiling EMAG2v3 was explored and is reported in the final distributed grid. This grid is delivered both at a continuous altitude of 4 km above WGS84 and at sea level for all oceanic and coastal regions.
NASA Astrophysics Data System (ADS)
Burke, Sophia; Mulligan, Mark
2017-04-01
WaterWorld is a widely used spatial hydrological policy support system. The last user census indicates regular use by 1029 institutions across 141 countries. A key feature of WaterWorld since 2001 is that it comes pre-loaded with all of the data required for simulation anywhere in the world at 1 km or 1 ha resolution. This means that it can be easily used, without specialist technical ability, to examine baseline hydrology and the impacts of scenarios for change or management interventions to support policy formulation, hence its labelling as a policy support system. WaterWorld is parameterised by an extensive global gridded database of more than 600 variables, the so-called simTerra database, developed from many sources since 1998. All of these data are available globally at 1 km resolution and some variables (terrain, land cover, urban areas, water bodies) are available globally at 1 ha resolution. If users have access to better data than is pre-loaded, they can upload their own data. WaterWorld is generally applied at the national or basin scale at 1 km resolution, or locally (for areas of <10,000 km2) at 1 ha resolution, though continental (1 km resolution) and global (10 km resolution) applications are possible, so it is a model with local to global applications. WaterWorld requires some 140 maps to run, including monthly climate data, land cover and use, terrain, population, water bodies and more. Whilst publicly available terrain and land cover data are now well developed for local-scale application, climate and land use data remain a challenge, with most global products being available at 1 km or 10 km resolution or worse, which is rather coarse for local application. As part of the EartH2Observe project we have used WFDEI (WATCH Forcing Data methodology applied to ERA-Interim data) at 1 km resolution to provide an alternative input to WaterWorld's preloaded climate data. Here we examine the impacts of that on key hydrological outputs (water balance and water quality) and outline the remaining challenges of using datasets like these for local-scale application.
A multi-resolution approach to electromagnetic modeling.
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-04-01
We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for the forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with the vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on the interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
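The multi-resolution mesh described above can be pictured as a vertical stack of standard staggered sub-grids whose horizontal spacing is coarsened (here by factors of two) at selected depths. A schematic Python sketch with illustrative parameters only, not the authors' solver:

def build_multires_stack(base_dx_m, n_layers_per_subgrid, coarsen_every, n_subgrids=6):
    # Return a list of sub-grid descriptions; each sub-grid keeps a Cartesian
    # tensor-product staggered discretization, and only the horizontal cell
    # size changes from one sub-grid to the next with depth.
    stack, dx = [], base_dx_m
    for i in range(n_subgrids):
        stack.append({"subgrid": i, "dx_m": dx, "n_layers": n_layers_per_subgrid})
        if (i + 1) % coarsen_every == 0:
            dx *= 2.0  # horizontal coarsening with depth
    return stack

for block in build_multires_stack(100.0, 5, 2):
    print(block)

The delicate part that the abstract highlights, constructing consistent finite-difference operators on the interfaces between adjacent sub-grids, is not reproduced in this sketch.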
Applications of nuclear analytical techniques to environmental studies
NASA Astrophysics Data System (ADS)
Freitas, M. C.; Pacheco, A. M. G.; Marques, A. P.; Barros, L. I. C.; Reis, M. A.
2001-07-01
A few examples of application of nuclear-analytical techniques to biological monitors—natives and transplants—are given herein. Parmelia sulcata Taylor transplants were set up in a heavily industrialized area of Portugal—the Setúbal peninsula, about 50 km south of Lisbon—where indigenous lichens are rare. The whole area was 10×15 km around an oil-fired power station, and a 2.5×2.5 km grid was used. In north-western Portugal, native thalli of the same epiphytes (Parmelia spp., mostly Parmelia sulcata Taylor) and bark from olive trees (Olea europaea) were sampled across an area of 50×50 km, using a 10×10 km grid. This area is densely populated and features a blend of rural, urban-industrial and coastal environments, together with the country's second-largest metro area (Porto). All biomonitors have been analyzed by INAA and PIXE. Results were put through nonparametric tests and factor analysis for trend significance and emission sources, respectively.
2007-07-20
See also the GEOPACK library at http://nssdcftp.gsfc.nasa.gov/models. ...calculations, fed into the inverse computation, should reproduce the original coordinate grid. This test of the consistency of the direct and inverse... algorithm. A test of this type was performed for a uniform grid, for line traces from 7200 km to the ground, and for 800 km to the ground. The maximum...
NASA Astrophysics Data System (ADS)
Poll, Stefan; Shrestha, Prabhakar; Simmer, Clemens
2017-04-01
Land heterogeneity influences the atmospheric boundary layer (ABL) structure, including organized (secondary) circulations that feed back on land-atmosphere exchange fluxes. The latter effects in particular cannot be incorporated explicitly in regional and climate models because of their coarse computational spatial grids, but must be parameterized. Current parameterizations lead, however, to uncertainties in modeled surface fluxes and boundary layer evolution, which feed back on cloud initiation and precipitation. This study analyzes the impact of different horizontal grid resolutions on the simulated boundary layer structures in terms of stability, height and induced secondary circulations. The ICON-LES (Icosahedral Nonhydrostatic model in LES mode), developed by the MPI-M and the German weather service (DWD) within the framework of HD(CP)2, is used. ICON is dynamically downscaled through multiple scales of 20 km, 7 km, 2.8 km, 625 m, 312 m, and 156 m grid spacing for several days over Germany and parts of neighboring countries under different synoptic conditions. We examined the entropy spectrum of the land surface heterogeneity at these grid resolutions for several locations close to measurement sites, such as Lindenberg, Jülich, Cabauw and Melpitz, and studied its influence on the surface fluxes and the evolution of the boundary layer profiles.
Antenna structures and cloud-to-ground lightning location: 1995-2015
NASA Astrophysics Data System (ADS)
Kingfield, Darrel M.; Calhoun, Kristin M.; de Beurs, Kirsten M.
2017-05-01
Spatial analyses of cloud-to-ground (CG) lightning occurrence associated with the rapid expansion in the number of antenna towers across the United States are explored by gridding 20 years of National Lightning Detection Network data at 500 m spatial resolution. Of the grid cells with ≥100 CGs, 99.8% were within 1 km of an antenna tower registered with the Federal Communications Commission. Tower height is positively correlated with CG occurrence; towers taller than 400 m above ground level experience a median increase of 150% in CG lightning density compared to a region 2 km to 5 km away. In the northern Great Plains, the cumulative CG lightning density near the tower was around 138% (117%) higher than in a region 2 to 5 km away in the September-February (March-August) months. Higher CG frequencies typically also occur in the first full year following new tower construction, creating new lightning hot spots.
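The tower-relative comparison described above can be sketched as follows: grid the CG counts at 500 m, then compare the mean density within 1 km of each tower with that in a 2-5 km annulus. A Python sketch with assumed inputs (a 2D array of CG counts and tower positions given as grid row/column indices), not the authors' processing chain:

import numpy as np

def tower_density_ratio(cg_grid, towers_rc, cell_m=500.0):
    # Median ratio of CG density within 1 km of a tower to that in a
    # 2-5 km annulus around it.
    cg_grid = np.asarray(cg_grid, dtype=float)
    ny, nx = cg_grid.shape
    rows, cols = np.indices((ny, nx))
    ratios = []
    for r0, c0 in towers_rc:
        dist_m = np.hypot(rows - r0, cols - c0) * cell_m
        near = cg_grid[dist_m <= 1000.0].mean()
        far = cg_grid[(dist_m > 2000.0) & (dist_m <= 5000.0)].mean()
        if far > 0:
            ratios.append(near / far)
    return float(np.median(ratios)) if ratios else float("nan")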
Lankila, Tiina; Näyhä, Simo; Rautio, Arja; Koiranen, Markku; Rusanen, Jarmo; Taanila, Anja
2013-01-01
We examined the association of health and well-being with moving using a detailed geographical scale. 7845 men and women born in northern Finland in 1966 were surveyed by postal questionnaire in 1997 and linked to 1 km(2) geographical grids based on each subject's home address in 1997-2000. Population density was used to classify each grid as rural (1-100 inhabitants/km²) or urban (>100 inhabitants/km²) type. Moving was treated as a three-class response variate (not moved; moved to different type of grid; moved to similar type of grid). Moving was regressed on five explanatory factors (life satisfaction, self-reported health, lifetime morbidity, activity-limiting illness and use of health services), adjusting for factors potentially associated with health and moving (gender, marital status, having children, housing tenure, education, employment status and previous move). The results were expressed as odds ratios (OR) and their 95% confidence intervals (CI). Moves from rural to urban grids were associated with dissatisfaction with current life (adjusted OR 2.01; 95% CI 1.26-3.22) and having somatic (OR 1.66; 1.07-2.59) or psychiatric (OR 2.37; 1.21-4.63) morbidities, the corresponding ORs for moves from rural to other rural grids being 1.71 (0.98-2.98), 1.63 (0.95-2.78) and 2.09 (0.93-4.70), respectively. Among urban dwellers, only the frequent use of health services (≥ 21 times/year) was associated with moving, the adjusted ORs being 1.65 (1.05-2.57) for moves from urban to rural grids and 1.30 (1.03-1.64) for urban to other urban grids. We conclude that dissatisfaction with life and history of diseases and injuries, especially psychiatric morbidity, may increase the propensity to move from rural to urbanised environments, while availability of health services may contribute to moves within urban areas and also to moves from urban areas to the countryside, where high-level health services enable a good quality of life for those attracted by the pastoral environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Treseler, Christine; Bixby, Walter R; Nepocatych, Svetlana
2016-07-01
Treseler, C, Bixby, WR, and Nepocatych, S. The effect of compression stockings on physiological and psychological responses after 5-km performance in recreationally active females. J Strength Cond Res 30(7): 1985-1991, 2016-The purpose of the study was to examine the physiological and perceptual responses to wearing below-the-knee compression stockings (CS) after a 5-km running performance in recreationally active women. Nineteen women were recruited to participate in the study (20 ± 1 year, 61.4 ± 5.3 kg, 22.6 ± 3.9% body fat). Each participant completed two 5-km performance time trials, with CS or regular socks, in a counterbalanced order separated by 1 week. For each session, 5-km time, heart rate (HR), rate of perceived exertion (RPE), pain pressure threshold, muscle soreness (MS), and rate of perceived recovery were measured. There was no significant difference between CS and regular socks in average 5-km times (p = 0.74) or HR response (p = 0.42). However, significantly higher RPE and lower gain scores (%) for lower-extremity MS, but not for the calf, were observed with CS compared with regular socks (p = 0.05, p = 0.01, and p = 0.3, respectively). Based on the results of this study, there were no significant improvements in average 5-km running time, heart rate, or perceived calf MS. However, participants perceived less MS in the lower extremities and reported working harder with CS compared with regular socks. Compression stockings may not cause significant physiological improvements; however, there might be psychological benefits positively affecting postexercise recovery.
Science Enabling Applications of Gridded Radiances and Products
NASA Astrophysics Data System (ADS)
Goldberg, M.; Wolf, W.; Zhou, L.
2005-12-01
New generations of hyperspectral sounders and imagers are not only providing vastly improved information to monitor, assess and predict the Earth's environment, they also provide tremendous volumes of data to manage. Key management challenges include data processing, distribution, archiving and utilization. At the NOAA/NESDIS Office of Research and Applications, we have started to address the challenge of utilizing high-volume satellite data by thinning observations and developing gridded datasets from the observations made by the NASA AIRS, AMSU and MODIS instruments. We have developed techniques for intelligent thinning of AIRS data for numerical weather prediction by selecting the clearest AIRS 14 km field of view within a 3 x 3 array. The selection uses high-spatial-resolution 1 km MODIS data which are spatially convolved to the AIRS field of view; the MODIS cloud masks and AIRS cloud tests are used to select the clearest footprint. During real-time processing the data are thinned and gridded to support monitoring, validation and scientific studies. Products from AIRS, which include profiles of temperature, water vapor and ozone and cloud-corrected infrared radiances for more than 2000 channels, are derived from a single AIRS/AMSU field of regard, which is a 3 x 3 array of AIRS footprints (each with a 14 km spatial resolution) collocated with a single AMSU footprint (42 km). One of our key gridded datasets is a daily 3 x 3 latitude/longitude projection which contains the nearest AIRS/AMSU field of regard with respect to the center of each 3 x 3 lat/lon grid cell. This particular gridded dataset is 1/40 the size of the full-resolution data. This gridded dataset is the type of product that can be used to support algorithm validation and improvements. It also provides a very economical approach for reprocessing, testing and improving algorithms for climate studies without having to reprocess the full-resolution data stored at the DAAC. For example, on a single-CPU workstation, all the AIRS derived products can be computed from a single year of gridded data in 5 days. This relatively short turnaround time, which can be reduced considerably to 3 hours by using a cluster of 40 G5 processors, allows for repeated reprocessing at the PI's home institution before substantial investments are made to reprocess the full-resolution data sets archived at the DAAC. In other words, the full-resolution data need not be reprocessed until the science community has tested and selected the optimal algorithm on the gridded data. Development and applications of gridded radiances and products will be discussed. The applications can be provided as part of a web-based service.
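The intelligent thinning step described above keeps, within each 3 x 3 field of regard, the single AIRS footprint that the collocated MODIS-derived cloud information flags as clearest. A Python sketch under the assumption that a per-footprint cloud fraction (already convolved to the AIRS field of view) is available; this is an illustration, not the operational NESDIS code:

import numpy as np

def select_clearest(radiances, cloud_fraction):
    # radiances: (ny, nx, nchan) AIRS footprints; cloud_fraction: (ny, nx).
    # Returns one footprint per 3 x 3 field of regard (the clearest one).
    radiances = np.asarray(radiances)
    cloud_fraction = np.asarray(cloud_fraction)
    ny, nx, nchan = radiances.shape
    out = np.full((ny // 3, nx // 3, nchan), np.nan)
    for j in range(0, ny - ny % 3, 3):
        for i in range(0, nx - nx % 3, 3):
            block = cloud_fraction[j:j + 3, i:i + 3]
            jj, ii = np.unravel_index(np.argmin(block), block.shape)
            out[j // 3, i // 3, :] = radiances[j + jj, i + ii, :]
    return out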
Mapping the spatial distribution of subsurface saline material in the Darling River valley
NASA Astrophysics Data System (ADS)
Triantafilis, John; Buchanan, Sam Mostyn
2010-02-01
Large stores of soluble salt are present naturally in the Australian landscape. In many cases this is attributable to salts entrapped in marine sediments in earlier geological times. At the district level, the need for information on the presence of saline subsurface material is increasing, particularly for its application to salinity hazard assessment and environmental management. This is the case in irrigated areas, where changes in hydrology can result in secondary salinisation. To reduce expense, environmental studies use a regression relationship to make use of more readily observed measurements (e.g. electromagnetic (EM) data) which are strongly correlated with the variable of interest. In this investigation a methodology is outlined for mapping the spatial distribution of average subsurface (6-12 m) salinity (ECe, in mS m-1) using an environmental correlation with EM34 survey data collected across the Bourke Irrigation District (BID) in the Darling River valley. The EM34 is used in the horizontal dipole mode at coil spacings of 10 m (EM34-10), 20 m (EM34-20), and 40 m (EM34-40). A multiple-linear regression (MLR) relationship is established between average subsurface ECe and the three EM34 signals using a forward stepwise linear modeling approach. The spatial distribution of average subsurface salinity generally reflects the known surface expression of point-source salinisation and provides information for future environmental monitoring and natural resource management. The generation of EM34 data on various contrived grids (i.e. 1, 1.5, 2, 2.5 and 3 km) indicates that, in terms of accuracy, the data available on the 0.5 km (RMSE = 188) and 1 km (RMSE = 283) grids are best, with the least biased predictions achieved using the 1 km (ME = -1) and 2 km (ME = 12) grids. Viewing the spatial distribution of subsurface saline material showed that the 0.5 km spacing is optimal, particularly in order to account for short-range spatial variation between various physiographic units. The Relative Improvement (RI) shows that increasing the EM grid spacing to 1, 1.5, 2, 2.5 and 3 km gave RI values of -53%, -100%, -107%, -128% and -140%, respectively. We conclude that at a minimum a 1 km grid is needed for reconnaissance EM34 surveying.
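The regression step relates average subsurface ECe to the three EM34 coil-spacing responses. A Python sketch of a plain multiple-linear-regression fit together with the RMSE and ME accuracy/bias measures quoted above (the forward stepwise variable selection used in the study is not reproduced):

import numpy as np

def fit_ece_mlr(em34_10, em34_20, em34_40, ece):
    # Least-squares fit of ECe ~ b0 + b1*EM34-10 + b2*EM34-20 + b3*EM34-40.
    ece = np.asarray(ece, dtype=float)
    X = np.column_stack([np.ones_like(ece), em34_10, em34_20, em34_40])
    beta, *_ = np.linalg.lstsq(X, ece, rcond=None)
    pred = X @ beta
    rmse = float(np.sqrt(np.mean((pred - ece) ** 2)))  # accuracy
    me = float(np.mean(pred - ece))                    # bias (mean error)
    return beta, rmse, me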
Nonlinear refraction and reflection travel time tomography
Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.
1998-01-01
We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first-arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge and a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that exhibit very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
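The regularization underlying both steps is the standard Tikhonov form, min ||Ax - y||^2 + lambda ||x||^2. A generic Python sketch on a small dense toy system (the actual CSR GRACE processing operates on far larger, carefully weighted systems and is not reproduced here):

import numpy as np

def tikhonov_solve(A, y, lam):
    # Solve min ||A x - y||^2 + lam * ||x||^2 via the normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy usage on a mildly ill-conditioned system
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40)) @ np.diag(np.logspace(0, -4, 40))
x_true = rng.normal(size=40)
y = A @ x_true + 1e-3 * rng.normal(size=80)
x_hat = tikhonov_solve(A, y, lam=1e-6)

Larger values of lam suppress noise-driven (stripe-like) components at the cost of attenuating some signal, which is why capturing all observed signal within the noise level is stated above as the necessary condition.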
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray-zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.
NASA Astrophysics Data System (ADS)
Takemi, T.; Yasui, M.
2005-12-01
Recent studies on dust emission and transport have been concerned with small-scale atmospheric processes, with the aim of incorporating them as subgrid-scale effects in large-scale numerical prediction models. In the present study, we investigated the dynamical processes and mechanisms of dust emission, mixing, and transport induced by boundary-layer and cumulus convection under a fair-weather condition over a Chinese desert. We performed a set of sensitivity experiments as well as a control simulation in order to examine the effects of vertical wind shear, upper-level wind speed, and moist convection, using a simplified and idealized modeling framework. The results of the control experiment showed that surface dust emission was first triggered before noon by intense convective motion which not only developed in the boundary layer but also penetrated into the free troposphere. In the afternoon hours, boundary-layer dry convection actively mixed and transported dust within the boundary layer. Some of the convective cells penetrated above the boundary layer, which led to the generation of cumulus clouds and hence gradually increased the dust content in the free troposphere. Coupled effects of the dry and moist convection played an important role in inducing surface dust emission and transporting dust vertically. This was clearly demonstrated through the comparison of the results between the control and the sensitivity experiments. The results of the control simulation were compared with lidar measurements, and the simulation captured well the observed diurnal features of the upward transport of dust. We also examined the dependence of the simulated results on grid resolution, changing the grid size from 250 m up to 4 km. A significant difference was found between the 2-km and 4-km grids; if a cumulus parameterization was added to the 4-km grid run, the column dust content was comparable to the other cases. This result suggests that subgrid parameterizations are required if the grid size is larger than the order of 1 km in fair-weather conditions.
VP Structure of Mount St. Helens, Washington, USA, imaged with local earthquake tomography
Waite, G.P.; Moran, S.C.
2009-01-01
We present a new P-wave velocity model for Mount St. Helens using local earthquake data recorded by the Pacific Northwest Seismograph Stations and Cascades Volcano Observatory since the 18 May 1980 eruption. These data were augmented with records from a dense array of 19 temporary stations deployed during the second half of 2005. Because the distribution of earthquakes in the study area is concentrated beneath the volcano and within two nearly linear trends, we used a graded inversion scheme to compute a coarse-grid model that focused on the regional structure, followed by a fine-grid inversion to improve spatial resolution directly beneath the volcanic edifice. The coarse-grid model results are largely consistent with earlier geophysical studies of the area; we find high-velocity anomalies NW and NE of the edifice that correspond with igneous intrusions and a prominent low-velocity zone NNW of the edifice that corresponds with the linear zone of high seismicity known as the St. Helens Seismic Zone. This low-velocity zone may continue past Mount St. Helens to the south at depths below 5 km. Directly beneath the edifice, the fine-grid model images a low-velocity zone between about 2 and 3.5 km below sea level that may correspond to a shallow magma storage zone. And although the model resolution is poor below about 6 km, we found low velocities that correspond with the aseismic zone between about 5.5 and 8 km that has previously been modeled as the location of a large magma storage volume. © 2009 Elsevier B.V.
New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica
Scheinert, M.; Ferraccioli, F.; Schwabe, J.; Bell, R.; Studinger, M.; Damaske, D.; Jokat, W.; Aleshkova, N.; Jordan, T.; Leitchenkov, G.; Blankenship, D. D.; Damiani, T. M.; Young, D.; Cochran, J. R.; Richter, T. D.
2018-01-01
Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, airborne and shipborne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million km2, which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated levelling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica. PMID:29326484
New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica
NASA Technical Reports Server (NTRS)
Scheinert, M.; Ferraccioli, F.; Schwabe, J.; Bell, R.; Studinger, M.; Damaske, D.; Jokat, W.; Aleshkova, N.; Jordan, T.; Leitchenkov, G.;
2016-01-01
Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, air-borne and ship-borne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million sq km, which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated leveling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica.
New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica.
Scheinert, M; Ferraccioli, F; Schwabe, J; Bell, R; Studinger, M; Damaske, D; Jokat, W; Aleshkova, N; Jordan, T; Leitchenkov, G; Blankenship, D D; Damiani, T M; Young, D; Cochran, J R; Richter, T D
2016-01-28
Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, airborne and shipborne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million km 2 , which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated levelling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica.
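The remove-compute-restore gridding referred to in the records above proceeds in three steps: subtract a satellite-only reference model from the observations, grid the smoother residuals, and add the reference model back on the target grid. A schematic Python sketch assuming a callable reference model; this is an illustration, not the compilation's actual processing:

import numpy as np
from scipy.interpolate import griddata

def remove_compute_restore(lon, lat, g_obs, reference_model, grid_lon, grid_lat):
    # reference_model(lon, lat) -> reference anomaly (assumed callable,
    # e.g. evaluated from a satellite-only Earth Gravity Model).
    residual = np.asarray(g_obs) - reference_model(lon, lat)      # 1) remove
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    res_grid = griddata((lon, lat), residual, (glon, glat),       # 2) compute
                        method="linear")
    return res_grid + reference_model(glon, glat)                 # 3) restore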
A Grid of NLTE Line-blanketed Model Atmospheres of Early B-Type Stars
NASA Astrophysics Data System (ADS)
Lanz, Thierry; Hubeny, Ivan
2007-03-01
We have constructed a comprehensive grid of 1540 metal line-blanketed, NLTE, plane-parallel, hydrostatic model atmospheres for the basic parameters appropriate to early B-type stars. The BSTAR2006 grid considers 16 values of effective temperature, 15,000 K<=Teff<=30,000 K with 1000 K steps, 13 surface gravities, 1.75<=logg<=4.75 with 0.25 dex steps, six chemical compositions, and a microturbulent velocity of 2 km s-1. The lower limit of logg for a given effective temperature is set by an approximate location of the Eddington limit. The selected chemical compositions range from twice the solar metallicity to one-tenth of it, plus a metal-free composition. Additional model atmospheres for B supergiants (logg<=3.0) have been calculated with a higher microturbulent velocity (10 km s-1) and a surface composition that is enriched in helium and nitrogen and depleted in carbon. This new grid complements our earlier OSTAR2002 grid of O-type stars (our Paper I). The paper contains a description of the BSTAR2006 grid and some illustrative examples and comparisons. NLTE ionization fractions, bolometric corrections, radiative accelerations, and effective gravities are obtained over the parameter range covered by the grid. By extrapolating radiative accelerations, we have determined an improved estimate of the Eddington limit in the absence of rotation between 55,000 and 15,000 K. The complete BSTAR2006 grid is available at the TLUSTY Web site.
Lithosphere temperature model and resource assessment for deep geothermal exploration in Hungary
NASA Astrophysics Data System (ADS)
Bekesi, Eszter; van Wees, Jan-Diederik; Vrijlandt, Mark; Lenkey, Laszlo; Horvath, Ferenc
2017-04-01
The demand for deep geothermal energy has increased considerably over the past years. To reveal potential areas for geothermal exploration, it is crucial to have insight into the subsurface temperature distribution. Hungary is one of the most suitable countries in Europe for geothermal development, as a result of Early and Middle Miocene extension and the subsequent thinning of the lithosphere. Here we present the results of a new thermal model of Hungary extending from the surface down to the lithosphere-asthenosphere boundary (LAB). Subsurface temperatures were calculated on a regular 3D grid with a horizontal resolution of 2.5 km and a vertical resolution of 200 m for the uppermost 7 km and 3 km below that, down to the depth of the LAB. The model solves the heat equation in steady state, assuming conduction as the main heat transfer mechanism. At the base, it adopts a constant basal temperature or heat flow condition. For the calibration of the model, more than 5000 temperature measurements were collected from the Geothermal Database of Hungary. The model is built up of five sedimentary layers, upper crust, lower crust, and lithospheric mantle, where each layer has its own thermal properties. The prior thermal properties and basal condition of the model are updated through the ensemble smoother with multiple data assimilation technique. The conductive model shows misfits with the observed temperatures which cannot be explained by neglected transient effects related to lithosphere extension. These anomalies are explained mostly by groundwater flow in Mesozoic carbonates and other porous sedimentary rocks. To account for the effect of heat convection, we use a pseudo-conductive approach by adjusting the thermal conductivity of the layers where fluid flow may occur. After constructing the subsurface temperature model of Hungary, the resource base for EGS (Enhanced Geothermal Systems) is quantified. To this end, we applied a cash-flow model to translate the geological potential into economic potential for different scenarios in Hungary. The calculations were made for each grid cell of the model. The results of the temperature modeling, together with the economic resource assessment, provide an indication of potential sites for future EGS in Hungary.
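The steady-state conductive assumption stated above can be illustrated with a 1D analogue, in which temperature is integrated downward layer by layer from a surface temperature and surface heat flow. The layer thicknesses, conductivities and heat-production values below are illustrative assumptions, not the calibrated Hungarian model parameters:

def geotherm_1d(t_surface_c, q_surface_mw_m2, layers):
    # Steady-state conductive 1D geotherm.
    # layers: list of (thickness_m, conductivity_W_mK, heat_production_uW_m3).
    # Returns the temperature (deg C) at the base of each layer.
    temps, t, q = [], t_surface_c, q_surface_mw_m2 * 1e-3  # heat flow in W/m2
    for dz, k, a in layers:
        a_si = a * 1e-6                                     # W/m3
        t += (q * dz - 0.5 * a_si * dz**2) / k              # T change across the layer
        q -= a_si * dz                                      # heat flow at layer base
        temps.append(t)
    return temps

# Illustrative column: 3 km sediments, 17 km upper crust, 15 km lower crust
print(geotherm_1d(10.0, 90.0, [(3000, 2.0, 1.0), (17000, 2.7, 1.5), (15000, 2.5, 0.4)]))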
Daily hydro- and morphodynamic simulations at Duck, NC, USA using Delft3D
NASA Astrophysics Data System (ADS)
Penko, Allison; Veeramony, Jay; Palmsten, Margaret; Bak, Spicer; Brodie, Katherine; Hesser, Tyler
2017-04-01
Operational forecasting of the coastal nearshore has wide-ranging societal and humanitarian benefits, specifically for the prediction of natural hazards due to extreme storm events. However, understanding the model limitations and uncertainty is as important as the predictions themselves. By comparing and contrasting the predictions of multiple high-resolution models in a location with near real-time collection of observations, we are able to perform a rigorous analysis of the model results in order to achieve more robust and certain predictions. In collaboration with the U.S. Army Corps of Engineers Field Research Facility (USACE FRF) as part of the Coastal Model Test Bed (CMTB) project, we have set up Delft3D at Duck, NC, USA to run in near-real time, driven by measured wave data at the boundary. The CMTB at the USACE FRF allows for the unique integration of operational wave, circulation, and morphology models with real-time observations. The FRF has an extensive array of in-situ and remotely sensed oceanographic, bathymetric, and meteorological data that is broadcast in near-real time onto a publicly accessible server. Wave, current, and bed elevation instruments are permanently installed across the model domain, including 2 waverider buoys in 17-m and 26-m water depths at 3.5 km and 17 km offshore, respectively, that record directional wave data every 30 min. Here, we present the workflow and output of the Delft3D hydro- and morphodynamic simulations at Duck, and show the tactical benefits and operational potential of such a system. A nested Delft3D simulation runs a parent grid that extends 12 km in the alongshore and 3.5 km in the cross-shore with 50-m resolution and a maximum depth of approximately 17 m. The bathymetry for the parent grid was obtained from a regional digital elevation model (DEM) generated by the Federal Emergency Management Agency (FEMA). The inner nested grid extends 1.8 km in the alongshore and 1 km in the cross-shore with 5-m resolution and a maximum depth of approximately 8 m. The inner nested grid's initial model bathymetry is set to either the predicted bathymetry from the previous day's simulation or a survey, whichever is more recent. Delft3D-WAVE runs in the parent grid and is driven with the real-time spectral wave measurements from the waverider buoy in 17-m depth. The spectral output from Delft3D-WAVE in the parent grid is then used as the boundary condition for the inner nested high-resolution grid, in which the coupled Delft3D wave-flow-morphology model is run. The model results are then compared to the wave, current, and bathymetry observations collected at the FRF as well as to other models that are run in the CMTB.
Unstructured viscous grid generation by advancing-front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1993-01-01
A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for the construction of highly stretched cells in the boundary layer, and the conventional advancing-front technique for the generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been primarily designed with three-dimensional problems in mind, making it extendable to tetrahedral viscous grid generation.
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform
Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos
2013-01-01
Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028
Voxel inversion of airborne electromagnetic data for improved model integration
NASA Astrophysics Data System (ADS)
Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders
2014-05-01
Inversion of electromagnetic data has migrated from single-site interpretations to inversions covering entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points. For airborne electromagnetic (AEM) surveys the spatial discretization of the model space reflects the flight lines. By contrast, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space, and the geophysical information has to be relocated for integration into (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: (1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; (2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centers of the "virtual" layers; (3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km x 16 km. The voxel inversion was carried out on a structured grid of 260 x 325 x 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 spatially constrained 1D models with 29 layers. For comparison, the SCI inversion models have been gridded on the same grid as the voxel inversion. The new voxel inversion and the classic SCI give similar data fits and inversion models. The voxel inversion decouples the geophysical model from the position of the acquired data, and at the same time fits the data as well as the classic SCI inversion. Compared to the classic approach, the voxel inversion is better suited for directly informing (hydro)geological models and for sequential/joint/coupled (hydro)geological inversion. We believe that this new approach will facilitate the integration of geophysics, geology and hydrology for improved groundwater and environmental management.
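Step (2) of the forward computation above fills each "virtual" 1D layer with a resistivity interpolated from the voxel nodes. A Python sketch using inverse-distance weighting in log-resistivity space (array shapes and the log-space choice are assumptions for illustration, not the published algorithm):

import numpy as np

def virtual_layer_resistivity(node_xyz, node_log_rho, layer_centers_xyz, power=2.0):
    # node_xyz: (n, 3) voxel node coordinates; node_log_rho: (n,) log10 resistivities;
    # layer_centers_xyz: (m, 3) centers of the "virtual" layers at one sounding position.
    node_xyz = np.asarray(node_xyz, dtype=float)
    node_log_rho = np.asarray(node_log_rho, dtype=float)
    centers = np.asarray(layer_centers_xyz, dtype=float)
    d = np.linalg.norm(centers[:, None, :] - node_xyz[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)              # avoid division by zero at a node
    w = 1.0 / d**power
    return 10 ** ((w * node_log_rho).sum(axis=1) / w.sum(axis=1))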
Fine-Scale Application of the coupled WRF-CMAQ System to ...
The DISCOVER-AQ project (Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality), is a joint collaboration between NASA, U.S. EPA and a number of other local organizations with the goal of characterizing air quality in urban areas using satellite, aircraft, vertical profiler and ground based measurements (http://discover-aq.larc.nasa.gov). In July 2011, the DISCOVER-AQ project conducted intensive air quality measurements in the Baltimore, MD and Washington, D.C. area in the eastern U.S. To take advantage of these unique data, the Community Multiscale Air Quality (CMAQ) model, coupled with the Weather Research and Forecasting (WRF) model is used to simulate the meteorology and air quality in the same region using 12-km, 4-km and 1-km horizontal grid spacings. The goal of the modeling exercise is to demonstrate the capability of the coupled WRF-CMAQ modeling system to simulate air quality at fine grid spacings in an urban area. Development of new data assimilation techniques and the use of higher resolution input data for the WRF model have been implemented to improve the meteorological results, particularly at the 4-km and 1-km grid resolutions. In addition, a number of updates to the CMAQ model were made to enhance the capability of the modeling system to accurately represent the magnitude and spatial distribution of pollutants at fine model resolutions. Data collected during the 2011 DISCOVER-AQ campa
Application and evaluation of the two-way coupled WRF ...
The DISCOVER-AQ project (Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality), is a joint collaboration between NASA, U.S. EPA and a number of other local organizations with the goal of characterizing air quality in urban areas using satellite, aircraft, vertical profiler and ground based measurements (http://discover-aq.larc.nasa.gov). In July 2011, the DISCOVER-AQ project conducted intensive air quality measurements in the Baltimore, MD and Washington, D.C. area in the eastern U.S. To take advantage of these unique data, the Community Multiscale Air Quality (CMAQ) model, coupled with the Weather Research and Forecasting (WRF) model is used to simulate the meteorology and air quality in the same region using 12-km, 4-km and 1-km horizontal grid spacings. The goal of the modeling exercise is to demonstrate the capability of the coupled WRF-CMAQ modeling system to simulate air quality at fine grid spacings in an urban area. Development of new data assimilation techniques and the use of higher resolution input data for the WRF model have been implemented to improve the meteorological results, particularly at the 4-km and 1-km grid resolutions. In addition, a number of updates to the CMAQ model were made to enhance the capability of the modeling system to accurately represent the magnitude and spatial distribution of pollutants at fine model resolutions. Data collected during the 2011 DISCOVER-AQ campa
Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
NASA Astrophysics Data System (ADS)
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
NASA Astrophysics Data System (ADS)
Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René
2017-11-01
Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products, such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, the spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data are freely available on a 1-degree grid (about 100 km), while those of the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10 km) within a forecast framework (6 h). For 30 6-h rainfall events occurring within a 40,000-km2 area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated at 100 km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.
Modelling the spatial distribution of SO2 and NOx emissions in Ireland.
de Kluizenaar, Y; Aherne, J; Farrell, E P
2001-01-01
The spatial distributions of sulphur dioxide (SO2) and nitrogen oxides (NOx) emissions are essential inputs to models of atmospheric transport and deposition. Information of this type is required for international negotiations on emission reduction through the critical load approach. High-resolution emission maps for the Republic of Ireland have been created using emission totals and a geographical information system, supported by surrogate statistics and landcover information. Data have been subsequently allocated to the EMEP 50 x 50-km grid, used in long-range transport models for the investigation of transboundary air pollution. Approximately two-thirds of SO2 emissions in Ireland emanate from two grid-squares. Over 50% of total SO2 emissions originate from one grid-square in the west of Ireland, where the largest point sources of SO2 are located. Approximately 15% of the total SO2 emissions originate from the grid-square containing Dublin. SO2 emission densities for the remaining areas are very low, < 1 t km-2 year-1 for most grid-squares. NOx emissions show a very similar distribution pattern. However, NOx emissions are more evenly spread over the country, as about 40% of total NOx emissions originate from road transport.
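A minimal sketch of the gridding idea described above, distributing a national emission total over grid squares in proportion to a surrogate statistic, is shown below in Python; the weights and totals are invented purely for illustration and do not correspond to the Irish inventory.

import numpy as np

def allocate_emissions(total_emission, surrogate):
    """Distribute a national emission total (t/yr) over grid squares
    in proportion to a surrogate statistic (hypothetical weights)."""
    w = np.asarray(surrogate, dtype=float)
    return total_emission * w / w.sum()

# e.g. NOx from road transport spread by (made-up) vehicle-km per grid square
grid_nox = allocate_emissions(40_000.0, [120.0, 80.0, 15.0, 5.0])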
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sugimoto, Hiroyuki; Suzuoki, Yasuo
We established a procedure for estimating regional electricity demand and the regional potential capacity of distributed generators (DGs) by using a grid-square statistics data set. A photovoltaic power system (PV system) for residential use and a co-generation system (CGS) for both residential and commercial use were taken into account. As an example, the results for Aichi prefecture are presented in this paper. The statistical data on the number of households by family type and the number of employees by business category for about 4000 grid squares with a 1 km × 1 km area were used to estimate the floor space or the electricity demand distribution. The rooftop area available for installing PV systems was also estimated with the grid-square statistics data set. Considering the relation between the capacity of an existing CGS and a scale index of the building where the CGS is installed, the potential capacity of CGS was estimated for three business categories, i.e., hotels, hospitals, and stores. In some regions, the potential capacity of PV systems was estimated to be about 10,000 kW/km2, which corresponds to the density of existing areas with intensive installation of PV systems. Finally, we discussed the ratio of the regional potential capacity of DGs to the regional maximum electricity demand to deduce the appropriate capacity of DGs in a model of a future electricity distribution system.
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Impact of buildings on surface solar radiation over urban Beijing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Bin; Liou, Kuo-Nan; Gu, Yu
The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures vs. the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and the plane-parallel schemes are generally ±1–10 W m-2 at 800 m grid resolution and within ±1 W m-2 at 4 km resolution. Pairs of positive–negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grids relative to the buildings. Both the magnitude and spatiotemporal variations of flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger by using a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10–65 %, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.
Grid scale drives the scale and long-term stability of place maps
Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M
2018-01-01
Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607
Verification of the NWP models operated at ICM, Poland
NASA Astrophysics Data System (ADS)
Melonek, Malgorzata
2010-05-01
Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw (ICM) started its activity in the field of NWP in May 1997. Since that time the numerical weather forecasts covering Central Europe have been routinely published on our publicly available website. The first NWP model used at ICM was the hydrostatic Unified Model developed by the UK Meteorological Office. It was a mesoscale version with a horizontal resolution of 17 km and 31 levels in the vertical. At present two non-hydrostatic NWP models are running in quasi-operational regime. The main new UM model, with 4 km horizontal resolution, 38 levels in the vertical and a forecast range of 48 hours, is run four times a day. Second, the COAMPS model (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by the US Naval Research Laboratory, configured with three nested grids (with corresponding resolutions of 39 km, 13 km and 4.3 km, 30 vertical levels), is run twice a day (for 00 and 12 UTC). The second grid covers Central Europe and has a forecast range of 84 hours. Results of both NWP models, i.e. COAMPS computed on the 13 km mesh resolution and UM, are verified against observations from the Polish synoptic stations. Verification uses surface observations and nearest-grid-point forecasts. The following meteorological elements are verified: air temperature at 2 m, mean sea level pressure, wind speed and wind direction at 10 m, and 12-hour accumulated precipitation. Different statistical indices are presented. For continuous variables, Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) in 6-hour intervals are computed. In the case of precipitation, contingency tables for different thresholds are computed and some of the verification scores, such as FBI, ETS, POD and FAR, are graphically presented. The verification sample covers nearly one year.
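For reference, the verification measures named above can be computed from paired forecasts and observations roughly as in the following Python sketch; the sample arrays and threshold are illustrative, and the formulas are the standard definitions of ME, MAE, RMSE, FBI, POD, FAR and ETS rather than ICM's own verification code.

import numpy as np

def continuous_scores(fcst, obs):
    """Mean Error, Mean Absolute Error and Root Mean Squared Error."""
    e = np.asarray(fcst, dtype=float) - np.asarray(obs, dtype=float)
    return {"ME": e.mean(), "MAE": np.abs(e).mean(),
            "RMSE": np.sqrt((e ** 2).mean())}

def categorical_scores(fcst, obs, threshold):
    """Contingency-table scores for a given precipitation threshold."""
    f = np.asarray(fcst) >= threshold
    o = np.asarray(obs) >= threshold
    hits = np.sum(f & o); misses = np.sum(~f & o)
    false_al = np.sum(f & ~o); corr_neg = np.sum(~f & ~o)
    n = hits + misses + false_al + corr_neg
    hits_rand = (hits + misses) * (hits + false_al) / n    # chance hits
    return {"FBI": (hits + false_al) / (hits + misses),
            "POD": hits / (hits + misses),
            "FAR": false_al / (hits + false_al),
            "ETS": (hits - hits_rand) /
                   (hits + misses + false_al - hits_rand)}

# Toy usage with made-up 12-h precipitation totals (mm)
cont = continuous_scores([2.1, 0.0, 5.3], [1.8, 0.2, 4.0])
cat = categorical_scores([2.1, 0.0, 5.3, 0.1], [1.8, 0.2, 4.0, 1.0], threshold=1.0)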
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, K.; Wilson, R.J.; Hemler, R.S.
1999-11-15
The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of ~35 km and another with ~100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from ~70 C in a version with ~300 km grid spacing to less than 10 C in the ~35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere ~1.5 km, the lower stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and with formation of strong vertical shear layers. This appears to be a spontaneous internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
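As a toy numerical illustration of the central point above, that integrating noisy velocity inputs makes position error grow with time, the following Python sketch integrates a smooth trajectory with added velocity noise; the noise level, sampling rate and duration are arbitrary assumptions, not parameters of the attractor model in the abstract.

import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, sigma = 0.01, 60_000, 0.05     # 10 min at 100 Hz, arbitrary noise
t = np.linspace(0.0, 20.0, n_steps)
v_true = np.stack([np.cos(t), np.sin(t)], axis=1)          # m/s, smooth path
v_noisy = v_true + sigma * rng.standard_normal(v_true.shape)
pos_true = np.cumsum(v_true * dt, axis=0)
pos_est = np.cumsum(v_noisy * dt, axis=0)
drift = np.linalg.norm(pos_est - pos_true, axis=1)          # grows roughly as sqrt(time)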
Simulation of Anomalous Regional Climate Events with a Variable Resolution Stretched Grid GCM
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.
1999-01-01
The stretched-grid approach provides efficient down-scaling and consistent interactions between global and regional scales due to using one variable-resolution model for the integrations. It is a workable alternative to the widely used nested-grid approach, introduced over a decade ago as a pioneering step in regional climate modeling. A variable-resolution General Circulation Model (GCM) employing a stretched grid, with enhanced resolution over the US as the area of interest, is used for simulating two anomalous regional climate events, the US summer drought of 1988 and flood of 1993. A special mode of integration using the stretched-grid GCM and a data assimilation system is developed that allows the nested-grid framework to be imitated. The mode is useful for inter-comparison purposes and for underlining the differences between these two approaches. The 1988 and 1993 integrations are performed for the two-month period starting from mid-May. The regional resolution used in most of the experiments is 60 km. The major goal and the result of the study is obtaining efficient down-scaling over the area of interest. The monthly mean prognostic regional fields for the stretched-grid integrations are remarkably close to those of the verifying analyses. Simulated precipitation patterns are successfully verified against gauge precipitation observations. The impact of a finer 40 km regional resolution is investigated for the 1993 integration and an example of recovering subregional precipitation is presented. The obtained results show that the global variable-resolution stretched-grid approach is a viable candidate for regional and subregional climate studies and applications.
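A schematic Python sketch of the stretched-grid idea, fine spacing over the area of interest relaxing smoothly to coarse spacing elsewhere, is given below; the Gaussian blending function and the 60 km / 400 km spacings are illustrative choices, not the grid actually used in the GCM.

import numpy as np

def stretched_spacing(n, fine_km=60.0, coarse_km=400.0, center=0.5, width=0.15):
    """Grid spacing along one coordinate: fine near `center` (the area of
    interest), smoothly stretching to coarse elsewhere (illustrative)."""
    s = np.linspace(0.0, 1.0, n)
    blend = np.exp(-((s - center) / width) ** 2)   # 1 near the center, 0 far away
    return coarse_km + (fine_km - coarse_km) * blend

dx = stretched_spacing(180)                        # cell widths in km
x = np.concatenate(([0.0], np.cumsum(dx)))         # node positions in km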
Wave Resource Characterization Using an Unstructured Grid Modeling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wei-Cheng; Yang, Zhaoqing; Wang, Taiping
This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization using the unstructured-grid SWAN model coupled with a nested-grid WWIII model. The flexibility of models of various spatial resolutions and the effects of open-boundary conditions simulated by a nested-grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured-grid modeling approach for flexible model resolution and good model skills in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the model skill of the ST2 physics package for predicting wave power density for large waves, which is important for wave resource assessment, device load calculation, and risk management. In addition, bivariate distributions show the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than that with the ST2 physics package. This study demonstrated that the unstructured-grid wave modeling approach, driven by the nested-grid regional WWIII outputs with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10^2 km).
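One of the IEC-recommended resource parameters mentioned above, omnidirectional wave power density, is commonly estimated in deep water from significant wave height Hs and energy period Te as J = rho * g^2 * Hs^2 * Te / (64 * pi). The Python sketch below applies that standard deep-water formula to an illustrative sea state; it is not output from the SWAN or WWIII models.

import numpy as np

def wave_power_density(hs_m, te_s, rho=1025.0, g=9.81):
    """Deep-water omnidirectional wave power density (W per metre of
    wave crest): J = rho * g**2 * Hs**2 * Te / (64 * pi)."""
    return rho * g**2 * hs_m**2 * te_s / (64.0 * np.pi)

# Illustrative winter sea state off the Oregon coast
j = wave_power_density(hs_m=3.0, te_s=10.0)   # ~ 44 kW per metre of crest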
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574
NASA Astrophysics Data System (ADS)
Fastook, J. L.
2006-12-01
Recent extraordinary programs of the Airborne Geophysical survey of the Amundsen Sea Embayment (AGASEA), by University of Texas [holt06] and British Antarctic Survey [vaughan06] teams in the austral summers of 2004/2005, collected some 75,000 km of flight-line data measuring ice thickness with radar sounders and surface elevation with laser or radar altimeters. Recently these data have been made available as a 5-km gridded data set in a format convenient for modeling. We apply the University of Maine Ice Sheet Model (UMISM) in its embedded mode [fastook04b] to do high-resolution analysis of the velocity distribution within the Amundsen Catchment. We show that the model adequately captures velocity distributions measured by SAR radar [rignot04]. We show the distribution of basal water predicted by the model [fastook97, johnson99, johnson02b, johnson02]. We hope that, within the limitations of our grounding line parameterization, the model has predictive capabilities and will show some examples of possible future retreat. The nest of embedded models begins with a 40 km grid of the entire ice sheet. Embedded in this is a 10 km grid that includes the entire AGASEA measurement area. Nested inside this medium-resolution grid are two 5 km grids encompassing Thwaites and Pine Island Glaciers. This procedure allows us to obtain the highest-resolution results with very reasonable runtimes. A cycle of growth to an LGM configuration followed by retreat to the present configuration is run for this nest of embedded grids. Advance and retreat is controlled by a "thinning-at-the-grounding-line" parameter (the WEERTMAN) which is coupled to the Vostok Core temperature proxy. Full-resolution 5-km results for thickness, velocity, and water distribution are shown for the two focused embedded grids. We also present a plausible, but perhaps extreme, scenario of future retreat that might arise if conditions ever returned to the state that produced the large retreat from the LGM configuration. One of these scenarios produces complete collapse of the WAIS in a few thousand years, while the other demonstrates how the "weak underbelly" might collapse [hughes81c]. [fastook97] J.L. Fastook. 4th Annual WAIS Initiative Workshop, 10-12 Sept. 1997, Sterling, Virginia, 1997. [fastook04b] J.L. Fastook and A. Sargent. 11th Annual WAIS Initiative Workshop, Sterling, Virginia, 2004. [holt06] J.W. Holt, et al. Geophys. Res. Lett., L09502 (doi:10.1029/2005GL025561), 2006. [hughes81c] T. Hughes. Journal of Glaciology, 27:518-521, 1981. [johnson02] Jesse Johnson and James L. Fastook. Quaternary International, 95-96:65-74, 2002. [johnson99] Jesse Johnson, Slawek Tulaczyk, and J.L. Fastook. 6th Annual WAIS Initiative Workshop, 15-18 Sept. 1999, Sterling, Virginia, 1999. [johnson02b] Jesse V. Johnson. A basal water model for ice sheets. PhD thesis, Department of Physics, University of Maine, Orono, Maine, 2002. [rignot04] E. Rignot, et al. Annals of Glaciology, 39:231-237, 2004. [vaughan06] D.G. Vaughan, et al. Geophys. Res. Lett., doi 10.1029/2005GL025588, 2006.
Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.
2018-02-01
We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.
NASA Astrophysics Data System (ADS)
Machalek, P.; Kim, S. M.; Berry, R. D.; Liang, A.; Small, T.; Brevdo, E.; Kuznetsova, A.
2012-12-01
We describe how the Climate Corporation uses Python and Clojure, a language implemented on top of Java, to generate climatological forecasts for precipitation based on the Advanced Hydrologic Prediction Service (AHPS) radar-based daily precipitation measurements. A 2-year-long forecast is generated on each of the ~650,000 CONUS land-based 4-km AHPS grids by constructing 10,000 ensembles sampled from a 30-year reconstructed AHPS history for each grid. The spatial and temporal correlations between neighboring AHPS grids and the sampling of the analogues are handled by Python. The parallelization for all the 650,000 CONUS stations is further achieved by utilizing the MAP-REDUCE framework (http://code.google.com/edu/parallel/mapreduce-tutorial.html). Each full-scale computational run requires hundreds of nodes with up to 8 processors each on the Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) distributed computing service, resulting in 3-terabyte datasets. We further describe how we have productionalized a monthly run of the simulation process at the full scale of the 4-km AHPS grids and how the resultant terabyte-sized datasets are handled.
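A minimal Python sketch of the per-grid-cell ensemble construction described above, resampling a reconstructed multi-year history to build many forecast traces, is shown below; the resampling scheme, array shapes and gamma-distributed fake history are assumptions and deliberately ignore the spatial and temporal correlation handling mentioned in the abstract.

import numpy as np

def sample_ensembles(history, n_members=10_000, horizon_days=730, seed=0):
    """Draw ensemble precipitation traces for one grid cell by resampling
    days from its multi-year history (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(history), size=(n_members, horizon_days))
    return np.asarray(history)[idx]           # shape (members, days), mm/day

# Fake 30-year daily history for one 4-km cell; 100 members for the example
history = np.random.default_rng(1).gamma(0.3, 8.0, size=30 * 365)
ens = sample_ensembles(history, n_members=100)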
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
NASA Astrophysics Data System (ADS)
Fast, Jerome D.; Osteen, B. Lance
In this study, a four-dimensional data assimilation technique based on Newtonian relaxation is incorporated into the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) and evaluated using data taken from one experiment of the US Department of Energy's (DOE) 1991 Atmospheric Studies in COmplex Terrain (ASCOT) field study along the front range of the Rockies in Colorado. The main objective of this study is to determine the ability of the model to predict small-scale circulations influenced by terrain, such as drainage flows, and assess the impact of data assimilation on the numerical results. In contrast to previous studies in which the smallest horizontal grid spacing was 10 km and 8 km, data assimilation is applied in this study to domains with a horizontal grid spacing as small as 1 km. The prognostic forecasts made by RAMS are evaluated by comparing simulations that employ static initial conditions, with simulations that incorporate continuous data assimilation, and data assimilation for a fixed period of time (dynamic initialization). This paper will also elaborate on the application and limitation of the Newtonian relaxation technique in limited-area mesoscale models with a relatively small grid spacing.
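The Newtonian relaxation (nudging) used for four-dimensional data assimilation adds a term that relaxes a model variable toward observed or analyzed values with a relaxation timescale tau, i.e. d(phi)/dt = F(phi) + (phi_obs - phi)/tau. A toy scalar Python sketch of that form follows; the tendency function, timescale and values are illustrative assumptions, not the RAMS implementation.

import numpy as np

def integrate_with_nudging(phi0, phi_obs, tau_s, dt_s, n_steps, tendency):
    """Forward-Euler integration of d(phi)/dt = F(phi) + (phi_obs - phi)/tau,
    the Newtonian-relaxation (nudging) form of 4-D data assimilation.
    Toy scalar version; a real model nudges full 3-D fields."""
    phi = phi0
    out = [phi]
    for _ in range(n_steps):
        phi += dt_s * (tendency(phi) + (phi_obs - phi) / tau_s)
        out.append(phi)
    return np.array(out)

# Example: temperature relaxed toward a 285 K analysis value with tau = 1 h
traj = integrate_with_nudging(280.0, 285.0, tau_s=3600.0, dt_s=60.0,
                              n_steps=360, tendency=lambda t: -1e-5 * (t - 275.0))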
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Broxton, P. D.; Brunke, M.; Gochis, D.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.
2015-12-01
The terrestrial hydrological system, including surface and subsurface water, is an essential component of the Earth's climate system. Over the past few decades, land surface modelers have built one-dimensional (1D) models resolving the vertical flow of water through the soil column for use in Earth system models (ESMs). These models generally have a relatively coarse model grid size (~25-100 km) and only account for sub-grid lateral hydrological variations using simple parameterization schemes. At the same time, hydrologists have developed detailed high-resolution (~0.1-10 km grid size) three-dimensional (3D) models and showed the importance of accounting for the vertical and lateral redistribution of surface and subsurface water on soil moisture, the surface energy balance and ecosystem dynamics at these smaller scales. However, computational constraints have limited the implementation of the high-resolution models for continental and global scale applications. The current work presents a hybrid-3D hydrological approach, where the 1D vertical soil column model (available in many ESMs) is coupled with a high-resolution lateral flow model (h2D) to simulate subsurface flow and overland flow. H2D accounts for both local-scale hillslope and regional-scale unconfined aquifer responses (i.e. riparian zone and wetlands). This approach was shown to give comparable results to those obtained by an explicit 3D Richards model for the subsurface, but improves runtime efficiency considerably. The h3D approach is implemented for the Delaware river basin, where the Noah-MP land surface model (LSM) is used to calculate vertical energy and water exchanges with the atmosphere using a 10 km grid resolution. Noah-MP was coupled within the WRF-Hydro infrastructure with the lateral 1 km grid resolution h2D model, for which the average depth-to-bedrock, hillslope width function and soil parameters were estimated from digital datasets. The ability of this h3D approach to simulate the hydrological dynamics of the Delaware River basin will be assessed by comparing the model results (both hydrological performance and numerical efficiency) with the standard setup of the Noah-MP model and a high-resolution (1 km) version of Noah-MP, which also explicitly accounts for lateral subsurface and overland flow.
Membrane potential dynamics of grid cells
Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.
2014-01-01
During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in-vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that were the characteristic signature tightly correlated with firing fields. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
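A hedged Python sketch of the two VC models named above, applied to a synthetic Gauss-type regular-grid DEM, is given below; the surface parameters, grid size and use of scipy.integrate.simpson are illustrative stand-ins for the paper's simulated DEMs.

import numpy as np
from scipy.integrate import simpson

def volume_trapezoidal(z, dx, dy):
    """Trapezoidal double rule over a regular-grid DEM."""
    return np.trapz(np.trapz(z, dx=dx, axis=1), dx=dy)

def volume_simpson(z, dx, dy):
    """Simpson's double rule (prefers an odd number of nodes per axis)."""
    return simpson(simpson(z, dx=dx, axis=1), dx=dy)

# Synthetic Gauss surface as a stand-in DEM (odd grid for Simpson's rule)
x = np.linspace(-500.0, 500.0, 101)
y = np.linspace(-500.0, 500.0, 101)
X, Y = np.meshgrid(x, y)
Z = 50.0 * np.exp(-(X**2 + Y**2) / (2 * 150.0**2))     # heights in m
dx, dy = x[1] - x[0], y[1] - y[0]
v_tdr, v_sdr = volume_trapezoidal(Z, dx, dy), volume_simpson(Z, dx, dy)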
NASA Astrophysics Data System (ADS)
Goepel, A.; Queitsch, M.; Lonschinski, M.; Eitner, A.; Meisel, M.; Reißig, S.; Engelhardt, J.; Büchel, G.; Kukowski, N.
2012-04-01
The Laacher See Volcano (LSV) is part of the Quaternary East-Eifel volcanic field (EVF) located in the western part of Germany, where at least 103 eruptive centers have been identified. The Laacher See volcano explosively erupted about 6.3 km3 of phonolitic magma during a dominantly phreato-plinian eruption at about 12,900 BP. Despite numerous previous studies, the eruptive history of LSV is not fully unveiled. For a better understanding of the eruptive history of LSV, several geophysical methods, including magnetic, gravimetric and bathymetric surveys, have been applied on and around Laacher See Volcano. Here we focus on the magnetic and bathymetric data. The presented high-resolution magnetic data cover an area of about 25 km2 (20,000 sample points) and were collected using ground-based proton magnetometers (GEM Systems GSM-19TGW, Geometrics G856) during several field campaigns. In addition, a magnetic survey on the lake was done using a non-magnetic boat as platform. The bathymetric survey was conducted on profiles (total length of 235 km) using an echo sounder GARMIN GPSMap 421. Depth data were interpolated onto a bathymetric model on a regular grid with 10 m spacing. A joint interpretation of magnetic, morphologic and bathymetric data allows us to search for common patterns which can be associated with typical volcanic features. From our data at least one new eruptive center and lava flow could be identified. Furthermore, the new data suggest that previously identified lava flows have not been accurately located.
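A minimal Python sketch of turning scattered echo-sounder soundings into a regular-grid bathymetric model follows; the linear interpolation via scipy and the synthetic soundings are assumptions, not necessarily the gridding scheme the authors used.

import numpy as np
from scipy.interpolate import griddata

def grid_depths(x, y, depth, spacing=10.0):
    """Interpolate scattered (x, y, depth) soundings onto a regular grid
    with the given spacing in metres (linear interpolation, illustrative)."""
    xi = np.arange(x.min(), x.max() + spacing, spacing)
    yi = np.arange(y.min(), y.max() + spacing, spacing)
    XI, YI = np.meshgrid(xi, yi)
    ZI = griddata((x, y), depth, (XI, YI), method="linear")
    return xi, yi, ZI

# Fake profile soundings standing in for the echo-sounder data
rng = np.random.default_rng(2)
x, y = rng.uniform(0, 2000, 5000), rng.uniform(0, 1500, 5000)
depth = 5.0 + 0.02 * x + 2.0 * np.sin(y / 200.0)
xi, yi, bathy = grid_depths(x, y, depth)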
NASA Technical Reports Server (NTRS)
da Silva, Arlindo M.; Putman, William; Nattala, J.
2014-01-01
This document describes the gridded output files produced by a two-year global, non-hydrostatic mesoscale simulation for the period 2005-2006 produced with the non-hydrostatic version of GEOS-5 Atmospheric Global Climate Model (AGCM). In addition to standard meteorological parameters (wind, temperature, moisture, surface pressure), this simulation includes 15 aerosol tracers (dust, sea-salt, sulfate, black and organic carbon), O3, CO and CO2. This model simulation is driven by prescribed sea-surface temperature and sea-ice, daily volcanic and biomass burning emissions, as well as high-resolution inventories of anthropogenic sources. A description of the GEOS-5 model configuration used for this simulation can be found in Putman et al. (2014). The simulation is performed at a horizontal resolution of 7 km using a cubed-sphere horizontal grid with 72 vertical levels, extending up to 0.01 hPa (approximately 80 km). For user convenience, all data products are generated on two logically rectangular longitude-latitude grids: a full-resolution 0.0625 deg grid that approximately matches the native cubed-sphere resolution, and another 0.5 deg reduced-resolution grid. The majority of the full-resolution data products are instantaneous with some fields being time-averaged. The reduced-resolution datasets are mostly time-averaged, with some fields being instantaneous. Hourly data intervals are used for the reduced-resolution datasets, while 30-minute intervals are used for the full-resolution products. All full-resolution output is on the model's native 72-layer hybrid sigma-pressure vertical grid, while the reduced-resolution output is given on native vertical levels and on 48 pressure surfaces extending up to 0.02 hPa. Section 4 presents additional details on horizontal and vertical grids. Information on the model surface representation can be found in Appendix B. The GEOS-5 product is organized into file collections that are described in detail in Appendix C. Additional details about variables listed in this file specification can be found in a separate document, the GEOS-5 File Specification Variable Definition Glossary. Documentation about the current access methods for products described in this document can be found on the GEOS-5 Nature Run portal: http://gmao.gsfc.nasa.gov/projects/G5NR. Information on the scientific quality of this simulation will appear in a forthcoming NASA Technical Report Series on Global Modeling and Data Assimilation to be available from http://gmao.gsfc.nasa.gov/pubs/tm/.
High throughput profile-profile based fold recognition for the entire human proteome.
McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T
2006-06-07
In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on demand high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
Modeling CCN effects on tropical convection: A statistical perspective
NASA Astrophysics Data System (ADS)
Carrio, G. G.; Cotton, W. R.; Massie, S. T.
2012-12-01
This modeling study examines the response of tropical convection to the enhancement of CCN concentrations from a statistical perspective. The sensitivity runs were performed using RAMS version 6.0, covering almost the entire Amazonian Aerosol Characterization Experiment period (AMAZE, wet season of 2008). The main focus of the analysis was the indirect aerosol effects on the probability density functions (PDFs) of various cloud properties. RAMS was configured to work with four two-way interactive nested grids with 42 vertical levels and horizontal grid spacings of 150, 37.5, 7.5, and 1.5 km. Grids 2 and 3 were used to simulate the synoptic and mesoscale environments, while grid 4 was used to resolve deep convection. Comparisons were made using the finest grid with a domain size of 300 × 300 km, approximately centered on the city of Manaus (3.1S, 60.01W). The vertical grid was stretched, with 75 m spacing at the finest levels to provide better resolution within the first 1.5 km, and the model top extended to approximately 22 km above ground level. RAMS was initialized on February 10 2008 (00:00 UTC), the length of the simulations was 32 days, and GFS data were used for initialization and nudging of the coarser-grid boundaries. The control run considered a CCN concentration of 300 cm-3 while several other simulations considered an influx of higher CCN concentrations (up to 1300/cc). The latter concentration was observed near the end of the AMAZE project period. Both direct and indirect effects of these CCN particles were considered. Model output data (finest grid) every 15 min were used to compute the PDFs for each model level. When increasing aerosol concentrations, significant impacts were simulated for the PDFs of the water contents of various hydrometeors, vertical motions, area with precipitation, and latent heat release, among other quantities. In most cases, they exhibited a peculiar non-monotonic response similar to that seen in two previous studies of ours (for isolated cloud systems). It is well known that a reduction in the sizes of cloud droplets reduces coalescence, increases their probability of reaching super-cooled levels, and convective cells are intensified by the additional release of latent heat of freezing. However, indirect aerosol effects tend to revert when aerosol concentrations are greatly enhanced, due to the riming efficiency reduction of ice particles. However, some quantities show a different response; for instance, large water contents associated with small ice crystals are always more likely at high levels when considering air masses more polluted in terms of CCN. Conversely, the PDFs of water contents of larger ice crystals at high altitudes exhibit the aforementioned non-monotonic behavior.
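A small Python sketch of the PDF computation described above, building one normalized histogram per model level from frequent model output, follows; the array shape, bin edges and gamma-distributed fake water contents are assumptions rather than actual RAMS output.

import numpy as np

def pdfs_per_level(field, bins):
    """field: array (time, level, y, x) of, e.g., cloud water content.
    Returns one normalized PDF per model level (illustrative sketch)."""
    n_lev = field.shape[1]
    return np.stack([np.histogram(field[:, k].ravel(), bins=bins,
                                  density=True)[0]
                     for k in range(n_lev)])

# Fake one-day, 42-level, 15-min output on a small horizontal patch
rng = np.random.default_rng(3)
qc = rng.gamma(0.5, 0.2, size=(96, 42, 50, 50))     # g/kg, made up
bins = np.linspace(0.0, 3.0, 31)                    # illustrative bin edges
pdf = pdfs_per_level(qc, bins)                      # shape (42, 30)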
Optimizing "self-wicking" nanowire grids.
Wei, Hui; Dandey, Venkata P; Zhang, Zhening; Raczkowski, Ashleigh; Rice, Willam J; Carragher, Bridget; Potter, Clinton S
2018-05-01
We have developed a self-blotting TEM grid for use with a novel instrument for vitrifying samples for cryo-electron microscopy (cryoEM). Nanowires are grown on the copper surface of the grid using a simple chemical reaction and the opposite smooth side is used to adhere to a holey sample substrate support, for example carbon or gold. When small volumes of sample are applied to the nanowire grids the wires effectively act as blotting paper to rapidly wick away the liquid, leaving behind a thin film. In this technical note, we present a detailed description of how we make these grids using a variety of substrates fenestrated with either lacey or regularly spaced holes. We explain how we characterize the quality of the grids and we describe their behavior under a variety of conditions.
Previous research has demonstrated the ability to use the Weather Research and Forecast (WRF) model and contemporary dynamical downscaling methods to refine global climate modeling results to a horizontal resolution of 36 km. Environmental managers and urban planners have expre...
Estimation of Global 1km-grid Terrestrial Carbon Exchange Part I: Developing Inputs and Modelling
NASA Astrophysics Data System (ADS)
Sasai, T.; Murakami, K.; Kato, S.; Matsunaga, T.; Saigusa, N.; Hiraki, K.
2015-12-01
Global terrestrial carbon cycle largely depends on the spatial pattern of land cover type, which is heterogeneously distributed over regional and global scales. However, most studies aimed at estimating carbon exchanges between ecosystem and atmosphere have remained at grid spatial resolutions of several tens of kilometers, and the results have not been sufficient to understand the detailed pattern of carbon exchanges at the level of ecological communities. Improving the spatial resolution is clearly necessary to enhance the accuracy of carbon exchange estimates. Moreover, the improvement may contribute to global warming awareness, policy making and other social activities. In this study, we show global terrestrial carbon exchanges (net ecosystem production, net primary production, and gross primary production) at 1 km grid resolution. As the methodology for computing the exchanges, we 1) developed a global 1 km grid climate and satellite dataset based on the approach in Setoyama and Sasai (2013); 2) used the satellite-driven biosphere model (Biosphere model integrating Eco-physiological And Mechanistic approaches using Satellite data: BEAMS) (Sasai et al., 2005, 2007, 2011); 3) simulated the carbon exchanges by using the new dataset and BEAMS on a supercomputer that includes 1280 CPU and 320 GPGPU cores (GOSAT RCF of NIES). As a result, we could develop a globally uniform system for realistically estimating terrestrial carbon exchange and evaluate net ecosystem production at each community level, leading to a highly detailed understanding of terrestrial carbon exchanges.
Study of a close-grid geodynamic measurement system
NASA Technical Reports Server (NTRS)
1977-01-01
The Clogeos (Close-Grid Geodynamic Measurement System) concept, a complete range or range-rate measurement terminal installed in a satellite in a near-polar orbit with a network of relatively simple transponders or retro-reflectors on the ground at intervals of 0.1 to 10 km, was reviewed. The distortion of the grid was measured in three dimensions to accuracies of ±1 cm, with important applications to geodynamics, glaciology, and geodesy. User requirements are considered, and a typical grid, designed for earthquake prediction, was laid out along the San Andreas, Hayward, and Calaveras faults in southern California. The sensitivity of both range and range-rate measurements to small grid motions was determined by a simplified model. Variables in the model are satellite altitude and elevation angle plus grid displacements in latitude and height.
NASA Technical Reports Server (NTRS)
Schlesinger, R. E.
1982-01-01
Preliminary results of four runs with a three-dimensional model of the effects of vertical wind shear on cloud top height/temperature structure and the internal properties of isolated midlatitude thunderstorms are reported. The model is being developed as an aid to analyses of GEO remote sensing satellite data. The grid is a 27 x 27 x 20 mesh with 2 km horizontal resolution and 0.9 km vertical resolution. The total grid is 54 km on a side and 18 km deep. A second-order Crowley scheme for advecting momentum is extended with a third-order correction for spatial truncation error, and the earth-relative horizontal surface wind components are decreased to 50 percent of their values at 0.45 km. A temperature increase with height is included, together with an initial impulse consisting of a nonrotating cylindrical weak buoyant updraft 10 km in radius. The results of the runs are discussed in terms of the time variation of the vertical velocity extrema, the effects of strong and weak shear on a storm, the cloud top height, the Lagrangian dynamics of a thermal couplet, and data from a real storm.
In this study we investigate the CMAQ model response in terms of simulated mercury concentration and deposition to boundary/initial conditions (BC/IC), model grid resolution (12- versus 36-km), and two alternative Hg(II) reduction mechanisms. The model response to the change of g...
Coexisting shortening and extension along the "Africa-Eurasia" plate boundary in southern Italy
NASA Astrophysics Data System (ADS)
Cuffaro, M.; Riguzzi, F.; Scrocca, D.; Doglioni, C.
2009-04-01
We performed geodetic strain rate field analyses along the "Africa (Sicily microplate)"-"Eurasia (Tyrrhenian microplate)" plate boundary in Sicily (southern Italy), using new GPS velocities from a data set spanning at most ten years (1998-2007). Data from GPS permanent stations maintained by different institutions and the recent RING network, installed in Italy over the last five years by the Istituto Nazionale di Geofisica e Vulcanologia, were included in the analysis. Two-dimensional strain and rotation rate fields were estimated by the distance-weighted approach on a regularly spaced grid (30 km × 30 km); the strain at each node is estimated using all stations, but data from each station are weighted by their distance from the node through a constant a = 70 km that specifies how the effect of a station decays with distance from the grid node. Results show that most of the shortening of the Africa-Eurasia relative motion is distributed on the northwestern side offshore Sicily, whereas the extension becomes comparable with the shortening on the western border of the Capo d'Orlando basin, and greater on the northeastern side, offshore Sicily, as directly provided by GPS velocities which show a larger E-ward component for sites located in Calabria with respect to those located either in northern Sicily or in the Ustica-Aeolian islands. Moreover, where shortening and extension have mostly a similar order of magnitude, two rotation rate fields can be detected, CCW in the northwestern side of Sicily and CW in the northeastern one, respectively. Also, the 2-D dilatation field records a similar pattern, with negative values (shortening) in the northwestern area of Sicily close to the Ustica island, and positive values (extension) in the northeastern and southeastern ones, respectively. Principal shortening and extension rate axes are consistent with long-term geological features: seismic reflection profiles acquired in the southern Tyrrhenian seismogenic belt show active extensional faults affecting Pleistocene strata and deforming the seafloor in the western sector of the Cefalù Basin, on both NE-SW and W-E trending faults. Combining geodetic data and geological features contributes to the knowledge of the active deformation along the Africa-Eurasia plate boundary, suggesting coexisting, independent geodynamic processes, i.e., active E-W backarc spreading in the hangingwall of the Apennines subduction zone, and shortening of the southern margin of the Tyrrhenian backarc basin operated by the "Africa" NW-motion relative to "Europe".
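A hedged Python sketch of the distance-weighted estimation described above: stations contribute to a grid node with a weight that decays with distance through the constant a = 70 km, and a uniform velocity gradient is fit by weighted least squares to give the 2-D strain-rate components. The Gaussian-type weight and the least-squares formulation are common choices shown for illustration, not necessarily the authors' exact scheme.

import numpy as np

def strain_rate_at_node(node_xy, stn_xy, stn_vel, a_km=70.0):
    """Weighted least-squares fit of a uniform velocity gradient around one
    grid node.  Station weights decay with distance through the constant a
    (Gaussian-type decay, an illustrative choice).  Returns the 2-D
    strain-rate components (exx, exy, eyy)."""
    d = stn_xy - node_xy                                   # station offsets (km)
    w = np.exp(-(np.hypot(d[:, 0], d[:, 1]) / a_km) ** 2)
    n = len(stn_xy)
    # model: v = v0 + L @ dx, unknowns m = [v0x, v0y, Lxx, Lxy, Lyx, Lyy]
    G = np.zeros((2 * n, 6))
    G[0::2, 0] = 1.0; G[0::2, 2] = d[:, 0]; G[0::2, 3] = d[:, 1]
    G[1::2, 1] = 1.0; G[1::2, 4] = d[:, 0]; G[1::2, 5] = d[:, 1]
    obs = stn_vel.ravel()                                  # [vx0, vy0, vx1, ...]
    sw = np.sqrt(np.repeat(w, 2))
    m, *_ = np.linalg.lstsq(G * sw[:, None], obs * sw, rcond=None)
    Lxx, Lxy, Lyx, Lyy = m[2:]
    return Lxx, 0.5 * (Lxy + Lyx), Lyy                     # exx, exy, eyy

# Toy usage: 25 fake stations (offsets in km, velocities in mm/yr)
rng = np.random.default_rng(4)
stations = rng.uniform(-150.0, 150.0, size=(25, 2))
velocities = rng.normal(0.0, 2.0, size=(25, 2))
exx, exy, eyy = strain_rate_at_node(np.zeros(2), stations, velocities)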
Where is the ideal location for a US East Coast offshore grid?
NASA Astrophysics Data System (ADS)
Dvorak, Michael J.; Stoutenburg, Eric D.; Archer, Cristina L.; Kempton, Willett; Jacobson, Mark Z.
2012-03-01
This paper identifies the location of an “ideal” offshore wind energy (OWE) grid on the U.S. East Coast that would (1) provide the highest overall and peak-time summer capacity factor, (2) use bottom-mounted turbine foundations (depth ≤50 m), (3) connect regional transmissions grids from New England to the Mid-Atlantic, and (4) have a smoothed power output, reduced hourly ramp rates and hours of zero power. Hourly, high-resolution mesoscale weather model data from 2006-2010 were used to approximate wind farm output. The offshore grid was located in the waters from Long Island, New York to the Georges Bank, ≈450 km east. Twelve candidate 500 MW wind farms were located randomly throughout that region. Four wind farms (2000 MW total capacity) were selected for their synergistic meteorological characteristics that reduced offshore grid variability. Sites likely to have sea breezes helped increase the grid capacity factor during peak time in the spring and summer months. Sites far offshore, dominated by powerful synoptic-scale storms, were included for their generally higher but more variable power output. By interconnecting all 4 farms via an offshore grid versus 4 individual interconnections, power was smoothed, the no-power events were reduced from 9% to 4%, and the combined capacity factor was 48% (gross). By interconnecting offshore wind energy farms ≈450 km apart, in regions with offshore wind energy resources driven by both synoptic-scale storms and mesoscale sea breezes, substantial reductions in low/no-power hours and hourly ramp rates can be made.
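A minimal Python sketch of the aggregation statistics discussed above, combining hourly farm outputs and computing the interconnected capacity factor, mean hourly ramp, and fraction of zero-power hours; the synthetic power series are stand-ins for the mesoscale-model wind farm output used in the study.

import numpy as np

def grid_statistics(farm_power_mw, capacity_mw):
    """farm_power_mw: array (n_farms, n_hours) of hourly output.
    Returns combined capacity factor, mean absolute hourly ramp (MW/h),
    and the fraction of hours with zero combined power."""
    total = farm_power_mw.sum(axis=0)
    cf = total.mean() / capacity_mw
    ramp = np.abs(np.diff(total)).mean()
    zero_frac = np.mean(total <= 0.0)
    return cf, ramp, zero_frac

# Fake hourly output for four 500 MW farms over one year
rng = np.random.default_rng(5)
farms = np.clip(rng.normal(240.0, 160.0, size=(4, 8760)), 0.0, 500.0)
cf, ramp, zero_hours_frac = grid_statistics(farms, capacity_mw=4 * 500)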
The SMAP mission combined active-passive soil moisture product at 9 km and 3 km spatial resolutions
USDA-ARS?s Scientific Manuscript database
The NASA Soil Moisture Active Passive (SMAP) mission with onboard L-band radiometer and radar was launched on January 31st, 2015. The spacecraft provided high-resolution (3 km and 9 km) global soil moisture estimates at regular intervals by combining radiometer and radar observations for ~2.5 months...
NASA Astrophysics Data System (ADS)
Fuchsberger, Jürgen; Kirchengast, Gottfried; Bichler, Christoph; Kabas, Thomas; Lenz, Gunther; Leuprecht, Armin
2017-04-01
The Feldbach region in southeast Austria, characterized by a rich variety of weather and climate patterns, was selected as the focus area for a pioneering weather and climate observation network at very high resolution: the WegenerNet comprises 153 meteorological stations measuring temperature, humidity, precipitation, and other parameters in a tightly spaced grid within an area of about 20 km × 15 km centered near the city of Feldbach (46.93°N, 15.90°E). With roughly one station every 2 km2, each with 5-min time sampling, the network has provided regular measurements since January 2007. Detailed information is available in the recent description by Kirchengast et al. (2014) and via www.wegcenter.at/wegenernet. As a smaller "sister network" of the WegenerNet Feldbach region, the WegenerNet Johnsbachtal consists of eleven meteorological stations (complemented by one hydrographic station at the Johnsbach creek), measuring temperature, humidity, precipitation, radiation, wind, and other parameters in an alpine setting at altitudes ranging from below 700 m to over 2100 m. Data are available partly since 2007 and partly since more recent dates, with a temporal resolution of 10 minutes. The networks are set to serve as a long-term monitoring and validation facility for weather and climate research and applications. Uses include validation of nonhydrostatic models operated at 1-km-scale resolution and of statistical downscaling techniques (in particular for precipitation), validation of radar and satellite data, study of orography-climate relationships, and many others. Quality-controlled station time series and gridded field data (spacing 200 m × 200 m) are available in near-real time (data latency less than 1-2 h) for visualization and download via a data portal (www.wegenernet.org). This data portal has undergone a complete renewal over the last year and now serves as a modern gateway to the WegenerNet's more than 10 years of high-resolution data. The poster gives a brief introduction to the WegenerNet design and setup and a detailed overview of the new data portal. It also shows examples of high-resolution precipitation measurements, especially heavy-precipitation and convective events. Reference: Kirchengast, G., T. Kabas, A. Leuprecht, C. Bichler, and H. Truhetz (2014): WegenerNet: A pioneering high-resolution network for monitoring weather and climate. Bull. Amer. Meteor. Soc., 95, 227-242, doi:10.1175/BAMS-D-11-00161.1.
Tropical Cyclone Intensity in Global Models
NASA Astrophysics Data System (ADS)
Davis, C. A.; Wang, W.; Ahijevych, D.
2017-12-01
In recent years, global prediction and climate models have begun to depict intense tropical cyclones, even up to Category 5 on the Saffir-Simpson scale. In light of the limitation of horizontal resolution in such models, we examine how well these models treat tropical cyclone intensity, measured from several different perspectives. The models evaluated include the operational Global Forecast System, with a grid spacing of about 13 km, and the Model for Prediction Across Scales, with a variable resolution of 15 km over the Northwest Pacific transitioning to 60 km elsewhere. We focus on the Northwest Pacific for the period July-October 2016. Results indicate that discrimination of tropical cyclone intensity is reasonably good up to roughly category 3 storms. The models are able to capture storms of category 4 intensity, but still exhibit a negative intensity bias of 20-30 knots at lead times beyond 5 days. This is partly indicative of the large number of super-typhoons that occurred in 2016. The question arises of how well global models should represent intensity, given that it is unreasonable for them to depict the inner core of many intense tropical cyclones with a grid increment of 13-15 km. We compute an expected "best-case" prediction of intensity based on filtering the observed wind profiles of Atlantic tropical cyclones according to different hypothetical model resolutions. The Atlantic is used because of the significant number of reconnaissance missions and more reliable estimates of wind radii. Results indicate that, even under the most optimistic assumptions, models with a horizontal grid spacing of 1/4 degree or coarser should not produce a realistic number of category 4 and 5 storms unless there are errors in the spatial attributes of the wind field. Furthermore, models with a grid spacing of 1/4 degree or greater are unlikely to systematically discriminate hurricanes of differing intensity. Finally, for simple wind profiles, it is shown how an accurate representation of the maximum wind on a coarse grid will lead to an overestimate of horizontally integrated kinetic energy by a factor of two or more.
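The "best-case" filtering idea can be sketched as follows: smooth an observed radial wind profile over a window comparable to the model grid spacing and take the maximum of the smoothed profile as the intensity the model could plausibly resolve. The moving-average filter and the assumption that a feature needs about four grid cells to be represented are illustrative choices, not the filter used in the study.

```python
import numpy as np

def resolvable_max_wind(radius_km, wind_ms, grid_spacing_km, cells_per_feature=4):
    """Maximum wind after smoothing a radial profile to a model's effective resolution.

    Assumes the profile is evenly spaced in radius and that a feature needs
    roughly cells_per_feature grid cells to be resolved (an assumption).
    """
    window_km = cells_per_feature * grid_spacing_km
    dr = radius_km[1] - radius_km[0]
    n = max(1, int(round(window_km / dr)))
    kernel = np.ones(n) / n                          # simple moving average
    smoothed = np.convolve(wind_ms, kernel, mode="same")
    return smoothed.max()

# Hypothetical Rankine-like vortex: 70 m/s peak at 30 km radius
r = np.arange(1.0, 301.0, 1.0)
v = np.where(r < 30.0, 70.0 * r / 30.0, 70.0 * (30.0 / r) ** 0.6)
for dx in (3.0, 13.0, 25.0):                         # km grid spacings
    print(dx, round(resolvable_max_wind(r, v, dx), 1))
```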
Operation quality assessment model for video conference system
NASA Astrophysics Data System (ADS)
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
2018-01-01
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-and-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed; it outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers faster convergence and higher prediction accuracy than a regularized BP neural network alone, and its generalization ability is superior to that of LM-BP and Bayesian BP neural networks.
Evaluation of automated global mapping of Reference Soil Groups of WRB2015
NASA Astrophysics Data System (ADS)
Mantel, Stephan; Caspari, Thomas; Kempen, Bas; Schad, Peter; Eberhardt, Einar; Ruiperez Gonzalez, Maria
2017-04-01
SoilGrids is an automated system that provides global predictions for standard numeric soil properties at seven standard depths down to 200 cm, currently at spatial resolutions of 1 km and 250 m. In addition, the system provides predictions of depth to bedrock and the distribution of soil classes based on WRB and USDA Soil Taxonomy (ST). In SoilGrids250m(1), soil classes (WRB, version 2006) consist of the RSG and the first prefix qualifier, whereas in SoilGrids1km(2), the soil class was assessed at the RSG level. Automated mapping of World Reference Base (WRB) Reference Soil Groups (RSGs) at a global level has great advantages. Maps can be updated in a short time span with relatively little effort when new data become available. To translate soil names of older versions of FAO/WRB and of national classification systems of the source data into names according to WRB 2006, correlation tables are used in SoilGrids. Soil properties and classes are predicted independently from each other. This means that the combinations of soil properties for the same cell, or of soil properties and soil classes, do not necessarily yield logical combinations when the map layers are studied jointly. The model prediction procedure is robust and is probably only a minor source of error in the prediction of RSGs. It seems that the quality of the original soil classification in the data and the use of correlation tables are the largest sources of error in mapping the RSG distribution patterns. Predicted patterns of dominant RSGs were evaluated in selected areas and sources of error were identified. Suggestions are made for improving the WRB2015 RSG distribution predictions in SoilGrids. Keywords: Automated global mapping; World Reference Base for Soil Resources; Data evaluation; Data quality assurance. References: [1] Hengl T, de Jesus JM, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, et al. (2016) SoilGrids250m: global gridded soil information based on Machine Learning. Earth System Science Data (ESSD), in review. [2] Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al. (2014) SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9(8): e105992. doi:10.1371/journal.pone.0105992
NASA Astrophysics Data System (ADS)
Simpson, J. J.; Taflove, A.
2005-12-01
We report a finite-difference time-domain (FDTD) computational solution of Maxwell's equations [1] that models the possibility of detecting and characterizing ionospheric disturbances above seismic regions. Specifically, we study anomalies in Schumann resonance spectra in the extremely low frequency (ELF) range below 30 Hz, as observed in Japan and caused by a hypothetical cylindrical ionospheric disturbance above Taiwan. We consider excitation of the global Earth-ionosphere waveguide by lightning in three major thunderstorm regions of the world: Southeast Asia, South America (Amazon region), and Africa. Furthermore, we investigate varying geometries and characteristics of the ionospheric disturbance above Taiwan. The FDTD technique used in this study enables a direct, full-vector, three-dimensional (3-D) time-domain Maxwell's equations calculation of round-the-world ELF propagation, accounting for arbitrary horizontal as well as vertical geometrical and electrical inhomogeneities and anisotropies of the excitation, ionosphere, lithosphere, and oceans. Our entire-Earth model grids the annular lithosphere-atmosphere volume within 100 km of sea level and contains over 6,500,000 grid points (63 km laterally between adjacent grid points, 5 km radial resolution). We use our recently developed spherical geodesic gridding technique, which has a spatial discretization best described as resembling the surface of a soccer ball [2]. The grid is composed entirely of hexagonal cells except for a small fixed number of pentagonal cells needed for completion. Grid-cell areas and locations are optimized to yield a smoothly varying area difference between adjacent cells, thereby maximizing numerical convergence. We compare our calculated results with data measured prior to the Chi-Chi earthquake in Taiwan, as reported by Hayakawa et al. [3]. Acknowledgement: This work was suggested by Dr. Masashi Hayakawa, University of Electro-Communications, Chofugaoka, Chofu, Tokyo. References: [1] A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd ed. Norwood, MA: Artech House, 2005. [2] J. J. Simpson and A. Taflove, "3-D FDTD modeling of ULF/ELF propagation within the global Earth-ionosphere cavity using an optimized geodesic grid," Proc. IEEE AP-S International Symposium, Washington, D.C., July 2005. [3] M. Hayakawa, K. Ohta, A. P. Nickolaenko, and Y. Ando, "Anomalous effect in Schumann resonance phenomena observed in Japan, possibly associated with the Chi-Chi earthquake in Taiwan," Ann. Geophysicae, in press.
NASA Astrophysics Data System (ADS)
Hsieh, J. S.; Chang, P.; Saravanan, R.
2017-12-01
Frontal and mesoscale air-sea interactions along the Gulf Stream (GS) during boreal winter are investigated using an eddy-resolving and convection-permitting coupled regional climate model with atmospheric grid resolutions varying from meso-β (27 km) to meso-γ (9 km and 3 km nest) scales in WRF and a 9-km ocean model (ROMS) that explicitly resolves the ocean mesoscale eddies across the North Atlantic basin. The mesoscale wavenumber energy spectra for the simulated surface wind stress and SST demonstrate good agreement with the observed spectra calculated from the QuikSCAT and AMSR-E datasets, suggesting that the model captures well the energy cascade of the mesoscale eddies in both the atmosphere and the ocean. Intercomparison among the different resolution simulations indicates that, after three months of integration, the simulated GS path in the 27-km WRF coupled experiments tends to overshoot beyond the separation point relative to the observed climatological path of the GS, whereas the 3-km nested and 9-km WRF coupled simulations realistically simulate GS separation. The GS overshoot in the 27-km WRF coupled simulations is accompanied by a significant SST warming bias to the north of the GS extension. Such biases are associated with the deficiency of the wind stress-SST coupling strengths simulated by the coupled model with a coarser WRF resolution. It is found that the model at 27-km grid spacing can reproduce approximately 72% (62%) of the observed mean coupling strength between surface wind stress curl (divergence) and crosswind (downwind) SST gradient, whereas increasing the WRF resolution to 9 km or 3 km allows the coupled model to capture the observed coupling strengths much better.
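The coupling strength mentioned above is commonly quantified as the slope of a linear fit between a wind stress derivative (curl or divergence) and the corresponding SST gradient component (crosswind or downwind); a minimal sketch, assuming the fields have already been high-pass filtered to isolate the mesoscale:

```python
import numpy as np

def coupling_strength(sst_gradient, wind_stress_derivative):
    """Slope of the best-fit line between an SST-gradient component and a
    wind stress derivative; both are 1-D arrays of mesoscale anomalies."""
    mask = np.isfinite(sst_gradient) & np.isfinite(wind_stress_derivative)
    slope, _ = np.polyfit(sst_gradient[mask], wind_stress_derivative[mask], 1)
    return slope

# Hypothetical anomalies: a model that reproduces ~72% of an "observed" slope of 1.0
rng = np.random.default_rng(1)
grad = rng.normal(size=10_000)
observed_curl = 1.0 * grad + rng.normal(scale=0.5, size=grad.size)
modeled_curl = 0.72 * grad + rng.normal(scale=0.5, size=grad.size)
print(coupling_strength(grad, observed_curl), coupling_strength(grad, modeled_curl))
```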
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
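The first stage described above amounts to distance-weighted (inverse-distance) interpolation of the known values onto the grid; a minimal sketch of that initial-guess step follows (the parabolic leapfrog correction itself is not reproduced here, and the power-2 weighting is an assumption):

```python
import numpy as np

def idw_initial_guess(grid_x, grid_y, obs_xy, obs_values, power=2.0, eps=1e-10):
    """Inverse-distance-weighted first-guess field on a regular grid.

    grid_x, grid_y : 1-D coordinate vectors defining the grid
    obs_xy         : (n, 2) coordinates of the unevenly spaced observations
    obs_values     : (n,) observed values
    """
    gx, gy = np.meshgrid(np.asarray(grid_x, float), np.asarray(grid_y, float))
    obs_xy = np.asarray(obs_xy, float)
    obs_values = np.asarray(obs_values, float)
    field = np.empty_like(gx)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(obs_xy[:, 0] - gx[i, j], obs_xy[:, 1] - gy[i, j])
            w = 1.0 / (d ** power + eps)            # eps avoids division by zero
            field[i, j] = np.sum(w * obs_values) / np.sum(w)
    return field

# Hypothetical scattered sites interpolated to a 5 x 5 grid
field = idw_initial_guess(np.arange(5), np.arange(5),
                          [[0.5, 0.5], [3.2, 1.1], [1.8, 3.9]], [10.0, 14.0, 8.0])
print(field.shape, field.round(1))
```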
Controls on Deep Seated Gravitational Slope Deformations in the European Alps
NASA Astrophysics Data System (ADS)
Crosta, Giovanni B.; Frattini, Paolo; Agliardi, Federico
2013-04-01
DSGSDs are very large, slow mass movements affecting entire high-relief valley slopes. The first orogen-scale inventory of such phenomena has recently been presented for the European Alps (Crosta et al. 2008, Agliardi et al. 2012) and has since been further extended. The inventory includes 1034 Deep Seated Gravitational Slope Deformations, widespread over the entire orogen and clustered along major valleys and in some specific sectors of the Alps. In this contribution we systematically explore lithological, structural and topographic controls on DSGSD distribution with the help of multivariate statistical techniques (Principal Component Analysis, Discriminant Analysis). Analysis units were obtained by creating three square vector grids with 2.5 km, 5 km and 10 km grid cell size, respectively, covering the entire area (about 110,000 km2). For each grid cell, we calculated the density of DSGSDs and assigned a value for each of the controlling variables considered in the analysis. From the NASA SRTM (Shuttle Radar Topography Mission) DEM we derived land surface parameters, such as relief, slope gradient, slope aspect, mean vertical distance from base level and ruggedness. The SRTM DEM was also used to extract the drainage density, with thresholds of 1 km2 and 10 km2. We also computed the stream power of the 1 km2 river network. Lithology was obtained by assembling different geological maps (1:200.000 map of Salzburg, 1:250.000 map of France, 1:500.000 maps of Switzerland and Austria, 1:1.000.000 map of Italy) and by reclassifying the geological units into lithological classes (carbonate rocks, metapelites, sandstones and marls, paragneiss, ortogneiss, flysch-type rocks, granitoid/metabasite, Quaternary units, and volcanic rocks). To study the role of seismicity, we calculated the number of earthquakes (CPTI11 and USGS-NEIC databases) within a distance dmax from each square cell, calculated by adopting Keefer's (1984) equation, and the sum of the Arias intensities of all earthquakes lying within dmax. Fission-track ages on apatite have been collected from published sources and interpolated over the entire Alps by using a natural-neighbour interpolator. Finally, the ice thickness during the Last Glacial Maximum, the modern rock uplift, and the mean annual rainfall have been used. Results of the multivariate statistical analysis confirm the results of the previous orogen-scale investigations (Crosta et al., 2008; Agliardi et al., 2012) and shed new light on the relative importance of the (positive or negative) contributions of the different controlling factors. The most important controls on DSGSD distribution are: lithology, landscape morphology, LGM ice thickness, modern uplift rate and mean annual rainfall. Lithology is the dominant factor, with some units highly favourable (chiefly metapelites, followed by paragneiss and flysch-type rocks) and others unfavourable (especially carbonate rocks) to DSGSDs. Landscape morphology plays a role that is difficult to evaluate correctly because of the interplay between morphology and geological and hydrological parameters. DSGSDs are more frequent along main alpine valleys, where long and regular slopes can accommodate these large phenomena, but also where the action of glaciers and the presence of main tectonic lineaments are more important. Favourable landscape morphologies also seem to be controlled by exhumation and uplift rate. Mean annual rainfall is inversely correlated with DSGSD density.
This can be interpreted as the long-term effect of climate in shaping large-scale topography and in favouring other types of landslides as agents of long-term erosion. References: Crosta, G.B., Agliardi, F., Frattini, P., Zanchi, A. (2008) Alpine inventory of Deep-Seated Gravitational Slope Deformations. Vol. 10, EGU2008-A-02709, 2008, SRef-ID: 1607-7962/gra/EGU2008-A-0270. Agliardi, F., Crosta, G., Frattini, P. (2012) Slow rock-slope deformation. In: Clague, J.J., Stead, D. (eds), Landslides: Types, Mechanisms and Modeling, p. 207-221, Cambridge University Press, ISBN: 978-1-107-00206-7.
ManUniCast: A Community Weather and Air-Quality Forecasting Teaching Portal
NASA Astrophysics Data System (ADS)
Schultz, David M.; Anderson, Stuart; Fairman, Jonathan G.; Lowe, Douglas; McFiggans, Gordon; Lee, Elsa; Seo-Zindy, Ryo
2014-05-01
ManUniCast was born out of the needs of our teaching program: students were entering a world where environmental prediction via numerical models was an essential skill, but they were not exposed to the production or output of such models. Our site is an educational testbed to explain to students and the public how weather, air-quality, and air-chemistry forecasts are made, using real-time predictions as examples. As far as we know, this site provides the first freely available real-time predictions for the UK. We perform two simulations a day over three domains using the most popular, freely available, community atmospheric mesoscale and chemistry models, WRF-ARW and WRF-Chem: 1. a WRF-ARW domain over the North Atlantic and western Europe (20-km horizontal grid spacing); 2. a WRF-ARW domain over the UK and Ireland (4-km grid spacing, nested within the 20-km domain); 3. a WRF-Chem domain over the UK and Ireland (12-km grid spacing). Called ManUniCast (Manchester University Forecast), the site offers a suite of products, from horizontal maps, time series at stations (meteograms), and skew-T-logp charts to cross sections, to help students visualize the weather and the relationships between the various fields more effectively, specifically through the ability to overlay and fade between different plotted products.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted LSQ gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for the NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
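As an illustration of the least-squares gradient reconstruction these schemes rely on, here is a minimal unweighted 2-D sketch over an arbitrary stencil; the specific stencils, weights, and augmentation strategies of the compared schemes are not reproduced:

```python
import numpy as np

def lsq_gradient(center_xy, center_value, neighbor_xy, neighbor_values):
    """Unweighted least-squares gradient reconstruction at a cell or node.

    Solves dq ~= [dx, dy] @ grad in the least-squares sense over the stencil.
    """
    dxy = np.asarray(neighbor_xy, float) - np.asarray(center_xy, float)   # (n, 2)
    dq = np.asarray(neighbor_values, float) - center_value                # (n,)
    grad, *_ = np.linalg.lstsq(dxy, dq, rcond=None)
    return grad                                                           # (dq/dx, dq/dy)

# Check on the linear field q = 2x + 3y, whose exact gradient is (2, 3)
neighbors = [(1.0, 0.1), (-0.9, 0.2), (0.2, 1.1), (0.1, -1.0)]
values = [2.0 * x + 3.0 * y for x, y in neighbors]
print(lsq_gradient((0.0, 0.0), 0.0, neighbors, values))
```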
Circulation and multiple-scale variability in the Southern California Bight
NASA Astrophysics Data System (ADS)
Dong, Changming; Idica, Eileen Y.; McWilliams, James C.
2009-09-01
The oceanic circulation in the Southern California Bight (SCB) is influenced by the large-scale California Current offshore, tropical remote forcing through the coastal wave guide alongshore, and local atmospheric forcing. The region is characterized by local complexity in the topography and coastline. All these factors engender variability in the circulation on interannual, seasonal, and intraseasonal time scales. This study applies the Regional Oceanic Modeling System (ROMS) to the SCB circulation and its multiple-scale variability. The model is configured in three levels of nested grids with the parent grid covering the whole US West Coast. The first child grid covers a large southern domain, and the third grid zooms in on the SCB region. The three horizontal grid resolutions are 20 km, 6.7 km, and 1 km, respectively. The external forcings are momentum, heat, and freshwater flux at the surface and adaptive nudging to gyre-scale SODA reanalysis fields at the boundaries. The momentum flux is from a three-hourly reanalysis mesoscale MM5 wind with a 6 km resolution for the finest grid in the SCB. The oceanic model starts in an equilibrium state from a multiple-year cyclical climatology run, and then it is integrated from years 1996 through 2003. In this paper, the 8-year simulation at the 1 km resolution is analyzed and assessed against extensive observational data: High-Frequency (HF) radar data, current meters, Acoustic Doppler Current Profilers (ADCP) data, hydrographic measurements, tide gauges, drifters, altimeters, and radiometers. The simulation shows that the domain-scale surface circulation in the SCB is characterized by the Southern California Cyclonic Gyre, comprised of the offshore equatorward California Current System and the onshore poleward Southern California Countercurrent. The simulation also exhibits three subdomain-scale, persistent ( i.e., standing), cyclonic eddies related to the local topography and wind forcing: the Santa Barbara Channel Eddy, the Central-SCB Eddy, and the Catalina-Clemente Eddy. Comparisons with observational data reveal that ROMS reproduces a realistic mean state of the SCB oceanic circulation, as well as its interannual (mainly as a local manifestation of an ENSO event), seasonal, and intraseasonal (eddy-scale) variations. We find high correlations of the wind curl with both the alongshore pressure gradient (APG) and the eddy kinetic energy level in their variations on time scales of seasons and longer. The geostrophic currents are much stronger than the wind-driven Ekman flows at the surface. The model exhibits intrinsic eddy variability with strong topographically related heterogeneity, westward-propagating Rossby waves, and poleward-propagating coastally-trapped waves (albeit with smaller amplitude than observed due to missing high-frequency variations in the southern boundary conditions).
NASA Astrophysics Data System (ADS)
Nadeem, Imran; Formayer, Herbert
2016-11-01
A suite of high-resolution (10 km) simulations were performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region. The boundary conditions used were the ECMWF ERA-Interim Reanalysis with a grid spacing of 0.75∘, the ECMWF ERA-40 Reanalysis with grid spacings of 1.125∘ and 2.5∘, and finally the 2.5∘ NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) within the driving reanalysis, with one intermediate-resolution nest (Δx = 30 km) between the high-resolution RCM and the reanalysis forcings, and also with two intermediate-resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBCs of 2.5∘ resolution. Additionally, the impact of domain size was investigated. The results of the multiple simulations were evaluated using different analysis techniques, e.g., the Taylor diagram and a newly defined statistical parameter, called Skill-Score, for the evaluation of daily precipitation simulated by the model. It was found that domain size has the largest impact on the results, while different resolutions and versions of the LBCs, e.g., 1.125∘ ERA-40 and 0.75∘ ERA-Interim, do not produce significantly different results. It was also noticed that direct nesting with a reasonable domain size seems to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate-resolution nests seems to deteriorate the results.
NASA Astrophysics Data System (ADS)
Zittis, G.; Bruggeman, A.; Camera, C.; Hadjinicolaou, P.; Lelieveld, J.
2017-07-01
Climate change is expected to substantially influence precipitation amounts and distribution. To improve simulations of extreme rainfall events, we analyzed the performance of different convection and microphysics parameterizations of the WRF (Weather Research and Forecasting) model at very high horizontal resolutions (12, 4 and 1 km). Our study focused on the eastern Mediterranean climate change hot-spot. Five extreme rainfall events over Cyprus were identified from observations and were dynamically downscaled from the ERA-Interim (EI) dataset with WRF. We applied an objective ranking scheme, using a 1-km gridded observational dataset over Cyprus and six different performance metrics, to investigate the skill of the WRF configurations. We evaluated the rainfall timing and amounts for the different resolutions, and discussed the observational uncertainty for the particular extreme events by comparing three gridded precipitation datasets (E-OBS, APHRODITE and CHIRPS). Simulations with WRF capture rainfall over the eastern Mediterranean reasonably well for three of the five selected extreme events. For these three cases, the WRF simulations improve on the ERA-Interim data, which strongly underestimate the rainfall extremes over Cyprus. The best model performance is obtained for the January 1989 event, simulated with an average bias of 4% and a modified Nash-Sutcliffe efficiency of 0.72 for the 5-member ensemble of the 1-km simulations. We found overall added value for the convection-permitting simulations, especially over high-elevation regions. Interestingly, for some cases the intermediate 4-km nest was found to outperform the 1-km simulations for the low-elevation coastal parts of Cyprus. Finally, we identified significant and inconsistent discrepancies between the three state-of-the-art gridded precipitation datasets for the tested events, highlighting the observational uncertainty in the region.
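The two headline metrics quoted above can be computed as below. The "modified" Nash-Sutcliffe is assumed here to be the absolute-error variant; the exact formulation used in the study may differ.

```python
import numpy as np

def percent_bias(sim, obs):
    """Overall bias of simulated rainfall relative to observations, in percent."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

def nash_sutcliffe(sim, obs, modified=False):
    """Classic NSE (squared errors) or a modified variant using absolute errors."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    if modified:
        num, den = np.abs(obs - sim).sum(), np.abs(obs - obs.mean()).sum()
    else:
        num, den = ((obs - sim) ** 2).sum(), ((obs - obs.mean()) ** 2).sum()
    return 1.0 - num / den

# Hypothetical event totals [mm] at five grid points
obs = np.array([12.0, 45.0, 80.0, 30.0, 5.0])
sim = np.array([10.0, 50.0, 75.0, 33.0, 6.0])
print(percent_bias(sim, obs), nash_sutcliffe(sim, obs, modified=True))
```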
Evaluating β Diversity as a Surrogate for Species Representation at Fine Scale.
Beier, Paul; Albuquerque, Fábio
2016-01-01
Species turnover or β diversity is a conceptually attractive surrogate for conservation planning. However, there has been only one attempt to determine how well sites selected to maximize β diversity represent species, and that test was done at a scale too coarse (2,500 km2 sites) to inform most conservation decisions. We used 8 plant datasets, 3 bird datasets, and 1 mammal dataset to evaluate whether sites selected to span β diversity will efficiently represent species at finer scales (site sizes < 1 ha to 625 km2). We used ordinations to characterize dissimilarity in species assemblages (β diversity) among plots (inventory data) or among grid cells (atlas data). We then selected sites to maximize β diversity and used the Species Accumulation Index, SAI, to evaluate how efficiently the surrogate (selecting sites for maximum β diversity) represented species in the same taxon. Across all 12 datasets, sites selected for maximum β diversity represented species with a median efficiency of 24% (i.e., the surrogate was 24% more effective than random selection of sites) and an interquartile range of 4% to 41% efficiency. β diversity was a better surrogate for bird datasets than for plant datasets, and for atlas datasets with 10-km to 14-km grid cells than for atlas datasets with 25-km grid cells. We conclude that β diversity is more than a mere descriptor of how species are distributed on the landscape; in particular, β diversity might be useful for maximizing the complementarity of a set of sites. Because we tested only within-taxon surrogacy, our results do not prove that β diversity is useful for conservation planning. But our results do justify further investigation to identify the circumstances in which β diversity performs well, and to evaluate it as a cross-taxon surrogate.
C library for topological study of the electronic charge density.
Vega, David; Aray, Yosslen; Rodríguez, Jesús
2012-12-05
The topological study of the electronic charge density is useful for obtaining information about the kinds of bonds (ionic or covalent) and the atomic charges in a molecule or crystal. For this study, it is necessary to calculate, at every point in space, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, formulas for the gradient vector, the Hessian matrix and the Laplacian were obtained at every point in space. More complex functions, such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths), were also programmed. Because in some crystals the unit cell has angles different from 90°, the library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions.
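A minimal sketch of the grid-based idea: with the density sampled on a regular orthogonal 3-D grid, the gradient, Hessian and Laplacian at grid points can be approximated numerically (the library itself uses Lagrange interpolating polynomials, which also give values between grid points, and extra linear transformations for inclined cells; neither is reproduced here):

```python
import numpy as np

def density_derivatives(rho, spacing):
    """Finite-difference gradient, Hessian and Laplacian on a regular 3-D grid.

    rho     : (nx, ny, nz) array of density values
    spacing : (hx, hy, hz) grid steps along each axis
    """
    grad = np.gradient(rho, *spacing)                    # list of three arrays
    hessian = np.empty((3, 3) + rho.shape)
    for i in range(3):
        second = np.gradient(grad[i], *spacing)          # derivatives of each component
        for j in range(3):
            hessian[i, j] = second[j]
    laplacian = hessian[0, 0] + hessian[1, 1] + hessian[2, 2]
    return grad, hessian, laplacian

# Quick check on rho = x^2 + y^2 + z^2, whose Laplacian is 6 everywhere
x = np.linspace(-1.0, 1.0, 41)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
_, _, lap = density_derivatives(X**2 + Y**2 + Z**2, (x[1] - x[0],) * 3)
print(lap[20, 20, 20])                                   # ~6.0 at the grid centre
```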
Downscaling soil moisture over regions that include multiple coarse-resolution grid cells
USDA-ARS?s Scientific Manuscript database
Many applications require soil moisture estimates over large spatial extents (30-300 km) and at fine-resolutions (10-30 m). Remote-sensing methods can provide soil moisture estimates over very large spatial extents (continental to global) at coarse resolutions (10-40 km), but their output must be d...
[Relations of landslide and debris flow hazards to environmental factors].
Zhang, Guo-ping; Xu, Jing; Bi, Bao-gui
2009-03-01
Clarifying the relations of landslide and debris flow hazards to environmental factors is important for the prediction and evaluation of such hazards. Based on the latitude and longitude of 18,431 landslide and debris flow hazards in China, and on 1 km x 1 km grid data of elevation, elevation difference, slope, slope aspect, vegetation type, and vegetation coverage, this paper analyzed the relations of landslide and debris flow hazards in the country to these environmental factors using the frequency ratio method. The results showed that landslide and debris flow hazards in China occurred more often in the lower-elevation areas of the first and second transitional zones. When the elevation difference within a 1 km x 1 km grid cell was about 300 m and the slope was around 30 degrees, the likelihood of landslide and debris flow hazards was greatest. Mountain forest land and slope cropland were the two land types on which the hazards occurred most readily. The occurrence frequency of the hazards was highest when the vegetation coverage was about 80%-90%.
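The frequency ratio used here is the share of hazard occurrences falling in a factor class divided by the share of grid cells in that class, so values above 1 flag hazard-prone classes; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def frequency_ratio(cell_class, cell_hazard_count):
    """Frequency ratio per class: (% of hazards in class) / (% of cells in class).

    cell_class        : (n_cells,) integer factor class per 1 km grid cell
    cell_hazard_count : (n_cells,) number of hazard events recorded in each cell
    """
    cell_class = np.asarray(cell_class)
    cell_hazard_count = np.asarray(cell_hazard_count, float)
    total_cells = cell_class.size
    total_hazards = cell_hazard_count.sum()
    ratios = {}
    for c in np.unique(cell_class):
        in_class = cell_class == c
        hazard_share = cell_hazard_count[in_class].sum() / total_hazards
        area_share = in_class.sum() / total_cells
        ratios[int(c)] = hazard_share / area_share       # >1 means hazard-prone
    return ratios

# Hypothetical classes (e.g., slope bins); class 2 is over-represented in hazards
cls = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3, 3])
cnt = np.array([0, 1, 0, 4, 3, 0, 1, 0, 0, 1])
print(frequency_ratio(cls, cnt))
```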
NASA Astrophysics Data System (ADS)
Appel, W.; Gilliam, R. C.; Pouliot, G. A.; Godowitch, J. M.; Pleim, J.; Hogrefe, C.; Kang, D.; Roselle, S. J.; Mathur, R.
2013-12-01
The DISCOVER-AQ project (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) is a joint collaboration between NASA, the U.S. EPA and a number of other local organizations, with the goal of characterizing air quality in urban areas using satellite, aircraft, vertical profiler and ground-based measurements (http://discover-aq.larc.nasa.gov). In July 2011, the DISCOVER-AQ project conducted intensive air quality measurements in the Baltimore, MD and Washington, D.C. area in the eastern U.S. To take advantage of these unique data, the Community Multiscale Air Quality (CMAQ) model, coupled with the Weather Research and Forecasting (WRF) model, is used to simulate the meteorology and air quality in the same region using 12-km, 4-km and 1-km horizontal grid spacings. The goal of the modeling exercise is to demonstrate the capability of the coupled WRF-CMAQ modeling system to simulate air quality at fine grid spacings in an urban area. New data assimilation techniques and higher-resolution input data for the WRF model have been implemented to improve the meteorological results, particularly at the 4-km and 1-km grid resolutions. In addition, a number of updates to the CMAQ model were made to enhance the capability of the modeling system to accurately represent the magnitude and spatial distribution of pollutants at fine model resolutions. Data collected during the 2011 DISCOVER-AQ campaign, which include aircraft transects and spirals, ship measurements in the Chesapeake Bay, ozonesondes, tethered balloon measurements, DRAGON aerosol optical depth measurements, LIDAR measurements, and intensive ground-based site measurements, are used to evaluate results from the WRF-CMAQ modeling system for July 2011 at the three model grid resolutions. The results of comparing the model output to these measurements will be presented, along with results from the various sensitivity simulations examining the impact of the updates to the modeling system on the model estimates.
Interpolated Sounding and Gridded Sounding Value-Added Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toto, T.; Jensen, M.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
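The between-sounding step reduces to linear interpolation in time, independently for each height level, onto a 1-minute grid; a minimal sketch, assuming the state variable is already given on a common set of height levels (the MWR humidity scaling is not included):

```python
import numpy as np

def interpolate_soundings(launch_minutes, profiles, out_minutes):
    """Linearly interpolate sounding profiles in time, level by level.

    launch_minutes : (n_sondes,) launch times [minutes since 00 UTC], increasing
    profiles       : (n_sondes, n_levels) one state variable on fixed height levels
    out_minutes    : (n_out,) output times, e.g. np.arange(1440) for 1-min resolution
    """
    profiles = np.asarray(profiles, float)
    out = np.empty((len(out_minutes), profiles.shape[1]))
    for lev in range(profiles.shape[1]):
        out[:, lev] = np.interp(out_minutes, launch_minutes, profiles[:, lev])
    return out

# Hypothetical: two temperature profiles on 3 levels, launched at 05:30 and 17:30 UTC
t = interpolate_soundings([330, 1050],
                          [[15.0, 5.0, -40.0], [21.0, 8.0, -42.0]],
                          np.arange(1440))
print(t.shape, t[720])   # the 12:00 UTC values lie between the two soundings
```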
GIS characterization of spatially distributed lifeline damage
Toprak, Selcuk; O'Rourke, Thomas; Tutuncu, Ilker
1999-01-01
This paper describes the visualization of spatially distributed water pipeline damage following an earthquake using geographical information systems (GIS). Pipeline damage is expressed as a repair rate (RR). Repair rate contours are developed with GIS by dividing the study area into grid cells (n × n), determining the number of repairs for a particular pipeline type in each grid cell, and dividing the number of repairs by the length of that pipeline in each cell. The resulting contour plot is a two-dimensional visualization of point-source damage. High damage zones are defined herein as areas with an RR value greater than the mean RR for the entire study area of interest. A hyperbolic relationship between the visual display of high pipeline damage zones and grid size, n, was developed. The relationship is expressed in terms of two dimensionless parameters, threshold area coverage (TAC) and dimensionless grid size (DGS). The relationship is valid over a wide range of map scales, spanning approximately 1,200 km2 for the largest portion of the Los Angeles water distribution system down to 1 km2 for the Marina in San Francisco. This relationship can help GIS users produce maps of damage patterns that are sufficiently refined yet easily visualized.
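A minimal sketch of the per-cell repair rate computation, assuming repair locations are given as points and the pipeline length per cell has already been tabulated from the GIS layer:

```python
import numpy as np

def repair_rate_grid(repair_xy, pipe_length_km, x_edges, y_edges):
    """Repairs per km of pipeline in each grid cell (NaN where no pipeline exists).

    repair_xy      : (n_repairs, 2) x/y coordinates of repairs
    pipe_length_km : (ny, nx) pipeline length in each cell [km]
    x_edges, y_edges : cell edge coordinates defining the n x n grid
    """
    counts, _, _ = np.histogram2d(repair_xy[:, 1], repair_xy[:, 0],
                                  bins=[y_edges, x_edges])
    with np.errstate(divide="ignore", invalid="ignore"):
        rr = np.where(pipe_length_km > 0, counts / pipe_length_km, np.nan)
    return rr

# Hypothetical 3 x 3 km study area with 1 km cells
repairs = np.array([[0.2, 0.3], [0.7, 0.4], [2.1, 2.5], [2.6, 2.2]])
lengths = np.full((3, 3), 0.8)
rr = repair_rate_grid(repairs, lengths, np.arange(4.0), np.arange(4.0))
print(rr)   # cells with RR above the study-area mean form the "high damage zones"
```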
Lin, Wei-Chih; Lin, Yu-Pin; Wang, Yung-Chieh; Chang, Tsun-Kuo; Chiang, Li-Chi
2014-02-21
In this study, a deconvolution procedure was used to create a variogram of oral cancer (OC) rates. Based on the variogram, area-to-point (ATP) Poisson kriging and p-field simulation were used to downscale and simulate, respectively, the OC rate data for Taiwan from the district scale to a 1 km × 1 km grid scale. Local cluster analysis (LCA) of OC mortality rates was then performed to identify OC mortality rate hot spots based on the downscaled and the p-field-simulated OC mortality maps. The relationship between OC mortality and land use was studied by overlaying the maps of the downscaled OC mortality, the LCA results, and the land uses. One thousand simulations were performed to quantify local and spatial uncertainties in the LCA used to identify OC mortality hot spots. Scatter plots and Spearman's rank correlation were used to relate OC mortality to the concentrations of the seven studied heavy metals on the 1 km grid. The correlation analysis results for the 1 km scale revealed a weak correlation between OC mortality rate and the concentrations of the seven studied heavy metals in soil. Accordingly, the heavy metal concentrations in soil are not major determinants of OC mortality rates at the 1 km scale at which soils were sampled. The LCA results, based on the local indicator of spatial association (LISA), revealed that sites with a high probability of high-high (high value surrounded by high values) OC mortality at the 1 km grid scale were clustered in southern, eastern, and mid-western Taiwan. The number of such sites was also significantly higher on agricultural land and in urban regions than on land with other uses. The proposed approach can be used to downscale mortality data and evaluate their uncertainty from a coarse scale to a fine scale at which useful additional information can be obtained for assessing and managing land use and risk.
NASA Astrophysics Data System (ADS)
Dossing, A.; Olesen, A. V.; Forsberg, R.
2010-12-01
Results of an 800 × 800 km aero-gravity and aeromagnetic survey (LOMGRAV) of the southern Lomonosov Ridge and surrounding area are presented. The survey was acquired by the Danish National Space Center, DTU, in cooperation with Natural Resources Canada in spring 2009 as a net of ~NE-SW flight lines spaced 8-10 km apart. Nominal flight level was 2000 ft. We have compiled a detailed 2.5 × 2.5 km gravity anomaly grid based on the LOMGRAV data and existing data from the southern Arctic Ocean (NRL98/99) and the North Greenland continental margin (KMS98/99). The gravity grid reveals detailed, elongated high-low anomaly patterns over the Lomonosov Ridge, which are interpreted as narrow ridges and subbasins. Distinct local topography is also interpreted over the southernmost part of the Lomonosov Ridge, where existing bathymetry compilations suggest a smooth topography due to the lack of data. A new bathymetry model is presented for the region, predicted by formalized inversion of the available gravity data. Finally, a detailed magnetic anomaly grid has been compiled from the LOMGRAV data and existing NRL98/99 and PMAP data. New tectonic features are revealed, particularly in the Amerasia Basin, compared with existing magnetic anomaly data from the region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.
1994-05-01
Our current research into the response of natural ecosystems to a hypothesized climatic change requires that we have estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations are located at airports, which tend to be concentrated on the coastlines of the world or near populated areas. We can also see that the spatial density of the station locations is extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, Arabian, Gobi, and Australian deserts; likewise, the density is quite sparse in cold regions such as Antarctica, northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to the population centers and the degree of industrial development of the country. We address the following problem here. Given values of meteorological variables, such as maximum monthly temperature, measured at the more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
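A minimal sketch of interpolating the scattered station values onto a regular one-degree grid. SciPy's generic griddata routine is used here purely for illustration; it is not the method developed in the report:

```python
import numpy as np
from scipy.interpolate import griddata

def stations_to_one_degree_grid(lon, lat, values, method="linear"):
    """Interpolate scattered station values onto a regular 1-degree lat/lon grid."""
    grid_lon, grid_lat = np.meshgrid(np.arange(-180.0, 181.0),
                                     np.arange(-90.0, 91.0))
    field = griddata((lon, lat), values, (grid_lon, grid_lat), method=method)
    return grid_lon, grid_lat, field      # NaN outside the station convex hull

# Hypothetical stations reporting maximum monthly temperature [deg C]
lon = np.array([-120.0, -75.0, 2.0, 37.0, 151.0])
lat = np.array([47.0, 40.0, 49.0, 55.0, -34.0])
tmax = np.array([28.0, 31.0, 25.0, 22.0, 26.0])
_, _, field = stations_to_one_degree_grid(lon, lat, tmax)
print(np.nanmin(field), np.nanmax(field))
```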
Seismic Activity offshore Martinique and Dominique islands (Lesser Antilles subduction zone)
NASA Astrophysics Data System (ADS)
Ruiz Fernandez, Mario; Galve, Audrey; Monfret, Tony; Charvis, Philippe; Laigle, Mireille; Flueh, Ernst; Gallart, Josep; Hello, Yann
2010-05-01
In the framework of the European project Thales Was Right, two seismic surveys (Sismantilles II and Obsantilles) were carried out to better constrain the lithospheric structure of the Lesser Antilles subduction zone and its seismic activity, and to evaluate the associated seismic hazards. The Sismantilles II experiment was conducted in January 2007 onboard R/V Atalante (IFREMER). A total of 90 OBS belonging to Géoazur, INSU-CNRS and IFM-GEOMAR were deployed on a regular grid offshore Antigua, Guadeloupe, Dominique and Martinique islands. During the active part of the survey, more than 2500 km of multichannel seismic profiles were shot along the grid lines. The OBS then remained on the seafloor, continuously recording the seismic activity for approximately 4 months. In April 2007 the Obsantilles experiment, carried out onboard R/V Antea (IRD), focused on the recovery of those OBS and the redeployment of 28 instruments (Géoazur OBS) off Martinique and Dominica islands for 4 additional months of continuous recording of the seismicity. This work focuses on the analysis of the seismological data recorded in the southern sector of the study area, offshore Martinique and Dominique. During the two recording periods, extending from January to the end of August 2007, more than 3300 seismic events were detected in this area. Approximately 1100 earthquakes were of sufficient quality to be reliably located. Station corrections, obtained from the multichannel seismic profiles, were applied to each OBS to take into account the sedimentary cover and better constrain the hypocentral determinations. Results show events located at shallower depths in the northern sector of the array, close to the Tiburon Ridge, where the seismic activity is mainly located between 20 and 40 km depth. In the southern sector, offshore Martinique, hypocenters become deeper, reaching 60 km depth and dipping to the west. Focal solutions have also been obtained using the P-wave polarities of the best azimuthally constrained earthquakes (gap smaller than 90°). The focal mechanisms also reveal some differences between the northern and southern sectors of the array. Whereas in the southern sector most of the analysed events show purely reverse fault solutions, in the northern area events present strike-slip and normal fault solutions and could be related to intraplate deformation.
Dynamic Testing and Automatic Repair of Reconfigurable Wiring Harnesses
2006-11-27
Switch: An M × N grid of switches configured to provide an M-input, N-output routing network. Permutation Network: A permutation network performs an... wiring reduces the effective advantage of their reduced switch count, particularly when considering that regular grids (crossbar switches being a... are connected to. The outline circuit shown in Fig. 20 shows how a suitable 'discovery probe' might be implemented. The circuit shows a UART
Integrating bathymetric and topographic data
NASA Astrophysics Data System (ADS)
Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat
2017-11-01
The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulations. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online, and a seamless integration of high-resolution bathymetric and topographic data is desirable. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to identify the interpolation methods that integrate bathymetric and topographic data with minimal error. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based on the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces the interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
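The RMSE assessment can be sketched as a hold-out comparison: withhold a subset of the known depths or elevations, interpolate from the rest with each candidate method, and compare the errors at the withheld points. The IDP interpolator below is a simple stand-in; Kriging, MQ and TPS would be plugged into the same harness.

```python
import numpy as np

def holdout_rmse(xy, z, interpolate, holdout_fraction=0.2, seed=0):
    """RMSE of an interpolation method at randomly withheld points.

    interpolate(train_xy, train_z, test_xy) must return predictions at test_xy.
    """
    rng = np.random.default_rng(seed)
    n = len(z)
    test = rng.choice(n, size=int(holdout_fraction * n), replace=False)
    train = np.setdiff1d(np.arange(n), test)
    pred = interpolate(xy[train], z[train], xy[test])
    return float(np.sqrt(np.mean((pred - z[test]) ** 2)))

def idp(train_xy, train_z, test_xy, power=2.0):
    """Inverse Distance to Power, one of the four compared methods."""
    d = np.hypot(test_xy[:, None, 0] - train_xy[None, :, 0],
                 test_xy[:, None, 1] - train_xy[None, :, 1])
    w = 1.0 / (d ** power + 1e-12)
    return (w * train_z).sum(axis=1) / w.sum(axis=1)

# Hypothetical soundings over a smooth synthetic seabed
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
z = -20.0 + 2.0 * np.sin(xy[:, 0]) + 1.5 * np.cos(xy[:, 1])
print(holdout_rmse(xy, z, idp))
```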
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2004-07-01
It is well known that regional climate simulations are sensitive to the size and position of the domain chosen for calculations. Here we study the physical mechanisms of this sensitivity. We conducted simulations with the Regional Atmospheric Modeling System (RAMS) for June 2000 over North America at 50 km horizontal resolution using a 7500 km × 5400 km grid and NCEP/NCAR reanalysis as boundary conditions. The position of the domain was displaced in several directions, always maintaining the U.S. in the interior, out of the buffer zone along the lateral boundaries. Circulation biases developed a large scale structure, organized by the Rocky Mountains, resulting from a systematic shifting of the synoptic wave trains that crossed the domain. The distortion of the large-scale circulation was produced by interaction of the modeled flow with the lateral boundaries of the nested domain and varied when the position of the grid was altered. This changed the large-scale environment among the different simulations and translated into diverse conditions for the development of the mesoscale processes that produce most of precipitation for the Great Plains in the summer season. As a consequence, precipitation results varied, sometimes greatly, among the experiments with the different grid positions. To eliminate the dependence of results on the position of the domain, we used spectral nudging of waves longer than 2500 km above the boundary layer. Moisture was not nudged at any level. This constrained the synoptic scales to follow reanalysis while allowing the model to develop the small-scale dynamics responsible for the rainfall. Nudging of the large scales successfully eliminated the variation of precipitation results when the grid was moved. We suggest that this technique is necessary for all downscaling studies with regional models with domain sizes of a few thousand kilometers and larger embedded in global models.
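For context, a minimal 1-D sketch of the spectral nudging idea: relax only the Fourier components with wavelengths longer than the cutoff (2500 km above) toward the driving field, leaving shorter scales free. The real implementation is multi-dimensional, is applied above the boundary layer, excludes moisture, and acts within the model time stepping; the nudging strength alpha here is an arbitrary illustrative value.

```python
import numpy as np

def spectral_nudge(model_field, driving_field, dx_km, cutoff_km=2500.0, alpha=0.1):
    """Nudge the long-wave part of a periodic 1-D field toward the driving field.

    alpha is the fraction of the long-wave difference removed per call.
    """
    n = model_field.size
    freq = np.fft.rfftfreq(n, d=dx_km)              # cycles per km
    long_wave = freq < 1.0 / cutoff_km              # wavelengths longer than cutoff
    fm = np.fft.rfft(model_field)
    fd = np.fft.rfft(driving_field)
    fm[long_wave] += alpha * (fd[long_wave] - fm[long_wave])
    return np.fft.irfft(fm, n=n)

# Hypothetical zonal wind on a 50-km grid: the model drifts only in the long waves
x = np.arange(0.0, 7500.0, 50.0)
driving = 10.0 * np.sin(2 * np.pi * x / 3750.0)
model = 8.0 * np.sin(2 * np.pi * x / 3750.0 + 0.3) + 2.0 * np.sin(2 * np.pi * x / 300.0)
nudged = spectral_nudge(model, driving, dx_km=50.0)
print(np.abs(model - driving).max(), np.abs(nudged - driving).max())  # long-wave error shrinks
```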
NASA Astrophysics Data System (ADS)
Lebassi-Habtezion, Bereket; Diffenbaugh, Noah S.
2013-10-01
The potential importance of local-scale climate phenomena motivates development of approaches to enable computationally feasible nonhydrostatic climate simulations. To that end, we evaluate the potential viability of nested nonhydrostatic model approaches, using the summer climate of the western United States (WUSA) as a case study. We use the Weather Research and Forecasting (WRF) model to carry out five simulations of summer 2010. This suite allows us to test differences between nonhydrostatic and hydrostatic resolutions, single and multiple nesting approaches, and high- and low-resolution reanalysis boundary conditions. WRF simulations were evaluated against station observations, gridded observations, and reanalysis data over domains that cover the 11 WUSA states at a nonhydrostatic grid spacing of 4 km and hydrostatic grid spacings of 25 km and 50 km. Results show that the nonhydrostatic simulations more accurately resolve the heterogeneity of surface temperature, precipitation, and wind speed features associated with the topography and orography of the WUSA region. In addition, we find that the simulation in which the nonhydrostatic grid is nested directly within the regional reanalysis exhibits the greatest overall agreement with observational data. Results therefore indicate that further development of nonhydrostatic nesting approaches is likely to yield important insights into the response of local-scale climate phenomena to increases in global greenhouse gas concentrations. However, the biases in regional precipitation, atmospheric circulation, and moisture flux identified in a subset of the nonhydrostatic simulations suggest that alternative nonhydrostatic modeling approaches, such as superparameterization and variable-resolution global nonhydrostatic modeling, will provide important complements to the nested approaches tested here.
Infrasound Monitoring of Local, Regional and Global Events
2007-09-01
...detect and associate signals from the March 9th, 2005 eruption at Mount Saint Helens and locate the event to within 5 km of the caldera. Figure 4 (caption): locations of grid nodes that were automatically associated; these are located within 5 km of the center of the caldera at Mount Saint Helens.
Winter bait stations as a multispecies survey tool
Lacy Robinson; Samuel A. Cushman; Michael K. Lucid
2017-01-01
Winter bait stations are becoming a commonly used technique for multispecies inventory and monitoring, but a technical evaluation of their effectiveness is lacking. Bait stations have three components: carcass attractant, remote camera, and hair snare. Our 22,975 km2 mountainous study area was stratified with a 5 × 5 km sampling grid centered on northern Idaho and...
This paper examines the operational performance of the Community Multiscale Air Quality (CMAQ) model simulations for 2002 - 2006 using both 36-km and 12-km horizontal grid spacing, with a primary focus on the performance of the CMAQ model in predicting wet deposition of sulfate (...
Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids
NASA Astrophysics Data System (ADS)
Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.
2017-12-01
Stabilized gradient-based methods have been proved to be efficient for inverse problems. In these methods, driving the gradient toward zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth weighting function in the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown in Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and its effect is not weakened as depth increases. In addition, a fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consistent inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown in Figure 1(b)). Acknowledgement: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
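For context, the conventional depth-weighted, regularized gradient that the paper improves upon can be sketched for a linear gravity problem as below. This is not the authors' new weighted-model-vector gradient; the weighting exponent beta = 2 and the step length are assumed typical values.

```python
import numpy as np

def conventional_gradient(G, d, m, depths, lam=1e-2, z0=1.0, beta=2.0):
    """Gradient of phi(m) = ||G m - d||^2 + lam * ||W m||^2 with depth weighting
    W = diag((z + z0)^(-beta/2)), the conventional remedy for poor depth resolution.

    G : (n_data, n_cells) linear sensitivity matrix,  d : (n_data,) observed data
    m : (n_cells,) current density model,  depths : (n_cells,) cell depths
    """
    w = (depths + z0) ** (-beta / 2.0)
    residual = G @ m - d
    return 2.0 * G.T @ residual + 2.0 * lam * (w ** 2) * m

# Toy steepest-descent loop on a random linear problem (illustrative only)
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 200))
m_true = np.zeros(200)
m_true[120:130] = 1.0
d = G @ m_true
m = np.zeros(200)
for _ in range(200):
    m -= 1e-4 * conventional_gradient(G, d, m, depths=np.linspace(10.0, 2000.0, 200))
print(np.linalg.norm(G @ m - d) / np.linalg.norm(d))   # relative data misfit decreases
```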
Satellite radar altimetry over ice. Volume 2: Users' guide for Greenland elevation data from Seasat
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Major, Judith A.; Brenner, Anita C.; Bindschadler, Robert A.; Martin, Thomas V.
1990-01-01
A gridded surface-elevation data set and a geo-referenced data base for the Seasat radar altimeter data over Antarctica are described. This document is intended to be a user's guide to accompany the data provided to data centers and other users. The grid points are on a polar stereographic projection with a nominal spacing of 20 km. The gridded elevations are derived from the elevation data in the geo-referenced data base by a weighted fitting of a surface in the neighborhood of each grid point. The gridded elevations are useful for creating smaller-scale contour maps and for examining individual elevation measurements in specific geographic areas. Tape formats are described, and a FORTRAN program for reading the data tape is listed and provided on the tape.
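A minimal sketch of the kind of weighted local surface fit described above, assuming a Gaussian distance weighting and a 20 km search radius (both illustrative, not the documented procedure):

```python
import numpy as np

def grid_node_elevation(node_xy, pts_xy, pts_z, radius=20e3):
    """Estimate the elevation at one grid node by a distance-weighted
    least-squares fit of a plane z = a + b*x + c*y to nearby altimeter points.
    The Gaussian weight and the 20 km search radius are assumptions."""
    d = np.hypot(pts_xy[:, 0] - node_xy[0], pts_xy[:, 1] - node_xy[1])
    sel = d < radius
    if sel.sum() < 3:
        return np.nan                              # not enough neighbours
    x, y, z = pts_xy[sel, 0], pts_xy[sel, 1], pts_z[sel]
    w = np.exp(-(d[sel] / (radius / 2.0)) ** 2)
    A = np.column_stack([np.ones_like(x), x - node_xy[0], y - node_xy[1]])
    coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
    return coef[0]                                 # fitted plane value at the node

# Toy use: scattered altimeter points on a gentle slope (metres).
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100e3, size=(500, 2))
elev = 1000.0 + 0.01 * pts[:, 0] + rng.normal(0.0, 2.0, 500)
print(round(float(grid_node_elevation((50e3, 50e3), pts, elev)), 1))   # ~1500 m
```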
NASA Astrophysics Data System (ADS)
Nhu Y, Do
2018-03-01
Vietnam has considerable wind power resources, and both the capacity and the number of wind power projects in Vietnam continue to grow. With the increasing amount of wind power fed into the national grid, research and analysis are needed to ensure the safety and reliability of wind power connections. In the national distribution grid, voltage sags occur regularly and can strongly affect the operation of wind power plants; the most serious consequence is disconnection. This paper analyzes the transient behaviour of the distribution grid during voltage sags. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
Modelling effects on grid cells of sensory input during self‐motion
Raudies, Florian; Hinman, James R.
2016-01-01
Abstract The neural coding of spatial location for memory function may involve grid cells in the medial entorhinal cortex, but the mechanism of generating the spatial responses of grid cells remains unclear. This review describes some current theories and experimental data concerning the role of sensory input in generating the regular spatial firing patterns of grid cells, and changes in grid cell firing fields with movement of environmental barriers. As described here, the influence of visual features on spatial firing could involve either computations of self‐motion based on optic flow, or computations of absolute position based on the angle and distance of static visual cues. Due to anatomical selectivity of retinotopic processing, the sensory features on the walls of an environment may have a stronger effect on ventral grid cells that have wider spaced firing fields, whereas the sensory features on the ground plane may influence the firing of dorsal grid cells with narrower spacing between firing fields. These sensory influences could contribute to the potential functional role of grid cells in guiding goal‐directed navigation. PMID:27094096
Shen, Wei; Han, Weijian; Wallington, Timothy J
2014-06-17
China's oil imports and greenhouse gas (GHG) emissions have grown rapidly over the past decade. Addressing energy security and GHG emissions is a national priority. Replacing conventional vehicles with electric vehicles (EVs) offers a potential solution to both issues. While the reduction in petroleum use and hence the energy security benefits of switching to EVs are obvious, the GHG benefits are less obvious. We examine the current Chinese electric grid and its evolution and discuss the implications for EVs. China's electric grid will be dominated by coal for the next few decades. In 2015 in Beijing, Shanghai, and Guangzhou, EVs will need to use less than 14, 19, and 23 kWh/100 km, respectively, to match the 183 gCO2/km WTW emissions for energy saving vehicles. In 2020, in Beijing, Shanghai, and Guangzhou EVs will need to use less than 13, 18, and 20 kWh/100 km, respectively, to match the 137 gCO2/km WTW emissions for energy saving vehicles. EVs currently demonstrated in China use 24-32 kWh/100 km. Electrification will reduce petroleum imports; however, it will be very challenging for EVs to contribute to government targets for GHGs emissions reduction.
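The quoted thresholds follow from a simple identity: WTW grams of CO2 per km equal electricity use per km times the grid's WTW carbon intensity. A minimal sketch of that arithmetic; the intensity values below are placeholders, not numbers from the paper:

```python
# Maximum EV electricity use (kWh/100 km) that matches a WTW CO2 target,
# given a grid's WTW carbon intensity. The intensity values are placeholders.

def max_consumption_kwh_per_100km(target_g_per_km, grid_g_per_kwh):
    # target [g/km] = consumption [kWh/km] * intensity [g/kWh]
    return target_g_per_km / grid_g_per_kwh * 100.0

for city, intensity in {"CityA": 1300.0, "CityB": 950.0}.items():  # assumed g CO2/kWh
    print(city, round(max_consumption_kwh_per_100km(183.0, intensity), 1), "kWh/100 km")
```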
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different tiers with several computing centres providing a specific set of services for the different steps of data processing, such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use case.
NASA Technical Reports Server (NTRS)
Homemdemello, Luiz S.
1992-01-01
An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as production system. The global data base consists of a hexagonal grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. By maintaining the multiple hierarchies explicitly in the model, the choice of a particular hierarchy is only made when needed, thus allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph search and backtracking control strategies.
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new stratified sampling procedure is proposed in order to establish an accurate estimate of Varroa destructor populations on sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Since the distribution of varroa mites on sticky boards is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000) which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.
Imaging the Crust and Upper Mantle of Taiwan with Ambient Noise and Full Waveform Tomography
NASA Astrophysics Data System (ADS)
Rodzianko, A.; Roecker, S. W.
2013-12-01
Taiwan is the result of a complex, actively deforming tectonic boundary between the Eurasian and Philippine Sea plates that provides an excellent venue for investigating processes related to arc-continent collision. The TAIGER (TAiwan Integrated GEodynamics Research) project deployed broadband and short-period seismic stations that observed passive and active sources between 2006 and 2008. We analyze data collected by the TAIGER deployment, supplemented by observations from the permanent BATS (Broadband Array in Taiwan for Seismology) network, to create a 3D elastic wave velocity model of the crust and upper mantle beneath Taiwan. We start by applying ambient noise tomography techniques on the dataset to create a 3D Vs model. The vertical component of continuous ambient noise is whitened and cross-correlated between stations to construct empirical Green's functions (EGFs) of Rayleigh waves, which are graded by the signal-to-noise ratio (SNR) prior to recovering group and phase velocities of the fundamental mode for periods between 6 and 30 seconds. We invert for group and phase velocity maps on a regular grid with 5 km spacing and combine the results to generate a 3D Vs model. This model, combined with the arrival time model of Hao et al. (2012), is used as the starting model for full waveform inversion (FWI) of teleseismic body and surface waves using the 2.5D technique of Roecker et al. (2010). We find that below the Central Mountain Range, the crust thickens with the Moho at ~50 km depth and with S-wave speeds ~3.0 km/s, indicating a deep crustal root. The western half of the island is generally characterized by a thinner crust and relatively lower S-wave velocities.
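A minimal numpy sketch of the whitening and cross-correlation step used to build empirical Green's functions; the whitening scheme, window handling and synthetic traces are illustrative assumptions:

```python
import numpy as np

def whiten(trace):
    """Spectral whitening: keep the phase, flatten the amplitude spectrum."""
    spec = np.fft.rfft(trace)
    return np.fft.irfft(spec / (np.abs(spec) + 1e-12), n=len(trace))

def noise_cross_correlation(tr1, tr2, max_lag):
    """Cross-correlate two whitened noise records; stacking many such windows
    approximates the empirical Green's function between the two stations."""
    w1, w2 = whiten(tr1), whiten(tr2)
    full = np.correlate(w1, w2, mode="full")
    mid = len(full) // 2
    return full[mid - max_lag: mid + max_lag + 1]

# Toy example: two records sharing a delayed random wavefield.
rng = np.random.default_rng(2)
common = rng.normal(size=10_000)
tr1 = common + 0.1 * rng.normal(size=10_000)
tr2 = np.roll(common, 25) + 0.1 * rng.normal(size=10_000)
cc = noise_cross_correlation(tr1, tr2, max_lag=100)
print("lag of peak:", np.argmax(cc) - 100)        # peak near -25 samples
```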
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martini, Matus N.; Gustafson, William I.; Yang, Qing
2014-11-18
Organized mesoscale cellular convection (MCC) is a common feature of marine stratocumulus that forms in response to a balance between mesoscale dynamics and smaller scale processes such as cloud radiative cooling and microphysics. We use the Weather Research and Forecasting model with chemistry (WRF-Chem) and fully coupled cloud-aerosol interactions to simulate marine low clouds during the VOCALS-REx campaign over the southeast Pacific. A suite of experiments with 3- and 9-km grid spacing indicates resolution-dependent behavior. The simulations with finer grid spacing have smaller liquid water paths and cloud fractions, while cloud tops are higher. The observed diurnal cycle is reasonably well simulated. To isolate organized MCC characteristics we develop a new automated method, which uses a variation of the watershed segmentation technique that combines the detection of cloud boundaries with a test for coincident vertical velocity characteristics. This ensures that the detected cloud fields are dynamically consistent for closed MCC, the most common MCC type over the VOCALS-REx region. We demonstrate that the 3-km simulation is able to reproduce the scaling between horizontal cell size and boundary layer height seen in satellite observations. However, the 9-km simulation is unable to resolve smaller circulations corresponding to shallower boundary layers, instead producing an invariant MCC horizontal scale for all simulated boundary layer depths. The results imply that climate models with grid spacing of roughly 3 km or smaller may be needed to properly simulate the MCC structure in the marine stratocumulus regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using a variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through the large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
NASA Astrophysics Data System (ADS)
Mohaideen, M. M. Diwan; Varija, K.
2018-05-01
This study investigates the potential and applicability of the variable infiltration capacity (VIC) hydrological model to simulate different hydrological components of the Upper Bhima basin under two different Land Use Land Cover (LULC) conditions (the years 2000 and 2010). The total drainage area of the basin was discretized into 1694 grid cells of about 5.5 km by 5.5 km; accordingly, the model parameters were calibrated at each grid level. Vegetation parameters for the model were prepared using the temporal profile of Leaf Area Index (LAI) from the Moderate-Resolution Imaging Spectroradiometer and LULC. This practice provides a methodological framework for improved vegetation parameterization along with region-specific conditions for the model simulation. The calibrated and validated model was run using the two LULC conditions separately with the same observed meteorological forcing (1996-2001) and soil data. The change in LULC resulted in an increase in the average annual evapotranspiration over the basin by 7.8%, while the average annual surface runoff and baseflow decreased by 18.86 and 5.83%, respectively. The variability in hydrological components and the spatial variation of each component attributed to LULC were assessed at the basin grid level. It was observed that 80% of the basin grid cells showed an increase in evapotranspiration (ET) (maximum of 292 mm). While the majority of the grid cells showed a decrease in surface runoff and baseflow, some of the grid cells showed an increase (i.e. 21 and 15% of total grid cells for surface runoff and baseflow, respectively).
NASA Astrophysics Data System (ADS)
Vargas, Marco; Miura, Tomoaki; Csiszar, Ivan; Zheng, Weizhong; Wu, Yihua; Ek, Michael
2017-04-01
The first Joint Polar Satellite System (JPSS) mission, the Suomi National Polar-orbiting Partnership (S-NPP) satellite, was successfully launched in October 2011, and it will be followed by JPSS-1, slated for launch in 2017. JPSS provides operational continuity of satellite-based observations and products for NOAA's Polar Operational Environmental Satellites (POES). Vegetation products derived from satellite measurements are used for weather forecasting, land modeling, climate research, and monitoring the environment including drought, the health of ecosystems, crop monitoring and forest fires. The operationally produced S-NPP VIIRS Vegetation Index (VI) Environmental Data Record (EDR) includes two vegetation indices: the Top of the Atmosphere (TOA) Normalized Difference Vegetation Index (NDVI), and the Top of the Canopy (TOC) Enhanced Vegetation Index (EVI). For JPSS-1, the S-NPP Vegetation Index EDR algorithm has been updated to include the TOC NDVI. The current JPSS operational VI products are generated in granule style at 375 meter resolution at nadir, but these products in granule format cannot be ingested into NOAA operational monitoring and decision making systems. For that reason, the NOAA JPSS Land Team is developing a new global gridded Vegetation Index (VI) product suite for operational use by the NOAA National Centers for Environmental Prediction (NCEP). The new global gridded VIs will be used in the Multi-Physics (MP) version of the Noah land surface model (Noah-MP) in the NCEP NOAA Environmental Modeling System (NEMS) for plant growth and data assimilation and to describe vegetation coverage and density in order to model the correct surface energy partition. The new 4 km resolution global gridded VI products (TOA NDVI, TOC NDVI and TOC EVI) are being designed to meet the needs of directly ingesting vegetation index variables without the need to develop local gridding and compositing procedures. These VI products will be consistent with the already operational S-NPP VIIRS Green Vegetation Fraction (GVF) global gridded 4 km resolution product. The ultimate goal is a globally consistent set of gridded land products at 1 km resolution to enable consistent use of the products in the full suite of global and regional NCEP land models. The new JPSS vegetation products system is scheduled to transition to operations in the fall of 2017.
NASA Astrophysics Data System (ADS)
Zhang, B.; LI, Z.; Chu, R.
2015-12-01
Ambient noise has been proven particularly effective in imaging Earth's crust and uppermost mantle on local, regional and global scales, as well as in monitoring temporal variations of the Earth's interior and determining earthquake ground-truth locations. Previous studies have also shown that the Microtremor Survey Method is effective for mapping shallow crustal structure. In order to obtain the shallow crustal velocity structure beneath the Wudalianchi Weishan volcano area, an array of 29 new cable-free digital geophones was deployed for three days at the test site (3 km × 3 km) to record seismic noise continuously. Weishan volcano is located in the far north of the Wudalianchi Volcanoes; the volcanic cone is composed of basaltic lava and the volcano area is covered by a Quaternary sediment layer (gray and black loam, brown and yellow loam, sandy loam). An accurate shallow crustal structure, particularly a sedimentary structure model, can improve the accuracy of volcanic earthquake locations and structural imaging. We use the ESPAC method, one of the Microtremor Survey Methods, to calculate surface wave phase velocity dispersion curves between station pairs. A generalized 2-D linear inversion code named Surface Wave Tomography (SWT) is adopted to invert for phase velocity tomographic maps in the 2-5 Hz band. On the basis of a series of numerical tests, the study region is parameterized with a grid spacing of 0.1 km × 0.1 km; all damping and regularization parameters are set properly to ensure relatively smooth results and small data misfits. We constructed a 3D shallow crustal S-wave velocity model in the area by inverting the phase velocity dispersion curves at each node, adopting the iterative linearized least-squares inversion scheme of surf96. The tomography model is useful in interpreting volcanic features.
SoilInfo App: global soil information on your palm
NASA Astrophysics Data System (ADS)
Hengl, Tomislav; Mendes de Jesus, Jorge
2015-04-01
ISRIC - World Soil Information released in 2014 an app for mobile devices called 'SoilInfo' (http://soilinfo-app.org), which aims at providing free access to global soil data. The SoilInfo App (available for Android v.4.0 Ice Cream Sandwich or higher, and Apple iOS v.6.x and v.7.x) currently serves the SoilGrids1km data - a stack of soil property and class maps at six standard depths at a resolution of 1 km (30 arc seconds) predicted using automated geostatistical mapping and global soil data models. The list of served soil data includes: soil organic carbon, soil pH, sand, silt and clay fractions (%), bulk density (kg/m3), cation exchange capacity of the fine earth fraction (cmol+/kg), coarse fragments (%), World Reference Base soil groups, and USDA Soil Taxonomy suborders (DOI: 10.1371/journal.pone.0105992). New soil properties and classes will be continuously added to the system. SoilGrids1km data are available for download under a Creative Commons non-commercial license via http://soilgrids.org. They are also accessible via a Representational State Transfer (REST) API service (http://rest.soilgrids.org). The SoilInfo App mimics common weather apps, but is also largely inspired by crowdsourcing systems such as OpenStreetMap and Geo-wiki. Two development aspects of the SoilInfo App and SoilGrids are constantly being worked on: data quality, in terms of the accuracy of spatial predictions and derived information, and data usability, in terms of ease of access and ease of use (i.e. flexibility of the cyberinfrastructure and functionalities such as the REST SoilGrids API and the SoilInfo App). The development focus in 2015 is on improving the thematic and spatial accuracy of SoilGrids predictions, primarily by using finer-resolution covariates (250 m) and machine learning algorithms (such as random forests) to improve spatial predictions.
Hexagonal Pixels and Indexing Scheme for Binary Images
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
2004-01-01
A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30 deg, 90 deg, and 150 deg with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90 deg, then again sample onto the hexagonal grid and check for line segments at angles of 0 deg, 60 deg, and 120 deg to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30 deg. For even finer angular resolution, one could, for example, then rotate the rectangular-grid image +/-45 deg before sampling to check for line segments at angular intervals of 15 deg.
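A minimal sketch of the resampling step described above: hexagonal cell centres spaced at one third of the rectangular cell width, each taking the binary value of the underlying rectangular pixel. The row-offset layout and nearest-pixel assignment are illustrative assumptions, not the documented indexing scheme:

```python
import numpy as np

def resample_to_hex(binary_img, rect_cell=1.0):
    """Resample a binary image from a rectangular grid onto a regular hexagonal
    grid whose cell width is one third of the rectangular cell width (fine
    enough to keep all binary-image detail, as described above)."""
    hex_w = rect_cell / 3.0                       # hexagonal cell width
    dy = hex_w * np.sqrt(3.0) / 2.0               # vertical spacing of hex rows
    ny, nx = binary_img.shape
    centres, values = [], []
    y, row = 0.0, 0
    while y < ny * rect_cell:
        x = 0.0 if row % 2 == 0 else hex_w / 2.0  # offset alternate rows
        while x < nx * rect_cell:
            i = min(int(y / rect_cell), ny - 1)   # nearest rectangular pixel
            j = min(int(x / rect_cell), nx - 1)
            centres.append((x, y))
            values.append(binary_img[i, j])
            x += hex_w
        y += dy
        row += 1
    return np.array(centres), np.array(values)

img = np.zeros((4, 4), dtype=int)
img[1:3, 1:3] = 1                                 # tiny test image
centres, vals = resample_to_hex(img)
print(len(vals), int(vals.sum()))
```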
NASA Astrophysics Data System (ADS)
Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.
2013-03-01
The coupled model LMDzORINCA has been used to simulate the transport, wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that reason, two horizontal resolutions were deployed and used in the model, a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, respectively, extending up to the mesopause. Four different simulations are presented in this work; the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)), the second also uses the regular grid over 19 vertical levels but realistic source injection heights (RG19L); in the third resolution the grid is regular and the vertical resolution 39 vertical levels (RG39L) and finally, it is extended to the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition for 137Cs from most of the European countries. According to the results, the performance of the model in predicting the transport and deposition of the radioactive tracer was efficient and accurate, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to Atlas), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39 layers run due to the increase in the number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good agreement with what was previously reported and in the same range as for the recent accident in Japan. The good performance of the LMDzORINCA model for 137Cs reinforces the importance of atmospheric modeling in emergency cases to gather information for protecting the population from the adverse effects of radiation.
NASA Astrophysics Data System (ADS)
Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.
2013-07-01
The coupled model LMDZORINCA has been used to simulate the transport, wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that reason, two horizontal resolutions were deployed and used in the model, a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work; the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)), the second also uses the regular grid over 19 vertical levels but realistic source injection heights (RG19L); in the third resolution the grid is regular and the vertical resolution 39 levels (RG39L) and finally, it is extended to the stretched grid with 19 vertical levels (Z19L). The model is validated with the Chernobyl accident which occurred in Ukraine (ex-USSR) on 26 April 1986 using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition for 137Cs from most of the European countries. According to the results, the performance of the model in predicting the transport and deposition of the radioactive tracer was efficient and accurate, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39 layers run due to the increase in the number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good agreement with what was previously reported and in the same range as for the recent accident in Japan. The good performance of the LMDZORINCA model for 137Cs reinforces the importance of atmospheric modelling in emergency cases to gather information for protecting the population from the adverse effects of radiation.
The CMAQ modeling system has been used to simulate the CONUS using 12-km by 12-km horizontal grid spacing for the entire year of 2006 as part of the Air Quality Model Evaluation International initiative (AQMEII). The operational model performance for O3 and PM2.5<...
Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2016-11-01
We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instant as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and the formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
MISR Level 2 TOA/Cloud Classifier parameters (MIL2TCCL_V2)
NASA Technical Reports Server (NTRS)
Diner, David J. (Principal Investigator)
The TOA/Cloud Classifiers contain the Angular Signature Cloud Mask (ASCM) and a scene classifier calculated using support vector machine (SVM) technology, both of which are on a 1.1 km grid, and cloud fractions at 17.6 km resolution that are available in different height bins (low, middle, high) and are also calculated on an angle-by-angle basis. [Location=GLOBAL] [Temporal_Coverage: Start_Date=2000-02-24; Stop_Date=] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=17.6 km; Longitude_Resolution=17.6 km; Horizontal_Resolution_Range=10 km - < 50 km or approximately .09 degree - < .5 degree; Temporal_Resolution=about 15 orbits/day; Temporal_Resolution_Range=Daily - < Weekly].
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
NASA Technical Reports Server (NTRS)
Lambert, Winnie; Sharp, David; Spratt, Scott; Volkmer, Matthew
2005-01-01
Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC (0700 AM EST) each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to increase consistency between forecasters while enabling them to focus on the mesoscale detail of the forecast, ultimately benefiting the end-users of the product. Several studies took place at Florida State University (FSU) and NWS Tallahassee (TAE) in which daily flow regimes were created using Florida 1200 UTC synoptic soundings and CG strike densities from National Lightning Detection Network (NLDN) data. The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at the two organizations provided this data and supporting software for the work performed by the AMU. The densities were first stratified by flow regime, then by time in 1-, 3-, 6-, 12-, and 24-hour increments while maintaining the 2.5 km x 2.5 km grid resolution. A CG frequency of occurrence was calculated for each stratification and grid box by counting the number of days with lightning and dividing by the total number of days in the data set. New CG strike densities were calculated for each stratification and grid box by summing the strike number values over all warm seasons, then normalized by dividing the summed values by the number of lightning days. This makes the densities conditional on whether lightning occurred. The frequency climatology values will be used by forecasters as proxy inputs for lightning probability, while the density climatology values will be used for CG amount.
In addition to the benefits outlined above, these climatologies will provide improved temporal and spatial resolution, expansion of the lightning threat area to include adjacent coastal waters, and the potential to extend the forecast to include the day-2 period. This presentation will describe the lightning threat index map, discuss the work done to create the maps initialized with climatological guidance, and show examples of the climatological CG lightning densities and frequencies of occurrence based on flow regime.
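The two climatology quantities described above reduce to simple per-grid-box ratios. A minimal numpy sketch under assumed array names and synthetic counts (not the NLDN data or the AMU's code):

```python
import numpy as np

# daily_strikes: CG strike counts per day and per 2.5 km x 2.5 km grid box for
# one flow regime, shape (n_days, ny, nx); synthetic values stand in for NLDN data.
rng = np.random.default_rng(3)
daily_strikes = rng.poisson(0.3, size=(120, 40, 40))

lightning_day = daily_strikes > 0                     # any CG strike that day
n_days = daily_strikes.shape[0]

# Frequency of occurrence: lightning days divided by total days, per grid box.
frequency = lightning_day.sum(axis=0) / n_days

# Strike density conditional on lightning: total strikes divided by the number
# of lightning days, per grid box (zero where lightning never occurred).
n_lightning_days = lightning_day.sum(axis=0)
cond_density = np.where(n_lightning_days > 0,
                        daily_strikes.sum(axis=0) / np.maximum(n_lightning_days, 1),
                        0.0)

print(round(float(frequency.mean()), 3), round(float(cond_density.mean()), 3))
```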
NASA Astrophysics Data System (ADS)
Tscherning, Carl Christian; Arabelos, Dimitrios; Reguzzoni, Mirko
2013-04-01
The GOCE satellite measures gravity gradients which are filtered and transformed into gradients in an Earth-referenced frame by the GOCE High Level Processing Facility. More than 80,000,000 observations with 6 components are available from the period 2009-2011. IAG Arctic gravity was used north of 83 deg., while Antarctic data were not used due to bureaucratic restrictions imposed by the data holders. Subsets of the data have been used to produce gridded values, at 10 km altitude, of gravity anomalies and vertical gravity gradients in 20 deg. x 20 deg. blocks with 10' spacing. Various combinations and densities of data were used to obtain values in areas with known gravity anomalies. The (marginally) best choice was vertical gravity gradients selected with an approximately 0.125 deg spacing. Using Least-Squares Collocation, error estimates were computed and compared to the difference between the GOCE grids and grids derived from EGM2008 to deg. 512. In general good agreement was found, however with some inconsistencies in certain areas. The computation time on a typical server with 24 processors was about 100 minutes for a block with generally 40,000 GOCE vertical gradients as input. The computations will be updated with new Wiener-filtered data in the near future.
Urbanization and the more-individuals hypothesis.
Chiari, Claudia; Dinetti, Marco; Licciardello, Cinzia; Licitra, Gaetano; Pautasso, Marco
2010-03-01
1. Urbanization is a landscape process affecting biodiversity world-wide. Despite many urban-rural studies of bird assemblages, it is still unclear whether more species-rich communities have more individuals, regardless of the level of urbanization. The more-individuals hypothesis assumes that species-rich communities have larger populations, thus reducing the chance of local extinctions. 2. Using newly collated avian distribution data for 1 km(2) grid cells across Florence, Italy, we show a significantly positive relationship between species richness and assemblage abundance for the whole urban area. This richness-abundance relationship persists for the 1 km(2) grid cells with less than 50% of urbanized territory, as well as for the remaining grid cells, with no significant difference in the slope of the relationship. These results support the more-individuals hypothesis as an explanation of patterns in species richness, also in human modified and fragmented habitats. 3. However, the intercept of the species richness-abundance relationship is significantly lower for highly urbanized grid cells. Our study confirms that urban communities have lower species richness but counters the common notion that assemblages in densely urbanized ecosystems have more individuals. In Florence, highly inhabited areas show fewer species and lower assemblage abundance. 4. Urbanized ecosystems are an ongoing large-scale natural experiment which can be used to test ecological theories empirically.
Evaluating Mesoscale Simulations of the Coastal Flow Using Lidar Measurements
NASA Astrophysics Data System (ADS)
Floors, R.; Hahmann, A. N.; Peña, A.
2018-03-01
The atmospheric flow in the coastal zone is investigated using lidar and mast measurements and model simulations. Novel dual-Doppler scanning lidars were used to investigate the flow over a 7 km transect across the coast, and vertically profiling lidars were used to study the vertical wind profile at offshore and onshore positions. The Weather Research and Forecasting model is set up in 12 different configurations using 2 planetary boundary layer schemes, 3 horizontal grid spacings and varied sources of land use, and initial and lower boundary conditions. All model simulations describe the observed mean wind profile well at different onshore and offshore locations from the surface up to 500 m. The simulated mean horizontal wind speed gradient across the shoreline is close to that observed, although all simulations show wind speeds that are slightly higher than those observed. Inland at the lowest observed height, the model has the largest deviations compared to the observations. Taylor diagrams show that using ERA-Interim data as boundary conditions improves the model skill scores. Simulations with 0.5 and 1 km horizontal grid spacing show poorer model performance compared to those with a 2 km spacing, partially because smaller resolved wavelengths degrade standard error metrics. Modeled and observed velocity spectra were compared and showed that simulations with the finest horizontal grid spacing resolved more high-frequency atmospheric motion.
The interpretation of remotely sensed cloud properties from a model parameterization perspective
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Wielicki, Bruce A.; Ginger, Kathryn M.
1994-01-01
A study has been made of the relationship between mean cloud radiative properties and cloud fraction in stratocumulus cloud systems. The analysis is of several Land Resources Satellite System (LANDSAT) images and three-hourly International Satellite Cloud Climatology Project (ISCCP) C-1 data during daylight hours for two grid boxes covering an area typical of a general circulation model (GCM) grid increment. Cloud properties were inferred from the LANDSAT images using two thresholds and several pixel resolutions ranging from roughly 0.0625 km to 8 km. At the finest resolution, the analysis shows that mean cloud optical depth (or liquid water path) increases somewhat with increasing cloud fraction up to 20% cloud coverage. More striking, however, is the lack of correlation between the two quantities for cloud fractions between roughly 0.2 and 0.8. When the scene is essentially overcast, the mean cloud optical depth tends to be higher. Coarse resolution LANDSAT analysis and the ISCCP 8-km data show a lack of correlation between mean cloud optical depth and cloud fraction for coverage less than about 90%. This study shows that there is perhaps a local mean liquid water path (LWP) associated with partly cloudy areas of stratocumulus clouds. A method has been suggested to use this property to construct the cloud fraction parameterization in a GCM when the model computes a grid-box-mean LWP.
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
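For contrast with CV-SES, a minimal scikit-learn sketch of the two-parameter grid-search baseline the paper compares against, expressing the two cost/regularization parameters as C and a minority-class weight (the parameter ranges and the toy data are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Imbalanced toy problem standing in for a cost-sensitive task.
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)

# Two-parameter grid: overall regularization C and the weight on the
# minority class (a stand-in for the second cost parameter of CS-SVM).
param_grid = {
    "C": np.logspace(-2, 2, 5),
    "class_weight": [{1: w} for w in (1, 2, 5, 10)],
}

search = GridSearchCV(SVC(kernel="linear"), param_grid, cv=5,
                      scoring="balanced_accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```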
3D Cloud Field Prediction using A-Train Data and Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Johnson, C. L.
2017-12-01
Validation of cloud process parameterizations used in global climate models (GCMs) would greatly benefit from observed 3D cloud fields at the size comparable to that of a GCM grid cell. For the highest resolution simulations, surface grid cells are on the order of 100 km by 100 km. CloudSat/CALIPSO data provides 1 km width of detailed vertical cloud fraction profile (CFP) and liquid and ice water content (LWC/IWC). This work utilizes four machine learning algorithms to create nonlinear regressions of CFP, LWC, and IWC data using radiances, surface type and location of measurement as predictors and applies the regression equations to off-track locations generating 3D cloud fields for 100 km by 100 km domains. The CERES-CloudSat-CALIPSO-MODIS (C3M) merged data set for February 2007 is used. Support Vector Machines, Artificial Neural Networks, Gaussian Processes and Decision Trees are trained on 1000 km of continuous C3M data. Accuracy is computed using existing vertical profiles that are excluded from the training data and occur within 100 km of the training data. Accuracy of the four algorithms is compared. Average accuracy for one day of predicted data is 86% for the most successful algorithm. The methodology for training the algorithms, determining valid prediction regions and applying the equations off-track is discussed. Predicted 3D cloud fields are provided as inputs to the Ed4 NASA LaRC Fu-Liou radiative transfer code and resulting TOA radiances compared to observed CERES/MODIS radiances. Differences in computed radiances using predicted profiles and observed radiances are compared.
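A minimal scikit-learn sketch of the regression setup described above, with a decision tree mapping predictor channels to a multi-level cloud-fraction profile; the array shapes and synthetic data are illustrative, not the C3M layout or the four algorithms actually compared:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-ins: 2000 along-track samples, 8 predictor channels
# (radiances, surface type, location), 50-level cloud-fraction profiles.
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 8))
y = np.clip(X @ rng.normal(size=(8, 50)) * 0.1 + 0.5, 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeRegressor(max_depth=8).fit(X_tr, y_tr)   # multi-output regression

# Applying the fitted regression "off-track": any new predictor vector
# yields a full vertical cloud-fraction profile.
profile = model.predict(X_te[:1])
print(profile.shape, round(float(np.abs(model.predict(X_te) - y_te).mean()), 3))
```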
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
Analyzing Spatial and Temporal Variation in Precipitation Estimates in a Coupled Model
NASA Astrophysics Data System (ADS)
Tomkins, C. D.; Springer, E. P.; Costigan, K. R.
2001-12-01
Integrated modeling efforts at the Los Alamos National Laboratory aim to simulate the hydrologic cycle and study the impacts of climate variability and land use changes on water resources and ecosystem function at the regional scale. The integrated model couples three existing models independently responsible for addressing the atmospheric, land surface, and ground water components: the Regional Atmospheric Model System (RAMS), the Los Alamos Distributed Hydrologic System (LADHS), and the Finite Element and Heat Mass (FEHM). The upper Rio Grande Basin, extending 92,000 km2 over northern New Mexico and southern Colorado, serves as the test site for this model. RAMS uses nested grids to simulate meteorological variables, with the smallest grid over the Rio Grande having 5-km horizontal grid spacing. As LADHS grid spacing is 100 m, a downscaling approach is needed to estimate meteorological variables from the 5km RAMS grid for input into LADHS. This study presents daily and cumulative precipitation predictions, in the month of October for water year 1993, and an approach to compare LADHS downscaled precipitation to RAMS-simulated precipitation. The downscaling algorithm is based on kriging, using topography as a covariate to distribute the precipitation and thereby incorporating the topographical resolution achieved at the 100m-grid resolution in LADHS. The results of the downscaling are analyzed in terms of the level of variance introduced into the model, mean simulated precipitation, and the correlation between the LADHS and RAMS estimates. Previous work presented a comparison of RAMS-simulated and observed precipitation recorded at COOP and SNOTEL sites. The effects of downscaling the RAMS precipitation were evaluated using Spearman and linear correlations and by examining the variance of both populations. The study focuses on determining how the downscaling changes the distribution of precipitation compared to the RAMS estimates. Spearman correlations computed for the LADHS and RAMS cumulative precipitation reveal a disassociation over time, with R equal to 0.74 at day eight and R equal to 0.52 at day 31. Linear correlation coefficients (Pearson) returned a stronger initial correlation of 0.97, decreasing to 0.68. The standard deviations for the 2500 LADHS cells underlying each 5km RAMS cell range from 8 mm to 695 mm in the Sangre de Cristo Mountains and 2 mm to 112 mm in the San Luis Valley. Comparatively, the standard deviations of the RAMS estimates in these regions are 247 mm and 30 mm respectively. The LADHS standard deviations provide a measure of the variability introduced through the downscaling routine, which exceeds RAMS regional variability by a factor of 2 to 4. The coefficient of variation for the average LADHS grid cell values and the RAMS cell values in the Sangre de Cristo Mountains are 0.66 and 0.27, respectively, and 0.79 and 0.75 in the San Luis Valley. The coefficients of variation evidence the uniformity of the higher precipitation estimates in the mountains, especially for RAMS, and also the lower means and variability found in the valley. Additionally, Kolmogorov-Smirnov tests indicate clear spatial and temporal differences in mean simulated precipitation across the grid.
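A minimal scipy/numpy sketch of the comparison statistics quoted above (Spearman and Pearson correlations between downscaled and RAMS precipitation, and coefficients of variation), computed on synthetic placeholder arrays:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Synthetic cumulative precipitation (mm) for matching cells; placeholders only.
rng = np.random.default_rng(5)
rams = rng.gamma(shape=2.0, scale=50.0, size=300)          # RAMS 5 km cells
ladhs = rams * rng.normal(1.0, 0.3, size=300)              # downscaled estimates

rho_s, _ = spearmanr(ladhs, rams)                          # rank correlation
rho_p, _ = pearsonr(ladhs, rams)                           # linear correlation
cv_ladhs = ladhs.std() / ladhs.mean()                      # coefficient of variation
cv_rams = rams.std() / rams.mean()
print(round(rho_s, 2), round(rho_p, 2), round(cv_ladhs, 2), round(cv_rams, 2))
```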
Statistical analysis of NWP rainfall data from Poland
NASA Astrophysics Data System (ADS)
Starosta, Katarzyna; Linkowska, Joanna
2010-05-01
The goal of this work is to summarize the latest results of precipitation verification in Poland. At IMGW, COSMO_PL version 4.0 has been running with the following configuration: 14 km horizontal grid spacing, initial times at 00 UTC and 12 UTC, and a forecast range of 72 h. The model fields were verified against Polish SYNOP stations using a new verification tool. For the accumulated precipitation, the indices FBI, POD, FAR and ETS are calculated from the contingency table. In this paper the comparison of monthly and seasonal verification of 6 h, 12 h and 24 h accumulated precipitation in 2009 is presented. From February 2010, the model will also be run with 7 km grid spacing at IMGW, and the results of precipitation verification for the two model resolutions will be shown.
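The four indices named above have standard contingency-table definitions (hits a, false alarms b, misses c, correct negatives d); a minimal sketch with illustrative counts:

```python
def verification_scores(a, b, c, d):
    """Standard contingency-table scores for a precipitation threshold:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    n = a + b + c + d
    fbi = (a + b) / (a + c)                            # frequency bias index
    pod = a / (a + c)                                  # probability of detection
    far = b / (a + b)                                  # false alarm ratio
    a_random = (a + b) * (a + c) / n                   # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)      # equitable threat score
    return fbi, pod, far, ets

print(verification_scores(a=60, b=25, c=20, d=395))    # illustrative counts
```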
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boville, B.A.; Randel, W.J.
1992-05-01
Equatorially trapped wave modes, such as Kelvin and mixed Rossby-gravity waves, are believed to play a crucial role in forcing the quasi-biennial oscillation (QBO) of the lower tropical stratosphere. This study examines the ability of a general circulation model (GCM) to simulate these waves and investigates the changes in the wave properties as a function of the vertical resolution of the model. The simulations produce a stratopause-level semiannual oscillation but not a QBO. An unfortunate property of the equatorially trapped waves is that they tend to have small vertical wavelengths (≤ 15 km). Some of the waves, believed to be important in forcing the QBO, have wavelengths as short as 4 km. The short vertical wavelengths pose a stringent computational requirement for numerical models whose vertical grid spacing is typically chosen based on the requirements for simulating extratropical Rossby waves (which have much longer vertical wavelengths). This study examines the dependence of the equatorial wave simulation on vertical resolution using three experiments with vertical grid spacings of approximately 2.8, 1.4, and 0.7 km. Several Kelvin, mixed Rossby-gravity, and inertio-gravity waves are identified in the simulations. At high vertical resolution, the simulated waves are shown to correspond fairly well to the available observations. The properties of the relatively slow (and vertically short) waves believed to play a role in the QBO vary significantly with vertical resolution. Vertical grid spacings of about 1 km or less appear to be required to represent these waves adequately. The simulated wave amplitudes are at least as large as observed, and the waves are absorbed in the lower stratosphere, as required in order to force the QBO. However, the EP flux divergence associated with the waves is not sufficient to explain the zonal flow accelerations found in the QBO. 39 refs., 17 figs., 1 tab.
Grid Data Management and Customer Demands at MeteoSwiss
NASA Astrophysics Data System (ADS)
Rigo, G.; Lukasczyk, Ch.
2010-09-01
Data grids constitute the required input form for a variety of applications. Therefore, customers increasingly expect climate services to not only provide measured data, but also grids of these with the required configurations on an operational basis. Currently, MeteoSwiss is establishing a production chain for delivering data grids by subscription directly from the data warehouse in order to meet the demand for precipitation data grids by governmental, business and science customers. The MeteoSwiss data warehouse runs on an Oracle database linked with an ArcGIS Standard edition geodatabase. The grids are produced by Unix-based software written in R called GRIDMCH, which extracts the station data from the data warehouse and stores the files in the file system. By scripts, the netcdf-v4 files are imported via an FME interface into the database. Currently daily and monthly deliveries of daily precipitation grids are available from MeteoSwiss with a spatial resolution of 2.2 km x 2.2 km. These daily delivered grids are a preliminary product based on 100 measuring sites, whilst the grid of the monthly delivery of daily sums is calculated from about 430 stations. Crucial for uptake by the customers is understanding of, and trust in, the new grid product. While clearly stating needs that can be covered by grid products, customers require a certain lead time to develop applications making use of the particular grid. Therefore, early contacts and continuous attention, as well as flexibility in adjusting the production process to fulfill emerging customer needs, are important during the introduction period. Gridding over complex terrain can lead to temporally elevated uncertainties in certain areas depending on the weather situation and coverage of measurements. Therefore, careful instructions on the quality and use of gridded data, and the possibility to communicate their uncertainties, proved to be essential, especially to the business and science customers who require near-real-time datasets, to build up trust in the product in different applications. The implementation of a new method called RSOI for the daily production allowed the daily precipitation field to be brought up to the expectations of customers. The main uses of the grids were near-real-time and past event analysis in areas scarcely covered with stations, and as inputs for forecast tools and models. Critical success factors of the product were speed of delivery and at the same time accuracy, temporal and spatial resolution, and configuration (coordinate system, projection). To date, grids of archived precipitation data since 1961 and daily/monthly precipitation gridsets with a 4 h delivery lag for Switzerland or subareas are available.
NASA Astrophysics Data System (ADS)
Saranya, K. R. L.; Reddy, C. Sudhakar
2016-04-01
The spatial changes in forest cover of the Similipal Biosphere Reserve, Odisha, India over eight decades (1930-2012) have been quantified using multi-temporal data from different sources. Over this period, forest cover was reduced by 970.8 km2 (23.6% of the total forest), most significantly during 1930-1975. Human-induced activities such as conversion of forest land to agriculture, construction of dams and mining have been identified as the major drivers of deforestation. Spatial analysis indicates that 399 grids (1 grid = 1 × 1 km) underwent large-scale changes in forest cover (>75 ha) during 1930-1975, while only 3 grids showed a loss of >75 ha during 1975-1990. The annual net rate of deforestation was 0.58 during 1930-1975 and was reduced substantially during 1975-1990 (0.04). The annual gross rate of deforestation in 2006-2012 is low (0.01) compared with the national and global averages. This study highlights the impact and effectiveness of conservation practices in minimizing the rate of deforestation and protecting the Similipal Biosphere Reserve.
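The abstract quotes annual rates of deforestation without stating the estimator used. A commonly used compound-rate definition (an assumption here, not quoted from the study) computes the annual rate r from forest areas A1 and A2 observed at times t1 and t2 as:

```latex
% Standard compound annual rate of forest-cover change (assumed definition)
r = \frac{100}{t_2 - t_1}\,\ln\!\left(\frac{A_2}{A_1}\right) \quad [\%\ \mathrm{yr}^{-1}]
```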
NASA Astrophysics Data System (ADS)
Li, Tao; Zheng, Xiaogu; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Zhang, Shupeng; Wu, Guocan; Wang, Zhonglei; Huang, Chengcheng; Shen, Yan; Liao, Rongwei
2014-09-01
As part of a joint effort to construct an atmospheric forcing dataset for mainland China with high spatiotemporal resolution, a new approach is proposed to produce gridded near-surface temperature, relative humidity, wind speed and surface pressure at a resolution of 1 km × 1 km. The approach comprises two steps: (1) fit a partial thin-plate smoothing spline, with orography and reanalysis data as explanatory variables, to ground-based observations in order to estimate a trend surface; (2) apply a simple kriging procedure to the residuals to correct the trend surface. The proposed approach is applied to observations collected at approximately 700 stations over mainland China. The generated forcing fields are compared with the corresponding components of the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis dataset and the Princeton meteorological forcing dataset. The comparison shows that, both within the station network and at the resolutions of the two gridded datasets, the interpolation errors of the proposed approach are markedly smaller than those of the two gridded datasets.
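The two-step structure (trend surface from explanatory variables, then residual interpolation) can be illustrated with a short sketch. This is not the authors' implementation: a multiple linear regression stands in for the partial thin-plate smoothing spline, a Gaussian-process regression with an RBF kernel stands in for simple kriging, and all station data below are synthetic.

```python
# Sketch of "trend surface + residual kriging" interpolation (simplified stand-in).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_sta = 700                                   # roughly the station count cited above
xy = rng.uniform(0, 1000, size=(n_sta, 2))    # station coordinates (km), synthetic
elev = rng.uniform(0, 4000, size=n_sta)       # station elevation (m), synthetic
reanalysis_t = 15 - 0.0065 * elev + rng.normal(0, 1, n_sta)   # synthetic reanalysis temperature
obs_t = 14.5 - 0.0060 * elev + 0.8 * rng.normal(0, 1, n_sta)  # synthetic observations

# Step 1: trend surface from explanatory variables (elevation + reanalysis).
X = np.column_stack([elev, reanalysis_t])
trend = LinearRegression().fit(X, obs_t)
residual = obs_t - trend.predict(X)

# Step 2: interpolate the residuals in space (kriging-like GP) and add them
# back to the trend evaluated at the target grid cells.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(0.1),
                              normalize_y=True).fit(xy, residual)

grid_xy = np.column_stack([g.ravel() for g in np.meshgrid(np.arange(0, 1000, 50),
                                                          np.arange(0, 1000, 50))])
grid_elev = rng.uniform(0, 4000, size=len(grid_xy))           # would come from a DEM
grid_rean = 15 - 0.0065 * grid_elev
grid_t = trend.predict(np.column_stack([grid_elev, grid_rean])) + gp.predict(grid_xy)
```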
Non-LTE Line-Blanketed Model Atmospheres of B-type Stars
NASA Astrophysics Data System (ADS)
Lanz, T.; Hubeny, I.
2005-12-01
We present an extension of our OSTAR2002 grid of NLTE model atmospheres to B-type stars. We have calculated over 1,300 metal line-blanketed, NLTE, plane-parallel, hydrostatic model atmospheres for the basic parameters appropriate to B stars. The grid covers 16 effective temperatures from 15,000 to 30,000 K, with 1000 K steps; 13 surface gravities, from log g = 4.75 down to the Eddington limit; and 5 compositions (2, 1, 0.5, 0.2, and 0.1 times solar). We have adopted a microturbulent velocity of 2 km/s for all models. In the lower surface gravity range (log g ≤ 3.0), we supplemented the main grid with additional model atmospheres accounting for a higher microturbulent velocity (10 km/s) and for altered surface composition (He- and N-rich, C-deficient), as observed in B supergiants. The models incorporate essentially all known atomic levels of 46 ions of H, He, C, N, O, Ne, Mg, Al, Si, S, and Fe, which are grouped into 1127 superlevels. Models and spectra will be available at our Web site, http://nova.astro.umd.edu.
Thermodynamic and liquid profiling during the 2010 Winter Olympics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ware, R.; Cimini, D.; Campos, E.
2013-10-01
Tropospheric observations by a microwave profiling radiometer and six-hour radiosondes were obtained during the Alpine Venue of the 2010 Winter Olympic Games at Whistler, British Columbia, by Environment Canada. The radiometer provided continuous temperature, humidity and liquid (water) profiles during all weather conditions including rain, sleet and snow. Gridded analysis was provided by the U.S. National Oceanic and Atmospheric Administration. We compare more than two weeks of radiometer neural network and radiosonde temperature and humidity soundings including clear and precipitating conditions. Corresponding radiometer liquid and radiosonde wind soundings are shown. Close correlation is evident between radiometer and radiosonde temperature and humidity profiles up to 10 km height and among southwest winds, liquid water and upper level thermodynamics, consistent with up-valley advection and condensation of moist maritime air. We compare brightness temperatures observed by the radiometer and forward-modeled from radiosonde and gridded analysis. Radiosonde-equivalent observation accuracy is demonstrated for radiometer neural network temperature and humidity retrievals up to 800 m height and for variational retrievals that combine radiometer and gridded analysis up to 10 km height.
NASA Astrophysics Data System (ADS)
van Osnabrugge, B.; Weerts, A. H.; Uijlenhoet, R.
2017-11-01
To enable operational flood forecasting and drought monitoring, reliable and consistent methods for precipitation interpolation are needed. Such methods need to deal with the deficiencies of sparse operational real-time data compared to quality-controlled offline data sources used in historical analyses. In particular, often only a fraction of the measurement network reports in near real-time. For this purpose, we present an interpolation method, generalized REGNIE (genRE), which makes use of climatological monthly background grids derived from existing gridded precipitation climatology data sets. We show how genRE can be used to mimic and extend climatological precipitation data sets in near real-time using (sparse) real-time measurement networks in the Rhine basin upstream of the Netherlands (approximately 160,000 km2). In the process, we create a 1.2 × 1.2 km transnational gridded hourly precipitation data set for the Rhine basin. Precipitation gauge data are collected, spatially interpolated for the period 1996-2015 with genRE and inverse-distance squared weighting (IDW), and then evaluated on the yearly and daily time scales against the HYRAS and EOBS climatological data sets. Hourly fields are compared qualitatively with RADOLAN radar-based precipitation estimates. Two sources of uncertainty are evaluated: station density and the impact of different background grids (HYRAS versus EOBS). The results show that the genRE method successfully mimics climatological precipitation data sets (HYRAS/EOBS) over daily, monthly, and yearly time frames. We conclude that genRE is a good choice of interpolation method for real-time operational use. genRE has the largest added value over IDW for cases with a low real-time station density and a high-resolution background grid.
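The abstract does not spell out the genRE formulation; the general idea of interpolating sparse real-time gauges against a climatological background grid can be sketched as follows. The ratio-based formulation and all numbers below are illustrative assumptions, not the published algorithm.

```python
# Simplified sketch of interpolation against a climatological background grid,
# in the spirit of genRE as described above (assumed, illustrative variant).
import numpy as np

def idw(xy_obs, values, xy_grid, power=2, eps=1e-6):
    """Inverse-distance-squared weighting of point values onto grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def background_ratio_interpolation(xy_obs, p_obs, clim_at_obs, xy_grid, clim_grid):
    """Interpolate the ratio gauge/climatology, then rescale the background grid."""
    ratio_obs = p_obs / np.maximum(clim_at_obs, 0.1)       # avoid division by ~0 mm
    ratio_grid = idw(xy_obs, ratio_obs, xy_grid)
    return ratio_grid * clim_grid

# Tiny synthetic example: 5 gauges, 4 grid cells.
xy_obs = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
p_obs = np.array([2.0, 3.5, 1.0, 4.0, 2.5])                # hourly precipitation (mm)
clim_at_obs = np.array([1.5, 2.0, 1.2, 2.5, 1.8])          # background climatology at gauges
xy_grid = np.array([[2., 2.], [8., 2.], [2., 8.], [8., 8.]])
clim_grid = np.array([1.6, 2.1, 1.3, 2.4])                 # background climatology at grid cells
print(background_ratio_interpolation(xy_obs, p_obs, clim_at_obs, xy_grid, clim_grid))
```

In the real method, the background grids come from the HYRAS or EOBS climatologies rather than the synthetic values used here.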
A new method for estimating carbon dioxide emissions from transportation at fine spatial scales
Shu, Yuqin; Reams, Margaret
2016-01-01
Detailed estimates of carbon dioxide (CO2) emissions at fine spatial scales are useful to both modelers and decision makers who are faced with the problem of global warming and climate change. Globally, transport related emissions of carbon dioxide are growing. This letter presents a new method based on the volume-preserving principle in the areal interpolation literature to disaggregate transportation-related CO2 emission estimates from the county-level scale to a 1 km2 grid scale. The proposed volume-preserving interpolation (VPI) method, together with the distance-decay principle, were used to derive emission weights for each grid based on its proximity to highways, roads, railroads, waterways, and airports. The total CO2 emission value summed from the grids within a county is made to be equal to the original county-level estimate, thus enforcing the volume-preserving property. The method was applied to downscale the transportation-related CO2 emission values by county (i.e. parish) for the state of Louisiana into 1 km2 grids. The results reveal a more realistic spatial pattern of CO2 emission from transportation, which can be used to identify the emission ‘hot spots’. Of the four highest transportation-related CO2 emission hotspots in Louisiana, high-emission grids literally covered the entire East Baton Rouge Parish and Orleans Parish, whereas CO2 emission in Jefferson Parish (New Orleans suburb) and Caddo Parish (city of Shreveport) were more unevenly distributed. We argue that the new method is sound in principle, flexible in practice, and the resultant estimates are more accurate than previous gridding approaches. PMID:26997973
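A minimal sketch of the volume-preserving, distance-decay weighting idea described above: the exponential decay form and the parameter values are assumptions for illustration, while the normalization step is what enforces the volume-preserving property named in the text.

```python
# Sketch of volume-preserving downscaling with distance-decay weights.
import numpy as np

def downscale_county_emissions(county_total, dist_to_features_km, decay_scale_km=5.0):
    """Distribute a county-level CO2 total over its grid cells.

    dist_to_features_km : distance from each 1 km2 cell to the nearest transport
                          feature (highway, road, railroad, waterway, airport).
    """
    raw_weight = np.exp(-dist_to_features_km / decay_scale_km)   # assumed decay form
    weight = raw_weight / raw_weight.sum()                       # normalize: volume preserved
    return county_total * weight

cells = np.array([0.2, 0.5, 1.0, 3.0, 8.0])                      # km to nearest feature
grid_emissions = downscale_county_emissions(1.0e6, cells)        # synthetic county total
assert np.isclose(grid_emissions.sum(), 1.0e6)                   # county total is conserved
```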
Meter Scale Heterogeneities in the Oceanic Mantle Revealed in Ophiolites Peridotites
NASA Astrophysics Data System (ADS)
Haller, M. B.; Walker, R. J.; Day, J. M.; O'Driscoll, B.; Daly, J. S.
2016-12-01
Mid-ocean ridge basalts and other oceanic mantle-derived rocks do not capture the depleted endmember isotopic compositions present in oceanic peridotites. Ophiolites are especially useful in interrogating this issue, as field-based observations can be paired with geochemical investigations over a wide range of geologic time. Grid sampling methods (3 m × 3 m) at the 497 Ma Leka Ophiolite Complex (LOC), Norway, and the 1.95 Ga Jormua Ophiolite Complex (JOC), Finland, offer an opportunity to study mantle domains at the meter and kilometer scale, and over a one-billion-year timespan. The lithology of each locality predominantly comprises harzburgite, hosting layers and lenses of dunite and pyroxenite. Here, we combine highly siderophile element (HSE) and Re-Os isotopic analyses of these rocks with major and trace element measurements. Harzburgites at individual LOC grid sites show variations in γOs(497 Ma) (-2.1 to +2.2) at the meter scale. Analyses of adjacent, more radiogenic dunites within the same LOC grid reveal that dunites may have γOs either similar to or different from their host harzburgite, implying that interactions between spatially associated rock types may differ at the meter scale. Averaged γOs values between the mantle sections of two LOC grid sites (+1.3 and -0.4) separated by 5 km indicate km-scale heterogeneity in the convecting upper mantle. Pd/Ir and Ru/Ir ratios are scattered and do not obviously correlate with γOs values. Analyses of pyroxenites within LOC grid sections, thin section observations of relict olivine grains, and whole rock major and trace element data are also examined to shed light on the causes of the isotopic heterogeneities in the LOC. Data from JOC grid sampling will be presented as well.
Examinations of Linkages Between the Northwest Mexican Monsoon and Great Plains Precipitation
NASA Astrophysics Data System (ADS)
Saleeby, S. M.; Cotton, W. R.
2001-12-01
The Regional Atmospheric Modeling System (RAMS) is being used to examine linkages between the Mexican monsoon and precipitation in the Great Plains region of the United States. Currently, the available datasets have allowed for seasonal runs for July and August of the 1993 flood year in the midwest US and of the 1997 El Nino year. There is also a plan to perform a full monsoon-season simulation of the drought summer of 1988 once precipitation data become available. Preliminary results of this ongoing study are presented here. The model configuration consists of a 120-km resolution coarse grid that covers a region from west of Hawaii to Bermuda and from south of the equator up into Canada. Two 40-km resolution nested grids exist, one covering the western two-thirds of the United States and Mexico and the other covering the Pacific ITCZ. A 10-km fine grid and a 2.5-km cloud-resolving grid are spawned over the region of monsoon surges to explicitly resolve convection. The model is initialized with NCEP reanalysis data, surface observations, rawinsonde data, variable soil moisture, and weekly averaged SSTs. RAMS is run with two-stream Harrington radiation, one-moment microphysics, and the Kuo cumulus parameterization. The completed 1993 and 1997 seasonal simulations are now being examined and verified against NCEP reanalysis data and high-resolution precipitation data. Initial model results look promising when verified against the NCEP upper-level fields, in that the model is able to capture the large-scale dynamics. For the duration of both seasonal runs, RAMS successfully simulates the mid- and upper-level geopotential heights, temperature, and winds. The large-scale 700-mb and 500-mb anticyclone over the US and Mexico is resolved, as well as the easterly flow over Mexico. Model fields are also being examined to isolate monsoon surge events, which are characterized by increased precipitation over the Sierra Madres and a northward moisture surge into the northern extent of the Gulf of California and southern Arizona. Within the coarse grids, the RAMS model has successfully resolved the low-level jet that persists in the Gulf of California and the local maximum in mixing ratio that persists over the gulf. It has also captured the upslope flow over the Sierra Madres that forces the moist air to the higher elevations to the east. This provides the necessary lifting and moisture for the development of intense convection and the resulting large amounts of precipitation that occur along the Sierra Madre mountain range. Examination of model-predicted low-level moisture transport reveals that moisture advected from the Gulf of California, rather than the Gulf of Mexico, is the primary monsoon moisture source. Time averages of moisture transport, mixing ratio, winds, and precipitation for July 1993 reveal the prominent diurnal cycle variations that exist due to radiative effects and land-sea interactions; the maximum in convection, precipitation rate, and moisture transport occurs around 00Z. Seasonal accumulated precipitation amounts in the model successfully predict the placement of precipitation and relative amounts for most of the 40-km continental grid, but there is an overestimation of precipitation along the northern Sierra Madre Occidental and an underestimation in the US midwest.
During the 1993 flood summer, much of the midwest US precipitation fell in association with mesoscale convective systems; it is suspected that other cumulus parameterizations may provide better prediction of sub-grid scale convective precipitation. http://hugo.atmos.colostate.edu/www/monsoon/monsoon.html
Estimation of Global 1km-grid Terrestrial Carbon Exchange Part II: Evaluations and Applications
NASA Astrophysics Data System (ADS)
Murakami, K.; Sasai, T.; Kato, S.; Niwa, Y.; Saito, M.; Takagi, H.; Matsunaga, T.; Hiraki, K.; Maksyutov, S. S.; Yokota, T.
2015-12-01
The global terrestrial carbon cycle largely depends on the spatial pattern of land cover type, which is heterogeneously distributed over regional and global scales. Many studies have attempted to reveal the distribution of carbon exchanges between terrestrial ecosystems and the atmosphere in order to understand global carbon cycle dynamics, using terrestrial biosphere models, satellite data, inventory data, and so on. However, most studies have remained at grid spatial resolutions of several tens of kilometers, and the results have not been sufficient to understand the detailed pattern of carbon exchanges based on ecological communities or to evaluate the carbon stocks of forest ecosystems in each country. Improving the spatial resolution is clearly necessary to enhance the accuracy of carbon exchange estimates, and the improvement may also contribute to global warming awareness, policy making and other social activities. We show global terrestrial carbon exchanges (net ecosystem production, net primary production, and gross primary production) at 1 km grid resolution. The methodology for these estimations is shown in the 2015 AGU FM poster "Estimation of Global 1km-grid Terrestrial Carbon Exchange Part I: Developing Inputs and Modelling". In this study, we evaluated the carbon exchanges in various regions against other approaches. We compared our estimates from the satellite-driven biosphere model (BEAMS) with GOSAT L4A CO2 flux data, NEP retrieved by NICAM, and CarbonTracker2013 flux data for the period from June 2001 to December 2012. The temporal patterns over this period show similar trends between BEAMS, GOSAT, NICAM, and CT2013 in many sub-continental regions. We then estimated the terrestrial carbon exchanges in each country and were able to indicate the temporal patterns of the exchanges in large carbon stock regions.
Three-dimensional body-wave model of Nepal using finite difference tomography
NASA Astrophysics Data System (ADS)
Ho, T. M.; Priestley, K.; Roecker, S. W.
2017-12-01
The processes occurring during continent-continent collision are still poorly understood. Ascertaining the seismic properties of the crust and uppermost mantle in such settings provides insight into continental rheology and geodynamics. The most active present-day continent-continent collision is that of India with Eurasia, which has created the Himalayas and the Tibetan Plateau. Nepal provides an ideal laboratory for imaging the crustal processes resulting from the Indo-Eurasia collision. We build body-wave models using local body-wave arrivals picked at stations in Nepal deployed by the Department of Mining and Geology of Nepal. We use the tomographic inversion method of Roecker et al. [2006], the key feature of which is that the travel times are generated using a finite difference solution to the eikonal equation. The advantage of this technique is increased accuracy in the highly heterogeneous medium expected for the Himalayas. Travel times are calculated on a 3D Cartesian grid with a grid spacing of 6 km, and intragrid times are estimated by trilinear interpolation. The gridded area spans 80-90° longitude and 25-30° latitude. For the starting velocity model, we use IASP91. Inversion is performed using the LSQR algorithm. Since the damping parameter can have a significant effect on the final solution, we tested a range of damping parameters to fully explore its effect. Much of the seismicity is clustered to the west of Kathmandu at depths < 30 km. Small areas of markedly fast wavespeeds exist in the centre of the region in the upper 30 km of the crust. At depths of 40-50 km, large areas of slow wavespeeds are present which track along the plate boundary.
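The intragrid travel-time estimation mentioned above is ordinary trilinear interpolation on the regular 3D grid; a minimal sketch is given below. The grid spacing, array layout and the synthetic homogeneous-velocity example are illustrative assumptions, not the study's actual setup.

```python
# Sketch of trilinear interpolation of travel times on a regular 3-D Cartesian grid.
import numpy as np

def trilinear(tt, origin, spacing, point):
    """Interpolate gridded travel times tt[ix, iy, iz] at an arbitrary point."""
    f = (np.asarray(point) - np.asarray(origin)) / spacing   # fractional grid coordinates
    i = np.floor(f).astype(int)
    dx, dy, dz = f - i                                       # offsets in [0, 1)
    ix, iy, iz = i
    c = tt[ix:ix + 2, iy:iy + 2, iz:iz + 2]                  # the 8 surrounding nodes
    # Collapse one axis at a time (x, then y, then z).
    c = c[0] * (1 - dx) + c[1] * dx
    c = c[0] * (1 - dy) + c[1] * dy
    return c[0] * (1 - dz) + c[1] * dz

# Example: synthetic travel-time field on a 6 km grid for a homogeneous 6 km/s medium.
spacing, origin = 6.0, (0.0, 0.0, 0.0)
x = np.arange(0, 120, spacing)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
tt = np.sqrt(X**2 + Y**2 + Z**2) / 6.0
print(trilinear(tt, origin, spacing, (40.0, 13.0, 22.0)))
```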
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E.; Shelly, D. R.
2013-12-01
Observations of non-volcanic tremor have become ubiquitous in recent years. In spite of the abundance of observations, locating tremor remains a difficult task because of the lack of distinctive phase arrivals. Here we use time-reverse-imaging techniques that do not require identifying phase arrivals to locate individual low-frequency earthquakes (LFEs) within tremor episodes on the San Andreas fault near Cholame, California. Time windows of 1.5-second duration containing LFEs are selected from continuously recorded waveforms of the local seismic network filtered between 1-5 Hz. We propagate the time-reversed seismic signal back through the subsurface using a staggered-grid finite-difference code. Assuming all rebroadcasted waveforms result from similar wave fields at the source origin, we search for wave field coherence in time and space to obtain the source location and origin time where the constructive interference is a maximum. We use an interpolated velocity model with a grid spacing of 100 m and a 5 ms time step to calculate the relative curl field energy amplitudes for each rebroadcasted seismogram every 50 ms for each grid point in the model. Finally, we perform a grid search for coherency in the curl field using a sliding time window, taking the absolute value of the correlation coefficient to account for differences in radiation pattern. The highest median cross-correlation coefficient value at a given grid point indicates the source location for the rebroadcasted event. Horizontal location errors based on the spatial extent of the highest 10% of cross-correlation coefficients are on the order of 4 km, and vertical errors are on the order of 3 km. Furthermore, a test of the method using earthquake data shows that the method produces an identical hypocentral location (within errors) as that obtained by standard ray-tracing methods. We also compare the event locations to a LFE catalog that locates the LFEs from stacked waveforms of repeated LFEs identified by cross-correlation techniques [Shelly and Hardebeck, 2010]. The LFE catalog uses stacks of at least several hundred templates to identify phase arrivals used to estimate the location. We find that epicentral locations for individual LFEs based on the time-reverse-imaging technique are within ~4 km of the LFE catalog [Shelly and Hardebeck, 2010]. LFEs locate at 15-25 km depth, similar to focal depths found in previous studies of the region. Overall, the method can provide robust locations of individual LFEs without identifying and stacking hundreds of LFE templates; the locations are also more accurate than envelope location methods, which have errors on the order of tens of km [Horstmann et al., 2013].
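The core of the location step is a grid search for the point of maximum waveform coherence. The sketch below is a heavily simplified stand-in: instead of back-propagating the wavefield with a finite-difference code and measuring curl-field coherence, it aligns traces using travel times for an assumed constant-velocity model and takes the median absolute pairwise correlation.

```python
# Simplified grid search for source location by waveform coherence (illustrative only).
import numpy as np

def coherence_at(grid_point, station_xy, traces, dt, velocity=3.5):
    """Median absolute pairwise correlation of travel-time-aligned traces."""
    dists = np.linalg.norm(station_xy - grid_point, axis=1)
    shifts = np.round(dists / velocity / dt).astype(int)
    n = min(len(tr) - s for tr, s in zip(traces, shifts))
    aligned = np.array([tr[s:s + n] for tr, s in zip(traces, shifts)])
    cc = np.corrcoef(aligned)
    iu = np.triu_indices(len(traces), k=1)
    return np.median(np.abs(cc[iu]))

def locate(grid_points, station_xy, traces, dt):
    scores = [coherence_at(gp, station_xy, traces, dt) for gp in grid_points]
    return grid_points[int(np.argmax(scores))]

# Tiny synthetic check: 4 stations, impulsive arrivals consistent with a source at (10, 10).
rng = np.random.default_rng(0)
station_xy = np.array([[0., 0.], [20., 0.], [0., 20.], [20., 20.]])
dt, src = 0.05, np.array([10., 10.])
traces = []
for sx in station_xy:
    tr = rng.normal(0, 0.05, 400)
    tr[int(np.round(np.linalg.norm(sx - src) / 3.5 / dt)) + 50] = 1.0   # arrival + origin time
    traces.append(tr)
grid = np.array([[x, y] for x in range(0, 21, 5) for y in range(0, 21, 5)], float)
print(locate(grid, station_xy, traces, dt))    # recovers the point nearest the true source
```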
A one-dimensional water balance model was developed and used to simulate water balance for the Columbia River Basin. The model was run over a 10 km x 10 km grid for the United States' portion of the basin. The regional water balance was calculated using a monthly time-step for a re...
Characterization of Mesoscale Predictability
2013-09-30
2009), which, it had been argued, had high mesoscale predictability. More recently, we have considered the prediction of lowland snow in the Puget ...averaged total and perturbation kinetic energy spectra on the 5-km, convection-permitting grid. The ensembles clearly captured the observed k^(-5/3) total...kinetic energy spectrum at wavelengths less than approximately 400 km and also showed a transition to a roughly k^(-3) dependence at longer wavelengths
MISR Level 2 TOA/Cloud Classifier parameters (MIL2TCCL_V3)
NASA Technical Reports Server (NTRS)
Diner, David J. (Principal Investigator)
The TOA/Cloud Classifiers contain the Angular Signature Cloud Mask (ASCM) and a scene classifier calculated using support vector machine (SVM) technology, both of which are on a 1.1 km grid, as well as cloud fractions at 17.6 km resolution that are available in different height bins (low, middle, high) and are also calculated on an angle-by-angle basis. [Temporal_Coverage: Start_Date=2000-02-24; Stop_Date=] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1.1 km; Longitude_Resolution=1.1 km; Temporal_Resolution=about 15 orbits/day].
Regional Data Assimilation Using a Stretched-Grid Approach and Ensemble Calculations
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, M. S.; Takacs, L. L.; Govindaraju, R. C.; Atlas, Robert (Technical Monitor)
2002-01-01
The global variable-resolution stretched grid (SG) version of the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS), incorporating the GEOS SG-GCM (Fox-Rabinovitz 2000, Fox-Rabinovitz et al. 2001a,b), has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The major area of interest with enhanced regional resolution used in the different SG-DAS experiments is a rectangle over the U.S. with 50 or 60 km horizontal resolution. The analyses and diagnostics are produced for all mandatory levels from the surface to 0.2 hPa. The assimilated regional mesoscale products are consistent with global-scale circulation characteristics owing to the use of the SG approach. Both the stretched-grid and the basic uniform-grid DASs use the same number of global grid points and are compared in terms of regional product quality.
Satellite radar altimetry over ice. Volume 4: Users' guide for Antarctica elevation data from Seasat
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Major, Judith A.; Brenner, Anita C.; Bindschadler, Robert A.; Martin, Thomas V.
1990-01-01
A gridded surface-elevation data set and a geo-referenced data base for the Seasat radar altimeter data over Greenland are described. This is a user guide to accompany the data provided to data centers and other users. The grid points are on a polar stereographic projection with a nominal spacing of 20 km. The gridded elevations are derived from the elevation data in the geo-referenced data base by a weighted fitting of a surface in the neighborhood of each grid point. The gridded elevations are useful for creating large-scale contour maps, while the geo-referenced data base is useful for regridding, creating smaller-scale contour maps, and examining individual elevation measurements in specific geographic areas. Tape formats are described, and a FORTRAN program for reading the data tape is listed and provided on the tape.
NASA Astrophysics Data System (ADS)
Chen, Y.; Ludwig, F.; Street, R.
2003-12-01
The Advanced Regional Prediction System (ARPS) was used to simulate weak synoptic wind conditions with stable stratification and pronounced drainage flow at night in the vicinity of the Jordan Narrows at the south end of Salt Lake Valley. The simulations showed the flow to be quite complex, with hydraulic jumps and internal waves that make it essential to use a complete treatment of the fluid dynamics. Six one-way nested grids were used to resolve the topography; they ranged from 20-km grid spacing, initialized from ETA 40-km operational analyses, down to 250-m horizontal resolution, with 200 vertically stretched levels up to a height of 20 km beginning with a 10-m cell at the surface. Most of the features of interest resulted from interactions with local terrain features, so that little was lost by using one-way nesting. Canyon, gap, and over-terrain flows have a large effect on mixing and vertical transport, especially in the regions where hydraulic jumps are likely. Our results also showed that the effect of spatial resolution on simulation performance is profound. The horizontal resolution must be such that the smallest features that are likely to have an important impact on the flow are spanned by at least a few grid points. Thus, the 250-m minimum resolution of this study is appropriate for treating the effects of features of about 1 km or greater extent. To be consistent, the vertical cell dimension must resolve the same terrain features resolved by the horizontal grid. These simulations show that many of the interesting flow features produce observable wind and temperature gradients at or near the surface. Accordingly, some relatively simple field measurements might be made to confirm that the mixing phenomena that were simulated actually take place in the real atmosphere, which would be very valuable for planning large, expensive field campaigns. The work was supported by the Atmospheric Sciences Program, Office of Biological and Environmental Research, U.S. Department of Energy. The National Energy Research Scientific Computing Center (NERSC) provided computational time. We thank Professor Ming Xue and others at the University of Oklahoma for their help.
Regionalisation of statistical model outputs creating gridded data sets for Germany
NASA Astrophysics Data System (ADS)
Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas
2016-04-01
The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://.reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (a regionalisation method based on weather patterns). The WETTREG model output covers the period 1951 to 2100 with a daily temporal resolution. From it, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km x 12.5 km, which is common for regional climate models. This regionalisation thus allows statistical climate model outputs to be compared with dynamical ones. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km x 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity, and covers the period 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models. The presentation will consist of two parts describing the current state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG: first, an overview of the HYRAS data set and the regionalisation methods for precipitation (the REGNIE method, based on a combination of multiple linear regression with 5 predictors and inverse distance weighting), air temperature and relative humidity (optimal interpolation) will be given; finally, results of the regionalisation of the WETTREG model output will be shown.
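For the temperature and humidity step, the text names optimal interpolation (OI). A minimal sketch of a generic OI analysis step is given below; the Gaussian covariance model, length scale, error values and all data are illustrative assumptions, not the operational HYRAS configuration.

```python
# Minimal sketch of an optimal-interpolation analysis step.
import numpy as np

def oi_analysis(xb_grid, xy_grid, obs, xy_obs, hb_obs, L=50.0, sigma_b=1.0, sigma_o=0.5):
    """Analysis = background + gain * (obs - background at obs locations)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sigma_b**2 * np.exp(-0.5 * (d / L) ** 2)      # Gaussian background covariance
    B_go = cov(xy_grid, xy_obs)                              # grid-to-observation covariances
    B_oo = cov(xy_obs, xy_obs) + sigma_o**2 * np.eye(len(obs))
    return xb_grid + B_go @ np.linalg.solve(B_oo, obs - hb_obs)

# Tiny synthetic example: constant 10 degC background corrected by three stations.
xy_obs = np.array([[10., 10.], [60., 40.], [30., 80.]])
obs = np.array([11.2, 9.4, 10.5])
xy_grid = np.array([[x, y] for x in range(0, 101, 25) for y in range(0, 101, 25)], float)
xb_grid = np.full(len(xy_grid), 10.0)
hb_obs = np.full(len(obs), 10.0)
print(oi_analysis(xb_grid, xy_grid, obs, xy_obs, hb_obs).round(2))
```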
Evaluation of model-predicted hazardous air pollutants (HAPs) near a mid-sized U.S. airport
NASA Astrophysics Data System (ADS)
Vennam, Lakshmi Pradeepa; Vizuete, William; Arunachalam, Saravanan
2015-10-01
Accurate modeling of aircraft-emitted pollutants in the vicinity of airports is essential to study the impact on local air quality and to answer policy- and health-impact-related questions. To quantify the air quality impacts of airport-related hazardous air pollutants (HAPs), we carried out a fine-scale (4 × 4 km horizontal resolution) Community Multiscale Air Quality (CMAQ) model simulation at the T.F. Green airport in Providence (PVD), Rhode Island. We considered temporally and spatially resolved aircraft emissions from the new Aviation Environmental Design Tool (AEDT). These model predictions were then evaluated against observations from a field campaign focused on assessing HAPs near the PVD airport. The annual normalized mean error (NME) was in the range of 36-70% for all HAPs except acrolein (>70%). The addition of highly resolved aircraft emissions showed only marginal incremental improvements in performance (1-2% decrease in NME) for some HAPs (formaldehyde, xylene). When compared to a coarser 36 × 36 km grid resolution, the 4 × 4 km grid resolution did improve performance, by 5-20% in NME for formaldehyde and acetaldehyde. The change in power setting (from the traditional International Civil Aviation Organization (ICAO) 7% to the observation-based 4%) doubled the aircraft idling emissions of HAPs, but led to only a 2% decrease in NME. Overall, modeled aircraft-attributable contributions are in the range of 0.5-28% near a mid-sized airport grid cell, with maximum impacts seen only within 4-16 km of the airport grid cell. Comparison of CMAQ predictions with HAP estimates from EPA's National Air Toxics Assessment (NATA) showed similar annual mean concentrations and equally poor performance. Current estimates of HAPs for PVD are a challenge for modeling systems, and refinements in our ability to simulate aircraft emissions have made only incremental improvements. Even with unrealistic increases in aviation HAP emissions the model could not match observed concentrations at the runway site near the airport. Our results suggest that other uncertainties in the modeling system, such as meteorology, HAP chemistry, or other emission sources, require increased scrutiny.
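For reference, the normalized mean error used above is conventionally computed from modeled values M_i and paired observations O_i as follows (the standard definition, assumed here rather than quoted from the paper):

```latex
\mathrm{NME} = \frac{\sum_{i}\lvert M_i - O_i\rvert}{\sum_{i} O_i} \times 100\%
```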
NASA Astrophysics Data System (ADS)
Bouffon, T.; Rice, R.; Bales, R.
2006-12-01
The spatial distributions of snow water equivalent (SWE) and snow depth within 1, 4, and 16 km2 grid elements around two automated snow pillows in a forested and an open-forested region of the Upper Merced River Basin (2,800 km2) of Yosemite National Park were characterized using field observations and analyzed using binary regression trees. Snow surveys occurred at the forested site during the accumulation and ablation seasons, while at the open-forested site a survey was performed only during the accumulation season. An average of 130 snow depth and 7 snow density measurements were made on each survey within the 4 km2 grid. Snow depth was distributed using binary regression trees and geostatistical methods based on physiographic parameters (e.g. elevation, slope, vegetation, aspect). Results in the forested region indicate that the snow pillow overestimated average SWE within the 1, 4, and 16 km2 areas by 34 percent during ablation; during accumulation the snow pillow provided a good estimate of the modeled mean SWE grid value, although it is suspected that the snow pillow was underestimating SWE. At the open-forested site, during accumulation, the snow pillow was 28 percent greater than the mean modeled grid-element value. In addition, the binary regression trees indicate that the independent variables of vegetation, slope, and aspect are the most influential parameters for snow depth distribution. The binary regression tree and multivariate linear regression models explain about 60 percent of the initial variance for snow depth and 80 percent for density, respectively. This short-term study provides motivation and direction for the installation of a distributed snow measurement network to fill the information gap in basin-wide SWE and snow depth measurements. Guided by these results, a distributed snow measurement network was installed in fall 2006 at Gin Flat in the Upper Merced River Basin with the specific objective of measuring accumulation and ablation across topographic variables, with the aim of providing guidance for future larger-scale observation network designs.
Evaluation of global equal-area mass grid solutions from GRACE
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron
2015-04-01
The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.
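Tikhonov regularization, named above as the stabilization technique, penalizes the solution norm so that an ill-conditioned inversion stays bounded. The sketch below uses a synthetic operator and data; GRACE processing operates on range-rate residuals and is far more involved.

```python
# Minimal sketch of Tikhonov (ridge) regularization for an ill-posed linear inversion.
import numpy as np

def tikhonov_solve(A, d, alpha):
    """Solve min ||A m - d||^2 + alpha^2 ||m||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ d)

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 80)) @ np.diag(np.logspace(0, -6, 80))   # ill-conditioned operator
m_true = rng.normal(size=80)
d = A @ m_true + 1e-3 * rng.normal(size=200)                       # noisy synthetic data

for alpha in (0.0, 1e-4, 1e-2):
    m = tikhonov_solve(A, d, alpha) if alpha > 0 else np.linalg.lstsq(A, d, rcond=None)[0]
    print(f"alpha={alpha:g}  ||m||={np.linalg.norm(m):.2e}")       # norm shrinks as alpha grows
```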
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man; Cheng, Anning
2007-01-01
The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.
NASA Astrophysics Data System (ADS)
Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald
2017-04-01
With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address the shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.
Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5
NASA Astrophysics Data System (ADS)
Zhang, Kai; Zhao, Chun; Wan, Hui; Qian, Yun; Easter, Richard C.; Ghan, Steven J.; Sakaguchi, Koichi; Liu, Xiaohong
2016-02-01
This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddies are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. In Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.
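The sampling strategy described above can be sketched as follows. Solving for the Weibull shape from the grid-box mean and sub-grid standard deviation is the general idea named in the abstract; the cubic wind-speed dependence of the flux and all parameter values are illustrative assumptions, not the CAM5 emission schemes.

```python
# Sketch of averaging an emission flux over Weibull-distributed sub-grid wind speeds.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def weibull_params(mean_wind, std_wind):
    """Shape k and scale lam matching a given mean and standard deviation."""
    cv2 = (std_wind / mean_wind) ** 2
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.5, 20.0)
    lam = mean_wind / gamma(1 + 1 / k)
    return k, lam

def mean_emission_flux(mean_wind, std_wind, n_samples=1000, seed=0):
    """Average an assumed flux ~ U^3 over Weibull wind-speed samples."""
    k, lam = weibull_params(mean_wind, std_wind)
    rng = np.random.default_rng(seed)
    u = lam * rng.weibull(k, size=n_samples)
    return np.mean(u ** 3)

u_mean, u_std = 7.0, 2.5                        # grid-box mean and sub-grid std (m/s)
print(mean_emission_flux(u_mean, u_std))        # with sub-grid variability
print(u_mean ** 3)                              # versus the no-variability value
```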
Airborne Grid Sea-Ice Surveys for Comparison with CryoSat-2
NASA Astrophysics Data System (ADS)
Brozena, J. M.; Gardner, J. M.; Liang, R.; Hagen, R. A.; Ball, D.
2014-12-01
The U.S. Naval Research Laboratory is engaged in a study of the changing Arctic with a particular focus on ice thickness and distribution variability. The purpose is to optimize computer models used to predict sea ice changes. An important part of our study is to calibrate/validate CryoSat-2 ice thickness data prior to its incorporation into new ice forecast models. The large footprint of the CryoSat-2 altimeter over sea ice is a significant issue in any attempt to ground-truth the data. Along-track footprints are reduced to ~300 m by SAR processing of the returns. However, the cross-track footprint is determined by the topography of the surface. Further, the actual return is the sum of the returns from individual reflectors within the footprint, making it difficult to interpret the return and optimize the waveform tracker. We therefore collected a series of grids of airborne scanning lidar and nadir-pointing radar on sub-satellite tracks over sea ice that would extend far enough cross-track to capture the illuminated area. One difficulty in the collection of grids composed of adjacent overlapping tracks is that the ice moves as much as 300 m over the duration of a single track (~10 min). With a typical lidar swath width of 500 m, we needed to adjust the survey tracks in near real-time for the ice motion. This was accomplished by a photogrammetric method of ice velocity determination (RTIME) reported in another presentation. Post-processing refinements resulted in typical track-to-track mis-ties of ~1-2 m, much of which could be attributed to ice deformation over the period of the survey. An important factor is that we were able to reconstruct the ice configuration at the time of the satellite overflight, resulting in an accurate representation of the surface illuminated by CryoSat-2. Our intention is to develop a model of the ice surface using the lidar grid which includes both snow and ice, using radar profiles to determine snow thickness. In 2013 a set of 6 usable grids, 5-20 km wide (cross-track) by 10-30 km long, was collected north of Barrow, AK. In 2014 a further 5 narrower grids (~5 km) were collected. Data from these grids are shown here and will be used to examine the relationship of the tracked satellite waveform data to the actual surface.
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Long, D. G.
2017-12-01
Since 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Until recently, the available global gridded passive microwave data sets had not been produced consistently. Various projections (equal-area, polar stereographic) and a number of different gridding techniques were used, along with varying temporal sampling and a mix of Level 2 source data versions. In addition, not all data from all sensors had been processed completely, and they had not been processed in any one consistent way. Furthermore, the original gridding techniques were relatively primitive and were applied on 25 km grids using the original EASE-Grid definition, which is not easily accommodated in modern software packages. As part of NASA MEaSUREs, we have re-processed all data from the SMMR, SSM/I-SSMIS and AMSR-E instruments, using the most mature Level 2 data. The Calibrated, Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) gridded data are now available from the NSIDC DAAC. The data are distributed as netCDF files that comply with the CF-1.6 and ACDD-1.3 conventions. The data have been produced on EASE-Grid 2.0 projections at a smoothed 25 km resolution and at spatially enhanced resolutions, up to 3.125 km depending on channel frequency, using the radiometer version of the Scatterometer Image Reconstruction (rSIR) method. We expect this newly produced data set to enable scientists to better analyze trends in coastal regions, marginal ice zones and mountainous terrain, which was not possible with the previous gridded passive microwave data. The use of the EASE-Grid 2.0 definition and netCDF-CF formatting allows users to extract compliant GeoTIFF images and provides for easy importing and correct reprojection interoperability in many standard packages. As a consistently processed, high-quality satellite passive microwave ESDR, we expect this data set to replace earlier gridded passive microwave data sets and to pave the way for new insights from higher-resolution derived geophysical products.
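Because the CETB files follow the CF and ACDD conventions, they can be opened with standard netCDF tooling. The file name and the brightness-temperature variable name ("TB") below are assumptions for illustration; actual names should be taken from the product documentation at the NSIDC DAAC.

```python
# Sketch of inspecting one CETB netCDF grid with xarray (hypothetical file/variable names).
import xarray as xr

ds = xr.open_dataset("NSIDC-0630_CETB_example.nc")   # hypothetical file name
print(ds.attrs.get("Conventions"))                   # expect CF-1.6 / ACDD-1.3 metadata
tb = ds["TB"]                                        # assumed brightness-temperature variable
print(tb.dims, tb.shape)                             # projected EASE-Grid 2.0 x/y dimensions
```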
The abrupt development of adult-like grid cell firing in the medial entorhinal cortex
Wills, Thomas J.; Barry, Caswell; Cacucci, Francesca
2012-01-01
Understanding the development of the neural circuits subserving specific cognitive functions such as navigation remains a central problem in neuroscience. Here, we characterize the development of grid cells in the medial entorhinal cortex, which, by nature of their regularly spaced firing fields, are thought to provide a distance metric to the hippocampal neural representation of space. Grid cells emerge at the time of weaning in the rat, at around 3 weeks of age. We investigated whether grid cells in young rats are functionally equivalent to those observed in the adult as soon as they appear, or if instead they follow a gradual developmental trajectory. We find that, from the very youngest ages at which reproducible grid firing is observed (postnatal day 19): grid cells display adult-like firing fields that tessellate to form a coherent map of the local environment; that this map is universal, maintaining its internal structure across different environments; and that grid cells in young rats, as in adults, also encode a representation of direction and speed. To further investigate the developmental processes leading up to the appearance of grid cells, we present data from individual medial entorhinal cortex cells recorded across more than 1 day, spanning the period before and after the grid firing pattern emerged. We find that increasing spatial stability of firing was correlated with increasing gridness. PMID:22557949
Lateral Mixing DRI Analysis: Submesoscale Water-Mass Spectra
2013-09-30
program to determine submesoscale variability in the Sargasso Sea under weak-to-moderate mesoscale conditions. Two sites were examined, a quiet site...anomalies and dye streaks. Hammerhead carries finescale Sea-Bird sensors for temperature, conductivity and pressure as well as Chelsea and WetLab...m of dye-injection target densities. They were embedded in 35-km towyo grid surveys by Craig Lee's Triaxus and 15-km butterfly surveys by Jody
Influence of model grid size on the simulation of PM2.5 and the related excess mortality in Japan
NASA Astrophysics Data System (ADS)
Goto, D.; Ueda, K.; Ng, C. F.; Takami, A.; Ariga, T.; Matsuhashi, K.; Nakajima, T.
2016-12-01
Aerosols, especially PM2.5, can affect air pollution, climate change, and human health. The estimation of health impacts due to PM2.5 is often performed using global and regional aerosol transport models with various horizontal resolutions. To investigate the dependence of the simulated PM2.5 on model grid size, we executed two simulations using a high-resolution model (approximately 10 km; HRM) and a low-resolution model (approximately 100 km; LRM, a typical value for general circulation models). In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan, with a stretched grid system in the HRM and a uniform grid system in the LRM, for the present (the year 2000) and the future (the year 2030, following the Representative Concentration Pathway 4.5, RCP4.5). These calculations were performed by nudging meteorological fields obtained from an atmosphere-ocean coupled model and by providing the emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality among the elderly due to long-term exposure to PM2.5. Compared to the HRM results, the LRM underestimated PM2.5 concentrations in 2000 and 2030 by approximately 30%, excess mortality in 2000 by approximately 60%, and excess mortality in 2030 by approximately 90%. The estimation of excess mortality therefore performs better with high-resolution grid sizes. In addition, we found that our nesting method can be a useful tool for obtaining better estimates.
What Level 2 Products are available?
Atmospheric Science Data Center
2014-12-08
The Aerosol data (MIL2ASAE) contains aerosol optical depth, aerosol compositional model, ancillary meteorological data, and related parameters on a 17.6 km grid. The Land Surface data (MIL2ASLS) includes bihemispherical and...
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
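For readers unfamiliar with the forward column method mentioned above, the following is a minimal Python sketch of the standard forward-column recursion for fully normalized associated Legendre functions; it is an illustrative textbook implementation, not the GrafLab code, and, as the abstract notes, double precision is only adequate up to roughly degree 1800-2190 before extended-range arithmetic is required.

```python
# Minimal sketch of the standard forward-column recursion for fully normalized
# associated Legendre functions P[n, m](cos theta), the core step of spherical
# harmonic synthesis. Stable in double precision only to ~degree 1800-2190.
import numpy as np

def fnalf_forward_column(nmax, theta):
    """Return array P[n, m] of fully normalized ALFs at colatitude theta (rad)."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax == 0:
        return P
    P[1, 0] = np.sqrt(3.0) * t
    P[1, 1] = np.sqrt(3.0) * u
    for m in range(2, nmax + 1):                       # sectorial terms P[m, m]
        P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
    for m in range(0, nmax + 1):                       # forward column in n
        if m >= 1 and m + 1 <= nmax:
            P[m + 1, m] = np.sqrt(2.0 * m + 3.0) * t * P[m, m]
        for n in range(m + 2, nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P

# Example: all fnALFs up to degree 360 at 45 degrees colatitude
# P = fnalf_forward_column(360, np.radians(45.0))
```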
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. P. Jensen; Toto, T.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
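A minimal Python sketch of the level-by-level linear time interpolation idea is given below; the array names and toy data are illustrative and do not reproduce the operational VAP.

```python
# Minimal sketch of interpolating sounding profiles in time, level by level,
# onto a fixed 1-minute grid. Toy inputs only; not the ARM VAP implementation.
import numpy as np

def interpolate_soundings(launch_minutes, profiles, out_minutes):
    """
    launch_minutes : (n_sondes,) launch times in minutes since 00:00 UTC
    profiles       : (n_sondes, n_levels) variable already gridded to fixed heights
    out_minutes    : (n_times,) output times, e.g. np.arange(0, 1440)
    Returns a (n_times, n_levels) array interpolated linearly in time.
    """
    n_levels = profiles.shape[1]
    out = np.empty((out_minutes.size, n_levels))
    for k in range(n_levels):
        out[:, k] = np.interp(out_minutes, launch_minutes, profiles[:, k])
    return out

# Example: two launches (00 and 12 UTC) on a 3-level toy height grid
t_launch = np.array([0.0, 720.0])
temp = np.array([[15.0, 5.0, -20.0],
                 [18.0, 7.0, -19.0]])
minute_grid = interpolate_soundings(t_launch, temp, np.arange(0, 1440, 1.0))
```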
2008-09-01
explosions (UNEs) at the Semipalatinsk Test Site and regional earthquakes recorded by station WMQ (Urumchi, China). Measurements from the grids are... Semipalatinsk, Lop Nor, Novaya Zemlya, and Nevada Test Sites (STS, LNTS, NZTS, NTS, respectively) and regional earthquakes. We used phase-specific window...stations (triangles) within 2000 km of STS and LNTS. Semipalatinsk Test Site Figure 2 shows Pn/Lg spectral ratios, corrected for site and distance
NASA Technical Reports Server (NTRS)
Comiso, Josefino C.; Zwally, H. Jay
1989-01-01
A time series of daily gridded brightness temperature maps (October 25, 1978 through August 15, 1987) was generated from all ten channels of the Nimbus-7 Scanning Multichannel Microwave Radiometer orbital data. This unique data set can be utilized in a wide range of applications including heat flux, ocean circulation, ice edge productivity, and climate studies. Two sets of data in polar stereographic format are created for the Arctic region: one with a grid size of about 30 km on a 293 by 293 array, similar to that previously utilized for the Nimbus-5 Electrically Scanning Microwave Radiometer, while the other has a grid size of about 25 km on a 448 by 304 array, identical to what is now being used for the DMSP Special Sensor Microwave/Imager. Data generated for the Antarctic region are mapped using the 293 by 293 grid only. The general technique for mapping and a quality assessment of the data set are presented. Monthly and yearly averages are also generated from the daily data, and sample geophysical ice images and products derived from the data are given. Contour plots of monthly ice concentrations derived from the data for October 1978 through August 1987 are presented to demonstrate the spatial and temporal detail which this data set can offer, and to show potential research applications.
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
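For reference, the following is a minimal, generic Python sketch of the Richardson-Lucy iteration for a known kernel; it illustrates the technique only and does not reproduce the authors' processing chain or their treatment of spatially correlated noise.

```python
# Minimal sketch of Richardson-Lucy deconvolution of a map blurred by a known
# kernel (PSF), using FFT-based convolution and non-negative data. Illustrative
# only; the paper's actual restoration of strain maps is more involved.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Iteratively estimate the latent map given the known blurring kernel."""
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur a synthetic map with a small Gaussian kernel, then restore it.
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[20:40, 20:40] = 1.0
restored = richardson_lucy(fftconvolve(truth, psf, mode="same"), psf)
```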
Improving sub-grid scale accuracy of boundary features in regional finite-difference models
Panday, Sorab; Langevin, Christian D.
2012-01-01
As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.
2015-04-02
The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. However, how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer 'how far is far enough,' we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25–2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.
NASA Astrophysics Data System (ADS)
St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.
2015-04-01
The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.
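A minimal Python sketch of the filtering-and-correlation analysis is given below; the Butterworth high-pass filter, the synthetic inputs and the 1/e definition of the correlation length are illustrative assumptions rather than the authors' exact procedure.

```python
# Minimal sketch: high-pass filter wind-speed anomalies at a cut-off period,
# correlate site pairs, and crudely estimate a correlation length as the
# largest separation whose correlation still exceeds 1/e. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

def correlation_length(speeds, pair_separations_km, cutoff_hours, dt_hours=1.0):
    """
    speeds              : (n_sites, n_times) wind-speed anomalies
    pair_separations_km : (n_pairs,) distance for each site pair, ordered i < j
    cutoff_hours        : high-pass filter time constant tau
    """
    b, a = butter(4, (2.0 * dt_hours) / cutoff_hours, btype="highpass")
    filtered = filtfilt(b, a, speeds, axis=1)
    n = filtered.shape[0]
    corrs, idx = [], 0
    for i in range(n):
        for j in range(i + 1, n):
            corrs.append(np.corrcoef(filtered[i], filtered[j])[0, 1])
            idx += 1
    corrs = np.array(corrs)
    dists = np.asarray(pair_separations_km)
    above = dists[corrs > 1.0 / np.e]
    return above.max() if above.size else 0.0

# Example (synthetic): 3 sites, ~3 months of hourly data
# speeds = np.random.randn(3, 2000)
# xi = correlation_length(speeds, np.array([50.0, 120.0, 90.0]), cutoff_hours=38.0)
```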
Nine martian years of dust optical depth observations: A reference dataset
NASA Astrophysics Data System (ADS)
Montabone, Luca; Forget, Francois; Kleinboehl, Armin; Kass, David; Wilson, R. John; Millour, Ehouarn; Smith, Michael; Lewis, Stephen; Cantor, Bruce; Lemmon, Mark; Wolff, Michael
2016-07-01
We present a multi-annual reference dataset of the horizontal distribution of airborne dust from martian year 24 to 32 using observations of the martian atmosphere from April 1999 to June 2015 made by the Thermal Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS) aboard Mars Reconnaissance Orbiter (MRO). Our methodology to build the dataset works by gridding the available retrievals of column dust optical depth (CDOD) from TES and THEMIS nadir observations, as well as the estimates of this quantity from MCS limb observations. The resulting (irregularly) gridded maps (one per sol) were validated with independent observations of CDOD by PanCam cameras and Mini-TES spectrometers aboard the Mars Exploration Rovers "Spirit" and "Opportunity", by the Surface Stereo Imager aboard the Phoenix lander, and by the Compact Reconnaissance Imaging Spectrometer for Mars aboard MRO. Finally, regular maps of CDOD are produced by spatially interpolating the irregularly gridded maps using a kriging method. These latter maps are used as dust scenarios in the Mars Climate Database (MCD) version 5, and are useful in many modelling applications. The two datasets (daily irregularly gridded maps and regularly kriged maps) for the nine available martian years are publicly available as NetCDF files and can be downloaded from the MCD website at the URL: http://www-mars.lmd.jussieu.fr/mars/dust_climatology/index.html
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging interpolation algorithm produces an unbiased prediction as well as the spatial distribution of uncertainty, allowing users to estimate the interpolation error at any particular point. Kriging is nevertheless not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. In addition, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates in less dense data files.
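A minimal Python sketch of ordinary kriging restricted to a neighborhood located by index arithmetic on a regular grid is given below; the exponential variogram parameters are illustrative assumptions (a real implementation would fit them to the data), and the query point is assumed to lie in the grid interior.

```python
# Minimal sketch of ordinary kriging at a query point using only a small
# neighborhood of a regularly spaced grid. Neighbors are found by index math
# rather than a spatial search, which is the regular-grid shortcut discussed
# above. Variogram parameters are illustrative.
import numpy as np

def variogram(h, sill=1.0, corr_range=5.0, nugget=0.0):
    """Illustrative exponential variogram model."""
    return nugget + sill * (1.0 - np.exp(-h / corr_range))

def krige_regular_grid(grid, x0, y0, dx, xq, yq, k=2):
    """Estimate grid value at (xq, yq); grid[j, i] is the value at
    (x0 + i*dx, y0 + j*dx). Assumes the query lies in the grid interior."""
    i0, j0 = int((xq - x0) // dx), int((yq - y0) // dx)
    ii = np.arange(i0 - k + 1, i0 + k + 1)            # neighbor columns
    jj = np.arange(j0 - k + 1, j0 + k + 1)            # neighbor rows
    xs, ys = np.meshgrid(x0 + ii * dx, y0 + jj * dx)
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    vals = grid[np.ix_(jj, ii)].ravel()
    n = vals.size
    # ordinary kriging system with a Lagrange multiplier enforcing unbiasedness
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - np.array([xq, yq]), axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ vals)

# Example: 20 x 20 grid with unit spacing, query between nodes
rng = np.random.default_rng(1)
z = rng.normal(size=(20, 20))
estimate = krige_regular_grid(z, x0=0.0, y0=0.0, dx=1.0, xq=7.3, yq=11.6)
```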
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Jitendra; Hoffman, Forrest M.; Hargrove, William W.
This data set contains global gridded surfaces of Gross Primary Productivity (GPP) at 2 arc-minute (approximately 4 km) spatial resolution, monthly for the period 2000-2014, derived from FLUXNET2015 (released July 12, 2016) observations using a representativeness-based upscaling approach.
Covariance analysis of the airborne laser ranging system
NASA Technical Reports Server (NTRS)
Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.
1981-01-01
The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.
Gasoline-powered series hybrid cars cause lower life cycle carbon emissions than battery cars
NASA Astrophysics Data System (ADS)
Meinrenken, Christoph; Lackner, Klaus S.
2012-02-01
Battery cars powered by grid electricity promise reduced life cycle greenhouse gas (GHG) emissions from the automotive sector. Such scenarios usually point to the much higher emissions from conventional, internal combustion engine cars. However, today's commercially available series hybrid technology achieves the well-known efficiency gains of electric drivetrains (regenerative braking, lack of gearbox) even if the electricity is generated onboard, from conventional fuels. Here, we analyze life cycle GHG emissions for commercially available, state-of-the-art plug-in battery cars (e.g. Nissan Leaf) and those of commercially available series hybrid cars (e.g., GM Volt, at same size and performance). Crucially, we find that series hybrid cars driven on (fossil) gasoline cause fewer emissions (126 g CO2eq per km) than battery cars driven on current US grid electricity (142 g CO2eq per km). We attribute this novel finding to the significant incremental emissions from plug-in battery cars due to losses during grid transmission and battery dis-/charging, and to manufacturing larger batteries. We discuss crucial implications for strategic policy decisions towards a low carbon automotive sector as well as the relative land intensity when powering cars by biofuel vs. bioelectricity.
Gasoline-powered serial hybrid cars cause lower life cycle carbon emissions than battery cars
NASA Astrophysics Data System (ADS)
Meinrenken, Christoph J.; Lackner, Klaus S.
2011-04-01
Battery cars powered by grid electricity promise reduced life cycle greenhouse gas (GHG) emissions from the automotive sector. Such scenarios usually point to the much higher emissions from conventional, internal combustion engine cars. However, today's commercially available serial hybrid technology achieves the well-known efficiency gains from regenerative braking, lack of gearbox, and lightweighting - even if the electricity is generated onboard, from conventional fuels. Here, we analyze emissions for commercially available, state-of-the-art battery cars (e.g. Nissan Leaf) and those of commercially available serial hybrid cars (e.g., GM Volt, at same size and performance). Crucially, we find that serial hybrid cars driven on (fossil) gasoline cause fewer life cycle GHG emissions (126 g CO2e per km) than battery cars driven on current US grid electricity (142 g CO2e per km). We attribute this novel finding to the significant incremental life cycle emissions of battery cars from losses during grid transmission, battery dis-/charging, and larger batteries. We discuss crucial implications for strategic policy decisions towards a low carbon automotive sector as well as the relative land intensity when powering cars by biofuel vs. bioelectricity.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.
2004-01-01
Real-time prediction of environments predisposed to producing moderate-to-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for this prediction and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16 000 to 46 000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid intervals and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated twice a day.
The Liverpool Bay Coastal Observatory
NASA Astrophysics Data System (ADS)
Howarth, Michael John; O'Neill, Clare K.; Palmer, Matthew R.
2010-05-01
A pre-operational Coastal Observatory has been functioning since August 2002 in Liverpool Bay, Irish Sea. Its rationale is to develop the science underpinning the ecosystem-based approach to marine management, including distinguishing between natural and man-made variability, with particular emphasis on eutrophication and predicting responses of a coastal sea to climate change. Liverpool Bay has strong tidal mixing, receives fresh water principally from the Dee, Mersey and Ribble estuaries, each with different catchment influences, and has enhanced levels of nutrients. Horizontal and vertical density gradients are variable both in space and time. The challenge is to understand and accurately model this variable region, which is turbulent, turbid, receives enhanced nutrients and is productive. The Observatory has three components, for each of which the goal is some (near) real-time operation - measurements; coupled 3-D hydrodynamic, wave and ecological models; and a data management and web-based data delivery system which provides free access to the data, http://cobs.pol.ac.uk. The integrated measurements are designed to test numerical models and have as a major objective obtaining multi-year records, covering tidal, event (storm/calm/bloom), seasonal and interannual time scales. The four main strands on different complementary space or time scales are: a) fixed point time series (in situ and shore-based); very good temporal and very poor spatial resolution. These include tide gauges; a meteorological station on Hilbre Island at the mouth of the Dee; two in situ sites, one by the Mersey Bar, measuring waves and the vertical structure of current, temperature and salinity. A CEFAS SmartBuoy whose measurements include surface nutrients is deployed at the Mersey Bar site. b) regular (nine times per year) spatial water column surveys on a 9 km grid; good vertical resolution for some variables, limited spatial coverage and resolution, and limited temporal resolution. The measurements include nutrients and on-board pCO2. c) HF radar for surface currents and waves; very good temporal resolution, limited spatial resolution (4 km grid) and range (~75 km). d) an instrumented ferry between Birkenhead and Dublin; along-track 100 m resolution, crossing there and back most days. These are supplemented by weekly composite (because of cloud cover) satellite images of sea surface temperature, suspended sediment and chlorophyll; excellent horizontal resolution for surface properties, poor temporal coverage. A suite of coupled 3-D hydrodynamic, wave and ecological models forced by forecast meteorology is being developed. The model domains are nested from a 12 km grid ocean/shelf domain, to a 1.8 km Irish Sea grid and finally to 180 m for Liverpool Bay. Making real-time forecasts for comparison with measurements is difficult since the forecast is only as good as the forcing data; for instance, the meteorology should be on spatial and temporal scales comparable with the oceanographic models', and real-time river flow data are needed (climatological mean data are not good enough, especially for local models). The Observatory's design naturally involved compromises where model predictions can help: for instance, should the detailed coverage be wider, including more of the Irish Sea, and/or should it extend closer to the shore, where biological activity is greater?
How many cruises should there be per year - nine visits will over-sample for a well-defined seasonal cycle, such as temperature, but not for a variable with a more unpredictable or shorter time scale, such as salinity or phytoplankton? After seven years the main scientific challenges remain both to understand the processes and to translate this into predictive models whose accuracy has been quantified. The challenges relate to physics (salinity, circulation in Liverpool Bay, the flow through the Irish Sea, flushing events); the role of sediments in the optical characteristics of the water column; and the ecosystem and eutrophication.
Real-Time Very High-Resolution Regional 4D Assimilation in Supporting CRYSTAL-FACE Experiment
NASA Technical Reports Server (NTRS)
Wang, Donghai; Minnis, Patrick
2004-01-01
To better understand tropical cirrus cloud physical properties and formation processes, with a view toward the successful modeling of the Earth's climate, the CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment) field experiment took place over southern Florida from 1 July to 29 July 2002. During the entire field campaign, a very high-resolution numerical weather prediction (NWP) and assimilation system was run in support of the mission, with supercomputing resources provided by the NASA Center for Computational Sciences (NCCS). Using the NOAA NCEP Eta forecast for boundary conditions and as a first guess for initial conditions assimilated with all available observations, two nested 15/3-km grids are employed over the CRYSTAL-FACE experiment area. The 15-km grid covers the southeast US domain and is run twice daily for a 36-hour forecast starting at 0000 UTC and 1200 UTC. The nested 3-km grid covering only southern Florida is used for 9-hour and 18-hour forecasts starting at 1500 and 0600 UTC, respectively. The forecasting system provided more accurate and higher spatial and temporal resolution forecasts of 4-D atmospheric fields over the experiment area than were available from standard weather forecast models. These forecasts were essential for flight planning during both the afternoon prior to a flight day and the morning of a flight day. The forecasts were used to help decide takeoff times and the most suitable flight areas for accomplishing the mission objectives. See more detailed products on the web site http://asd-www.larc.nasa.gov/mode/crystal. The model/assimilation output gridded data are archived on the NASA Center for Computational Sciences (NCCS) UniTree system in HDF format, at 30-min intervals for real-time forecasts or 5-min intervals for the post-mission case studies. In particular, the data set includes the 3-D cloud fields (cloud liquid water, rain water, cloud ice, snow and graupel/hail).
The Impacts of Bowtie Effect and View Angle Discontinuity on MODIS Swath Data Gridding
NASA Technical Reports Server (NTRS)
Wang, Yujie; Lyapustin, Alexei
2007-01-01
We have analyzed two effects of the MODIS viewing geometry on the quality of gridded imagery. First, the fact that MODIS scans a swath of the Earth 10 km wide at nadir causes an abrupt change of the view azimuth angle at the boundary of adjacent scans. This discontinuity appears as striping of the image, clearly visible in certain cases with viewing geometry close to the principal plane over snow or the glint area of water. The striping is a true surface Bi-directional Reflectance Factor (BRF) effect and should be preserved during gridding. Second, due to the bowtie effect, the observations in adjacent scans overlap each other. The commonly used method of calculating the grid cell value by averaging all overlapping observations may result in smearing of the image. This paper describes a refined gridding algorithm that takes the above two effects into account. By calculating the grid cell value by averaging the overlapping observations from a single scan, the new algorithm preserves the measured BRF signal and enhances the sharpness of the image.
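A minimal Python sketch of the scan-aware averaging idea is given below; the rule of keeping the scan that contributes the most samples to a cell is an illustrative choice, not necessarily the paper's selection criterion.

```python
# Minimal sketch of gridding swath observations while averaging only within a
# single scan per grid cell, so the per-scan BRF signal is preserved rather
# than smeared across scans. Illustrative selection rule only.
from collections import defaultdict
import numpy as np

def grid_by_single_scan(cell_ids, scan_ids, values):
    """Return {cell_id: gridded_value} using observations from one scan per cell."""
    per_cell_scan = defaultdict(list)
    for c, s, v in zip(cell_ids, scan_ids, values):
        per_cell_scan[(c, s)].append(v)
    best = {}
    for (c, s), vs in per_cell_scan.items():
        # keep the scan contributing the most samples to this cell
        if c not in best or len(vs) > len(best[c][1]):
            best[c] = (s, vs)
    return {c: float(np.mean(vs)) for c, (_, vs) in best.items()}

# Example: cell 7 is seen by scans 3 and 4; only the better-sampled scan is used
grid = grid_by_single_scan([7, 7, 7, 8], [3, 3, 4, 4], [0.21, 0.23, 0.35, 0.30])
```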
NASA Astrophysics Data System (ADS)
Xue, L.; Newman, A. J.; Ikeda, K.; Rasmussen, R.; Clark, M. P.; Monaghan, A. J.
2016-12-01
A high-resolution (a 1.5 km grid spacing domain nested within a 4.5 km grid spacing domain) 10-year regional climate simulation over the entire Hawaiian archipelago is being conducted at the National Center for Atmospheric Research (NCAR) using the Weather Research and Forecasting (WRF) model version 3.7.1. Numerical sensitivity simulations of the Hawaiian Rainband Project (HaRP, a field experiment conducted from July to August 1990) showed that the simulated precipitation properties are sensitive to initial and lateral boundary conditions, sea surface temperature (SST), land surface models, vertical resolution and cloud droplet concentration. Validations of the model-simulated statistics of the trade wind inversion, temperature, wind field, cloud cover, and precipitation over the islands against various observations from soundings, satellites, weather stations and rain gauges during the period from 2003 to 2012 will be presented at the meeting.
Climatological Impact of Atmospheric River Based on NARCCAP and DRI-RCM Datasets
NASA Astrophysics Data System (ADS)
Mejia, J. F.; Perryman, N. M.
2012-12-01
This study evaluates spatial responses of extreme precipitation environments, typically associated with Atmospheric River events, using Regional Climate Model (RCM) output from the NARCCAP dataset (50 km grid size) and the Desert Research Institute RCM simulations (36 and 12 km grid size). For this study, a pattern-detection algorithm was developed to characterize Atmospheric River (AR)-like features in climate models. Topological analysis of the enhanced elongated moisture flux (500-300 hPa; daily means) cores is used to objectively characterize such AR features in two distinct groups: (i) zonal, north Pacific ARs, and (ii) subtropical ARs, also known as "Pineapple Express" events. We computed the climatological responses of the different RCMs for these two AR groups, from which intricate differences among the RCMs stand out. This study presents these climatological responses from historical and scenario-driven simulations, as well as implications for precipitation extreme-value analyses.
Form and flow of the Academy of Sciences Ice Cap, Severnaya Zemlya, Russian High Arctic
NASA Astrophysics Data System (ADS)
Dowdeswell, J. A.; Bassford, R. P.; Gorman, M. R.; Williams, M.; Glazovsky, A. F.; Macheret, Y. Y.; Shepherd, A. P.; Vasilenko, Y. V.; Savatyuguin, L. M.; Hubberten, H.-W.; Miller, H.
2002-04-01
The 5,575-km2 Academy of Sciences Ice Cap is the largest in the Russian Arctic. A 100-MHz airborne radar, digital Landsat imagery, and satellite synthetic aperture radar (SAR) interferometry are used to investigate its form and flow, including the proportion of mass lost through iceberg calving. The ice cap was covered by a 10-km-spaced grid of radar flight paths, and the central portion was covered by a grid at 5-km intervals: a total of 1,657 km of radar data. Digital elevation models (DEMs) of ice surface elevation, ice thickness, and bed elevation data sets were produced (cell size 500 m). The DEMs were used in the selection of a deep ice core drill site. Total ice cap volume is 2,184 km3 (~5.5 mm sea level equivalent). The ice cap has a single dome reaching 749 m. Maximum ice thickness is 819 m. About 200 km, or 42%, of the ice margin is marine. About 50% of the ice cap bed is below sea level. The central divide of the ice cap and several major drainage basins, in the south and east of the ice cap and of up to 975 km2, are delimited from satellite imagery. There is no evidence of past surge activity on the ice cap. SAR interferometric fringes and phase-unwrapped velocities for the whole ice cap indicate slow flow in the interior and much of the margin, punctuated by four fast flowing features with lateral shear zones and maximum velocity of 140 m yr-1. These ice streams extend back into the slower moving ice to within 5-10 km of the ice cap crest. They have lengths of 17-37 km and widths of 4-8 km. Mass flux from these ice streams is ~0.54 km3 yr-1. Tabular icebergs up to ~1.7 km long are produced. Total iceberg flux from the ice cap is ~0.65 km3 yr-1 and probably represents ~40% of the overall mass loss, with the remainder coming from surface melting. Driving stresses are generally lowest (<40 kPa) close to the ice cap divides and in several of the ice streams. Ice stream motion is likely to include a significant basal component and may involve deformable marine sediments.
Application of spatially gridded temperature and land cover data sets for urban heat island analysis
Gallo, Kevin; Xian, George Z.
2014-01-01
Two gridded data sets, (1) daily mean temperatures from 2006 through 2011 and (2) satellite-derived impervious surface area, were combined for a spatial analysis of the urban heat-island effect within the Dallas-Ft. Worth, Texas region. The primary advantage of using these combined datasets was the capability to designate each 1 × 1 km grid cell of available temperature data as urban or rural based on the level of impervious surface area within the grid cell. Generally, the observed difference between urban and rural temperature increased as the impervious surface area threshold used to define an urban grid cell was increased. This result, however, was also dependent on the size of the sample area included in the analysis. As the spatial extent of the sample area increased and included a greater number of rural-defined grid cells, the observed urban-rural differences in temperature also increased. A cursory comparison of the spatially gridded temperature observations with observations from climate stations suggests that the number and location of stations included in an urban heat island analysis require consideration to ensure that representative samples of each environment (urban and rural) are included in the analysis.
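A minimal Python sketch of the threshold-based urban/rural classification and the resulting temperature difference is given below; the array names and the 25% impervious-surface threshold are illustrative assumptions.

```python
# Minimal sketch of an urban heat island difference: classify co-registered
# 1 km grid cells as urban or rural by an impervious-surface threshold and
# difference the mean temperatures. Threshold and toy data are illustrative.
import numpy as np

def uhi_difference(temperature, impervious_fraction, urban_threshold=0.25):
    """Mean urban-minus-rural temperature over co-registered grids."""
    urban = impervious_fraction >= urban_threshold
    rural = ~urban & np.isfinite(temperature)
    return np.nanmean(temperature[urban]) - np.nanmean(temperature[rural])

# Toy example: a 3 x 3 region with a warmer urban core
t = np.array([[22.0, 22.5, 22.1],
              [22.4, 25.0, 24.6],
              [22.2, 24.8, 22.3]])
isa = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.60, 0.55],
                [0.05, 0.50, 0.10]])
print(round(uhi_difference(t, isa), 2))   # positive value indicates a heat island
```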
Multivariate Spline Algorithms for CAGD
NASA Technical Reports Server (NTRS)
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project onto an irregular grid, while the edges of B project onto the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines with s = 1, whereas splines over a regular triangular grid are box splines with s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can be easily subdivided into smaller simplices or boxes. The first fact leads to a geometric derivation of Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.
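For the univariate (s = 1) case, the recursion alluded to above reduces to the familiar Cox-de Boor formula; the following Python sketch shows that recursion bottoming out at order-1 piecewise constants, with an illustrative knot vector.

```python
# Minimal sketch of the Cox-de Boor recursion: a B-spline of order k is an
# x-dependent combination of B-splines of order k - 1, terminating at order-1
# piecewise constants. Knot vector and evaluation points are illustrative.
import numpy as np

def bspline_basis(i, k, knots, x):
    """Value of the i-th B-spline of order k (degree k - 1) at x."""
    if k == 1:                                        # piecewise constant
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left_den = knots[i + k - 1] - knots[i]
    right_den = knots[i + k] - knots[i + 1]
    left = 0.0 if left_den == 0 else \
        (x - knots[i]) / left_den * bspline_basis(i, k - 1, knots, x)
    right = 0.0 if right_den == 0 else \
        (knots[i + k] - x) / right_den * bspline_basis(i + 1, k - 1, knots, x)
    return left + right

# Example: cubic (order 4) B-spline on a regular (uniform) knot grid
knots = [0, 1, 2, 3, 4]
values = [bspline_basis(0, 4, knots, x) for x in np.linspace(0.0, 4.0, 9)]
```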
NASA Technical Reports Server (NTRS)
Lambert, Winifred; Short, David; Wolkmer, Matthew; Sharp, David; Spratt, Scott
2006-01-01
Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml) . Given the hazardous nature of lightning in East Central Florida, especially during the warm season months of May September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to improve consistency between forecasters while allowing them to focus on the mesoscale detail of the forecast, ultimately benefiting the end-users of the product. Several studies took place at the Florida State University (FSU) and NWS Tallahassee (TAE) in which they created daily flow regimes using Florida 1200 UTC synoptic soundings and CG strike densities, or number of strikes per specified area. The soundings used to determine the flow regimes were taken at Miami (MIA), Tampa (TBW), and Jacksonville (JAX), FL, and the lightning data for the strike densities came from the National Lightning Detection Network (NLDN). The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at FSU and NWS TAE provided this data and supporting software for the work performed by the AMU.
NASA Technical Reports Server (NTRS)
Lambert, Winifred; Short, David; Volkmer, Matthew; Sharp, David; Spratt, Scott
2007-01-01
Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in East Central Florida, especially during the warm season months of May through September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Until recently, the forecasters created each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent was to improve consistency between forecasters while allowing them to focus on the mesoscale detail of the forecast. Several studies took place at Florida State University (FSU) and NWS Tallahassee (TAE) in which daily flow regimes were created using Florida 1200 UTC synoptic soundings and CG strike densities, or number of strikes per specified area. The soundings used to determine the flow regimes were taken at Miami (MIA), Tampa (TBW), and Jacksonville (JAX), FL, and the lightning data for the strike densities came from the National Lightning Detection Network (NLDN). The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at FSU and NWS TAE provided this data and supporting software for the work performed by the AMU.
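A minimal Python sketch of building a gridded strike-density climatology from point strike locations is given below; the projected coordinates, domain bounds and cell size are illustrative assumptions, not the AMU processing.

```python
# Minimal sketch of turning NLDN-style cloud-to-ground strike locations into a
# gridded strike count on a regular 2.5 km grid. Coordinates are treated as
# already projected to kilometers; bounds and toy data are illustrative.
import numpy as np

def strike_counts(x_km, y_km, x_edges, y_edges):
    """Strikes per grid cell (divide by cell area and by years for a density)."""
    counts, _, _ = np.histogram2d(x_km, y_km, bins=[x_edges, y_edges])
    return counts

x_edges = np.arange(0.0, 500.0 + 2.5, 2.5)        # 2.5 km cells in x
y_edges = np.arange(0.0, 700.0 + 2.5, 2.5)        # 2.5 km cells in y
rng = np.random.default_rng(0)
counts = strike_counts(rng.uniform(0, 500, 10_000),
                       rng.uniform(0, 700, 10_000), x_edges, y_edges)
density = counts / (2.5 * 2.5)                    # strikes per km^2
```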
Sensitivities of Modeled Tropical Cyclones to Surface Friction and the Coriolis Parameter
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Tao, Wei-Kuo; Lau, William K. M. (Technical Monitor)
2002-01-01
In this investigation, the sensitivities of a 2-D tropical cyclone (TC) model to the surface frictional coefficient and the Coriolis parameter are studied and their implications are discussed. The model used is an axisymmetric version of the latest Goddard cloud ensemble model. The model has a stretched vertical grid with 33 levels, with grid spacing varying from 30 m near the bottom to 1140 m near the top. The vertical domain is about 21 km. The horizontal domain covers a radius of 962 km (770 grid points) with a grid size of 1.25 km. The time step is 10 seconds. An open lateral boundary condition is used. The sea surface temperature is specified at 29 C. Unless specified otherwise, the Coriolis parameter is set at its value at 15 deg N. Newtonian cooling is used with a time scale of 12 hours. The reference vertical temperature profile used in the Newtonian cooling is that of Jordan. The Newtonian cooling models not only the effect of radiative processes but also the effect of processes with scales larger than that of the TC. Our experiments showed that if the Newtonian cooling is replaced by a radiation package, the simulated TC is much weaker. The initial condition has a temperature uniform in the radial direction, and its vertical profile is that of Jordan. The initial winds are a weak Rankine vortex in the tangential wind superimposed on a resting atmosphere. The initial sea level pressure is set at 1015 hPa everywhere. Since there is no surface pressure perturbation, the initial condition is not in gradient balance. This initial condition is enough to lead to cyclogenesis, but the initial stage (say, the first 24 hrs) is not considered to resemble anything observed. The control experiment reaches quasi-equilibrium after about 10 days with an eye wall extending from 15 to 25 km radius, which is reasonable compared with observations. The maximum surface wind of more than 70 m/s is located at about 18 km radius. The minimum sea level pressure on day 10 is about 886 hPa. Thus the overall simulation is considered successful and the model is considered adequate for our investigation.
NASA Technical Reports Server (NTRS)
O'Neill, P.; Podest, E.
2011-01-01
The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors, including its availability and ease of use, its inherent error and resulting impact on the overall soil moisture or freeze/thaw retrieval accuracy, and its compatibility with similar choices made by the SMOS mission. All decisions regarding SMAP ancillary data sources would be fully documented by the SMAP Project and made available to the user community.
Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre
2013-01-01
Mapping species spatial distributions using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network with regard to sampling grid size and the spatial distribution of host trees with regard to the road network, may also be determinant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rai, Raj K.; Berg, Larry K.; Pekour, Mikhail
The assumption of sub-grid scale (SGS) horizontal homogeneity within a model grid cell, which forms the basis of SGS turbulence closures used by mesoscale models, becomes increasingly tenuous as grid spacing is reduced to a few kilometers or less, such as in many emerging high-resolution applications. Herein, we use the turbulence kinetic energy (TKE) budget equation to study the spatio-temporal variability in two types of terrain: complex (Columbia Basin Wind Energy Study [CBWES] site, north-eastern Oregon) and flat (Scaled Wind Farm Technologies [SWiFT] site, west Texas), using the Weather Research and Forecasting (WRF) model. In each case six nested domains (three domains each for mesoscale and large-eddy simulation [LES]) are used to downscale the horizontal grid spacing from 10 km to 10 m within the WRF model framework. The model output was used to calculate the values of the TKE budget terms in vertical and horizontal planes as well as the averages of grid cells contained in the four quadrants (a quarter area) of the LES domain. The budget terms calculated along the planes and the mean profiles of the budget terms show larger spatial variability at the CBWES site than at the SWiFT site. The contribution of the horizontal derivative of the shear production term to the total shear production was found to be 45% and 15% at the CBWES and SWiFT sites, respectively, indicating that the horizontal derivatives applied in the budget equation should not be ignored in mesoscale model parameterizations, especially for cases with complex terrain at scales of <10 km.
Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre
2013-01-01
Mapping species spatial distributions using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network with regard to sampling grid size and the spatial distribution of host trees with regard to the road network, may also be determinant. PMID:24130675
Management decision making for fisher populations informed by occupancy modeling
Fuller, Angela K.; Linden, Daniel W.; Royle, J. Andrew
2016-01-01
Harvest data are often used by wildlife managers when setting harvest regulations for species because the data are regularly collected and do not require implementation of logistically and financially challenging studies to obtain the data. However, when harvest data are not available because an area had not previously supported a harvest season, alternative approaches are required to help inform management decision making. When distribution or density data are required across large areas, occupancy modeling is a useful approach, and under certain conditions, can be used as a surrogate for density. We collaborated with the New York State Department of Environmental Conservation (NYSDEC) to conduct a camera trapping study across a 70,096-km2 region of southern New York in areas that were currently open to fisher (Pekania [Martes] pennanti) harvest and those that had been closed to harvest for approximately 65 years. We used detection–nondetection data at 826 sites to model occupancy as a function of site-level landscape characteristics while accounting for sampling variation. Fisher occupancy was influenced positively by the proportion of conifer and mixed-wood forest within a 15-km2 grid cell and negatively associated with road density and the proportion of agriculture. Model-averaged predictions indicated high occupancy probabilities (>0.90) when road densities were low (<1 km/km2) and coniferous and mixed forest proportions were high (>0.50). Predicted occupancy ranged 0.41–0.67 in wildlife management units (WMUs) currently open to trapping, which could be used to guide a minimum occupancy threshold for opening new areas to trapping seasons. There were 5 WMUs that had been closed to trapping but had an average predicted occupancy of 0.52 (0.07 SE), and above the threshold of 0.41. These areas are currently under consideration by NYSDEC for opening a conservative harvest season. We demonstrate the use of occupancy modeling as an aid to management decision making when harvest-related data are unavailable and when budgetary constraints do not allow for capture–recapture studies to directly estimate density.
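A minimal Python sketch of turning a fitted occupancy model into per-cell predictions and applying an occupancy threshold is given below; the logit-linear form mirrors the covariates discussed above, but the coefficient values are purely illustrative, not the published estimates.

```python
# Minimal sketch of predicting occupancy probability (psi) for 15 km^2 grid
# cells from a logit-linear model of landscape covariates, then applying a
# decision threshold. Coefficients and inputs are illustrative only.
import numpy as np

def predict_occupancy(conifer_mixed, road_density, agriculture,
                      beta=(-0.5, 2.0, -1.2, -1.5)):
    """Return occupancy probability psi for each grid cell."""
    b0, b_forest, b_road, b_ag = beta
    logit = (b0 + b_forest * conifer_mixed
             + b_road * road_density + b_ag * agriculture)
    return 1.0 / (1.0 + np.exp(-logit))

# Example: flag cells exceeding a 0.41 occupancy threshold
psi = predict_occupancy(np.array([0.6, 0.2]),    # conifer/mixed-forest proportion
                        np.array([0.5, 2.0]),    # road density (km/km^2)
                        np.array([0.1, 0.4]))    # agriculture proportion
open_candidates = psi >= 0.41
```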
Modeling surface trapped river plumes: A sensitivity study
Hyatt, Jason; Signell, Richard P.
2000-01-01
To better understand the requirements for realistic regional simulation of river plumes in the Gulf of Maine, we test the sensitivity of the Blumberg-Mellor hydrodynamic model to the choice of advection scheme, grid resolution, and wind, using idealized geometry and forcing. The test case discharges 1500 m3/s of fresh water into a uniform 32 psu ocean along a straight shelf at 43° north. The water depth is 15 m at the coast and increases linearly to 190 m at a distance 100 km offshore. Constant discharge runs are conducted in the presence of an ambient alongshore current and with and without periodic alongshore wind forcing. Advection methods tested are CENTRAL, UPWIND, the standard Smolarkiewicz MPDATA and a recursive MPDATA scheme. For the no-wind runs, the UPWIND advection scheme performs poorly for grid resolutions typically used in regional simulations (grid spacing of 1-2 km, comparable to or slightly less than the internal Rossby radius, and vertical resolution of 10% of the water column), damping out much of the plume structure. The CENTRAL difference scheme also has problems when wind forcing is neglected, and generates too much structure, shedding eddies of numerical origin. When a weak 5 cm/s ambient current is present in the no-wind case, both the CENTRAL and standard MPDATA schemes produce a false fresh- and dense-water source just upstream of the river inflow due to a standing two-grid-length oscillation in the salinity field. The recursive MPDATA scheme completely eliminates the false dense-water source, and produces results closest to the grid-converged solution. The results are shown to be very sensitive to vertical grid resolution, and the presence of wind forcing dramatically changes the nature of the plume simulations. The implications of these idealized tests for realistic simulations are discussed, as well as ramifications for previous studies of idealized plume models.
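To make the scheme comparison above concrete, here is a toy one-dimensional linear-advection experiment (not the Blumberg-Mellor model or its grids) illustrating the generic behaviour discussed: a first-order upwind scheme smears a sharp salinity front through numerical diffusion, while a centred-in-space scheme keeps the front sharp but produces dispersive over- and undershoots.

```python
import numpy as np

# Toy 1-D advection of a salinity front at Courant number 0.5, with periodic boundaries.
nx, cr, nsteps = 200, 0.5, 200
s0 = np.where(np.arange(nx) < nx // 2, 0.0, 32.0)   # fresh water meets a 32 psu ocean

# First-order upwind (forward in time, backward in space): stable but diffusive.
up = s0.copy()
for _ in range(nsteps):
    up = up - cr * (up - np.roll(up, 1))

# Centred in space, leapfrog in time (first step started with upwind): stable but dispersive.
prev = s0.copy()
curr = prev - cr * (prev - np.roll(prev, 1))
for _ in range(nsteps - 1):
    prev, curr = curr, prev - cr * (np.roll(curr, -1) - np.roll(curr, 1))

print("upwind: transitional (smeared) cells:", int(np.sum((up > 0.5) & (up < 31.5))))
print("centred: spurious over/undershoots present:",
      bool(curr.min() < -0.5 or curr.max() > 32.5))
```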
NASA Astrophysics Data System (ADS)
Fernández, Alfonso; Najafi, Mohammad Reza; Durand, Michael; Mark, Bryan G.; Moritz, Mark; Jung, Hahn Chul; Neal, Jeffrey; Shastry, Apoorva; Laborde, Sarah; Phang, Sui Chian; Hamilton, Ian M.; Xiao, Ningchuan
2016-08-01
Recent innovations in hydraulic modeling have enabled global simulation of rivers, including simulation of their coupled wetlands and floodplains. Accurate simulations of floodplains using these approaches may imply tremendous advances in global hydrologic studies and in biogeochemical cycling. One such innovation is to explicitly treat sub-grid channels within two-dimensional models, given only remotely sensed data in areas with limited data availability. However, predicting inundated area in floodplains using a sub-grid model has not been rigorously validated. In this study, we applied the LISFLOOD-FP hydraulic model using a sub-grid channel parameterization to simulate inundation dynamics on the Logone River floodplain, in northern Cameroon, from 2001 to 2007. Our goal was to determine whether floodplain dynamics could be simulated with sufficient accuracy to understand human and natural contributions to current and future inundation patterns. Model inputs in this data-sparse region include in situ river discharge, satellite-derived rainfall, and the Shuttle Radar Topography Mission (SRTM) floodplain elevation. We found that the model accurately simulated total floodplain inundation, with a Pearson correlation coefficient greater than 0.9, and RMSE less than 700 km2, compared to peak inundation greater than 6000 km2. Predicted discharge downstream of the floodplain matched measurements (Nash-Sutcliffe efficiency of 0.81), and indicated that net flow from the channel to the floodplain was modeled accurately. However, the spatial pattern of inundation was not well simulated, apparently due to uncertainties in SRTM elevations. We evaluated model results at 250, 500, and 1000 m spatial resolutions, and found that results are insensitive to spatial resolution. We also compared the model output against results from a run of LISFLOOD-FP in which the sub-grid channel parameterization was disabled, finding that the sub-grid parameterization simulated more realistic dynamics. These results suggest that analysis of global inundation is feasible using a sub-grid model, but that spatial patterns at sub-kilometer resolutions are not yet adequately predicted.
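The skill scores quoted above (Pearson correlation, RMSE, and Nash-Sutcliffe efficiency) have standard definitions; the short sketch below computes them for hypothetical simulated and observed series, purely to make the metrics explicit. The numbers are placeholders, not the Logone results.

```python
import numpy as np

def pearson_r(obs, sim):
    return np.corrcoef(obs, sim)[0, 1]

def rmse(obs, sim):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Hypothetical series: observed vs simulated inundated area (km^2) over six time steps.
obs = np.array([1200., 2500., 5800., 6100., 4300., 2200.])
sim = np.array([1000., 2700., 5500., 6400., 4600., 1900.])
print("r =", round(pearson_r(obs, sim), 3),
      "RMSE =", round(rmse(obs, sim), 1), "km^2",
      "NSE =", round(nash_sutcliffe(obs, sim), 3))
```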
A gridded hourly rainfall dataset for the UK applied to a national physically-based modelling system
NASA Astrophysics Data System (ADS)
Lewis, Elizabeth; Blenkinsop, Stephen; Quinn, Niall; Freer, Jim; Coxon, Gemma; Woods, Ross; Bates, Paul; Fowler, Hayley
2016-04-01
An hourly gridded rainfall product has great potential for use in many hydrological applications that require high temporal resolution meteorological data. One important example of this is flood risk management, with flooding in the UK highly dependent on sub-daily rainfall intensities amongst other factors. Knowledge of sub-daily rainfall intensities is therefore critical to designing hydraulic structures or flood defences to appropriate levels of service. Sub-daily rainfall rates are also essential inputs for flood forecasting, allowing for estimates of peak flows and stage for flood warning and response. In addition, an hourly gridded rainfall dataset has significant potential for practical applications such as better representation of extremes and pluvial flash flooding, validation of high resolution climate models and improving the representation of sub-daily rainfall in weather generators. A new 1km gridded hourly rainfall dataset for the UK has been created by disaggregating the daily Gridded Estimates of Areal Rainfall (CEH-GEAR) dataset using comprehensively quality-controlled hourly rain gauge data from over 1300 observation stations across the country. Quality control measures include identification of frequent tips, daily accumulations and dry spells, comparison of daily totals against the CEH-GEAR daily dataset, and nearest neighbour checks. The quality control procedure was validated against historic extreme rainfall events and the UKCP09 5km daily rainfall dataset. General use of the dataset has been demonstrated by testing the sensitivity of a physically-based hydrological modelling system for Great Britain to the distribution and rates of rainfall and potential evapotranspiration. Of the sensitivity tests undertaken, the largest improvements in model performance were seen when an hourly gridded rainfall dataset was combined with potential evapotranspiration disaggregated to hourly intervals, with 61% of catchments showing an increase in NSE between observed and simulated streamflows as a result of more realistic sub-daily meteorological forcing.
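The core disaggregation idea described above can be sketched very simply: distribute each grid cell's daily total over 24 hours in proportion to the hourly pattern recorded at an appropriate (e.g. nearest quality-controlled) hourly gauge. The function below is a minimal illustration under that assumption, not the actual CEH-GEAR hourly production code.

```python
import numpy as np

def disaggregate_daily(daily_total_mm, gauge_hourly_mm):
    """Split one grid cell's daily rainfall total (mm) into 24 hourly values using the
    temporal pattern of a nearby hourly gauge. If the gauge recorded no rain, the daily
    total is spread uniformly. Illustrative only, not the CEH-GEAR procedure."""
    gauge_hourly_mm = np.asarray(gauge_hourly_mm, dtype=float)
    gauge_day_sum = gauge_hourly_mm.sum()
    if gauge_day_sum > 0:
        fractions = gauge_hourly_mm / gauge_day_sum
    else:
        fractions = np.full(24, 1.0 / 24.0)
    return daily_total_mm * fractions

# Example: an 18 mm grid-cell day, with the nearest gauge showing a short evening burst.
gauge = np.zeros(24)
gauge[17:20] = [2.0, 6.0, 1.0]
hourly = disaggregate_daily(18.0, gauge)
print(hourly.sum(), hourly[18])   # the 18 mm total is preserved; 12 mm falls in the peak hour
```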
NASA Technical Reports Server (NTRS)
Al-Hamdan, Mohammad; Crosson, William; Economou, Sigrid; Estes, Maurice, Jr.; Estes, Sue; Hemmings, Sarah; Kent, Shia; Quattrochi, Dale; Wade, Gina; McClure, Leslie
2011-01-01
NASA Marshall Space Flight Center is collaborating with the University of Alabama at Birmingham (UAB) School of Public Health and the Centers for Disease Control and Prevention (CDC) National Center for Public Health Informatics to address issues of environmental health and enhance public health decision making by utilizing NASA remotely sensed data and products. The objectives of this study are to develop high-quality spatial data sets of environmental variables, link these with public health data from a national cohort study, and deliver the linked data sets and associated analyses to local, state and federal end-user groups. Three daily environmental data sets will be developed for the conterminous U.S. on different spatial resolutions for the period 2003-2008: (1) spatial surfaces of estimated fine particulate matter (PM2.5) exposures on a 10-km grid utilizing the US Environmental Protection Agency (EPA) ground observations and NASA's MODerate-resolution Imaging Spectroradiometer (MODIS) data; (2) a 1-km grid of Land Surface Temperature (LST) using MODIS data; and (3) a 12-km grid of daily Solar Insolation (SI) using the North American Land Data Assimilation System (NLDAS) forcing data. These environmental data sets will be linked with public health data from the UAB REasons for Geographic And Racial Differences in Stroke (REGARDS) national cohort study to determine whether exposures to these environmental risk factors are related to cognitive decline and other health outcomes. These environmental datasets and public health linkage analyses will be disseminated to end-users for decision making through the CDC Wide-ranging Online Data for Epidemiologic Research (WONDER) system.
NASA Astrophysics Data System (ADS)
Oo, Sungmin; Foelsche, Ulrich; Kirchengast, Gottfried; Fuchsberger, Jürgen
2016-04-01
The research-level products of the Integrated Multi-Satellite Retrievals for Global Precipitation Measurement (IMERG "Final" run datasets) were compared with rainfall measurements from the WegenerNet high-density network as part of ground validation (GV) projects of GPM missions. The WegenerNet network comprises 151 ground-level weather stations in an area of 15 km × 20 km in south-eastern Austria (Feldbach region, ~46.93° N, ~15.90° E), designed to serve as a long-term monitoring and validation facility for weather and climate research and applications. While IMERG provides rainfall estimates every half hour at 0.1° resolution, the WegenerNet network measures rainfall every 5 minutes at around 2 km2 resolution and produces 200 m × 200 m gridded datasets. The study was conducted on the domain of the WegenerNet network; eight IMERG grid cells overlap the network, two of which are entirely covered by the WegenerNet (40 and 39 stations, respectively). We investigated data from April to September of the years 2014 and 2015, i.e., the first two years after the launch of the GPM Core Observatory. Since the network has the flexibility to work at various spatial and temporal scales, the comparison could be conducted on an average-points-to-pixel basis at both sub-daily and daily timescales. This presentation will summarize the first results of the comparison and future plans to explore the characteristics of errors in the IMERG datasets.
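A minimal sketch of the "average-points-to-pixel" comparison mentioned above: ground stations falling inside one 0.1° IMERG pixel are averaged and then compared with the satellite value for that pixel. All arrays below are hypothetical placeholders for illustration only.

```python
import numpy as np

def pixel_average(station_rain_mm, in_pixel):
    """Average rainfall over the ground stations lying inside one satellite pixel.
    station_rain_mm has shape (n_stations, n_times); in_pixel is a boolean mask."""
    return station_rain_mm[in_pixel].mean(axis=0)

# Hypothetical daily rainfall (mm) from five stations; four lie inside the IMERG pixel.
rain = np.array([[0.0, 12.4, 3.1],
                 [0.2, 10.8, 2.7],
                 [0.0, 14.0, 3.5],
                 [0.1, 11.6, 2.9],
                 [5.0,  2.0, 0.0]])
in_pixel = np.array([True, True, True, True, False])
ground_ref = pixel_average(rain, in_pixel)
imerg_vals = np.array([0.1, 9.5, 4.0])   # hypothetical IMERG daily values for the pixel
print(ground_ref, np.corrcoef(ground_ref, imerg_vals)[0, 1])
```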
An equivalent layer magnetization model for the United States derived from MAGSAT data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Galliher, S. C. (Principal Investigator)
1982-01-01
Long-wavelength anomalies in the total magnetic field measured by MAGSAT over the United States and adjacent areas are inverted to an equivalent-layer crustal magnetization distribution. The model is based on an equal-area dipole grid at the Earth's surface. The model resolution having physical significance is about 220 km for MAGSAT data in the elevation range 300-500 km. The magnetization contours correlate well with large-scale tectonic provinces.
A study of overflow simulations using MPAS-Ocean: Vertical grids, resolution, and viscosity
NASA Astrophysics Data System (ADS)
Reckinger, Shanon M.; Petersen, Mark R.; Reckinger, Scott J.
2015-12-01
MPAS-Ocean is used to simulate an idealized, density-driven overflow using the Dynamics of Overflow Mixing and Entrainment (DOME) setup. Numerical simulations are carried out using three of the vertical coordinate types available in MPAS-Ocean, including z-star with partial bottom cells, z-star with full cells, and sigma coordinates. The results are first benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model, which were used to set the base case for this work. A full parameter study is presented that examines how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Lastly, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that travel farther downslope. Sigma coordinates are less sensitive to changes in resolution but just as sensitive to vertical viscosity as z-type coordinates.
NASA Technical Reports Server (NTRS)
Barker, Howard W.; Kato, Serji; Wehr, T.
2012-01-01
The main point of this study was to use realistic representations of cloudy atmospheres to assess errors in solar flux estimates associated with 1D radiative transfer models. A scene construction algorithm, developed for the EarthCARE satellite mission, was applied to CloudSat, CALIPSO, and MODIS satellite data, thus producing 3D cloudy atmospheres measuring 60 km wide by 13,000 km long at 1 km grid-spacing. Broadband solar fluxes and radiances for each (1 km)2 column were then produced by a Monte Carlo photon transfer model run in both full 3D and independent column approximation mode (i.e., a 1D model).
Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu
2017-07-03
The grid strapdown inertial navigation system (SINS) used in polar navigation exhibits the same three kinds of periodic oscillation error as a conventional SINS based on a geographic coordinate system. For ships that have external reference information available to reset the system regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. This paper establishes a Kalman filter based on the grid SINS error model applicable to ships. The errors of the grid-level attitude angles can be accurately estimated even when the external velocity contains a constant error, and correcting these attitude errors through feedback can effectively damp the Schuler periodic oscillation. The simulation results show that, with the aid of the external reference velocity, the proposed external level damping algorithm based on the Kalman filter can suppress the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on the damping network, the algorithm proposed in this paper reduces overshoot errors when the grid SINS switches from the undamped to the damped state, which effectively improves the navigation accuracy of the system.
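For readers unfamiliar with the filtering machinery referred to above, the sketch below is a generic linear Kalman filter predict/update step in which an external reference velocity serves as the measurement. The two-state toy model (velocity error and level attitude error) and all matrices are illustrative assumptions, not the grid-SINS error-state model of the paper.

```python
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    """One generic Kalman filter predict/update cycle."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the external reference velocity measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2-state example: [velocity error, level attitude error], velocity error observed.
F = np.array([[1.0, 9.8], [0.0, 1.0]])   # hypothetical discrete-time error dynamics
Q = np.diag([1e-4, 1e-8])
H = np.array([[1.0, 0.0]])
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, F, Q, H, R, z=np.array([0.3]))
print(x)
```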
Atmospheric Science Data Center
2018-06-28
... improving forecasting of near surface weather. DASF provides information critical to accounting for structural contributions to measurements ... derived products. We also provide two ancillary science data products. They are 10 km SIN Grid Land Cover Type and ...
An 11-Year Climatology of Storms in Which Most Cloud-to-Ground Flashes Lower Positive Charge
NASA Astrophysics Data System (ADS)
MacGorman, D. R.; Eddy, A.; Williams, E. R.; Calhoun, K. M.
2017-12-01
Previous studies have shown that storms which produce frequent cloud-to-ground (CG) lightning dominated by flashes lowering positive charge to ground (+CG flashes) tend to have a so called "inverted" vertical distribution of charge. Such storms have implications for our understanding of electrification processes. We have analyzed eleven years of National Lightning Detection Network data to count +CG and -CG flashes having peak currents ≥15 kA in grid cells with dimensions of 15 km x 15 km x 15 min, with overlapping grid boxes every 5 km along both x and y over the contiguous United States and grids every 5 min in time. These dimensions were chosen because 15 km corresponds roughly to the horizontal size of typical storm cells and 15 min is roughly half the typical duration of a cell. To focus on storms dominated by +CG flashes, we identified all grid cells satisfying one of four sets of thresholds: cells in which +CG flashes for 15 min constitute ≥80%, 90%, or 100% of ≥10 CG flashes or 100% of ≥20 CG flashes. These percentages are larger than those used in most previous studies of +CG flashes. Our primary goal is to investigate the environmental and storm characteristics conducive to +CG flashes and "inverted-polarity" charge distributions, but here we concentrate on the interannual and seasonal distributions of storms satisfying the above thresholds and examine also their relationship to severe weather. As in previous climatological studies of geographic variations in the +CG fraction of total CG flashes, most storms satisfying our thresholds were in a swath stretching from far eastern Colorado and western Kansas roughly northward through Nebraska, the Dakotas, and Minnesota. This region overlaps much of the region in which radar inferred that hail larger than 2.9 cm in diameter most often occurs, but is shifted westward and northward from maxima of observer reports of large-hail occurrence. Although the relationship with radar-inferred large-hail frequency suggests a common dependence on some storm characteristics, storms satisfying our thresholds for +CG flashes also occurred, although less frequently, in regions in which few storms were inferred to have produced large hail, such as east of mountain ranges in northwestern states, so relationships with severe weather will need to be examined on a storm-by-storm basis.
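The cell-classification step described above can be sketched as a simple thresholding function on flash counts; the function below lists the four +CG dominance criteria as stated in the abstract, applied to counts already restricted to peak currents of at least 15 kA. It is an illustration of the bookkeeping only, not the authors' processing code.

```python
def cg_dominance_categories(n_pos, n_total):
    """Which of the four +CG dominance criteria a 15 km x 15 km x 15 min cell meets.
    Criteria as described above; counts already restricted to peak current >= 15 kA."""
    frac = n_pos / n_total if n_total else 0.0
    return {
        ">=80% of >=10 CG": n_total >= 10 and frac >= 0.80,
        ">=90% of >=10 CG": n_total >= 10 and frac >= 0.90,
        "100% of >=10 CG":  n_total >= 10 and frac == 1.0,
        "100% of >=20 CG":  n_total >= 20 and frac == 1.0,
    }

print(cg_dominance_categories(9, 10))    # meets the 80% and 90% criteria
print(cg_dominance_categories(20, 20))   # meets all four criteria
```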
Application of OMI NO2 for Regional Air Quality Model Evaluation
NASA Astrophysics Data System (ADS)
Holloway, T.; Bickford, E.; Oberman, J.; Scotty, E.; Clifton, O. E.
2012-12-01
To support the application of satellite data for air quality analysis, we examine how column NO2 measurements from the Ozone Monitoring Instrument (OMI) aboard the NASA Aura satellite relate to ground-based and model estimates of NO2 and related species. Daily variability, monthly mean values, and spatial gradients in OMI NO2 from the Netherlands Royal Meteorological Institute (KNMI) are compared to ground-based measurements of NO2 from the EPA Air Quality System (AQS) database. Satellite data are gridded to two resolutions typical of regional air quality models - 36 km x 36 km over the continental U.S., and 12 km x 12 km over the Upper Midwestern U.S. Gridding is performed using the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS), publicly available software that supports gridding of satellite data to model grids. Comparing daily OMI retrievals (13:45 daytime local overpass time) with ground-based measurements (13:00), we find January and July 2007 correlation coefficients (r-values) generally positive, with values higher in the winter (January) than summer (July) for most sites. Incidences of anti-correlation or low correlation are evaluated with model simulations from the U.S. EPA Community Multiscale Air Quality Model version 4.7 (CMAQ). OMI NO2 is also used to evaluate CMAQ output, and to compare performance metrics for CMAQ relative to AQS measurements. We compare simulated NO2 across both the U.S. and Midwest study domains with both OMI NO2 (total column CMAQ values, weighted with the averaging kernel) and with ground-based observations (lowest model layer CMAQ values). The 2007 CMAQ simulations employ emissions from the Lake Michigan Air Directors Consortium (LADCO) and meteorology from the Weather Research and Forecasting (WRF) model. Over most of the U.S., CMAQ is too high in January relative to OMI NO2, but too low in January relative to AQS NO2. In contrast, CMAQ is too low in July relative to OMI NO2, but too high relative to AQS NO2. These biases are used to evaluate emission sources (and the importance of missing sources, such as lightning NOx), and to explain model performance for related secondary species, especially nitrate aerosol and ozone.
NASA Astrophysics Data System (ADS)
Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas
2017-06-01
The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle, and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, simplicity of approach, and divisibility of tasks open up the possibility for citizen science, in which crowdsourcing could be applied to large grid-based mapping areas.
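The "tick box" bookkeeping described above reduces to one presence/absence raster per landform over the grid of squares. The snippet below is a minimal sketch of that data structure; the landform names, grid dimensions, and example call are illustrative, not taken from the project's actual mapping files.

```python
import numpy as np

# One boolean raster per landform type over a grid of (e.g.) 20 km squares.
landforms = ["viscous_flow_features", "latitude_dependent_mantle", "polygonised_ground"]
n_rows, n_cols = 50, 80                       # e.g. a 1000 km x 1600 km study area
grids = {lf: np.zeros((n_rows, n_cols), dtype=bool) for lf in landforms}

def record_presence(landform, row, col):
    """Tick the box: the mapper saw this landform somewhere in grid square (row, col)."""
    grids[landform][row, col] = True

record_presence("polygonised_ground", 12, 40)
# Each boolean grid is effectively a coarse raster of landform distribution:
coverage = {lf: g.mean() for lf, g in grids.items()}   # fraction of squares with presence
print(coverage["polygonised_ground"])
```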
NASA Astrophysics Data System (ADS)
Kawase, H.; Sasaki, H.; Murata, A.; Nosaka, M.; Ito, R.; Dairaku, K.; Sasai, T.; Yamazaki, T.; Sugimoto, S.; Watanabe, S.; Fujita, M.; Kawazoe, S.; Okada, Y.; Ishii, M.; Mizuta, R.; Takayabu, I.
2017-12-01
We performed large-ensemble climate experiments to investigate future changes in extreme weather events using the Meteorological Research Institute-Atmospheric General Circulation Model (MRI-AGCM) with about 60 km grid spacing and the Non-Hydrostatic Regional Climate Model with 20 km grid spacing (NHRCM20). The global climate simulations are prescribed by past and future sea surface temperatures (SST). Two future climate simulations are conducted in which the global-mean surface air temperature rises 2 K and 4 K above the pre-industrial period. Non-warming simulations are also conducted with MRI-AGCM and NHRCM20. We focus on future changes in snowfall in Japan. In winter, the Sea of Japan coast experiences heavy snowfall due to the East Asian winter monsoon. The cold and dry air from the continent obtains abundant moisture from the warm Sea of Japan, causing enormous amounts of snowfall, especially in the mountainous areas. NHRCM20 showed that winter total snowfall decreases in most parts of Japan. In contrast, extremely heavy daily snowfall could increase in mountainous areas of central and northern Japan when a strong cold-air outbreak occurs and the convergence zone appears over the Sea of Japan. The warmer Sea of Japan in the future climate could supply more moisture than in the present climate, indicating that cumulus convection could be enhanced around the convergence zone over the Sea of Japan. However, a horizontal resolution of 20 km is not enough to resolve Japan's complex topography. Therefore, dynamical downscaling with 5 km grid spacing (NHRCM05) is also conducted using NHRCM20. NHRCM05 does a better job of simulating the regional boundary of snowfall and shows more detailed changes in future snowfall characteristics. The future changes in total and extremely heavy snowfall depend on the regions, elevations, and synoptic conditions around Japan.
NASA Astrophysics Data System (ADS)
Alonso-González, Esteban; López-Moreno, J. Ignacio; Gascoin, Simon; García-Valdecasas Ojeda, Matilde; Sanmiguel-Vallelado, Alba; Navarro-Serrano, Francisco; Revuelto, Jesús; Ceballos, Antonio; Jesús Esteban-Parra, María; Essery, Richard
2018-02-01
We present snow observations and a validated daily gridded snowpack dataset that was simulated from downscaled reanalysis data for the Iberian Peninsula. The Iberian Peninsula has long-lasting seasonal snowpacks in its different mountain ranges, and winter snowfall occurs in most of its area. However, there are only limited direct observations of snow depth (SD) and snow water equivalent (SWE), making it difficult to analyze snow dynamics and the spatiotemporal patterns of snowfall. We used meteorological data from downscaled reanalyses as input of a physically based snow energy balance model to simulate SWE and SD over the Iberian Peninsula from 1980 to 2014. More specifically, the ERA-Interim reanalysis was downscaled to 10 km × 10 km resolution using the Weather Research and Forecasting (WRF) model. The WRF outputs were used directly, or as input to other submodels, to obtain data needed to drive the Factorial Snow Model (FSM). We used lapse rate coefficients and hygrobarometric adjustments to simulate snow series at 100 m elevation bands for each 10 km × 10 km grid cell in the Iberian Peninsula. The snow series were validated using data from the MODIS satellite sensor and ground observations. The overall simulated snow series accurately reproduced the interannual variability of snowpack and the spatial variability of snow accumulation and melting, even in very complex topographic terrains. Thus, the presented dataset may be useful for many applications, including land management, hydrometeorological studies, phenology of flora and fauna, winter tourism, and risk management. The data presented here are freely available for download from Zenodo (https://doi.org/10.5281/zenodo.854618). This paper fully describes the work flow, data validation, uncertainty assessment, and possible applications and limitations of the database.
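The elevation-band adjustment mentioned above can be illustrated with a simple constant-lapse-rate correction of each grid cell's temperature to its 100 m bands; the lapse-rate value and the linear form below are illustrative assumptions, not the exact coefficients used to drive FSM for this dataset.

```python
import numpy as np

def temperature_by_band(t_cell_degC, z_cell_m, band_centres_m, lapse_rate_K_per_km=6.5):
    """Adjust a 10 km x 10 km grid-cell temperature to 100 m elevation bands using a
    constant lapse rate. Illustrative assumption only, not the dataset's exact scheme."""
    dz_km = (np.asarray(band_centres_m) - z_cell_m) / 1000.0
    return t_cell_degC - lapse_rate_K_per_km * dz_km

bands = np.arange(1050, 2060, 100)   # band centres every 100 m from 1050 to 2050 m
print(temperature_by_band(2.0, z_cell_m=1400, band_centres_m=bands))
```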
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-11-01
The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.
A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.
2000-01-01
The development of, and results obtained with, a variable-resolution stretched-grid GCM for the regional climate simulation mode are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested-grid approach introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns, and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides efficient downscaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation when extended to one year. The stretched-grid GCM, developed and tested in a simulation mode, is a viable tool for regional and subregional climate studies and applications.
Seismic Structure of India from Regional Waveform Matching
NASA Astrophysics Data System (ADS)
Gaur, V.; Maggi, A.; Priestley, K.; Rai, S.
2003-12-01
We use a neighborhood adaptive grid search procedure and reflectivity synthetics to model regional-distance (500-2000 km) seismograms recorded in India and to determine the variation in the crust and uppermost mantle structure across the subcontinent. The portions of the regional waveform which are most influenced by the crust and uppermost mantle structure are the 10-100 s period Pnl and fundamental-mode surface waves. We use the adaptive grid search algorithm to match both portions of the seismogram simultaneously. This procedure results in a family of 1-D path-average crust and upper mantle velocity and attenuation models whose propagation characteristics closely match those of the real Earth. Our data set currently consists of ~20 seismograms whose propagation paths are primarily confined to the Ganges Basin in north India and the East Dharwar Craton of south India. The East Dharwar Craton has a simple and uniform structure consisting of a 36 ± 2 km thick two-layer crust and an uppermost mantle with a sub-Moho velocity of 4.5 km/s. The structure of northern India is more complicated, with pronounced low velocities in the upper crustal layer due to the large sediment thicknesses in the Ganges Basin.
Hillslope chemical weathering across Paraná, Brazil: a data mining-GIS hybrid approach
Iwashita, Fabio; Friedel, Michael J.; Filho, Carlos Roberto de Souza; Fraser, Stephen J.
2011-01-01
Self-organizing map (SOM) and geographic information system (GIS) models were used to investigate the nonlinear relationships associated with geochemical weathering processes at local (~100 km2) and regional (~50,000 km2) scales. The data set consisted of 1) 22 B-horizon soil variables: P, C, pH, Al, total acidity, Ca, Mg, K, total cation exchange capacity, sum of exchangeable bases, base saturation, Cu, Zn, Fe, B, S, Mn, gamma spectrometry (total count, potassium, thorium, and uranium) and magnetic susceptibility measures; and 2) six topographic variables: elevation, slope, aspect, hydrological accumulated flux, horizontal curvature and vertical curvature. These were characterized at 304 locations on a quasi-regular grid spaced about 24 km across the state of Paraná. This database was split into two subsets: one for analysis and modeling (274 samples) and the other for validation (30 samples). The self-organizing map and clustering methods were used to identify and classify the relations among solid-phase chemical element concentrations and GIS-derived topographic models. The correlation between elevation and k-means clusters reflected the relative position inside hydrologic macro-basins, which was interpreted as an expression of the weathering process reaching a steady-state condition at the regional scale. Locally, the chemical element concentrations were related to the vertical curvature representing concave–convex hillslope features, where concave hillslopes with convergent flux tend to be reducing environments and convex hillslopes with divergent flux, oxidizing environments. Stochastic cross-validation demonstrated that the SOM produced unbiased classifications and quantified the relative amount of uncertainty in predictions. This work strengthens the hypothesis that, at B-horizon steady-state conditions, terrain morphometry was linked with soil geochemical weathering in a two-way dependent process: topographic relief was a factor in environmental geochemistry, while chemical weathering was a factor in terrain feature delineation.
NASA Astrophysics Data System (ADS)
Petersson, Anders; Rodgers, Arthur
2010-05-01
The finite difference method on a uniform Cartesian grid is a highly efficient and easy-to-implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wavelength L and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wavelengths in the Earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wavelengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single-grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to a depth of 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface, to keep the local grid size in approximate parity with the local wavelengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy-conserving coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single-grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which are seldom appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user-provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer-over-half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion degrees of freedom) and 41,000 time steps. WPP is an open-source code that is available under the GNU General Public License.
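The practical consequence of the spacing rule h ≤ L/P quoted above can be made explicit with a small worked example; the wave speeds, frequency, and points-per-wavelength value below are illustrative numbers, not WPP defaults.

```python
# Worked example of the grid-spacing rule h <= L / P, using illustrative numbers:
# a 1 Hz shear wave, P = 8 points per wavelength, and shear speeds of ~500 m/s in
# near-surface sediments vs ~4500 m/s below the Moho.
def max_spacing(v_m_per_s, freq_hz, points_per_wavelength=8):
    wavelength = v_m_per_s / freq_hz            # L = v / f
    return wavelength / points_per_wavelength   # h <= L / P

print(max_spacing(500.0, 1.0))    # ~62.5 m required near the surface
print(max_spacing(4500.0, 1.0))   # ~562.5 m suffices below the Moho (a ~9x coarser grid)
```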
Can fractal objects operate as efficient inline mixers?
NASA Astrophysics Data System (ADS)
Laizet, Sylvain; Vassilicos, John; Turbulence, Mixing; Flow Control Group Team
2011-11-01
Recently, Hurst & Vassilicos (PoF 2007), Seoud & Vassilicos (PoF 2007), and Mazellier & Vassilicos (PoF 2010) used different multiscale grids to generate turbulence in a wind tunnel and have shown that complex multiscale boundary/initial conditions can drastically influence the behaviour of a turbulent flow, but that the detailed specific nature of the multiscale geometry matters too. Multiscale (fractal) objects can be designed to be immersed in any fluid flow where there is a need to control and design the turbulence generated by the object. Different types of multiscale objects can be designed as different types of energy-efficient mixers with varying degrees of high turbulent intensity, small pressure drop, and downstream distance from the grid where the turbulence is most vigorous. Here, we present a 3D DNS study of the stirring and mixing of a passive scalar by turbulence generated with either a fractal square grid or a regular grid in the presence of a mean scalar gradient. The results show that: (1) there is a linear increase of the passive scalar variance for both grids, (2) the passive scalar variance is ten times larger for the fractal grid, (3) the passive scalar flux is constant after the production region for both grids, (4) the passive scalar flux is enhanced by an order of magnitude for the fractal grid. We acknowledge support from EPSRC, UK.
Airborne Grid Sea-Ice Surveys for Comparison with Cryosat-2
NASA Astrophysics Data System (ADS)
Brozena, J. M.; Gardner, J. M.; Liang, R.; Hagen, R. A.; Ball, D.; Newman, T.
2015-12-01
The Naval Research Laboratory is studying the changing Arctic with a focus on ice thickness and distribution variability. The goal is optimization of the computer models used to predict sea-ice changes. An important part of our study is to calibrate/validate CryoSat-2 ice thickness data prior to its incorporation into new ice forecast models. The footprint of the altimeter over sea ice is a significant issue in any attempt to ground-truth the data. Along-track footprints are reduced to ~300 m by SAR processing of the returns. However, the cross-track footprint is determined by the topography of the surface. Further, the actual return is the sum of the returns from individual reflectors within the footprint, making it difficult to interpret the return and optimize the waveform tracker. We therefore collected a series of grids of scanning LiDAR and radar on sub-satellite tracks over sea ice that would extend far enough cross-track to capture the illuminated area. The difficulty in the collection of such grids, which are composed of adjacent overlapping tracks, is ice motion of as much as 300 m over the duration of a single flight track (~20 km) of data collection. With a typical LiDAR swath width of <500 m, near-real-time adjustment of the survey tracks for ice motion is necessary to obtain a coherent data set. This was accomplished by an NRL-devised photogrammetric method of ice velocity determination. Post-processing refinements resulted in typical track-to-track miss-ties of ~1-2 m, much of which could be attributed to ice deformation over the period of the survey. This allows us to reconstruct the ice configuration at the time of the satellite overflight, resulting in a good picture of the surface actually illuminated by the radar. The detailed 2-D LiDAR image is of the snow surface, not the underlying ice presumably illuminated by the radar. Our hope is that the 1-D radar profiles collected along the LiDAR swath centerlines will be sufficient to correct the grid for snow thickness. A total of 15 grids, 5-20 km wide (cross-track) by 10-30 km long (along-track), centered on ice illuminated by CryoSat-2 were collected north of Barrow, AK, over three field seasons from 2013 to 2015. Data from the grids are shown here and are being used to examine the relationship of the tracked satellite waveform data to the actual surface.
NASA Astrophysics Data System (ADS)
Wrona, Elizabeth; Rowlandson, Tracy L.; Nambiar, Manoj; Berg, Aaron A.; Colliander, Andreas; Marsh, Philip
2017-05-01
This study examines the Soil Moisture Active Passive soil moisture product on the Equal Area Scalable Earth-2 (EASE-2) 36 km Global cylindrical and North Polar azimuthal grids relative to two in situ soil moisture monitoring networks that were installed in 2015 and 2016. Results indicate that there is no relationship between the Soil Moisture Active Passive (SMAP) Level-2 passive soil moisture product and the upscaled in situ measurements. Additionally, there is very low correlation between modeled brightness temperature using the Community Microwave Emission Model and the Level-1 C SMAP brightness temperature interpolated to the EASE-2 Global grid; however, there is a much stronger relationship to the brightness temperature measurements interpolated to the North Polar grid, suggesting that the soil moisture product could be improved with interpolation on the North Polar grid.
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major issues that need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results and related analysis are provided to demonstrate the effectiveness of the proposed algorithms. PMID:24675758
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
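A minimal sketch of the balancing idea mentioned above: recursively splitting the point set at the median along alternating axes yields blocks with nearly equal point counts even for strongly clustered data, unlike a regular grid decomposition. This illustrates the k-d tree decomposition concept only, not the parallel algorithm developed in the paper.

```python
import numpy as np

def kd_decompose(points, n_leaves, depth=0):
    """Recursively split a point set by the median along alternating axes until
    roughly n_leaves equal-sized blocks remain."""
    if n_leaves <= 1 or len(points) <= 1:
        return [points]
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis])
    half = len(points) // 2
    left, right = points[order[:half]], points[order[half:]]
    return (kd_decompose(left, n_leaves // 2, depth + 1) +
            kd_decompose(right, n_leaves - n_leaves // 2, depth + 1))

# Tightly clustered 3-D points: a regular grid over a larger simulation domain would
# leave most blocks nearly empty, while the k-d tree balances the point counts.
pts = np.random.normal(loc=0.0, scale=0.05, size=(10000, 3))
blocks = kd_decompose(pts, n_leaves=8)
print([len(b) for b in blocks])   # eight blocks of ~1250 points each
```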
Tomography of Pg and Sg Across the Western United States Using USArray Data
NASA Astrophysics Data System (ADS)
Steck, L.; Phillips, W. S.; Begnaud, M. L.; Stead, R.
2009-12-01
In this paper we explore the use of Pg and Sg for determining crustal structure in the western United States. Seismic data used in the study come from USArray, along with local and regional networks in the region. To invert the travel times for velocity structure we use the LSQR algorithm assuming a great circle arc path between source and receiver. First difference smoothing is used to regularize the model and we calculate station and event terms. For Pg we have about 160,000 arrivals from 30,000 events reporting at 1,500 stations. If we trim data based on an epicentral ground truth level of 25 km or better, we have 53,000 arrivals, 5,000 events, and 1,300 stations. Data density is such that grids of 0.5 deg or better are possible. Velocity results show good correlation with tectonic provinces. We find fast velocities beneath the Snake River Plain, coastal Washington State, and for the coast ranges of California south of Point Reyes. Low velocities are observed on the border between Idaho and Montana, and in the Basin and Range of eastern Nevada, southeastern California, and southern Arizona. For Sg we have 48,813 arrivals for 13,548 events at 1,052 stations, not filtering by ground truth level. Excellent coverage allows grids to 0.5 deg or lower. Prominent features of this model include high velocities in the Snake River Plain, Colorado Plateau, and the Cascades and Sierra Nevada. Low velocities are found in Southern California, the Basin and Range, and the Columbia Plateau. Root-mean-square residual reductions are 34% for Pg and 41% for Sg.
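As a generic illustration of the inversion step described above (not the study's actual geometry or regularization), travel-time residuals can be related to per-cell slowness perturbations through the path length each ray spends in each cell and solved with a damped LSQR; all sizes and values below are hypothetical.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# Build a synthetic sparse path-length matrix G (rays x cells) and invert with LSQR.
# Station/event terms and the first-difference smoothing used in the study are omitted.
n_cells, n_rays = 100, 500
rng = np.random.default_rng(0)
G = lil_matrix((n_rays, n_cells))
for i in range(n_rays):                                  # each ray crosses a handful of cells
    cells = rng.choice(n_cells, size=8, replace=False)
    G[i, cells] = rng.uniform(20.0, 60.0, size=8)        # path length (km) in each cell
true_ds = rng.normal(0.0, 0.002, n_cells)                # slowness perturbation (s/km)
t_res = G @ true_ds + rng.normal(0.0, 0.1, n_rays)       # residuals with pick noise
est_ds = lsqr(G.tocsr(), t_res, damp=1.0)[0]
print(np.corrcoef(true_ds, est_ds)[0, 1])                # recovery quality of the toy inversion
```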
Pulsed laser-induced formation of silica nanogrids
2014-01-01
Silica grids with micron to sub-micron mesh sizes and wire diameters of 50 nm are fabricated on fused silica substrates. They are formed by single-pulse structured excimer laser irradiation of a UV-absorbing silicon suboxide (SiOx) coating through the transparent substrate. A polydimethylsiloxane (PDMS) superstrate (cover layer) coated on top of the SiOx film prior to laser exposure serves as confinement for controlled laser-induced structure formation. At sufficiently high laser fluence, this process leads to grids consisting of a periodic loop network connected to the substrate at regular positions. By an additional high-temperature annealing, the residual SiOx is oxidized, and a pure SiO2 grid is obtained. PACS 81.07.-b; 81.07.Gf; 81.65.Cf PMID:24581305
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary conditions. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes along selected vertical lines. For each of the reconstructed images, as well as the ground-truth image, conductivity changes along the selected left and right vertical lines are plotted. In these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
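To make the staircase-effect discussion above concrete, the snippet below compares a first-order total-variation penalty with a second-order difference penalty (the kind of higher-order term that TGV-type regularization introduces) on two 1-D conductivity profiles with the same endpoints: TV cannot distinguish a smooth ramp from a staircase, whereas the higher-order penalty does. This illustrates the penalties only, not the FEM-based EIT reconstruction in the paper.

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 11)                    # smooth linear change
stairs = np.repeat([0.0, 0.5, 1.0], [4, 3, 4])      # staircase with the same endpoints

tv = lambda x: np.sum(np.abs(np.diff(x)))           # first-order (TV-like) penalty
second_order = lambda x: np.sum(np.abs(np.diff(x, n=2)))   # second-difference penalty

print("TV penalty:      ramp =", round(tv(ramp), 3), " stairs =", round(tv(stairs), 3))
print("2nd-order penalty: ramp =", round(second_order(ramp), 3),
      " stairs =", round(second_order(stairs), 3))
# TV is identical for both profiles, so it does not discourage the staircase;
# the second-order penalty is zero for the ramp but positive for the staircase.
```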
Examining Extreme Events Using Dynamically Downscaled 12-km WRF Simulations
Continued improvements in the speed and availability of computational resources have allowed dynamical downscaling of global climate model (GCM) projections to be conducted at increasingly finer grid scales and over extended time periods. The implementation of dynamical downscal...
HIGH-RESOLUTION DATASET OF URBAN CANOPY PARAMETERS FOR HOUSTON, TEXAS
Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...
Inventory of SREF Files on NOMADS
GRIB Filter options (Description / Filename / Cycles Available): 40 km, .PP.fFF.grib2, 03, 09, 15, 21 UTC. OPeNDAP options (Description / Filename / Cycles Available): Grid 212 for all members and ...
NASA Astrophysics Data System (ADS)
Pauliquevis, T.; Gomes, H. B.; Barbosa, H. M.
2014-12-01
In this study we evaluate the skill of the WRF model in simulating the actual diurnal cycle of convection in the Amazon basin. Models typically are not capable of simulating the well documented cycle of 1) shallow cumulus in the morning; 2) a towering process around noon; and 3) shallow-to-deep convection and rain around 14h (LT). This failure is explained by the typical size of shallow cumulus (~0.5 - 2.0 km) and the coarse resolution of models using convection parameterization (> 20 km). In this study we employed high spatial resolution (Dx = 0.625 km) to reach the shallow cumulus scale. The simulations correspond to a dynamical downscaling of ERA-Interim from 25 to 28 February 2013 with 40 vertical levels, 30-minute outputs, and three nested grids (10 km, 2.5 km, 0.625 km). Improved vegetation (USGS + PROVEG), albedo and greenfrac (computed from MODIS-NDVI + the LEAF-2 land surface parameterization), as well as pseudo analyses of soil moisture, were used as input data sets, resulting in more realistic precipitation fields when compared to observations in sensitivity tests. Convective parameterization was switched off for the 2.5/0.625 km grids, where cloud formation was solely resolved by the microphysics module (the WSM6 scheme, which provided better results). Results showed a significantly improved capability of the model to simulate the diurnal cycle. Shallow cumulus begin to appear in the first hours of the morning. They are followed by a towering process that culminates with precipitation in the early afternoon, a behavior well described by observations but rarely obtained in models. Rain volumes were also realistic (~20 mm for single events) when compared to typical events during the period, which is in the core of the wet season. The evolution of the cloud fields also differed across the banks of the Amazon River, which is clear evidence of the interaction between the river breeze and the large-scale circulation.
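A mean diurnal cycle like the one evaluated here is typically composited by binning high-frequency output on local time. The sketch below does this for a synthetic 30-minute precipitation series; the UTC offset and variable names are assumptions for illustration, not taken from the actual WRF output files.

```python
import numpy as np
import pandas as pd

# Sketch: composite a mean diurnal cycle of precipitation from 30-minute model
# output. The series below is synthetic; in practice it would come from the
# 0.625-km domain output (variable and file names are hypothetical).
times = pd.date_range("2013-02-25", "2013-02-28 23:30", freq="30min")
utc_offset_h = -4                                   # Amazon local time (UTC-4)
rng = np.random.default_rng(1)
rain_mm = rng.gamma(shape=0.3, scale=2.0, size=len(times))   # placeholder data

df = pd.DataFrame({"rain_mm": rain_mm}, index=times)
local_hour = (df.index.hour + utc_offset_h) % 24
diurnal = df.groupby(local_hour)["rain_mm"].mean()

print(diurnal.round(2))                 # mean 30-min rain for each local hour
print("peak hour (LT):", diurnal.idxmax())
```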
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (southern Spain). ERA-40 Reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding boundary conditions regularly. An alternative approach is based on frequent re-initialization of atmospheric fields; the simulation is then divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 were performed using the re-initialization technique. Surface temperature and accumulated precipitation (at daily and monthly scales) were analyzed for a 5-year period covering 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from observational data; however, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very similar results for both temperature and precipitation, and no configuration seems to outperform the others for the whole region and for every season. Nevertheless, some marked differences between areas within the domain appear when analyzing certain physics options, particularly for precipitation. Some of the physics options, such as radiation, have little impact on model performance with respect to precipitation, and results do not vary when the scheme is modified. On the other hand, the cumulus and boundary layer parameterizations are responsible for most of the differences obtained between configurations. Acknowledgements: The Spanish Ministry of Science and Innovation, with additional support from the European Community Funds (FEDER), project CGL2007-61151/CLI, and the Regional Government of Andalusia, project P06-RNM-01622, financed this study. The "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), Universidad de Granada, provided the computing time. Key words: MM5 mesoscale model, parameterization schemes, temperature and precipitation, South of Spain.
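The daily-versus-monthly comparison described above amounts to computing verification statistics at two aggregation levels. The sketch below illustrates this with bias and RMSE on synthetic daily precipitation series; the numbers are placeholders, not the MM5 output or the station data used in the study.

```python
import numpy as np
import pandas as pd

# Sketch: compare model precipitation with observations at daily and monthly
# time scales. Both series are synthetic stand-ins for illustration only.
rng = np.random.default_rng(2)
days = pd.date_range("1990-01-01", "1994-12-31", freq="D")
obs = pd.Series(rng.gamma(0.4, 5.0, len(days)), index=days)       # mm/day
model = obs * rng.lognormal(0.0, 0.6, len(days))                  # noisy model

def bias_rmse(sim, ref):
    err = sim - ref
    return err.mean(), np.sqrt((err ** 2).mean())

daily_bias, daily_rmse = bias_rmse(model, obs)
monthly_bias, monthly_rmse = bias_rmse(model.resample("MS").sum(),
                                       obs.resample("MS").sum())

print(f"daily   bias={daily_bias:7.2f} mm  rmse={daily_rmse:7.2f} mm")
print(f"monthly bias={monthly_bias:7.2f} mm  rmse={monthly_rmse:7.2f} mm")
```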
Understanding Predictability of the Ocean
2011-09-30
...source of barotropic-to-baroclinic tidal energy conversion. In the region around the islands, the internal tidal energy is as much as 50% of the... PacIOOS currently employs four nested ROMS models: 4-km island-chain, 1-km Oahu, 100-m Oahu South-Shore, and 80-m Oahu West-Coast. Each grid is nested in...
NASA Astrophysics Data System (ADS)
Okabe, Ryo; Tanaka, Toshiki; Nishihara, Masato; Kai, Yutaka; Takahara, Tomoo; Chen, Hao; Yan, Weizhen; Tao, Zhenning; Rasmussen, Jens C.
2015-01-01
Discrete multi-tone (DMT) technology is an attractive modulation technique for short-reach optical transmission systems. One of the main factors that limit system performance is fiber dispersion, which is strongly influenced by the chirp characteristics of transmitters. We investigated the fiber dispersion impairment in a 400GbE (4 × 116.1-Gb/s) DMT system on the LAN-WDM grid for reach enhancement up to 40 km through experiments and numerical simulations.
NASA Astrophysics Data System (ADS)
Zhai, Xiaofang; Zhu, Xinyan; Xiao, Zhifeng; Weng, Jie
2009-10-01
Historically, cellular automata (CA) are discrete dynamical mathematical structures defined on a spatial grid. Research on cellular automata systems (CAS) has focused on rule sets and initial conditions and has rarely discussed adjacency. Thus, the main focus of our study is the effect of adjacency on CA behavior. This paper compares rectangular and hexagonal grids in terms of their characteristics, strengths, and weaknesses. The choice of grid has a great influence on modeling results and other applications, including the role of the nearest neighborhood in experimental design. Our research shows that rectangular and hexagonal grids have different characteristics and are suited to distinct purposes; the regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have not been widely discussed. The rectangular grid is generally preferred because of its symmetry, especially in orthogonal co-ordinate systems, and because of the frequent use of rasters in Geographic Information Systems (GIS). However, for complex terrain and uncertain, multidirectional regions, we prefer hexagonal grids and methods, which facilitate and simplify the problem. Hexagonal grids can overcome directional warp and have some unique characteristics. For example, hexagonal grids have a simpler and more symmetric nearest neighborhood, which avoids the ambiguities of rectangular grids. Movement paths and connectivity, together with the most compact arrangement of pixels, give hexagonal grids clear advantages in modeling and analysis. The selection of an appropriate grid should be based on the requirements and objectives of the application. We use rectangular and hexagonal grids, respectively, to develop a city model, using remote sensing images to derive the 2002 and 2005 land states of Wuhan. Starting from the 2002 land state, we use CA to simulate a plausible form of the city in 2005. These results provide a proof of concept for the advantages of the hexagonal grid.
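The difference in nearest neighborhoods can be shown directly. The sketch below lists the neighbor offsets of a square grid (von Neumann and Moore) and of a hexagonal grid in axial coordinates; the axial coordinate convention is a common one assumed here for illustration, not taken from the paper.

```python
# Sketch: nearest-neighbor sets on square and hexagonal grids. Hexagonal cells
# are addressed with axial coordinates (q, r); this is an illustration, not the
# CA model used in the study.

SQUARE_VON_NEUMANN = [(1, 0), (-1, 0), (0, 1), (0, -1)]
SQUARE_MOORE = SQUARE_VON_NEUMANN + [(1, 1), (1, -1), (-1, 1), (-1, -1)]
HEX_AXIAL = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(cell, offsets):
    """Return the coordinates of all nearest neighbors of `cell`."""
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in offsets]

center = (0, 0)
print("von Neumann (4):", neighbors(center, SQUARE_VON_NEUMANN))
print("Moore (8):      ", neighbors(center, SQUARE_MOORE))
print("hexagonal (6):  ", neighbors(center, HEX_AXIAL))
# All six hexagonal neighbors share an edge with the center and lie at equal
# distance, which removes the corner/edge ambiguity of the Moore neighborhood.
```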
3D data processing with advanced computer graphics tools
NASA Astrophysics Data System (ADS)
Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott
2012-09-01
Often, the 3-D raw data coming from an optical profilometer contain spiky noise and lie on an irregular grid, which makes them difficult to analyze and difficult to store because of their enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data and by rapidly re-sampling the raw data onto regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
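The two processing steps (despiking and resampling to a regular grid) can be approximated with standard scientific-Python tools, as in the sketch below. The data are synthetic, and scipy is used here in place of the GPU-based computer-graphics pipeline described in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.interpolate import griddata

# Sketch of the two steps described above: (1) resample scattered points onto a
# regular grid, (2) suppress spiky noise with a median filter. The profilometer
# data here are synthetic.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 5000)                 # irregular sample locations (mm)
y = rng.uniform(0, 10, 5000)
z = np.sin(x) * np.cos(y)                    # smooth surface
z[rng.choice(z.size, 50, replace=False)] += 5.0   # spiky outliers

# Resample onto a regular 0.1-mm grid (any pixel size/orientation could be used).
gx, gy = np.meshgrid(np.arange(0, 10, 0.1), np.arange(0, 10, 0.1))
gz = griddata((x, y), z, (gx, gy), method="linear")

# Despike on the regular grid with a small median filter.
gz_clean = median_filter(np.nan_to_num(gz), size=3)
print("regular grid shape:", gz_clean.shape)
```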
Respiratory Disease in Relation to Outdoor Air Pollution in Kanpur, India
Liu, Hai-Ying; Bartonova, Alena; Schindler, Martin; Sharma, Mukesh; Behera, Sailesh N.; Katiyar, Kamlesh; Dikshit, Onkar
2013-01-01
This paper examines the effect of outdoor air pollution on respiratory disease in Kanpur, India, based on data from 2006. Exposure to air pollution is represented by annual emissions of sulfur dioxide (SO2), particulate matter (PM), and nitrogen oxides (NOx) from 11 source categories, established as a geographic information system (GIS)-based emission inventory on a 2 km × 2 km grid. Respiratory disease is represented by the number of patients who visited a specialist pulmonary hospital with symptoms of respiratory disease. The results showed that (1) the main sources of air pollution are industries, domestic fuel burning, and vehicles; (2) the emissions of PM per grid cell are strongly correlated to the emissions of SO2 and NOx; and (3) there is a strong correlation between visits to a hospital due to respiratory disease and emission strength in the area of residence. These results clearly indicate that appropriate health and environmental monitoring, actions to reduce emissions to air, and further studies that would allow assessing the development in health status are necessary. [Supplementary materials are available for this article. Go to the publisher's online edition of Archives of Environmental & Occupational Health for material on emissions of SO2, PM, and NOx from various sources, and the total number of inhabitants and patients in grid squares covering the Kanpur city.] PMID:23697693
Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains
NASA Astrophysics Data System (ADS)
Reckinger, S.; Petersen, M. R.; Reckinger, S. J.
2016-02-01
MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that examines how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Horizontal resolutions with 50 km grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. The study also shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that travel further down slope. Sigma coordinates are less sensitive to changes in resolution than z-coordinates but equally sensitive to vertical viscosity. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S
A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO™ exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages. This database includes ten ARC/INFO exported integer grid files (five with a pixel size of 3.75 km x 3.75 km and five with a pixel size of 0.25 degree longitude x 0.25 degree latitude) and 27 ASCII files. The first ASCII file contains the documentation associated with this database. Twenty-four of the ASCII files were generated by means of the ARC/INFO GRIDASCII command and can be used by most raster-based GIS software packages. The 24 files can be subdivided into two groups of 12 files each. These files contain real data values representing actual carbon and potential carbon density in Mg C/ha (1 megagram = 10^6 grams) and integer-coded values for country name, Weck's Climatic Index, ecofloristic zone, elevation, forest or non-forest designation, population density, mean annual precipitation, slope, soil texture, and vegetation classification. One set of 12 files contains these data at a spatial resolution of 3.75 km, whereas the other set of 12 files has a spatial resolution of 0.25 degree. The remaining two ASCII data files combine all of the data from the 24 ASCII data files into 2 single generic data files. The first file has a spatial resolution of 3.75 km, and the second has a resolution of 0.25 degree. Both files also provide a grid-cell identification number and the longitude and latitude of the center-point of each grid cell. The 3.75-km data in this numeric data package yield an actual total carbon estimate of 42.1 Pg (1 petagram = 10^15 grams) and a potential carbon estimate of 73.6 Pg, whereas the 0.25-degree data produce an actual total carbon estimate of 41.8 Pg and a total potential carbon estimate of 73.9 Pg. Fortran and SAS™ access codes are provided to read the ASCII data files, and ARC/INFO and ARCVIEW command syntax is provided to import the ARC/INFO exported integer grid files. The data files and this documentation are available without charge on a variety of media and via the Internet from the Carbon Dioxide Information Analysis Center (CDIAC).
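The GRIDASCII-format files described above follow the common ESRI ASCII raster layout (six header lines followed by the grid values). The reader below is a minimal sketch under that assumption; the file name is hypothetical, and the Fortran/SAS codes distributed with the database are the authoritative readers.

```python
import numpy as np

def read_ascii_grid(path):
    """Read an ARC/INFO GRIDASCII (ESRI ASCII) raster: 6 header lines + values."""
    header = {}
    with open(path) as f:
        for _ in range(6):                       # ncols, nrows, xll*, yll*, cellsize, nodata
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        data = np.loadtxt(f)                     # remaining lines are the grid values
    nodata = header.get("nodata_value")
    if nodata is not None:
        data = np.where(data == nodata, np.nan, data)
    return header, data

# Hypothetical file name for one of the 3.75-km carbon-density grids:
# header, carbon = read_ascii_grid("actual_carbon_3_75km.asc")
# print(int(header["ncols"]), int(header["nrows"]), np.nanmean(carbon))
```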
URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES
Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specifications of the underlying surfaces. As we model neighborhood scales (~1 km horizontal grid spacing), the representation of urban morphological structures (e....
A FEDERATED PARTNERSHIP FOR URBAN METEOROLOGICAL AND AIR QUALITY MODELING
Recently, applications of urban meteorological and air quality models have been performed at resolutions on the order of km grid sizes. This necessitated development and incorporation of high resolution landcover data and additional boundary layer parameters that serve to descri...
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...
NASA Astrophysics Data System (ADS)
Campagnolo, M.; Schaaf, C.
2016-12-01
Due to the necessity of time compositing and other user requirements, vegetation indices, as well as many other EOS derived products, are distributed in a gridded format (level L2G or higher) using an equal area sinusoidal grid, at grid sizes of 232 m, 463 m or 926 m. In this process, the actual surface signal suffers some degradation, caused by both the sensor's point spread function and the resampling from swath to the regular grid. The magnitude of that degradation depends on a number of factors, such as surface heterogeneity, band nominal resolution, observation geometry and grid size. In this research, the effect of grid size is quantified for MODIS and VIIRS (at five EOS validation sites with distinct land covers), for the full range of view zenith angles, and at grid sizes of 232 m, 253 m, 309 m, 371 m, 397 m and 463 m. This allows us to compare MODIS and VIIRS gridded products for the same scenes, and to determine the grid size at which these products are most similar. Towards that end, simulated MODIS and VIIRS bands are generated from Landsat 8 surface reflectance images at each site, and gridded products are then derived using maximum obscov resampling. Then, for every grid size, the original Landsat 8 NDVI and the derived MODIS and VIIRS NDVI products are compared. This methodology can be applied to other bands and products, to determine which spatial aggregation is best suited overall for EOS to S-NPP product continuity. Results for MODIS (250 m bands) and VIIRS (375 m bands) NDVI products show that finer grid sizes tend to be better at preserving the original signal. Significant degradation of gridded NDVI occurs when the grid size is larger than 253 m (MODIS) and 371 m (VIIRS). Our results suggest that the current MODIS "500 m" (actually 463 m) grid size is best for product continuity. Note, however, that up to that grid size value, MODIS gridded products are somewhat better at preserving the surface signal than VIIRS, except at very high VZA.
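The grid-size comparison can be mimicked crudely by block-averaging a fine NDVI field to coarser grids and measuring the loss of detail. The sketch below uses synthetic reflectances and a simple block mean; the real processing accounts for the sensor point spread function and uses maximum-obscov resampling, which this sketch does not.

```python
import numpy as np

# Sketch: compute NDVI from red/NIR reflectance and aggregate it to coarser
# grids by block averaging, then measure how much of the 30-m signal survives.
# Arrays are synthetic; this is only an illustration of the comparison idea.
rng = np.random.default_rng(4)
red = rng.uniform(0.02, 0.15, (480, 480))     # 30-m "Landsat-like" reflectance
nir = rng.uniform(0.2, 0.5, (480, 480))
ndvi_fine = (nir - red) / (nir + red)

def block_mean(a, factor):
    """Aggregate a 2-D array by averaging non-overlapping factor x factor blocks."""
    h, w = a.shape
    a = a[: h - h % factor, : w - w % factor]
    return a.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

for factor in (8, 12, 16):                    # ~240 m, ~360 m, ~480 m grids
    coarse = block_mean(ndvi_fine, factor)
    back = np.kron(coarse, np.ones((factor, factor)))   # re-expand for comparison
    ref = ndvi_fine[: back.shape[0], : back.shape[1]]
    rmse = np.sqrt(np.mean((back - ref) ** 2))
    print(f"grid ~ {factor * 30:3d} m  RMSE vs 30-m NDVI = {rmse:.4f}")
```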
NASA Astrophysics Data System (ADS)
Vennam, L. P.; Vizuete, W.; Talgo, K.; Omary, M.; Binkowski, F. S.; Xing, J.; Mathur, R.; Arunachalam, S.
2017-12-01
Aviation is a unique anthropogenic source with four-dimensional varying emissions, peaking at cruise altitudes (9-12 km). Aircraft emission budgets in the upper troposphere lower stratosphere region and their potential impacts on upper troposphere and surface air quality are not well understood. Our key objective is to use chemical transport models (with prescribed meteorology) to predict aircraft emissions impacts on the troposphere and surface air quality. We quantified the importance of including full-flight intercontinental emissions and increased horizontal grid resolution. The full-flight aviation emissions in the Northern Hemisphere contributed 1.3% (mean, min-max: 0.46, 0.3-0.5 ppbv) and 0.2% (0.013, 0.004-0.02 μg/m3) of total O3 and PM2.5 concentrations at the surface, with Europe showing slightly higher impacts (1.9% (O3 0.69, 0.5-0.85 ppbv) and 0.5% (PM2.5 0.03, 0.01-0.05 μg/m3)) than North America (NA) and East Asia. We computed seasonal aviation-attributable mass flux vertical profiles and aviation perturbations along isentropic surfaces to quantify the transport of cruise altitude emissions at the hemispheric scale. The comparison of coarse (108 × 108 km²) and fine (36 × 36 km²) grid resolutions in NA showed 70 times and 13 times higher aviation impacts for O3 and PM2.5 in the coarser domain. These differences are mainly due to the inability of the coarse resolution simulation to capture nonlinearities in chemical processes near airport locations and other urban areas. Future global studies quantifying aircraft contributions should consider model resolution and perhaps use finer scales near major aviation source regions.
Developing High-resolution Soil Database for Regional Crop Modeling in East Africa
NASA Astrophysics Data System (ADS)
Han, E.; Ines, A. V. M.
2014-12-01
The most readily available soil data for regional crop modeling in Africa is the World Inventory of Soil Emission potentials (WISE) dataset, which has 1125 soil profiles for the world but does not extensively cover Ethiopia, Kenya, Uganda and Tanzania in East Africa. Another available dataset is the HC27 (Harvest Choice by IFPRI), which is gridded (10 km) but composed of generic soil profiles based on only three criteria (texture, rooting depth, and organic carbon content). In this paper, we present the development and application of a high-resolution (1 km), gridded soil database for regional crop modeling in East Africa. Basic soil information is extracted from the Africa Soil Information Service (AfSIS), which provides essential soil properties (bulk density, soil organic carbon, soil pH and percentages of sand, silt and clay) for 6 standardized soil layers (5, 15, 30, 60, 100 and 200 cm) at 1 km resolution. Soil hydraulic properties (e.g., field capacity and wilting point) are derived from the AfSIS soil dataset using well-proven pedo-transfer functions and are customized for DSSAT-CSM soil data requirements. The crop model is used to evaluate crop yield forecasts using the new high-resolution soil database, and the results are compared with WISE and HC27. We also present results of DSSAT loosely coupled with a hydrologic model (VIC) to assimilate root-zone soil moisture. Creating a grid-based soil database that provides a consistent soil input for two different models (DSSAT and VIC) is a critical part of this work. The created soil database is expected to contribute to future applications of DSSAT crop simulation in East Africa, where food security is highly vulnerable.
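Deriving field capacity and wilting point from texture and organic carbon is the job of a pedo-transfer function. The sketch below shows only the general pattern of such a calculation: the linear coefficients are illustrative placeholders, not the published PTFs actually used in the study, and the layer properties are invented.

```python
# Sketch: derive layer-by-layer water-holding limits from basic soil properties
# with a pedo-transfer function. The coefficients below are purely illustrative
# placeholders; the study uses established, published PTFs.

def hypothetical_ptf(sand_pct, clay_pct, org_c_pct):
    """Return (wilting_point, field_capacity) as volumetric fractions (illustrative only)."""
    wp = 0.05 + 0.004 * clay_pct + 0.01 * org_c_pct
    fc = 0.15 + 0.003 * clay_pct - 0.001 * sand_pct + 0.02 * org_c_pct
    return wp, fc

# AfSIS-style standardized layer depths (cm) with example sand/clay/OC values.
layers = [(5, 65, 15, 1.2), (15, 60, 18, 1.0), (30, 55, 22, 0.8),
          (60, 50, 25, 0.5), (100, 48, 27, 0.3), (200, 45, 30, 0.2)]

for depth, sand, clay, oc in layers:
    wp, fc = hypothetical_ptf(sand, clay, oc)
    print(f"layer to {depth:3d} cm: wilting point={wp:.3f}, field capacity={fc:.3f}")
```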
Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...
2016-10-22
Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.
Thompson, Robert Stephen; Hostetler, Steven W.; Bartlein, Patrick J.; Anderson, Katherine H.
1998-01-01
Historical and geological data indicate that significant changes can occur in the Earth's climate on time scales ranging from years to millennia. In addition to natural climatic change, climatic changes may occur in the near future due to increased concentrations of carbon dioxide and other trace gases in the atmosphere that are the result of human activities. International research efforts using atmospheric general circulation models (AGCM's) to assess potential climatic conditions under atmospheric carbon dioxide concentrations of twice the pre-industrial level (a '2 × CO2' atmosphere) conclude that climate would warm on a global basis. However, it is difficult to assess how the projected warmer climatic conditions would be distributed on a regional scale and what the effects of such warming would be on the landscape, especially for temperate mountainous regions such as the Western United States. In this report, we present a strategy to assess the regional sensitivity to global climatic change. The strategy makes use of a hierarchy of models ranging from an AGCM, to a regional climate model, to landscape-scale process models of hydrology and vegetation. A 2 × CO2 global climate simulation conducted with the National Center for Atmospheric Research (NCAR) GENESIS AGCM on a grid of approximately 4.5° of latitude by 7.5° of longitude was used to drive the NCAR regional climate model (RegCM) over the Western United States on a grid of 60 km by 60 km. The output from the RegCM is used directly (for hydrologic models) or interpolated onto a 15-km grid (for vegetation models) to quantify possible future environmental conditions on a spatial scale relevant to policy makers and land managers.
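Regridding the 60-km RegCM output to a 15-km grid is, at its simplest, a bilinear interpolation from the coarse grid to the fine one. The sketch below shows that step on a synthetic temperature field; the actual study may use a different interpolation scheme, and the grid geometry here is idealized.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch: bilinear interpolation of a coarse regional-climate field onto a
# finer grid, analogous to regridding 60-km RegCM output to 15 km. The field
# and the planar grid geometry are synthetic.
coarse_x = np.arange(0, 1201, 60)                # km
coarse_y = np.arange(0, 1201, 60)
xx, yy = np.meshgrid(coarse_x, coarse_y, indexing="ij")
temp_coarse = 15.0 - 0.005 * yy + 2.0 * np.sin(xx / 200.0)   # synthetic temperature

interp = RegularGridInterpolator((coarse_x, coarse_y), temp_coarse, method="linear")

fine_x = np.arange(0, 1201, 15)
fine_y = np.arange(0, 1201, 15)
fx, fy = np.meshgrid(fine_x, fine_y, indexing="ij")
temp_fine = interp(np.column_stack([fx.ravel(), fy.ravel()])).reshape(fx.shape)

print("coarse grid:", temp_coarse.shape, "-> fine grid:", temp_fine.shape)
```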
Comparison of measuring strategies for the 3-D electrical resistivity imaging of tumuli
NASA Astrophysics Data System (ADS)
Tsourlos, Panagiotis; Papadopoulos, Nikos; Yi, Myeong-Jong; Kim, Jung-Ho; Tsokas, Gregory
2014-02-01
Artificially erected hills like tumuli, mounds, barrows and kurgans comprise monuments of past human activity and offer opportunities to reconstruct habitation models regarding life and customs during their building period. These structures also host features of archeological significance like architectural relics, graves or chamber tombs. Tumulus exploration is a challenging geophysical problem due to the complex distribution of the subsurface physical properties, the size and burial depth of potential relics and the uneven topographical terrain. Geoelectrical methods by means of three-dimensional (3-D) inversion are increasingly popular for tumulus investigation. Typically, data are obtained by establishing a regular rectangular grid and assembling the data collected by parallel two-dimensional (2-D) tomographies. In this work the application of a radial 3-D mode is studied, in which the data set is assembled from radially positioned Electrical Resistivity Tomography (ERT) lines. The relative advantages and disadvantages of this measuring mode over regular grid measurements were investigated, and optimum ways to perform 3-D ERT surveys for tumulus investigations were proposed. Comparative tests were performed by means of synthetic examples as well as tests with field data. Overall, all tested models verified the superiority of the radial mode in delineating bodies positioned at the central part of the tumulus, while the regular measuring mode proved superior in recovering bodies positioned away from the center of the tumulus. The combined use of radial and regular modes seems to produce superior results at the expense of the time required for data acquisition and processing.
NASA Astrophysics Data System (ADS)
Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.
2017-02-01
An Automatic Optical Inspection (AOI) system for the optical inspection of imaging devices used in the automotive industry, using inspecting optics of lower spatial resolution than the device under inspection, is described. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is that it is capable of detecting and quantifying defects in regular patterns, working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially suited to regular patterns, like the ones that can be produced in image displays and Head Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are the ones impairing the function of the display, and are preferred in AOI to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since it is this that allows quantitative inspection below Shannon-Nyquist. For HUDs, functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
Some analysis on the diurnal variation of rainfall over the Atlantic Ocean
NASA Technical Reports Server (NTRS)
Gill, T.; Perng, S.; Hughes, A.
1981-01-01
Data collected from the GARP Atlantic Tropical Experiment (GATE) were examined. The data were collected from 10,000 grid points arranged as a 100 x 100 array; each grid cell covered a 4 square km area. The amount of rainfall was measured every 15 minutes during the experiment periods using C-band radars. Two types of analyses were performed on the data: an analysis of diurnal variation at each grid point based on the rainfall averages at noon and at midnight, and time series analysis at selected grid points based on the hourly averages of rainfall. Since there is no known distribution model which best describes the rainfall amount, nonparametric methods were used to examine the diurnal variation. The Kolmogorov-Smirnov test was used to test whether the rainfalls at noon and at midnight have the same statistical distribution. The Wilcoxon signed-rank test was used to test whether the noon rainfall is heavier than, equal to, or lighter than the midnight rainfall. These tests were done at each of the 10,000 grid points at which data are available.
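Both tests named above are available in scipy. The sketch below applies them to synthetic noon and midnight rainfall samples at one grid point; the GATE data themselves are not reproduced here.

```python
import numpy as np
from scipy import stats

# Sketch of the two nonparametric tests described above, applied to synthetic
# noon and midnight rainfall averages at a single grid point.
rng = np.random.default_rng(5)
noon = rng.gamma(shape=0.5, scale=3.0, size=60)       # mm per 15 min, synthetic
midnight = rng.gamma(shape=0.5, scale=2.5, size=60)

# Two-sample Kolmogorov-Smirnov: do the two samples share a distribution?
ks_stat, ks_p = stats.ks_2samp(noon, midnight)

# Wilcoxon signed-rank on paired noon/midnight values: is one systematically heavier?
w_stat, w_p = stats.wilcoxon(noon, midnight)

print(f"Kolmogorov-Smirnov: D={ks_stat:.3f}, p={ks_p:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, p={w_p:.3f}")
```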
NASA Astrophysics Data System (ADS)
Ojeda, GermáN. Y.; Whitman, Dean
2002-11-01
The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ˜5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.
Cascading failures in ac electricity grids.
Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan
2016-09-01
Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability of finding more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability of disconnecting more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
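As a rough illustration of how a single line failure can propagate, the sketch below runs a deliberately simplified load-redistribution cascade on a square grid. It is a toy model for intuition only; the study above solves a dynamic AC power-flow model, which behaves differently (in particular, flows are not simply shed onto adjacent lines).

```python
import random
import networkx as nx

# Toy overload cascade on a square grid: a line fails when its load exceeds its
# capacity, and its load is shed onto adjacent lines. This simplified
# redistribution rule is for illustration only; it is not the dynamic AC
# power-flow model used in the study above.
random.seed(0)
n = 20
g = nx.grid_2d_graph(n, n)

def edge_key(e):
    return tuple(sorted(e))              # canonical (undirected) edge identifier

load = {edge_key(e): random.uniform(0.5, 0.9) for e in g.edges()}
capacity = 1.0

def adjacent_edges(graph, edge):
    """Edges sharing an endpoint with `edge`."""
    u, v = edge
    adj = {edge_key((node, w)) for node in (u, v) for w in graph.neighbors(node)}
    adj.discard(edge_key(edge))
    return adj

# Trigger: overload one random line, then propagate until nothing is overloaded.
failed = set()
trigger = edge_key(random.choice(list(g.edges())))
load[trigger] = capacity + 0.5
overloaded = [trigger]
while overloaded:
    e = overloaded.pop()
    if e in failed:
        continue
    failed.add(e)
    targets = [a for a in adjacent_edges(g, e) if a not in failed]
    for a in targets:
        load[a] += load[e] / max(len(targets), 1)
        if load[a] > capacity:
            overloaded.append(a)

print(f"{len(failed)} of {g.number_of_edges()} lines failed in the cascade")
```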
Investigation of the air pollutant distribution over Northeast Asia using Models-3/CMAQ
NASA Astrophysics Data System (ADS)
Kim, J. Y.; Ghim, Y. S.; Won, J.-G.; Yoon, S.-C.; Woo, J.-H.
2003-04-01
Northeast Asia is one of the most densely populated areas in the world. Huge amounts of air pollutants emitted in the area are transported to the east by the prevailing westerlies. In spring over Northeast Asia, migratory anticyclones are frequent, and the transport and distribution of air pollutants can be substantially altered according to the locations of the anticyclones. In this work, two different synoptic meteorological conditions associated with different locations of anticyclones in May 1999 were identified. The distributions of gaseous and particulate pollutants under these meteorological conditions were predicted and compared. Models-3/CMAQ (USEPA Models-3/Community Multi-scale Air Quality) and MM5 (PSU/NCAR Mesoscale Modeling System) were used to predict air quality and meteorology, respectively. The modeling domain was 5,184 km x 3,456 km centered on the Korean Peninsula (130° E, 40° N). The grid size was 108 km x 108 km, and the number of grid cells was 48 in the west-east direction and 32 in the south-north direction. There were six layers in the vertical direction, up to the height of 500 hPa. Emission data were taken from the Center for Global and Regional Environmental Research, University of Iowa, for anthropogenic emissions and from GEIA (Global Emissions Inventory Activity) for biogenic emissions. The GDAPS (Global Data Assimilation and Prediction System) data at six-hour intervals were used for the initial and boundary conditions of MM5.
Widespread Mega-Pockmarks Imaged Along the Western Edge of the Cocos Ridge
NASA Astrophysics Data System (ADS)
Gibson, J. C.; Kluesner, J. W.; Silver, E. A.; Bangs, N. L.; McIntosh, K. D.
2012-12-01
A large field (245 km²) of 31 seabed mega-pockmarks was imaged between the Cocos Ridge and the Quepos Plateau on ~16.5 Ma oceanic crust generated at the Cocos-Nazca spreading center. The imaged pockmarks represent only a fraction of the much larger pockmark field evident in 100 m grid cell bathymetry data secured from MGDS. The pockmarks are clustered around 1800-2100 mbsl and were mapped using EM122 multibeam sonar, a 3.5 kHz sub-bottom profiler, and 3D multi-channel seismic (MCS) data aboard the R/V Marcus G. Langseth during the CRISP seismic survey (2011). Using a constrained swath width of 1.4 km, the increased sounding density allowed the bathymetry and backscatter to be gridded at 10 m and 8 m, respectively. The diameter of the pockmarks varies from ~1 km to ~2 km, with a relief range of ~30-80 m and average slopes of 15°. The MCS data also reveal older buried pockmarks in trench-adjacent sediments. Small high-backscatter mounds occur within a subset of the pockmarks, which may indicate bioherms or carbonate banks above focused fluid flow conduits. Based on drilling results from DSDP Site 158 and ODP Site 1381, the pockmarks appear to be the result of paleo-differential advancement of a silica diagenetic front (opal-A to opal-CT). Alternatively, the pockmarks may be erosional features sourced at depth from dewatering of sediments inter-bedded with igneous layers.
NASA Astrophysics Data System (ADS)
Sagawa, Hiroyuki
How cosmic rays obtain energies of about 10²⁰ eV and where they come from are big mysteries in physics. The Telescope Array (TA) is comprised of Surface Detectors (SDs) and Fluorescence Detectors (FDs) located in Utah, U.S.A., and aims to explore the origin of the highest-energy cosmic rays. The SD array consists of 507 scintillation detectors arranged on a square grid of 1.2-km spacing, covering approximately 700 km². The FD telescopes, located at three sites, look over the surface array. Using the first five years of data collected by the surface detectors, we found a cluster of cosmic rays with energies greater than 5.7 × 10¹⁹ eV that we call the hot spot. With enhanced statistics, we expect to observe the structure of that hot spot along with other possible excesses, and point sources along with correlations with extreme phenomena in the nearby universe. We plan to make the area of the TA SD array four times larger, to approximately 3,000 km², by adding 500 SDs on a square grid of 2.08-km spacing. Two FD stations will be built viewing the new SD array. This TA extension, which we call TA×4, will greatly accelerate the speed at which we will reach the goals mentioned above, and will enhance cosmic-ray energy spectrum measurement and composition study at the highest energies by TA. At this conference, we present our plan for TA×4.
NASA Astrophysics Data System (ADS)
Özacar, Arda A.; Abgarmi, Bizhan
2017-04-01
The North Anatolian Fault Zone (NAFZ) is an active continental transform plate boundary that accommodates the westward extrusion of the Anatolian plate. The central segment of the NAFZ displays a northward-convex surface trace which coincides partly with the Paleo-Tethyan suture formed during the early Cenozoic. The depth extent and detailed structure of the actively deforming crust along the NAF are still under much debate, and the processes responsible for the rapid uplift are enigmatic. In this study, over five thousand high-quality P receiver functions are computed using teleseismic earthquakes recorded by permanent stations of national agencies and the temporary North Anatolian Fault Passive Seismic experiment (2005-2008). In order to map the crustal thickness and Vp/Vs variations accurately, the study area is divided into grids with 20 km spacing, and along each grid line the Moho phase and its multiples are picked from constructed common conversion point (CCP) profiles. According to our results, the nature of the discontinuities and the crustal thickness display sharp changes across the main strand of the NAFZ, supporting lithospheric-scale faulting that offsets the Moho discontinuity. In the southern block, the crust is relatively thin in the west (~35 km) and gradually becomes thicker towards the east (~40 km). In contrast, the northern block displays a strong lateral change in crustal thickness, reaching up to 10 km across a narrow, roughly N-S oriented zone, which is interpreted as the subsurface signature of the ambiguous boundary between the Istanbul Block and the Pontides located further west at the surface.
Peacock, Jared R.; Mangan, Margaret T.; McPhee, Darcy K.; Wannamaker, Phil E.
2016-01-01
Though shallow flow of hydrothermal fluids in Long Valley Caldera, California, has been well studied, neither the hydrothermal source reservoir nor heat source has been well characterized. Here a grid of magnetotelluric data were collected around the Long Valley volcanic system and modeled in 3-D. The preferred electrical resistivity model suggests that the source reservoir is a narrow east-west elongated body 4 km below the west moat. The heat source could be a zone of 2–5% partial melt 8 km below Deer Mountain. Additionally, a collection of hypersaline fluids, not connected to the shallow hydrothermal system, is found 3 km below the medial graben, which could originate from a zone of 5–10% partial melt 8 km below the south moat. Below Mammoth Mountain is a 3 km thick isolated body containing fluids and gases originating from an 8 km deep zone of 5–10% basaltic partial melt.
Landuyt, Wouter Van; Vanhecke, Leo; Brosens, Dimitri
2012-01-01
Florabank1 is a database that contains distributional data on the wild flora (indigenous species, archeophytes and naturalised aliens) of Flanders and the Brussels Capital Region. It holds about 3 million records of vascular plants, dating from 1800 till present. Furthermore, it includes ecological data on vascular plant species, red-list category information, Ellenberg values, legal status, global distribution, seed bank, etc. The database is an initiative of "Flo.Wer" (www.plantenwerkgroep.be), the Research Institute for Nature and Forest (INBO: www.inbo.be) and the National Botanic Garden of Belgium (www.br.fgov.be). Florabank aims at centralizing botanical distribution data gathered by both professional and amateur botanists and making these data available to the benefit of nature conservation, policy and scientific research. The occurrence data contained in Florabank1 are extracted from checklists, literature and herbarium specimen information. For survey lists, the locality name (verbatimLocality), species name, observation date and IFBL square code (the grid system used for plant mapping in Belgium; Van Rompaey 1943) are recorded. For records dating from the period 1972-2004, all pertinent botanical journals dealing with the Belgian flora were systematically screened. Analysis of herbarium specimens in the collections of the National Botanic Garden of Belgium, the University of Ghent and the University of Liège provided interesting distribution knowledge concerning rare species; this information is also included in Florabank1. The data recorded before 1972 are available through the Belgian GBIF node (http://data.gbif.org/datasets/resource/10969/), not through FLORABANK1, to avoid duplication of information. A dedicated portal providing access to all published Belgian IFBL records is at this moment available at http://projects.biodiversity.be/ifbl. All data in Florabank1 are georeferenced: every record holds the decimal centroid coordinates of the IFBL square containing the observation, and the uncertainty radius is the smallest possible circle covering the whole IFBL square, which can measure 1 km² or 4 km². Florabank is a work in progress and new occurrences are added as they become available; the dataset will be updated through GBIF on a regular basis. PMID:22649282
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.
A DYNAMIC SIMULATOR OF ENVIRONMENTAL CHEMICAL PARTITIONING
A version of the Community Multiscale Air Quality (CMAQ) model has been developed by the U.S. EPA that is capable of addressing the atmospheric fate, transport and deposition of some common trace toxics. An initial, 36-km rectangular grid-cell application for atrazine has been...
MISR Level 1 Near Real Time Products
Atmospheric Science Data Center
2016-10-31
The MISR Near Real Time Level 1 data products ... km MISR swath and projected onto a Space-Oblique Mercator (SOM) map grid. The Ellipsoid-projected and Terrain-projected top-of-atmosphere (TOA) radiance products provide measurements respectively resampled onto the ...
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for dr...
Spatial heterogeneity in the carrying capacity of sika deer in Japan.
Iijima, Hayato; Ueno, Mayumi
2016-06-09
Carrying capacity is one driver of wildlife population dynamics. Although in previous studies carrying capacity was considered to be a fixed entity, it may differ among locations due to environmental variation. The factors underlying variability in carrying capacity, however, have rarely been examined. Here, we investigated spatial heterogeneity in the carrying capacity of Japanese sika deer (Cervus nippon) from 2005 to 2014 in Yamanashi Prefecture, central Japan (a mesh with grid cells of 5.5 × 4.6 km) by state-space modeling. Both carrying capacity and density dependence differed greatly among cells. Estimated carrying capacities ranged from 1.34 to 98.4 deer/km². According to the estimated population dynamics, grid cells with larger proportions of artificial grassland and deciduous forest were subject to lower density dependence and higher carrying capacity. We conclude that the population dynamics of ungulates may vary spatially through spatial variation in carrying capacity and that the density level for controlling ungulate abundance should be based on the current density level relative to the carrying capacity of each area.
Respiratory disease in relation to outdoor air pollution in Kanpur, India.
Liu, Hai-Ying; Bartonova, Alena; Schindler, Martin; Sharma, Mukesh; Behera, Sailesh N; Katiyar, Kamlesh; Dikshit, Onkar
2013-01-01
This paper examines the effect of outdoor air pollution on respiratory disease in Kanpur, India, based on data from 2006. Exposure to air pollution is represented by annual emissions of sulfur dioxide (SO2), particulate matter (PM), and nitrogen oxides (NOx) from 11 source categories, established as a geographic information system (GIS)-based emission inventory on a 2 km × 2 km grid. Respiratory disease is represented by the number of patients who visited a specialist pulmonary hospital with symptoms of respiratory disease. The results showed that (1) the main sources of air pollution are industries, domestic fuel burning, and vehicles; (2) the emissions of PM per grid cell are strongly correlated with the emissions of SO2 and NOx; and (3) there is a strong correlation between visits to a hospital due to respiratory disease and emission strength in the area of residence. These results clearly indicate that appropriate health and environmental monitoring, actions to reduce emissions to air, and further studies that would allow assessing the development in health status are necessary.
NASA Technical Reports Server (NTRS)
Parkinson, C. L.; Comiso, J. C.; Zwally, H. J.
1987-01-01
A summary data set for four years (mid-1970s) of Arctic sea ice conditions is available on magnetic tape. The data include monthly and yearly averaged Nimbus 5 electrically scanning microwave radiometer (ESMR) brightness temperatures, an ice concentration parameter derived from the brightness temperatures, monthly climatological surface air temperatures, and monthly climatological sea level pressures. All data matrices are applied to 293 by 293 grids that cover a polar stereographic map enclosing the 50 deg N latitude circle. The grid size varies from about 32 × 32 km at the poles to about 28 × 28 km at 50 deg N. The ice concentration parameter is calculated assuming that the field of view contains only open water and first-year ice with an ice emissivity of 0.92. To account for the presence of multiyear ice, a nomogram is provided relating the ice concentration parameter, the total ice concentration, and the fraction of the ice cover which is multiyear ice.
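The two-component assumption in the last sentences (open water plus first-year ice with emissivity 0.92) corresponds to a linear mixing of two brightness-temperature end members. The sketch below illustrates only that idea; the function name, the open-water tie point and the physical ice temperature are placeholders, not the ESMR processing constants.

```python
def ice_concentration(tb, tb_open_water, t_ice_phys, emissivity=0.92):
    """Fraction of the field of view covered by first-year ice, assuming a
    two-component (open water / first-year ice) linear mix of the observed
    brightness temperature tb; all temperatures in kelvin."""
    tb_ice = emissivity * t_ice_phys              # first-year-ice end member
    c = (tb - tb_open_water) / (tb_ice - tb_open_water)
    return min(max(c, 0.0), 1.0)                  # clip to the physical range

# Illustrative numbers only (not the ESMR tie points):
print(ice_concentration(tb=200.0, tb_open_water=135.0, t_ice_phys=250.0))
```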
NASA Astrophysics Data System (ADS)
Motyka, R.; Fahnestock, M.; Howat, I.; Truffer, M.; Brecher, H.; Luethi, M.
2008-12-01
Jakobshavn Isbrae drains about 7% of the Greenland Ice Sheet and is the ice sheet's largest outlet glacier. Two sets of high-elevation (~13,500 m), high-resolution (2 m) aerial photographs of Jakobshavn Isbrae were obtained about two weeks apart during July 1985 (Fastook et al., 1995). These historic photo sets have become increasingly important for documenting and understanding the dynamic state of this outlet stream prior to the rapid retreat and massive ice loss that began in 1998 and continues today. The original photogrammetric analysis of this imagery is summarized in Fastook et al. (1995). They derived a coarse DEM (3 km grid spacing) covering an area of approximately 100 km x 100 km by interpolating several hundred positions determined manually from block-aerial triangulation. We have re-analyzed these photo sets using digital photogrammetry (BAE Socet Set©) and significantly improved DEM quality and resolution (20, 50, and 100 m grids). The DEMs were in turn used to produce high-quality orthophoto mosaics. Comparing our 1985 DEM to a DEM we derived from May 2006 NASA ATM measurements showed a total ice volume loss of ~105 km3 over the lower drainage area; almost all of this loss has occurred since 1997. Ice stream surface velocities derived from the 1985 orthomosaics showed speeds of 20 m/d on the floating tongue, diminishing to 5 m/d at 50 km further upstream. Velocities have since nearly doubled along the ice stream during its current retreat. Fastook, J.L., H.H. Brecher, and T.J. Hughes, 1995. J. of Glaciol. 11 (137), 161-173.
NASA Astrophysics Data System (ADS)
Jin, Meibing; Deal, Clara; Maslowski, Wieslaw; Matrai, Patricia; Roberts, Andrew; Osinski, Robert; Lee, Younjoo J.; Frants, Marina; Elliott, Scott; Jeffery, Nicole; Hunke, Elizabeth; Wang, Shanlin
2018-01-01
The current coarse-resolution global Community Earth System Model (CESM) can reproduce major and large-scale patterns but is still missing some key biogeochemical features in the Arctic Ocean, e.g., low surface nutrients in the Canada Basin. We incorporated the CESM Version 1 ocean biogeochemical code into the Regional Arctic System Model (RASM) and coupled it with a sea-ice algal module to investigate model limitations. Four ice-ocean hindcast cases are compared with various observations: two on a global 1° (40-60 km in the Arctic) grid, G1deg and G1deg-OLD, with and without new sea-ice processes incorporated; and two on RASM's 1/12° (~9 km) grid, R9km and R9km-NB, with and without a subgrid-scale brine rejection parameterization which improves ocean vertical mixing under sea ice. Higher resolution and new sea-ice processes contributed to lower model errors in sea-ice extent, ice thickness, and ice algae. On the Bering Sea shelf, only higher resolution contributed to lower model errors in salinity, nitrate (NO3), and chlorophyll-a (Chl-a). In the Arctic Basin, model errors in mixed layer depth (MLD) were reduced by 36% with the brine rejection parameterization, by 20% with new sea-ice processes, and by 6% with higher resolution. The NO3 concentration biases were caused by both the MLD bias and coarse resolution, because of excessive horizontal mixing of high NO3 from the Chukchi Sea into the Canada Basin in coarse-resolution models. R9km showed improvements over G1deg on NO3, but not on Chl-a, likely due to light limitation under snow and ice cover in the Arctic Basin.
CRYSTAL-FACE Analysis and Simulations of the July 23rd Extended Anvil Case
NASA Technical Reports Server (NTRS)
Starr, David
2003-01-01
A key focus of CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and cirrus Layers - Florida Area Cirrus Experiment) was the generation and subsequent evolution of cirrus outflow from deep convective cloud systems. The theoretical background and motivations will be discussed. An integrated look at the observations of an extended cirrus anvil cloud system observed on 23 July 2002 will be presented, including lidar and millimeter radar observations from NASA's ER-2 and in-situ observations from NASA's WB-57 and the University of North Dakota Citation. The observations will be compared to results of simulations using 1-D and 2-D high-resolution (100 meter) cloud resolving models. The CRMs explicitly account for cirrus microphysical development by resolving the evolving ice crystal size distribution (bin model) in time and space. Both homogeneous and heterogeneous nucleation are allowed in the model. The CRM simulations are driven using the output of regional simulations with MM5 that produce deep convection similar to what was observed. The MM5 model employs a 2-km inner grid (32 layers) over a 360-km domain, nested within a 6-km grid over a 600-km domain. Initial and boundary conditions for the 36-hour MM5 simulation are taken from the NCEP Eta model analysis at 32 km resolution. Key issues to be explored are the settling of the observed anvil versus the model simulations, and comparisons of dynamical properties, such as vertical motions, occurring in the observations and models. The former provides an integrated measure of the validity of the model microphysics (fall speed), while the latter is the key factor in forcing continued ice generation.
NASA Technical Reports Server (NTRS)
Wood, Lance; Medlin, Jeffrey M.; Case, Jon
2012-01-01
A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and the NASA Short-term Prediction Research and Transition (SPoRT) Center began during the 2011-2012 cold season and continued into the 2012 warm season. The focus was on two frequent U.S. Deep South forecast challenges: the initiation of deep convection during the warm season, and heavy precipitation during the cold season. We wanted to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System in improving the model representation of mesoscale boundaries such as the local sea-, bay- and land-breezes (which often lead to warm-season convective initiation), and in improving the model representation of slow-moving or quasi-stationary frontal boundaries (which focus cold-season storm cell training and heavy precipitation). The NASA products were: the 4-km Land Information System, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with an outer grid of 9-km spacing and an inner nest of 3-km grid spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the positive and negative impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.
Pilly, Praveen K.; Grossberg, Stephen
2013-01-01
Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enable them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips comprised of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130
SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, M; Tobias, R; Pankuch, M
Purpose: The objective was to develop a method for dose distribution calculation of spatially-fractionated GRID radiotherapy (SFGRT) in the Eclipse treatment planning system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the iso-center level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence in the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially-fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID fields can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU needed to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create a virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.
NASA Astrophysics Data System (ADS)
Yano, Tomoko E.; Takeda, Tetsuya; Matsubara, Makoto; Shiomi, Katsuhiko
2017-04-01
We have generated a high-resolution catalog called the “Japan Unified hIgh-resolution relocated Catalog for Earthquakes” (JUICE), which can be used to evaluate the geometry and seismogenic depth of active faults in Japan. We relocated >1.1 million hypocenters from the NIED Hi-net catalog for events which occurred between January 2001 and December 2012, down to a depth of 40 km. We applied a relative hypocenter determination method to the data in each grid square, with the whole of Japan divided into 1257 grid squares to parallelize the relocation procedure. We used a double-difference method, incorporating cross-correlation differential times as well as catalog differential times. This allows us to resolve, in detail, the seismicity distribution for the entire Japanese Islands. We estimated location uncertainty by a statistical resampling method using jackknife samples, and show that the uncertainty is within 0.37 km in the horizontal and 0.85 km in the vertical direction at a 90% confidence interval for areas with good station coverage. Our seismogenic depth estimate agrees with the lower limit of the hypocenter distribution for a recent earthquake on the Kamishiro fault (2014, Mj 6.7), which suggests that the new catalog should be useful for estimating the size of future earthquakes on inland active faults.
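The jackknife resampling mentioned above can be illustrated with a generic delete-one estimator of the standard error of a hypocentral coordinate. This is only a sketch of the statistical idea, assuming the resampled solutions are available as a simple array; it is not the JUICE relocation code, and the estimator and sample values are placeholders.

```python
import numpy as np

def jackknife_std(samples, estimator=np.mean):
    """Delete-one jackknife estimate of the standard error of `estimator`
    applied to `samples` (e.g. solutions for one hypocentral coordinate)."""
    n = len(samples)
    theta_i = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    theta_bar = theta_i.mean()
    return np.sqrt((n - 1) / n * np.sum((theta_i - theta_bar) ** 2))

# Illustrative scatter of a hypocentral depth (km) across resampled solutions
depths = np.array([9.8, 10.1, 10.3, 9.9, 10.0, 10.4, 9.7])
print(jackknife_std(depths))
```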
Appleton, J D; Doyle, E; Fenton, D; Organo, C
2011-06-01
The probability of homes in Ireland having high indoor radon concentrations is estimated on the basis of known in-house radon measurements averaged over 10 km × 10 km grid squares. The scope for using airborne gamma-ray spectrometer data for the Tralee-Castleisland area of county Kerry and county Cavan to predict the radon potential (RP) in two distinct areas of Ireland is evaluated in this study. Airborne data are compared statistically with in-house radon measurements in conjunction with geological and ground permeability data to establish linear regression models and produce radon potential maps. The best agreement between the percentage of dwellings exceeding the reference level (RL) for radon concentrations in Ireland (% > RL), estimated from indoor radon data, and modelled RP in the Tralee-Castleisland area is produced using models based on airborne gamma-ray spectrometry equivalent uranium (eU) and ground permeability data. Good agreement was obtained between the % > RL from indoor radon data and RP estimated from eU data in the Cavan area using terrain specific models. In both areas, RP maps derived from eU data are spatially more detailed than the published 10 km grid map. The results show the potential for using airborne radiometric data for producing RP maps.
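The linear regression models referred to above relate the percentage of dwellings above the reference level to airborne eU and ground permeability. A minimal sketch of that kind of fit is shown below, with invented training values and a simple 0/1 permeability indicator; the published models may use different predictors, transformations and terrain-specific coefficients.

```python
import numpy as np

# Hypothetical training data: airborne eU (ppm), a 0/1 high-permeability flag,
# and the observed percentage of dwellings above the reference level (% > RL).
eU   = np.array([1.2, 2.5, 3.1, 4.0, 1.8, 3.6])
perm = np.array([0,   1,   1,   1,   0,   0  ])
pct_above_rl = np.array([4.0, 14.0, 18.0, 25.0, 6.0, 12.0])

# Ordinary least squares: % > RL ~ a + b*eU + c*perm
X = np.column_stack([np.ones_like(eU), eU, perm])
(a, b, c), *_ = np.linalg.lstsq(X, pct_above_rl, rcond=None)
print(f"intercept={a:.2f}, eU slope={b:.2f}, permeability term={c:.2f}")

# Radon potential prediction for a new grid square (eU = 2.8 ppm, permeable)
print(a + b * 2.8 + c * 1)
```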
Polar Geophysics Products Derived from AVHRR: The "AVHRR Polar Pathfinder
NASA Technical Reports Server (NTRS)
Maslanik, James; Fowler, Charles; Scambos, Theodore
1999-01-01
This NOAA/NASA Pathfinder effort was established to locate, acquire, and process Advanced Very High Resolution Radiometer (AVHRR) imagery into geo-located and calibrated radiances, cloud masks, surface clear-sky broadband albedo, clear-sky skin temperatures, satellite viewing times, and viewing and solar geometry for the high-latitude portions of the northern and southern hemispheres (all areas north of 48°N and south of 53°S). AVHRR GAC data for August 1981 - July 1998 were acquired, with some gaps remaining, and processed into twice-daily 5-km grids, with some products also provided at 25-km resolution. AVHRR LAC data for 3.5 years of coverage in the northern hemisphere and 2.75 years of coverage in the southern hemisphere were processed into 1.25-km grids for the same suite of products. The resulting data sets are presently being transferred to the National Snow and Ice Data Center (NSIDC) for archiving and distribution. Using these data, researchers now have at their disposal an extensive AVHRR data set for investigations of high-latitude processes. In addition, the data lend themselves to the development and testing of algorithms. The products are particularly relevant for climate research and algorithm development as applied to relatively long time periods and large areas.
Inner-shelf ocean dynamics and seafloor morphologic changes during Hurricane Sandy
NASA Astrophysics Data System (ADS)
Warner, John C.; Schwab, William C.; List, Jeffrey H.; Safak, Ilgar; Liste, Maria; Baldwin, Wayne
2017-04-01
Hurricane Sandy was one of the most destructive hurricanes in US history, making landfall on the New Jersey coast on October 30, 2012. Storm impacts included several barrier island breaches, massive coastal erosion, and flooding. While changes to the subaerial landscape are relatively easily observed, storm-induced changes to the adjacent shoreface and inner continental shelf are more difficult to evaluate. These regions provide a framework for the coastal zone and are important for navigation, aggregate resources, marine ecosystems, and coastal evolution. Here we provide an unprecedented perspective on regional inner continental shelf sediment dynamics based on both observations and numerical modeling over time scales associated with these types of large storm events. Oceanographic conditions and seafloor morphologic changes are evaluated using a coupled atmosphere-ocean-wave-sediment numerical modeling system that covered spatial scales ranging from the entire US east coast (1000s of km) to local domains (10s of km). Additionally, the modeled response for the region offshore of Fire Island, NY was compared to observational analysis from a series of geologic surveys at that location. The geologic investigations conducted in 2011 and 2014 revealed lateral movement of sedimentary structures over distances up to 450 m and in water depths up to 30 m, and vertical changes in sediment thickness greater than 1 m in some locations. The modeling investigations utilize a system with grid refinement designed to simulate oceanographic conditions with progressively increasing resolution for the entire US East Coast (5-km grid), the New York Bight (700-m grid), and offshore of Fire Island, NY (100-m grid), allowing larger-scale dynamics to drive smaller-scale coastal changes. Model results in the New York Bight identify a maximum storm surge of up to 3 m, surface currents on the order of 2 m s-1 along the New Jersey coast, waves up to 8 m in height, and bottom stresses exceeding 10 Pa. Flow down the Hudson Shelf Valley is shown to result in convergent sediment transport and deposition along its axis. Modeled sediment redistribution along Fire Island showed erosion across the crests of inner shelf sand ridges and sedimentation in adjacent troughs, consistent with the geologic observations.
From grid cells to place cells with realistic field sizes
2017-01-01
While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005
Im, Seokjin; Choi, JinTak
2014-06-17
In the pervasive computing environment using smart devices equipped with various sensors, a wireless data broadcasting system for spatial data items is a natural way to efficiently provide a location-dependent information service, regardless of the number of clients. A non-flat wireless broadcast system can support the clients in quickly accessing their preferred data items by disseminating the preferred data items more frequently than regular data on the wireless channel. To efficiently support the processing of spatial window queries in a non-flat wireless data broadcasting system, we propose a distributed air index based on a maximum boundary rectangle (MaxBR) over grid cells (abbreviated DAIM), which uses MaxBRs for filtering out hot data items on the wireless channel. Unlike the existing index, which repeats regular data items in close proximity to hot items at the same frequency as hot data items in a broadcast cycle, DAIM makes it possible to repeat only hot data items in a cycle and thus reduces the length of the broadcast cycle. Consequently, DAIM helps the clients access the desired items quickly, improves the access time, and reduces energy consumption. In addition, a MaxBR helps the clients decide whether they have to access regular data items or not. Simulation studies show that the proposed DAIM outperforms existing schemes with respect to access time and energy consumption.
Regional photochemical air quality modeling in the Mexico-US border area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, A.; Russell, A.G.; Mejia, G.M.
1998-12-31
The Mexico-United States border area has become an increasingly important region due to its commercial, industrial and urban growth. As a result, environmental concerns have risen. Treaties like the North American Free Trade Agreement (NAFTA) have further motivated the development of environmental impact assessment in the area. Of particular concern are air quality and how the activities on both sides of the border contribute to its degradation. This paper presents results of applying a three-dimensional photochemical airshed model to study air pollution dynamics along the Mexico-United States border. In addition, studies were conducted to assess how grid-size resolution impacts the model performance. The model performed within acceptable statistical limits using 12.5 × 12.5 km² grid cells, and the benefits of using finer grids were limited. Results were further used to assess the influence of grid-cell size on the modeling of control strategies, where coarser grids lead to a significant loss of information.
NASA Astrophysics Data System (ADS)
Chao, C. K.; Su, S.-Y.; Yeh, H. C.
2003-12-01
The ROCSAT-1 satellite, orbiting at 600 km altitude in the low- and mid-latitude topside ionosphere, carries a retarding potential analyzer to measure the ion composition, temperature, and the plasma flow velocity in the ram direction. Based on an existing three-dimensional model, the particle motion inside the instrument is simulated with the exact wire and mesh sizes but with a smaller aperture than the real sensor configuration. The simulation results indicate that the retarding grids cannot provide a uniform retarding potential barrier to completely repel low-energy particles. Some of the low-energy particles can pass through those grids and arrive at the collector. This leakage causes the ram velocity to be overestimated by about 180 m/s. Furthermore, the simulated O+ temperature derived from the I-V curve is lower than the input temperature due to ion losses from collisions with the grids, caused by the non-uniform potential field generated by the high retarding voltage.
Food Self-Sufficiency across scales: How local can we go?
NASA Astrophysics Data System (ADS)
Pradhan, Prajal; Lüdeke, Matthias K. B.; Reusser, Dominik E.; Kropp, Jürgen P.
2013-04-01
"Think global, act local" is a phrase often used in sustainability debates. Here, we explore the potential of regions to go for local supply in context of sustainable food consumption considering both the present state and the plausible future scenarios. We analyze data on the gridded crop calories production, the gridded livestock calories production, the gridded feed calories use and the gridded food calories consumption in 5' resolution. We derived these gridded data from various sources: Global Agro-ecological Zone (GAEZ v3.0), Gridded Livestock of the World (GLW), FAOSTAT, and Global Rural-Urban Mapping Project (GRUMP). For scenarios analysis, we considered changes in population, dietary patterns and possibility of obtaining the maximum potential yield. We investigate the food self-sufficiency multiple spatial scales. We start from the 5' resolution (i.e. around 10 km x 10 km in the equator) and look at 8 levels of aggregation ranging from the plausible lowest administrative level to the continental level. Results for the different spatial scales show that about 1.9 billion people live in the area of 5' resolution where enough calories can be produced to sustain their food consumption and the feed used. On the country level, about 4.4 billion population can be sustained without international food trade. For about 1 billion population from Asia and Africa, there is a need for cross-continental food trade. However, if we were able to achieve the maximum potential crop yield, about 2.6 billion population can be sustained within their living area of 5' resolution. Furthermore, Africa and Asia could be food self-sufficient by achieving their maximum potential crop yield and only round 630 million populations would be dependent on the international food trade. However, the food self-sufficiency status might differ under consideration of the future change in population, dietary patterns and climatic conditions. We provide an initial approach for investigating the regional and the local potential to address food security across multiple spatial scales. We identify the areas where one can depend more on local/regional products as a transition path towards sustainable consumption and production.
Modeling Potential Tephra Dispersal at Yucca Mountain, Nevada
NASA Astrophysics Data System (ADS)
Hooper, D.; Franklin, N.; Adams, N.; Basu, D.
2006-12-01
Quaternary basaltic volcanoes exist within 20 km [12 mi] of the potential radioactive waste repository at Yucca Mountain, Nevada, and future basaltic volcanism at the repository is considered a low-probability, potentially high-consequence event. If radioactive waste was entrained in the conduit of a future volcanic event, tephra and waste could be transported in the resulting eruption plume. During an eruption, basaltic tephra would be dispersed primarily according to the height of the eruption column, particle-size distribution, and structure of the winds aloft. Following an eruption, contaminated tephra-fall deposits would be affected by surface redistribution processes. The Center for Nuclear Waste Regulatory Analyses developed the computer code TEPHRA to calculate atmospheric dispersion and subsequent deposition of tephra and spent nuclear fuel from a potential eruption at Yucca Mountain and to help prepare the U.S. Nuclear Regulatory Commission to review a potential U.S. Department of Energy license application. The TEPHRA transport code uses the Suzuki model to simulate the thermo-fluid dynamics of atmospheric tephra dispersion. TEPHRA models the transport of airborne pyroclasts based on particle diffusion from an eruption column, horizontal diffusion of particles by atmospheric and plume turbulence, horizontal advection by atmospheric circulation, and particle settling by gravity. More recently, TEPHRA was modified to calculate potential tephra deposit distributions using stratified wind fields based on upper atmosphere data from the Nevada Test Site. Wind data are binned into 1-km [0.62-mi]-high intervals with coupled distributions of wind speed and direction produced for each interval. Using this stratified wind field and discretization with respect to height, TEPHRA calculates particle fall and lateral displacement for each interval. This implementation permits modeling of split wind fields. We use a parallel version of the code to calculate expected tephra and high-level waste accumulation at specified points on a two-dimensional spatial grid, thereby simulating a three-dimensional initial deposit. To assess subsequent tephra and high-level waste redistribution and resuspension, modeling grids were devised to measure deposition in eolian and fluvial source regions. The eolian grid covers an area of 2,600 km2 [1,000 mi2] and the fluvial grid encompasses 318 km2 [123 mi2] of the southernmost portion of the Fortymile Wash catchment basin. Because each realization is independent, distributions of tephra and high-level waste reflect anticipated variations in source-term and transport characteristics. This abstract is an independent product of the Center for Nuclear Waste Regulatory Analyses and does not necessarily reflect the view or regulatory position of the U.S. Nuclear Regulatory Commission.
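The stratified-wind treatment described above (wind speed and direction binned into 1-km-high intervals, with particle fall and lateral displacement computed interval by interval) can be illustrated with a few lines of code. The sketch below is a simplified kinematic version under assumed constant settling speed and layer-uniform winds; it omits diffusion and the Suzuki column source, so it is not the TEPHRA code itself.

```python
import math

def drift_through_layers(settling_speed, layers, layer_thickness=1000.0):
    """Total (east, north) displacement in metres of a particle falling at
    `settling_speed` (m/s) through stacked wind layers, each given as
    (wind_speed_m_s, direction_deg_toward), listed from top to bottom."""
    east = north = 0.0
    for speed, direction in layers:
        dt = layer_thickness / settling_speed      # time spent in this layer
        east  += speed * math.sin(math.radians(direction)) * dt
        north += speed * math.cos(math.radians(direction)) * dt
    return east, north

# Illustrative split wind field: upper layer blowing east, lower layers west
print(drift_through_layers(1.5, [(12.0, 90.0), (8.0, 270.0), (5.0, 270.0)]))
```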
2016-12-01
roughness that is an input variable. For the FP2 site in Kansas, we searched for the climatological surface roughness height used in the Navy's ... COAMPS model for the latitude and longitude of FP2 and for the month of June/July. The climatological roughness height was found to be 0.25 m. This is the ... mean surface roughness for an area of 1 km on a side near FP2, as the climatological data have a horizontal grid resolution of 1 km. This roughness
Quantitative characterization of the small-scale fracture patterns on the plains of Venus
NASA Technical Reports Server (NTRS)
Sammis, Charles G.; Bowman, David D.
1995-01-01
The objectives of this research project were to (1) compile a comprehensive database of the occurrence of regularly spaced kilometer scale lineations on the volcanic plains of Venus in an effort to verify the effectiveness of the shear-lag model developed by Banerdt and Sammis (1992), and (2) develop a model for the formation of irregular kilometer scale lineations such as typified in the gridded plains region of Guinevere Planitia. Attached to this report is the paper 'A Tectonic Model for the Formation of the Gridded Plains on Guinevere Planitia, Venus, and Implications for the Elastic Thickness of the Lithosphere'.
GLOBAL GRIDS FROM RECURSIVE DIAMOND SUBDIVISIONS OF THE SURFACE OF AN OCTAHEDRON OR ICOSAHEDRON
In recent years a number of methods have been developed for subdividing the surface of the earth to meet the needs of applications in dynamic modeling, survey sampling, and information storage and display. One set of methods uses the surfaces of Platonic solids, or regular polyhe...
Lamichhane, Babu Ram; Subedi, Naresh; Pokheral, Chiranjibi Prasad; Dhakal, Maheshwar; Acharya, Krishna Prasad; Pradhan, Narendra Man Babu; Smith, James L. David; Malla, Sabita; Thakuri, Bishnu Singh; Yackulic, Charles B.
2018-01-01
Understanding how wide-ranging animals use landscapes in which human use is highly heterogeneous is important for determining patterns of human–wildlife conflict and designing mitigation strategies. Here, we show how biological sign surveys in forested components of a human-dominated landscape can be combined with human interviews in agricultural portions of a landscape to provide a full picture of seasonal use of different landscape components by wide-ranging animals and resulting human–wildlife conflict. We selected Asian elephants (Elephas maximus) in Nepal to illustrate this approach. Asian elephants are threatened throughout their geographic range, and there are large gaps in our understanding of their landscape-scale habitat use. We identified all potential elephant habitat in Nepal and divided the potential habitat into sampling units based on a 10 km by 10 km grid. Forested areas within grids were surveyed for signs of elephant use, and local villagers were interviewed regarding elephant use of agricultural areas and instances of conflict. Data were analyzed using single-season and multi-season (dynamic) occupancy models. A single-season occupancy model applied to data from 139 partially or wholly forested grid cells estimated that 0.57 of grid cells were used by elephants. Dynamic occupancy models fit to data from interviews across 158 grid cells estimated that monthly use of non-forested, human-dominated areas over the preceding year varied between 0.43 and 0.82 with a minimum in February and maximum in October. Seasonal patterns of crop raiding by elephants coincided with monthly elephant use of human-dominated areas, and serious instances of human–wildlife conflict were common. Efforts to mitigate human–elephant conflict in Nepal are likely to be most effective if they are concentrated during August through December when elephant use of human-dominated landscapes and human–elephant conflict are most common.
The 2012 Arctic Field Season of the NRL Sea-Ice Measurement Program
NASA Astrophysics Data System (ADS)
Gardner, J. M.; Brozena, J. M.; Hagen, R. A.; Liang, R.; Ball, D.
2012-12-01
The U.S. Naval Research Laboratory (NRL) is beginning a five-year study of the changing Arctic with a particular focus on ice thickness and distribution variability, with the intent of optimizing state-of-the-art computer models which are currently used to predict sea ice changes. An important part of our study is to calibrate/validate CryoSat2 ice thickness data prior to its incorporation into new ice forecast models. NRL Code 7420 collected coincident data with the CryoSat2 satellite in both 2011 and 2012 using a LiDAR (Riegl Q560) to measure combined snow and ice thickness and a 10 GHz pulse-limited precision radar altimeter to measure sea-ice freeboard. These measurements were coordinated with the Seasonal Ice Zone Observing Network (SIZONet) group, who conducted surface-based ice thickness surveys using a Geonics EM-31 along hunter trails on the landfast ice near Barrow as well as on drifting ice offshore during helicopter landings. On two sorties, a Twin Otter carrying the NRL LiDAR and radar altimeter flew in tandem with the helicopter carrying the EM-31 to achieve synchronous data acquisition. Data from these flights are shown here along with a digital elevation map. The LiDAR and radar altimeter were also flown on grid patterns over the ice that were synchronous with 5 CryoSat2 satellite passes. These grids were intended to cover roughly 10-km-long segments of CryoSat2 tracks with widths similar to the footprint of the satellite (~2 km). Reduction of these grids is challenging because of ice drift, which can be many hundreds of meters over the 1-2 hour collection period of each grid. Relocation of the individual scanning LiDAR tracks is done by means of tie-points observed in the overlapping swaths. Data from these grids are shown here and will be used to examine the relationship of the tracked satellite waveform data to the actual surface across the footprint.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
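The Monte Carlo step described above (perturbing the kriged inputs with correlated errors scaled by the kriging SDs, then recomputing the output) is sketched below for a single grid point. The PET formula and the error-correlation matrix are placeholders rather than the ones used in the study; only the kriging SD values are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Kriged estimates and kriging SDs at one 10-km grid point (SDs from the text)
mean = np.array([12.0, 55.0, 2.5])       # temperature (degC), RH (%), wind (m/s)
sd   = np.array([2.6,  8.7,  0.38])
corr = np.array([[ 1.0, -0.3,  0.1],     # assumed error correlations (placeholder)
                 [-0.3,  1.0,  0.0],
                 [ 0.1,  0.0,  1.0]])
cov = corr * np.outer(sd, sd)

def pet(t, rh, wind):
    """Hypothetical stand-in for the PET model used in the study."""
    return np.maximum(0.0, 0.3 * t + 0.05 * wind * (100.0 - rh))

draws = rng.multivariate_normal(mean, cov, size=100)   # 100 Monte Carlo runs
pet_draws = pet(draws[:, 0], draws[:, 1], draws[:, 2])
print(f"PET coefficient of variation at this point: {pet_draws.std() / pet_draws.mean():.2f}")
```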
Real-Time Rotational Activity Detection in Atrial Fibrillation
Ríos-Muñoz, Gonzalo R.; Arenal, Ángel; Artés-Rodríguez, Antonio
2018-01-01
Rotational activations, or spiral waves, are one of the proposed mechanisms for atrial fibrillation (AF) maintenance. We present a system for assessing the presence of rotational activity from intracardiac electrograms (EGMs). Our system is able to operate in real-time with multi-electrode catheters of different topologies in contact with the atrial wall, and it is based on new local activation time (LAT) estimation and rotational activity detection methods. The EGM LAT estimation method is based on the identification of the highest sustained negative slope of unipolar signals. The method is implemented as a linear filter whose output is interpolated on a regular grid to match any catheter topology. Its operation is illustrated on selected signals and compared to the classical Hilbert-Transform-based phase analysis. After the estimation of the LAT on the regular grid, the detection of rotational activity in the atrium is done by a novel method based on the optical flow of the wavefront dynamics, and a rotation pattern match. The methods have been validated using in silico and real AF signals. PMID:29593566
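The LAT estimator described above marks activation at the steepest sustained negative deflection of the unipolar electrogram. A minimal sketch of that idea on a synthetic signal is given below, using a short moving average of the first difference as the "sustained slope"; the actual system's filter design, window length and interpolation onto the catheter grid are more elaborate.

```python
import numpy as np

def lat_steepest_negative_slope(egm, fs, win_ms=5.0):
    """Return the local activation time (s) of a unipolar EGM, taken as the
    centre of the short window with the most negative mean slope."""
    win = max(1, int(round(win_ms * 1e-3 * fs)))
    slope = np.diff(egm) * fs                          # first difference (units/s)
    sustained = np.convolve(slope, np.ones(win) / win, mode="valid")
    return (int(np.argmin(sustained)) + win // 2) / fs

# Synthetic unipolar deflection with a sharp downstroke near t = 0.10 s
fs = 1000.0
t = np.arange(0.0, 0.3, 1.0 / fs)
egm = -np.tanh((t - 0.10) * 200.0)
print(lat_steepest_negative_slope(egm, fs))            # approximately 0.10
```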
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method (use of a grid-averaged soil wetness value) can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
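The binned-pdf integration described above replaces f(mean wetness) with the probability-weighted sum of f over wetness bins. The sketch below shows the difference between the two for a hypothetical nonlinear stress function and an invented 10-bin wetness pdf; neither the function nor the pdf comes from the paper.

```python
import numpy as np

def stress_factor(w):
    """Hypothetical nonlinear wetness-stress function (0..1), for illustration."""
    return np.clip((w - 0.1) / 0.4, 0.0, 1.0) ** 2

# Invented 10-bin pdf of soil wetness within one GCM grid area
bin_centres = np.linspace(0.05, 0.95, 10)
bin_probs = np.array([0.02, 0.08, 0.15, 0.20, 0.20, 0.15, 0.10, 0.06, 0.03, 0.01])
bin_probs = bin_probs / bin_probs.sum()          # normalise; conserves moisture

area_integrated = np.sum(bin_probs * stress_factor(bin_centres))
single_value    = stress_factor(np.sum(bin_probs * bin_centres))  # grid-mean wetness
print(area_integrated, single_value)   # they differ because the function is nonlinear
```

Because the function is nonlinear, the two numbers differ; that gap is exactly the inaccuracy the abstract attributes to using a single grid-averaged wetness value.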
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence; Govindaraju, Ravi C.; Atlas, Robert (Technical Monitor)
2002-01-01
The new stretched-grid design with multiple (four) areas of interest, one in each global quadrant, is implemented in both a stretched-grid GCM (general circulation model) and a stretched-grid data assimilation system (DAS). The four areas of interest are: the U.S./Northern Mexico, the El Nino area/Central South America, India/China, and the Eastern Indian Ocean/Australia. Both the stretched-grid GCM and DAS annual (November 1997 through December 1998) integrations are performed with 50 km regional resolution. Efficient regional down-scaling to mesoscales is obtained for each of the four areas of interest, while consistent interactions between regional and global scales and the high quality of the global circulation are preserved. This is the advantage of the stretched-grid approach. The global variable-resolution DAS incorporating the stretched-grid GCM has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The anomalous regional climate events of 1998 that occurred over the U.S., Mexico, South America, China, India, the African Sahel, and Australia are investigated in both simulation and data assimilation modes. The assimilated products are also used, along with gauge precipitation data, for validating the simulation results. The obtained results show that the stretched-grid GCM and DAS are capable of producing realistic, high-quality simulated and assimilated products at mesoscale resolution for regional climate studies and applications.
Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2.
NASA Astrophysics Data System (ADS)
Devarakonda, R.
2014-12-01
Daymet: Daily Surface Weather Data and Climatological Summaries provides gridded estimates of daily weather parameters for North America, including daily continuous surfaces of minimum and maximum temperature, precipitation occurrence and amount, humidity, shortwave radiation, snow water equivalent, and day length. The current data product (Version 2) covers the period January 1, 1980 to December 31, 2013 [1]. Data are available on a daily time step at a 1-km x 1-km spatial resolution in the Lambert Conformal Conic projection, with a spatial extent that covers North America as meteorological station density allows. Daymet data can be downloaded from 1) the ORNL Distributed Active Archive Center (DAAC) search and order tools (http://daac.ornl.gov/cgi-bin/cart/add2cart.pl?add=1219) or directly from the DAAC FTP site (http://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1219) and 2) the Single Pixel Tool (http://daymet.ornl.gov/singlepixel.html) and the THREDDS (Thematic Real-time Environmental Data Services) Data Server (TDS) (http://daymet.ornl.gov/thredds_mosaics.html). The Single Pixel Data Extraction Tool [2] allows users to enter a single geographic point by latitude and longitude in decimal degrees. A routine is executed that translates the (lon, lat) coordinates into projected Daymet (x, y) coordinates. These coordinates are used to access the Daymet database of daily-interpolated surface weather variables. The Single Pixel Data Extraction Tool also provides the option to download multiple coordinates programmatically. The ORNL DAAC's TDS provides customized visualization and access to Daymet time series of North American mosaics. Users can subset and download Daymet data via a variety of community standards, including OPeNDAP, the NetCDF Subset service, and Open Geospatial Consortium (OGC) Web Map/Coverage Service. References: [1] Thornton, P. E., Thornton, M. M., Mayer, B. W., Wilhelmi, N., Wei, Y., Devarakonda, R., & Cook, R. (2012). "Daymet: Daily surface weather on a 1 km grid for North America, 1980-2008". Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center for Biogeochemical Dynamics (DAAC), 1. [2] Devarakonda R., et al. 2012. Daymet: Single Pixel Data Extraction Tool. Available [http://daymet.ornl.go/singlepixel.html].
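The Single Pixel Tool's (lon, lat) to (x, y) translation is a standard geographic-to-Lambert-Conformal-Conic transformation. The sketch below shows how such a conversion might look with pyproj; the projection parameters written here are illustrative assumptions, and the authoritative values must be taken from the Daymet documentation.

```python
from pyproj import Transformer

# Assumed (illustrative) Lambert Conformal Conic parameters for Daymet;
# replace with the values published in the Daymet documentation.
daymet_lcc = ("+proj=lcc +lat_1=25 +lat_2=60 +lat_0=42.5 +lon_0=-100 "
              "+x_0=0 +y_0=0 +ellps=WGS84 +units=m")
to_daymet = Transformer.from_crs("EPSG:4326", daymet_lcc, always_xy=True)

lon, lat = -84.29, 35.96            # an arbitrary point in decimal degrees
x, y = to_daymet.transform(lon, lat)
print(f"projected (x, y) in metres: ({x:.0f}, {y:.0f})")
```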
NASA Astrophysics Data System (ADS)
Belmadani, A.; Palany, P.; Dalphinet, A.; Pilon, R.; Chauvin, F.
2017-12-01
Tropical cyclones (TCs) are a major environmental hazard in numerous small islands such as the French West Indies (Guadeloupe, Martinique, St-Martin, St-Barthélémy). The intense associated winds, which can reach 300 km/h or more, can cause serious damage to the islands and their coastlines. In particular, the combined action of waves, currents and low atmospheric pressure leads to severe storm surge and coastal flooding. Here we report on future changes in cyclonic wave climate for the North Atlantic basin, as a preliminary step towards downscaled projections over the French West Indies at sub-kilometer-scale resolution. A new configuration of the Météo-France ARPEGE atmospheric general circulation model on a stretched grid with increased resolution in the tropical North Atlantic (~15 km) is able to reproduce the observed distribution of maximum surface winds, including extreme events corresponding to Category 5 hurricanes. Ensemble historical simulations (1985-2014, 5 members) and future projections under the IPCC (Intergovernmental Panel on Climate Change) RCP8.5 scenario (2051-2080, 5 members) are used to drive the MFWAM (Météo-France Wave Action Model) over the North Atlantic basin. A coarser 50-km resolution grid is used to propagate distant mid-latitude swells into a finer 10-km resolution grid over the cyclonic basin. Wave model performance is evaluated over a few TC case studies, including the Sep-Oct 2016 Category 5 Hurricane Matthew, using an operational version of ARPEGE at similar resolution to force MFWAM, together with wave buoy data. The buoy data are also used to compute multi-year wave statistics, which then allow an assessment of the realism of the MFWAM historical runs. For each climate scenario and ensemble member, a simulation of the cyclonic season (July to mid-November) is performed every year. The simulated sea states over the North Atlantic cyclonic basin in the 150 historical simulations are compared to their counterparts in the 150 future simulations. Changes in cyclonic wave climate are discussed in the light of concurrent changes in TC activity, inferred from objective tracking of individual TCs.
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.
2014-04-01
The source process of the 20 January 2007 Mw 4.5 Odaesan earthquake in South Korea is investigated in the low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances less than 70 km from the epicenter. Synthetic Green's functions are computed for the low-frequency band of 0.1-0.3 Hz using the wavenumber-integration technique and a one-dimensional velocity model beneath the epicentral area. An iterative grid search across strike, dip, rake, and the focal depth of rupture nucleation was performed to find the best-fit double-couple mechanism. To resolve the nodal-plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of the grid-searched parameters. A focal depth of 10 km was determined through the grid search over depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates a vertical strike-slip faulting mechanism. The NW-striking faulting plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. At low frequencies, the slip pattern indicates a simple source process, with the event effectively acting as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with various rupture velocities in the range of 2.0-4.0 km/s. Although there is only a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW-dipping plane gives a smaller RMS error than its auxiliary plane. In the high-frequency analysis, the slip distribution shows an oblique pattern around the hypocenter, indicating a complex rupture scenario for such a moderate-sized earthquake, similar to those reported for large earthquakes.
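The grid search over strike, dip, rake and nucleation depth boils down to evaluating a waveform misfit for every parameter combination and keeping the minimum. The sketch below shows only that loop structure; the forward model is a random placeholder standing in for the wavenumber-integration synthetics, and the parameter steps are arbitrary choices of ours.

```python
import numpy as np
from itertools import product

def rms_misfit(observed, synthetic):
    return np.sqrt(np.mean((observed - synthetic) ** 2))

def forward_model(strike, dip, rake, depth_km):
    """Placeholder for the waveform forward calculation (e.g. wavenumber-
    integration synthetics); returns a deterministic pseudo-random trace."""
    seed = hash((strike, dip, rake, depth_km)) % 2**32
    return np.random.default_rng(seed).normal(size=256)

observed = np.random.default_rng(1).normal(size=256)   # stand-in for data

best = None
for strike, dip, rake, depth in product(range(0, 360, 20),     # deg
                                        range(10, 91, 20),     # deg
                                        range(-180, 180, 30),  # deg
                                        range(6, 15, 2)):      # km
    err = rms_misfit(observed, forward_model(strike, dip, rake, depth))
    if best is None or err < best[0]:
        best = (err, strike, dip, rake, depth)

print("best-fit (rms, strike, dip, rake, depth):", best)
```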
Suzuki, Noriyuki; Murasawa, Kaori; Sakurai, Takeo; Nansai, Keisuke; Matsuhashi, Keisuke; Moriguchi, Yuichi; Tanabe, Kiyoshi; Nakasugi, Osami; Morita, Masatoshi
2004-11-01
A spatially resolved and geo-referenced dynamic multimedia environmental fate model, G-CIEMS (Grid-Catchment Integrated Environmental Modeling System), was developed on a geographical information system (GIS). A case study for Japan, based on air grid cells of 5 x 5 km resolution and catchments with an average area of 9.3 km2 (corresponding to about 40,000 air grid cells and 38,000 river segments/catchment polygons), was performed for dioxins, benzene, 1,3-butadiene, and di-(2-ethylhexyl)phthalate. The averaged concentrations from the model and from monitoring agreed within a factor of 2-3 for all media. Outputs from G-CIEMS and a generic model were essentially comparable when identical parameters were employed, whereas the G-CIEMS model gave explicit information on the distribution of chemicals in the environment. Exposure-weighted averaged concentrations (EWAC) in air were calculated to estimate the exposure of the population, based on the results of the generic, G-CIEMS, and monitoring approaches. The G-CIEMS approach showed significantly better agreement with the monitoring-derived EWAC than the generic model approach. The implications of using a geo-referenced modeling approach in risk assessment schemes are discussed; such a generic-spatial approach can provide more accurate exposure estimates, together with distribution information, from generally available data sources for a wide range of chemicals.
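An exposure-weighted averaged concentration is simply a population-weighted mean of the gridded air concentrations. The short sketch below illustrates the calculation with invented numbers; the variable names and units are ours, not G-CIEMS output.

```python
import numpy as np

# Per-grid-cell air concentration (e.g. ug/m3) and resident population
concentration = np.array([0.8, 1.5, 3.2, 0.4])
population    = np.array([12000, 45000, 3000, 800])

ewac = np.sum(concentration * population) / np.sum(population)
print(f"EWAC = {ewac:.2f} ug/m3")
```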
Risk assessment of aircraft noise on sleep in Montreal.
Tétreault, Louis-Francois; Plante, Céline; Perron, Stéphane; Goudreau, Sophie; King, Norman; Smargiassi, Audrey
2012-05-24
We estimate the number of awakenings, additional to spontaneous awakenings, induced by nighttime aircraft movements at an international airport in Montreal in the population residing nearby in 2009. Maximum sound levels (LAS,max) were derived from aircraft movements using the Integrated Noise Model 7.0b, on a 28 x 28 km grid centred on the airport with a 0.1 x 0.1 km resolution. Outdoor LAS,max were converted to indoor LAS,max by reducing noise levels by 15 dB(A) or 21 dB(A). For all grid points, LAS,max were transformed into probabilities of additional awakening using a function developed by Basner et al. (2006). The probabilities of additional awakening were linked to the estimated numbers of exposed residents at each grid location to assess the number of aircraft-noise-induced awakenings in Montreal. Using a 15 dB(A) sound attenuation, 590 persons would, on average, have one or more additional awakenings per night for the year 2009. In the scenario using a 21 dB(A) sound attenuation, on average, no one would be subjected to one or more additional awakenings per night due to aircraft noise. Using the 2009 flight patterns, our data suggest that a small number of Montreal residents are exposed to noise levels that could induce one or more awakenings additional to spontaneous awakenings per night.
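The exposure chain described above (outdoor LAS,max, a fixed indoor attenuation, an exposure-response function, then a population weighting) is easy to mirror in code. In the sketch below the exposure-response function is a crude placeholder and is not the Basner et al. (2006) curve; the grid values and resident counts are invented.

```python
import numpy as np

def p_additional_awakening(indoor_lasmax_db):
    """Crude placeholder for the exposure-response function of Basner et al.
    (2006); the published curve should be used in real calculations."""
    return np.clip((indoor_lasmax_db - 35.0) / 100.0, 0.0, None)

outdoor_lasmax = np.array([62.0, 55.0, 48.0])   # per grid point, dB(A)
residents      = np.array([300, 1200, 5000])    # people living at each point
attenuation_db = 15.0                           # outdoor-to-indoor reduction

p = p_additional_awakening(outdoor_lasmax - attenuation_db)
print("expected additional awakenings for one event:", np.sum(p * residents))
```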
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2-continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories, which in turn yield more accurate identification of Lagrangian coherent structures.
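The property the method relies on is the vector-calculus identity div(curl A) = 0: any velocity field obtained as the curl of a smooth vector potential is exactly divergence-free. The sketch below verifies that identity symbolically for an arbitrary example potential; it does not reproduce the paper's B-spline construction of A from gridded velocity data.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# Any smooth vector potential A (an arbitrary example, not from the paper)
A = sp.Matrix([y * z**2, x**2 * z, sp.sin(x * y)])

# Velocity field u = curl(A)
u = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
               sp.diff(A[0], z) - sp.diff(A[2], x),
               sp.diff(A[1], x) - sp.diff(A[0], y)])

divergence = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z))
print(u)
print("div u =", divergence)   # identically zero for any smooth A
```

In the paper's approach, A itself is built from B-splines fitted to the gridded data, so the curl above is available in closed form everywhere in the domain.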
glideinWMS—a generic pilot-based workload management system
NASA Astrophysics Data System (ADS)
Sfiligoi, I.
2008-07-01
The Grid resources are distributed among hundreds of independent Grid sites, requiring a higher level Workload Management System (WMS) to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share and just in time resource matching. glideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool, with the Condor daemons (startds) being started by pilot jobs, and real jobs being vanilla, standard or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains the structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.
REGIONAL MODELING OF THE ATMOSPHERIC TRANSPORT AND DEPOSITION OF ATRAZINE
A version of the Community Multiscale Air Quality (CMAQ) model has been developed by the U.S. EPA that is capable of addressing the atmospheric fate, transport and deposition of some common trace toxics. An initial, 36-km rectangular grid-cell application for atrazine has been...
Global climate models (GCMs) are currently used to obtain information about future changes in the large-scale climate. However, such simulations are typically done at coarse spatial resolutions, with model grid boxes on the order of 100 km on a horizontal side. Therefore, techniq...
Aviation is a unique anthropogenic source with four-dimensional varying emissions, peaking at cruise altitudes (9–12 km). Aircraft emission budgets in the upper troposphere lower stratosphere region and their potential impacts on upper troposphere and surface air quality ar...
Modeling crop residue burning experiments and assessing the fire impacts on air quality
Prescribed burning is a common land management practice that results in ambient emissions of a variety of primary and secondary pollutants with negative health impacts. The community Multiscale Air Quality (CMAQ) model is used to conduct 2 km grid resolution simulations of prescr...
The evaluation of anthropogenic impact on the ecological stability of landscape.
Michaeli, Eva; Ivanová, Monika; Koco, Štefan
2015-01-01
The model area is the northern surroundings of the Zemplínska Šírava water reservoir in eastern Slovakia. The selection of the examined territory and of the time horizons was not random: the aim was to capture the intensity of anthropogenic impact on the coefficient of ecological stability after the construction of the Zemplínska Šírava reservoir. The contribution evaluates the ecological stability of the landscape in the years 1956 and 2009 with GIS technology, using two methods. The first method determines the degree of ecological stability of the landscape on the basis of the significance of land cover classes in a regular network of squares (the real size of each square is 0.5 square km). The second method determines the ecological stability of the landscape on the basis of the degree of human influence on the landscape. The two methods are compared and the output data interpreted (e.g., how marginal land cover classes with minimal areas within a grid square affect fluctuations of the index of ecological stability, and how the results could be made more consistent by using homogeneous spatial units), which also allows the changes in the ecological stability of the landscape to be tracked through time.
Application of a photochemical grid model to milan metropolitan area
NASA Astrophysics Data System (ADS)
Silibello, C.; Calori, G.; Brusasca, G.; Catenacci, G.; Finzi, G.
High ozone levels are regularly reached during the summer period in South-European urban areas, calling for careful design of primary pollutant emission reduction strategies. In this perspective the CALGRID modelling system has been applied to the Milan metropolitan area, located in the Po Valley, the most industrialised and populated area in Italy. For the first modelling exercise, a simulation domain of 100×100 km2 has been considered and a summer period, characterised by high photochemical activity, has been selected. Hourly emissions have been derived by spatially and temporally disaggregating national inventory data, while standard upper-air and ground-based meteorological data have been used as input to the CALMET pre-processor. A careful analysis of simulation results versus local network monitoring data has revealed some critical points, related to both modelling assumptions and practical data availability. A satisfactory reproduction of daytime ozone behaviour has, in fact, been accomplished, both at urban and suburban sites, while the nighttime primary pollutant accumulation and consequent ozone consumption simulated by the model have not found correspondence in the measurements. Nitrogen dioxide has also been successfully modelled, mostly in the city surroundings, whereas higher discrepancies have been found at some urban stations. Possible explanations of these facts are discussed in the paper, giving an insight for further work.
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.
Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.
A multi-resolution approach to electromagnetic modelling
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-07-01
We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in similar accuracy as the staggered grid solution, while retaining the symmetry of coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that multi-resolution approach improves on computational efficiency without compromising the accuracy of the solution.
NASA Astrophysics Data System (ADS)
Fisher, A. T.; Lauer, R. M.; Winslow, D. M.
2015-12-01
There is a region of 20-24 M.y. old seafloor on the eastern flank of the East Pacific Rise, offshore of Costa Rica, where the advective heat loss from the crust is 60-85% of the lithospheric heat flux. Much of this advective flux occurs through basement outcrops that penetrate regionally thick sediments, but rates and patterns of hydrothermal circulation in this area are poorly understood. We have run a series of numerical simulations of coupled fluid-heat transport to assess how crustal aquifer and outcrop properties and the distance(s) between outcrops control ridge-flank hydrothermal flows in this setting. Extracting a large fraction of lithospheric heat through this process requires crustal aquifer permeability on the order of 10^-10 to 10^-9 m^2, values considerably higher than seen on other ridge flanks (where advective heat extraction is less efficient). In simulations using two crustal outcrops of different sizes, vigorous discharge of outcrop-to-outcrop flow is favored through the smaller and/or less permeable outcrop. In addition, simulations with a larger grid (40 km square versus 20 km square) result in higher fluid flow rates, apparently because there is more heat to be mined by flow between the outcrops. For simulations matching regional heat extraction observations, the outcrop-to-outcrop flow rates from the smaller outcrops are 1,000-3,000 kg/s (for the smaller grids) and 2,000-10,000 kg/s (for larger grids), values consistent with predictions made on the basis of a regional heat flux budget. In many simulations, local convection in and out of individual, large outcrops also removes a significant fraction of lithospheric heat. Additional simulations were conducted with three or four outcrops per simulation grid, to further explore relationships between the geometry, properties, and advective heat extraction.
Deterministic seismic hazard macrozonation of India
NASA Astrophysics Data System (ADS)
Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well recognized attenuation relations considering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and the shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. A hazard evaluation without the logic-tree approach has also been carried out for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.
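The grid calculation described above reduces to evaluating, at every grid node, the strongest ground motion produced by any catalogued source within the search radius. The Python sketch below (the study's own code is in MATLAB) uses a single placeholder attenuation relation and a tiny hypothetical catalogue; the actual analysis combines 12 published relations, two source models and a logic tree.

```python
import numpy as np

def attenuation_pha(magnitude, distance_km):
    # Placeholder ground-motion relation of the generic ln(PHA) = a + b*M - c*ln(R + d)
    # form; it stands in for the 12 published attenuation relations used in the study.
    return np.exp(-3.5 + 0.9 * magnitude - 1.2 * np.log(distance_km + 10.0))

# Hypothetical declustered catalogue: (latitude, longitude, moment magnitude).
events = np.array([[28.6, 77.2, 6.5], [27.7, 85.3, 7.8], [23.0, 70.0, 7.6]])

# 0.1 degree grid over a small window; the deterministic hazard at each node is the
# maximum PHA produced by any source within a 300 km search radius.
lats = np.arange(20.0, 30.0, 0.1)
lons = np.arange(70.0, 90.0, 0.1)
pha = np.zeros((lats.size, lons.size))
for i, lat in enumerate(lats):
    for j, lon in enumerate(lons):
        d_km = np.hypot((events[:, 0] - lat) * 111.0,
                        (events[:, 1] - lon) * 111.0 * np.cos(np.radians(lat)))
        near = d_km <= 300.0
        if near.any():
            pha[i, j] = attenuation_pha(events[near, 2], d_km[near]).max()
print(pha.max())
```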
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guangxing; Qian, Yun; Yan, Huiping
One limitation of most global climate models (GCMs) is that with the horizontal resolutions they typically employ, they cannot resolve the subgrid variability (SGV) of clouds and aerosols, adding extra uncertainties to the aerosol radiative forcing estimation. To inform the development of an aerosol subgrid variability parameterization, here we analyze the aerosol SGV over the southern Pacific Ocean simulated by the high-resolution Weather Research and Forecasting model coupled to Chemistry. We find that within a typical GCM grid, the aerosol mass subgrid standard deviation is 15% of the grid-box mean mass near the surface on a 1 month mean basis. The fraction can increase to 50% in the free troposphere. The relationships between the sea-salt mass concentration, meteorological variables, and sea-salt emission rate are investigated in both the clear and cloudy portion. Under clear-sky conditions, marine aerosol subgrid standard deviation is highly correlated with the standard deviations of vertical velocity, cloud water mixing ratio, and sea-salt emission rates near the surface. It is also strongly connected to the grid box mean aerosol in the free troposphere (between 2 km and 4 km). In the cloudy area, interstitial sea-salt aerosol mass concentrations are smaller, but higher correlation is found between the subgrid standard deviations of aerosol mass and vertical velocity. Additionally, we find that decreasing the model grid resolution can reduce the marine aerosol SGV but strengthen the correlations between the aerosol SGV and the total water mixing ratio (sum of water vapor, cloud liquid, and cloud ice mixing ratios).
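Diagnosing subgrid variability from high-resolution output amounts to block-averaging the fine-grid field into GCM-sized boxes and computing the within-box standard deviation. A minimal sketch, using a synthetic lognormal field in place of the WRF-Chem sea-salt output, is shown below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-resolution aerosol mass field (stand-in for WRF-Chem sea-salt output).
fine = rng.lognormal(mean=0.0, sigma=0.5, size=(96, 96))

# Aggregate into coarse "GCM" boxes of 24 x 24 fine cells; compute, per box, the
# subgrid standard deviation and its ratio to the box-mean mass.
nb = 24
ncy, ncx = fine.shape[0] // nb, fine.shape[1] // nb
blocks = fine.reshape(ncy, nb, ncx, nb).swapaxes(1, 2)
box_mean = blocks.mean(axis=(2, 3))
box_std = blocks.std(axis=(2, 3))
sgv_fraction = box_std / box_mean   # the study reports ~15% near the surface
print(sgv_fraction.mean())
```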
High-resolution grids of hourly meteorological variables for Germany
NASA Astrophysics Data System (ADS)
Krähenmann, S.; Walter, A.; Brienen, S.; Imbery, F.; Matzarakis, A.
2018-02-01
We present a 1-km2 gridded German dataset of hourly surface climate variables covering the period 1995 to 2012. The dataset comprises 12 variables including temperature, dew point, cloud cover, wind speed and direction, global and direct shortwave radiation, down- and up-welling longwave radiation, sea level pressure, relative humidity and vapour pressure. This dataset was constructed statistically from station data, satellite observations and model data. It is outstanding in terms of spatial and temporal resolution and in the number of climate variables. For each variable, we employed the most suitable gridding method and combined the best of several information sources, including station records, satellite-derived data and data from a regional climate model. A module to estimate urban heat island intensity was integrated for air and dew point temperature. Owing to the low density of available synop stations, the gridded dataset does not capture all variations that may occur at a resolution of 1 km2. This applies to areas of complex terrain (all the variables), and in particular to wind speed and the radiation parameters. To achieve maximum precision, we used all observational information when it was available. This, however, leads to inhomogeneities in station network density and affects the long-term consistency of the dataset. A first climate analysis for Germany was conducted. The Rhine River Valley, for example, exhibited more than 100 summer days in 2003, whereas in 1996, the number was low everywhere in Germany. The dataset is useful for applications in various climate-related studies, hazard management and for solar or wind energy applications and it is available via doi: 10.5676/DWD_CDC/TRY_Basis_v001.
Gridded rainfall estimation for distributed modeling in western mountainous areas
NASA Astrophysics Data System (ADS)
Moreda, F.; Cong, S.; Schaake, J.; Smith, M.
2006-05-01
Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period of 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall difference, b) annual difference, c) typical year monthly comparison, and d) regression fit of the MAPX and MAP data. In terms of mean areal precipitation, overall differences between the MAP and MAPX time series are very small for the North Fork American River elevation zones. For the East Fork Carson River zones, the overall difference is up to 10 percent. The difference tends to be high when the elevation zones are small in area. In our presentation, we will show the results of our analyses and discuss future evaluations of these precipitation estimates using distributed and lumped hydrologic models.
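The two-step gridding described above (inverse-distance interpolation of gauge values to a 4 km grid, then scaling so accumulated totals match PRISM monthly means) can be sketched as follows; the gauge locations, values and PRISM field are hypothetical placeholders.

```python
import numpy as np

def idw(grid_xy, gauge_xy, gauge_vals, power=2.0):
    # Inverse-distance weighting of gauge precipitation onto grid points.
    d = np.hypot(grid_xy[:, None, 0] - gauge_xy[None, :, 0],
                 grid_xy[:, None, 1] - gauge_xy[None, :, 1])
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * gauge_vals).sum(axis=1) / w.sum(axis=1)

# Hypothetical hourly gauge values (mm) and locations (km), interpolated to a 4 km grid.
gauge_xy = np.array([[5.0, 7.0], [20.0, 3.0], [12.0, 18.0]])
gauge_vals = np.array([1.2, 0.4, 2.1])
gx, gy = np.meshgrid(np.arange(0.0, 24.0, 4.0), np.arange(0.0, 24.0, 4.0))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
hourly = idw(grid_xy, gauge_xy, gauge_vals)

# Adjust so the accumulated monthly total matches a PRISM-like monthly mean field.
monthly_estimate = hourly * 24 * 30                 # crude stand-in monthly accumulation
prism_monthly = np.full(grid_xy.shape[0], 900.0)    # hypothetical PRISM values (mm)
adjusted_hourly = hourly * prism_monthly / np.maximum(monthly_estimate, 1e-6)
print(adjusted_hourly.mean())
```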
Mass production of extensive air showers for the Pierre Auger Collaboration using Grid Technology
NASA Astrophysics Data System (ADS)
Lozano Bahilo, Julio; Pierre Auger Collaboration
2012-06-01
When ultra-high energy cosmic rays enter the atmosphere they interact producing extensive air showers (EAS) which are the objects studied by the Pierre Auger Observatory. The number of particles involved in an EAS at these energies is of the order of billions and the generation of a single simulated EAS requires many hours of computing time with current processors. In addition, the storage space consumed by the output of one simulated EAS is very high. Therefore we have to make use of Grid resources to be able to generate sufficient quantities of showers for our physics studies in reasonable time periods. We have developed a set of highly automated scripts written in common software scripting languages in order to deal with the high number of jobs which we have to submit regularly to the Grid. In spite of the low number of sites supporting our Virtual Organization (VO) we have reached the top spot on CPU consumption among non LHC (Large Hadron Collider) VOs within EGI (European Grid Infrastructure).
Tompkins, Adrian M; McCreesh, Nicky
2016-03-31
One year of mobile phone location data from Senegal is analysed to determine the characteristics of journeys that result in an overnight stay, and are thus relevant for malaria transmission. Defining the home location of each person as the place of most frequent calls, it is found that approximately 60% of people who spend nights away from home have regular destinations that are repeatedly visited, although only 10% have 3 or more regular destinations. The number of journeys involving overnight stays peaks at a distance of 50 km, although roughly half of such journeys exceed 100 km. Most visits only involve a stay of one or two nights away from home, with just 4% exceeding one week. A new agent-based migration model is introduced, based on a gravity model adapted to represent overnight journeys. Each agent makes journeys involving overnight stays to either regular or random locations, with journey and destination probabilities taken from the mobile phone dataset. Preliminary simulations show that the agent-based model can approximately reproduce the patterns of migration involving overnight stays.
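A minimal sketch of the kind of agent-based journey generator described above follows: destinations are drawn from a gravity-type kernel (attraction proportional to population over distance squared), most trips go to a "regular" destination, and stays are short. All populations, distances and probabilities are placeholders rather than values estimated from the Senegal call records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical destinations: populations and distances (km) from one agent's home.
pop = np.array([900_000, 120_000, 45_000, 300_000])
dist = np.array([50.0, 120.0, 30.0, 210.0])

# Gravity-model destination probabilities: attraction ~ population / distance^2.
attraction = pop / dist**2
p_dest = attraction / attraction.sum()

# One year of overnight journeys for a single agent. Trip frequency, the share of
# trips to the regular destination and the stay-length distribution are placeholders.
n_trips = rng.poisson(12)
regular_destination = rng.choice(len(pop), p=p_dest)
for _ in range(n_trips):
    dest = regular_destination if rng.random() < 0.6 else rng.choice(len(pop), p=p_dest)
    nights = rng.choice([1, 2, 3, 7], p=[0.55, 0.30, 0.11, 0.04])
    print(dest, nights)
```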
NASA Astrophysics Data System (ADS)
Letcher, T.; Minder, J. R.
2015-12-01
High resolution regional climate models are used to characterize and quantify the snow albedo feedback (SAF) over the complex terrain of the Colorado Headwaters region. Three pairs of 7-year control and pseudo global warming simulations (with horizontal grid spacings of 4, 12, and 36 km) are used to study how the SAF modifies the regional climate response to a large-scale thermodynamic perturbation. The SAF substantially enhances warming within the Headwaters domain, locally by as much as 5 °C in regions of snow loss. The SAF also increases the inter-annual variability of the springtime warming within the Headwaters domain under the perturbed climate. Linear feedback analysis is used to quantify the strength of the SAF. The SAF attains a maximum value of 4 W m-2 K-1 during April when snow loss coincides with strong incoming solar radiation. On sub-seasonal timescales, simulations at 4 km and 12 km horizontal grid-spacing show good agreement in the strength and timing of the SAF, whereas the 36 km simulation shows greater discrepancies that are tied to differences in snow accumulation and ablation caused by smoother terrain. An analysis of the regional energy budget shows that transport by atmospheric motion acts as a negative feedback to regional warming, damping the effects of the SAF. On the mesoscale, this transport causes non-local warming in locations with no snow. The methods presented here can be used generally to quantify the role of the SAF in other regional climate modeling experiments.
Regional simulation of Indian summer monsoon intraseasonal oscillations at gray-zone resolution
NASA Astrophysics Data System (ADS)
Chen, Xingchao; Pauluis, Olivier M.; Zhang, Fuqing
2018-01-01
Simulations of the Indian summer monsoon by the cloud-permitting Weather Research and Forecasting (WRF) model at gray-zone resolution are described in this study, with a particular emphasis on the model ability to capture the monsoon intraseasonal oscillations (MISOs). Five boreal summers are simulated from 2007 to 2011 using the ERA-Interim reanalysis as the lateral boundary forcing data. Our experimental setup relies on a horizontal grid spacing of 9 km to explicitly simulate deep convection without the use of cumulus parameterizations. When compared to simulations with coarser grid spacing (27 km) and using a cumulus scheme, the 9 km simulations reduce the biases in mean precipitation and produce more realistic low-frequency variability associated with MISOs. Results show that the model at the 9 km gray-zone resolution captures the salient features of the summer monsoon. The spatial distributions and temporal evolutions of monsoon rainfall in the WRF simulations verify qualitatively well against observations from the Tropical Rainfall Measurement Mission (TRMM), with regional maxima located over Western Ghats, central India, Himalaya foothills, and the west coast of Myanmar. The onset, breaks, and withdrawal of the summer monsoon in each year are also realistically captured by the model. The MISO-phase composites of monsoon rainfall, low-level wind, and precipitable water anomalies in the simulations also agree qualitatively with the observations. Both the simulations and observations show a northeastward propagation of the MISOs, with the intensification and weakening of the Somali Jet over the Arabian Sea during the active and break phases of the Indian summer monsoon.
Evaluation of WRF Parameterizations for Air Quality Applications over the Midwest USA
NASA Astrophysics Data System (ADS)
Zheng, Z.; Fu, K.; Balasubramanian, S.; Koloutsou-Vakakis, S.; McFarland, D. M.; Rood, M. J.
2017-12-01
Reliable predictions from Chemical Transport Models (CTMs) for air quality research require accurate gridded weather inputs. In this study, a sensitivity analysis of 17 Weather Research and Forecast (WRF) model runs was conducted to explore the optimum configuration in six physics categories (i.e., cumulus, surface layer, microphysics, land surface model, planetary boundary layer, and longwave/shortwave radiation) for the Midwest USA. WRF runs were initially conducted over four days in May 2011 for a 12 km x 12 km domain over the contiguous USA and a nested 4 km x 4 km domain over the Midwest USA (i.e., Illinois and adjacent areas including Iowa, Indiana, and Missouri). Model outputs were evaluated statistically by comparison with meteorological observations (DS337.0, METAR data, and the Water and Atmospheric Resources Monitoring Network) and the resulting statistics were compared to benchmark values from the literature. The identified optimum configurations of physics parametrizations were then evaluated for the whole months of May and October 2011 to assess WRF model performance for Midwestern spring and fall seasons. This study demonstrated that, for the chosen physics options, WRF accurately predicted temperature (Index of Agreement (IOA) = 0.99), pressure (IOA = 0.99), relative humidity (IOA = 0.93), wind speed (IOA = 0.85), and wind direction (IOA = 0.97). However, WRF did not predict daily precipitation satisfactorily (IOA = 0.16). The developed gridded weather fields will be used as inputs to a CTM ensemble consisting of the Comprehensive Air Quality Model with Extensions to study impacts of chemical fertilizer usage on regional air quality in the Midwest USA.
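The Index of Agreement quoted above is commonly computed with Willmott's formulation, IOA = 1 - Σ(P_i - O_i)^2 / Σ(|P_i - Ō| + |O_i - Ō|)^2, where P and O are predictions and observations; the exact variant used in the study may differ. A minimal sketch with hypothetical temperature pairs:

```python
import numpy as np

def index_of_agreement(pred, obs):
    # Willmott's index of agreement: 1 indicates a perfect match, 0 no agreement.
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

# Hypothetical hourly 2 m temperatures (K): model output vs. station observations.
pred = np.array([288.1, 290.4, 293.0, 295.2, 294.1])
obs = np.array([287.8, 290.9, 292.5, 295.9, 294.6])
print(index_of_agreement(pred, obs))
```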
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Yuxing; Fan, Jiwen; Xiao, Heng
Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires the cumulus scheme to adapt to higher resolutions than it was originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
Simulations and Evaluation of Mesoscale Convective Systems in a Multi-scale Modeling Framework (MMF)
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.
2017-12-01
It is well known that mesoscale convective systems (MCS) produce more than 50% of rainfall in most tropical regions and play important roles in regional and global water cycles. Simulation of MCSs in global and climate models is a very challenging problem. Typical MCSs have a horizontal scale of a few hundred kilometers. Models with a domain of several hundred kilometers and fine enough resolution to properly simulate individual clouds are required to realistically simulate MCSs. The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has shown some capability of simulating organized MCS-like storm signals and propagation. However, its embedded CRMs typically have a small domain (less than 128 km) and coarse resolution (about 4 km) that cannot realistically simulate MCSs and individual clouds. In this study, a series of simulations was performed using the Goddard MMF. The impacts of the domain size and model grid resolution of the embedded CRMs on simulating MCSs are examined. The changes in cloud structure, occurrence, and properties such as cloud types, updrafts and downdrafts, latent heating profile, and cold pool strength in the embedded CRMs are examined in detail. The simulated MCS characteristics are evaluated against satellite measurements using the Goddard Satellite Data Simulator Unit. The results indicate that embedded CRMs with a large domain and fine resolution tend to produce better simulations compared to those with the typical MMF configuration (128 km domain size and 4 km model grid spacing).
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
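Of the three regularizers compared, TSVD is the simplest to illustrate: the pseudoinverse that defines the interpolation kernel is built from only the largest singular values, which keeps the inversion well conditioned as the kernel grows. The sketch below uses a synthetic ill-conditioned matrix as a stand-in for the LS_NUFFT design matrix; it is not the authors' implementation.

```python
import numpy as np

def tsvd_pinv(A, k):
    # Truncated-SVD pseudoinverse: keep only the k largest singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ np.diag(s_inv) @ U.T

# Synthetic ill-conditioned system standing in for the kernel design matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25)) @ np.diag(np.logspace(0, -8, 25))
b = rng.standard_normal(40)

x_plain = np.linalg.pinv(A) @ b    # dominated by noise-amplifying small singular values
x_tsvd = tsvd_pinv(A, k=10) @ b    # regularized, well-conditioned solution
print(np.linalg.norm(x_plain), np.linalg.norm(x_tsvd))
```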
Geometric Stitching Method for Double Cameras with Weak Convergence Geometry
NASA Astrophysics Data System (ADS)
Zhou, N.; He, H.; Bao, Y.; Yue, C.; Xing, K.; Cao, S.
2017-05-01
In this paper, a new geometric stitching method is proposed that uses digital elevation model (DEM)-aided block adjustment to solve the relative orientation parameters of a dual-camera system with weak convergence geometry. A rational function model (RFM) with an affine transformation is chosen as the relative orientation model. To deal with the weak geometry, a reference DEM is used as an additional constraint in the block adjustment, which then only solves for the planimetric coordinates of tie points (TPs). The obtained affine transformation coefficients are then used to generate a virtual grid and to update the rational polynomial coefficients (RPCs), completing the geometric stitching. The proposed method was tested on GaoFen-2 (GF-2) dual-camera panchromatic (PAN) images. The test results show that it achieves a planimetric accuracy better than 0.5 pixel and produces a seamless visual effect. For regions with small relief, replacing the 1 m DEM with a 1 km global DEM, the 90 m SRTM or the 30 m ASTER GDEM V2 as the elevation constraint causes almost no loss of accuracy. These results demonstrate the effectiveness and feasibility of the stitching method.
Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
After the completion of the national and local censuses of medicinal resources, a large amount of data on Chinese medicine resources and their distribution will have been compiled. Species richness is a valid indicator for objectively comparing Chinese medicine resources between regions. However, because counties differ greatly in area, assessing richness with the county as the statistical unit biases the regional abundance statistics. Regular-grid statistical methods reduce the differences in apparent richness that are caused solely by statistical units of different sizes. Taking Chongqing as an example and based on the existing survey data, the differences in the richness of traditional Chinese medicine resources under different grid scales were compared and analyzed. The results showed that a 30 km grid can be selected, at which the richness of Chinese medicine resources in Chongqing best reflects the actual regional pattern of resource richness. Copyright© by the Chinese Pharmaceutical Association.
How Perturbing Ocean Floor Disturbs Tsunami Waves
NASA Astrophysics Data System (ADS)
Salaree, A.; Okal, E.
2017-12-01
Bathymetry maps play perhaps the most crucial role in optimal tsunami simulations. Regardless of the simulation method, on one hand it is desirable to include every detailed bathymetric feature in the simulation grids in order to predict tsunami amplitudes as accurately as possible, but on the other hand, large grids result in long simulation times. It is therefore of interest to investigate a "sufficiency" level - if any - for the amount of detail in bathymetry grids needed to reconstruct the most important features of tsunami simulations obtained from the actual bathymetry. In this context, we use a spherical harmonics series approach to decompose the bathymetry of the Pacific Ocean into its components down to a resolution of 4 degrees (l=100) and create bathymetry grids by accumulating the resulting terms. We then use these grids to simulate the tsunami behavior from pure thrust events around the Pacific through the MOST algorithm (e.g. Titov & Synolakis, 1995; Titov & Synolakis, 1998). Our preliminary results reveal that one would only need to consider the sum of the first 40 coefficients (equivalent to a resolution of 1000 km) to reproduce the main components of the "real" results. This would result in simpler simulations and potentially allow for more efficient tsunami warning algorithms.
Fast and accurate 3D tensor calculation of the Fock operator in a general basis
NASA Astrophysics Data System (ADS)
Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.
2012-11-01
The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and the nuclear potential operators by tensor operations using a generic set of basis functions with low separation rank, discretized on a fine N×N×N Cartesian grid. We prove a Ch^2 error estimate in terms of the mesh parameter h = O(1/N), which allows a guaranteed accuracy of the core Hamiltonian part of the Fock operator to be attained as h→0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid size N of several tens of thousands to ensure high resolution. Modern tensor-formatted arithmetic of complexity O(N), or even O(log N), practically relaxes the limitations on the grid size. Our tensor-based approach allows the standard basis sets in quantum chemistry to be improved significantly by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian up to grid parameters of N^3 ≈ 10^15.
NASA Astrophysics Data System (ADS)
Xu, K.; Sühring, M.; Metzger, S.; Desai, A. R.
2017-12-01
Most eddy covariance (EC) flux towers suffer from footprint bias. This footprint not only varies rapidly in time, but is smaller than the resolution of most earth system models, leading to a systemic scale mismatch in model-data comparison. Previous studies have suggested this problem can be mitigated (1) with multiple towers, (2) by building a taller tower with a large flux footprint, and (3) by applying advanced scaling methods. Here we ask: (1) How many flux towers are needed to sufficiently sample the flux mean and variation across an Earth system model domain? (2) How tall is tall enough for a single tower to represent the Earth system model domain? (3) Can we reduce the requirements derived from the first two questions with advanced scaling methods? We test these questions with output from large eddy simulations (LES) and application of the environmental response function (ERF) upscaling method. PALM LES (Maronga et al. 2015) was set up over a domain of 12 km x 16 km x 1.8 km at 7 m spatial resolution and produced 5 hours of output at a time step of 0.3 s. The surface Bowen ratio alternated between 0.2 and 1 among a series of 3 km wide stripe-like surface patches, with horizontal wind perpendicular to the surface heterogeneity. A total of 384 virtual towers were arranged on a regular grid across the LES domain, recording EC observations at 18 vertical levels. We use increasing height of a virtual flux tower and increasing numbers of virtual flux towers in the domain to compute energy fluxes. Initial results show that a large (>25) number of towers is needed to sufficiently sample the mean domain energy flux. When the ERF upscaling method was applied to the virtual towers in the LES environment, we were able to map fluxes over the domain to within 20% precision with a significantly smaller number of towers. This was achieved by relating sub-hourly turbulent fluxes to meteorological forcings and surface properties. These results demonstrate how advanced scaling techniques can decrease the number of towers, and thus experimental expense, required for domain-scaling over heterogeneous surfaces.
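The first question above is essentially a spatial sampling problem: how fast does the error of a tower-network mean converge to the true domain mean as towers are added? A minimal sketch over a synthetic striped flux field (a crude stand-in for the alternating Bowen-ratio patches in the LES) is given below.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sensible-heat-flux field (W m-2) on 100 m cells over a 16 km x 12 km domain:
# alternating 3 km surface stripes produce bands of high and low flux, plus noise.
nx, ny = 160, 120
stripes = 60.0 + 40.0 * np.sign(np.sin(2.0 * np.pi * np.arange(nx) / 30.0))
flux = stripes[:, None] + rng.normal(0.0, 15.0, size=(nx, ny))
domain_mean = flux.mean()

# Error of the tower-network estimate of the domain mean versus number of towers.
for n_towers in (1, 4, 16, 64):
    sample = flux[rng.integers(0, nx, n_towers), rng.integers(0, ny, n_towers)]
    print(n_towers, abs(sample.mean() - domain_mean))
```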
Global behavior of the height/seasonal structure of tides between 40 deg and 60 deg latitude
NASA Technical Reports Server (NTRS)
Manson, A. H.; Meek, C. E.; Teitelbaum, H.; Fraser, G. J.; Smith, M. J.; Clark, R. R.; Schminder, R.; Kuerschner, D.
1989-01-01
The radars utilized are meteor (2), medium frequency (2) and the new low frequency (1) systems: analysis techniques were exhaustively studied internally and comparatively and are not thought to affect the results. Emphasis is placed upon the new height-time contours of 24-, 12-h tidal amplitudes and phases, which best display height and seasonal structures; where possible high resolution (10 d) is used (Saskatoon), but all stations provide monthly mean resolution. At these latitudes the diurnal tide is generally smaller than the semidiurnal, and displays more variability. However, there is a tendency for vertical wavelengths and amplitudes to be larger during summer months. On occasions in winter and fall, wavelengths may be less than 50 km. The dominant semidiurnal tide shows significant regular season structure; wavelengths are generally small (about 50 km) in winter, large in summer (equal to or greater than 100 km), and these states are separated by rapid equinoctial transitions. There is some evidence for less regularity toward 40 deg. Coupling with mean winds is apparent. Data from earlier ATMAP campaigns are mentioned, and reasons for their inadequacies presented.
Mapping PetaSHA Applications to TeraGrid Architectures
NASA Astrophysics Data System (ADS)
Cui, Y.; Moore, R.; Olsen, K.; Zhu, J.; Dalguer, L. A.; Day, S.; Cruz-Atienza, V.; Maechling, P.; Jordan, T.
2007-12-01
The Southern California Earthquake Center (SCEC) has a science program in developing an integrated cyberfacility - PetaSHA - for executing physics-based seismic hazard analysis (SHA) computations. The NSF has awarded PetaSHA 15 million allocation service units this year on the fastest supercomputers available within the NSF TeraGrid. However, one size does not fit all; a range of systems is needed to support this effort at different stages of the simulations. Enabling PetaSHA simulations on those TeraGrid architectures to solve both dynamic rupture and seismic wave propagation has been a challenge at both the hardware and software levels. This is an adaptation procedure to meet the specific requirements of each architecture. It is important to determine how fundamental system attributes affect application performance. We present an adaptive approach in our PetaSHA application that enables the simultaneous optimization of both computation and communication at run-time using flexible settings. These techniques optimize initialization, source/media partition and MPI-IO output in different ways to achieve optimal performance on the target machines. The resulting code is a factor of four faster than the original version. New MPI-I/O capabilities have been added for the accurate Staggered-Grid Split-Node (SGSN) method for dynamic rupture propagation in the velocity-stress staggered-grid finite difference scheme (Dalguer and Day, JGR, 2007). We use execution workflows across TeraGrid sites for managing the resulting data volumes. Our lessons learned indicate that minimizing time to solution is most critical, in particular when scheduling large scale simulations across supercomputer sites. The TeraShake platform has been ported to multiple architectures including TACC Dell Lonestar and Abe, Cray XT3 BigBen and Blue Gene/L. Parallel efficiency of 96% with the PetaSHA application Olsen-AWM has been demonstrated on 40,960 Blue Gene/L processors at the IBM TJ Watson Center. Notable accomplishments using the optimized code include the M7.8 ShakeOut rupture scenario, as part of the southern San Andreas Fault evaluation SoSAFE. The ShakeOut simulation domain is the same as used for the SCEC TeraShake simulations (600 km by 300 km by 80 km). However, the higher resolution of 100 m with frequency content up to 1 Hz required 14.4 billion grid points, eight times more than the TeraShake scenarios. The simulation used 2000 TACC Dell Linux Lonestar processors and took 56 hours to compute 240 seconds of wave propagation. The pre-processing input partition, as well as post-processing analysis, has been performed on the SDSC IBM DataStar p655 and p690. In addition, as part of the SCEC DynaShake computational platform, the SGSN capability was used to model dynamic rupture propagation for the ShakeOut scenario that matches the proposed surface slip and size of the event. Mapping applications to different architectures requires coordination of many areas of expertise at the hardware and application levels, an outstanding challenge faced in the current petascale computing effort. We believe our techniques, as well as distributed data management through data grids, have provided a practical example of how to effectively use multiple compute resources, and our results will benefit other geoscience disciplines as well.
Rafkin, Scot C R; Sta Maria, Magdalena R V; Michaels, Timothy I
2002-10-17
Mesoscale (<100 km) atmospheric phenomena are ubiquitous on Mars, as revealed by Mars Orbiter Camera images. Numerical models provide an important means of investigating martian atmospheric dynamics, for which data availability is limited. But the resolution of general circulation models, which are traditionally used for such research, is not sufficient to resolve mesoscale phenomena. To provide better understanding of these relatively small-scale phenomena, mesoscale models have recently been introduced. Here we simulate the mesoscale spiral dust cloud observed over the caldera of the volcano Arsia Mons by using the Mars Regional Atmospheric Modelling System. Our simulation uses a hierarchy of nested models with grid sizes ranging from 240 km to 3 km, and reveals that the dust cloud is an indicator of a greater but optically thin thermal circulation that reaches heights of up to 30 km, and transports dust horizontally over thousands of kilometres.
NASA Astrophysics Data System (ADS)
Wang, Xiaoran; Li, Qiusheng; Li, Guohui; Zhou, Yuanze; Ye, Zhuo; Zhang, Hongshuang
2018-03-01
We provide a new study of the seismic velocity structure of the mantle transition zone (MTZ) beneath the northeastern South China Sea using P-wave triplications from two earthquakes in the central Philippines recorded by the Chinese Digital Seismic Network. By fitting observed and theoretical triplications, modeled with the dynamic ray tracing method for traveltimes and the reflectivity method for synthetic waveforms, in a grid search, best-fit velocity models based on IASP91 were obtained to constrain the P-wave velocity structure of the MTZ. The models show that a high-velocity anomaly (HVA) resides at the bottom of the MTZ. The HVA is 215-225 km thick, with a P-wave velocity increase of 1.0% between 450 km and the 665 km or 675 km transition and a further increase of 2.5-3.5% at 665 km or 675 km depth. The P-wave velocity increase ranges from approximately 0.3% to 0.8% below 665 km or 675 km. We propose that the HVA in the MTZ was caused by broken fragments of a diving oceanic plate falling into the MTZ at a high angle, and/or by unstable thick continental lithosphere dropping into the MTZ sequentially or almost simultaneously.
NASA Astrophysics Data System (ADS)
Kittel, Christoph; Lang, Charlotte; Agosta, Cécile; Prignon, Maxime; Fettweis, Xavier; Erpicum, Michel
2016-04-01
This study presents surface mass balance (SMB) results at 5 km resolution from the regional climate model MAR over the Greenland ice sheet. Here, we use the latest MAR version (v3.6), in which the land-ice module (SISVAT), run on a high-resolution grid (5 km) for the surface variables, is fully coupled to the MAR atmospheric module run at a lower resolution of 10 km. This online downscaling technique makes it possible to correct the MAR near-surface temperature and humidity with an elevation-based gradient before forcing SISVAT. The 10 km precipitation is not corrected. Corrections are strongest over the ablation zone, where the topography shows more variation. The model has been forced by ERA-Interim between 1979 and 2014. We will show the advantages of using an online SMB downscaling technique with respect to an offline downscaling extrapolation based on local SMB vertical gradients. Results at 5 km show a better agreement with the PROMICE surface mass balance database than the extrapolated 10 km MAR SMB results.
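The heart of the online downscaling step is an elevation correction of the coarse-grid near-surface fields before they force the finer surface grid. A minimal sketch, assuming a fixed lapse rate and made-up topography in place of MAR's internally derived elevation gradients, is shown below.

```python
import numpy as np

# Hypothetical 10 km near-surface temperatures (deg C) and model orography (m).
t_10km = np.array([[2.0, 1.5], [0.8, -0.3]])
z_10km = np.array([[800.0, 950.0], [1200.0, 1500.0]])

# Hypothetical 5 km surface topography (each 10 km cell split into four surface cells).
z_5km = np.array([[750.0, 840.0, 900.0, 1010.0],
                  [820.0, 780.0, 930.0, 960.0],
                  [1100.0, 1260.0, 1420.0, 1580.0],
                  [1180.0, 1300.0, 1460.0, 1610.0]])

# Replicate the coarse fields onto the fine grid and apply a lapse-rate correction.
lapse_rate = -6.5e-3   # deg C per metre, a standard placeholder value
t_on_fine = np.kron(t_10km, np.ones((2, 2)))
z_on_fine = np.kron(z_10km, np.ones((2, 2)))
t_5km = t_on_fine + lapse_rate * (z_5km - z_on_fine)
print(t_5km)
```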
NASA Astrophysics Data System (ADS)
Guo, Z.; Zhou, Y.
2017-12-01
We report global structure of the 410-km and 660-km discontinuities from finite-frequency tomography using frequency-dependent traveltime measurements of SS precursors recorded at the Global Seismological Network (GSN). Finite-frequency sensitivity kernels for discontinuity depth perturbations are calculated in the framework of traveling-wave mode coupling. We parametrize the global discontinuities using a set of spherical triangular grid points and solve the tomographic inverse problem based on singular value decomposition. Our global 410-km and 660-km discontinuity models reveal distinctly different characteristics beneath the oceans and subduction zones. In general, oceanic regions are associated with a thinner mantle transition zone and depth perturbations of the 410-km and 660-km discontinuities are anti-correlated, in agreement with a thermal origin and an overall warm and dry mantle beneath the oceans. The perturbations are not uniform throughout the oceans but show strong small-scale variations, indicating complex processes in the mantle transition zone. In major subduction zones (except for South America where data coverage is sparse), depth perturbations of the 410-km and 660-km discontinuities are correlated, with both the 410-km and the 660-km discontinuities occurring at greater depths. The distributions of the anomalies are consistent with cold stagnant slabs just above the 660-km discontinuity and ascending return flows in a superadiabatic upper mantle.
On the Surprising Salience of Curvature in Grouping by Proximity
ERIC Educational Resources Information Center
Strother, Lars; Kubovy, Michael
2006-01-01
The authors conducted 3 experiments to explore the roles of curvature, density, and relative proximity in the perceptual organization of ambiguous dot patterns. To this end, they developed a new family of regular dot patterns that tend to be perceptually grouped into parallel contours, dot-sampled structured grids (DSGs). DSGs are similar to the…
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
Development and Validation of The SMAP Enhanced Passive Soil Moisture Product
NASA Technical Reports Server (NTRS)
Chan, S.; Bindlish, R.; O'Neill, P.; Jackson, T.; Chaubell, J.; Piepmeier, J.; Dunbar, S.; Colliander, A.; Chen, F.; Entekhabi, D.;
2017-01-01
Since the beginning of its routine science operation in March 2015, the NASA SMAP observatory has been returning interference-mitigated brightness temperature observations at L-band (1.41 GHz) frequency from space. The resulting data enable frequent global mapping of soil moisture with a retrieval uncertainty below 0.040 cu m/cu m at a 36 km spatial scale. This paper describes the development and validation of an enhanced version of the current standard soil moisture product. Compared with the standard product that is posted on a 36 km grid, the new enhanced product is posted on a 9 km grid. Derived from the same time-ordered brightness temperature observations that feed the current standard passive soil moisture product, the enhanced passive soil moisture product leverages the Backus-Gilbert optimal interpolation technique, which more fully utilizes the additional information in the original radiometer observations to achieve global mapping of soil moisture with enhanced clarity. The resulting enhanced soil moisture product was assessed using long-term in situ soil moisture observations from core validation sites located in diverse biomes and was found to exhibit an average retrieval uncertainty below 0.040 cu m/cu m. As of December 2016, the enhanced soil moisture product has been made available to the public from the NASA Distributed Active Archive Center at the National Snow and Ice Data Center.
Three-dimensional estimate of the lithospheric effective elastic thickness of the Line ridge
NASA Astrophysics Data System (ADS)
Hu, Minzhang; Li, Jiancheng; Jin, Taoyong; Xu, Xinyu; Xing, Lelin; Shen, Chongyang; Li, Hui
2015-09-01
Using a new bathymetry grid formed with vertical gravity gradient anomalies and ship soundings (BAT_VGG), a 1° × 1° lithospheric effective elastic thickness (Te) grid of the Line ridge was calculated with the moving window admittance technique. As a comparison, both the GEBCO_08 and SIO V15.1 bathymetry datasets were used to calculate Te as well. The results show that BAT_VGG is suitable for the calculation of lithospheric effective elastic thickness. The lithospheric effective elastic thickness of the Line ridge is shown to be low, in the range of 5.5-13 km, with an average of 8 km and a standard deviation of 1.3 km. Using the plate cooling model as a reference, most of the effective elastic thicknesses are controlled by the 150-300 °C isotherm. Seamounts are primarily present in two zones, with lithospheric ages of 20-35 Ma and 40-60 Ma, at the time of loading. Unlike the Hawaiian-Emperor chain, the lithospheric effective elastic thickness of the Line ridge does not change monotonously. The tectonic setting of the Line ridge is discussed in detail based on our Te results and the seamount ages collected from the literature. The results show that thermal and fracture activities must have played an important role in the origin and evolution of the ridge.
NASA Astrophysics Data System (ADS)
Wang, Tingting; Sun, Fubao; Ge, Quansheng; Kleidon, Axel; Liu, Wenbin
2018-02-01
Although gridded air temperature data sets share many of the same observations, different rates of warming can be detected owing to the different approaches employed for considering elevation signatures in the interpolation process. Here we examine the influence of the varying spatiotemporal distribution of sites on surface warming in the long-term trend and over the recent warming-hiatus period in China during 1951-2015. A suspicious cooling trend in the raw interpolated air temperature time series is found in the 1950s, 91% of which can be explained by the artificial elevation changes introduced by the interpolation process. We define the regression slope relating temperature difference and elevation difference as the bulk lapse rate of -5.6°C/km, which tends to be higher (-8.7°C/km) in dry regions but lower (-2.4°C/km) in wet regions. Compared to independent experimental observations, we find that the estimated monthly bulk lapse rates work well to capture the elevation bias. Significant improvement can be achieved by adjusting the interpolated original temperature time series using the bulk lapse rate. The results highlight that the developed bulk lapse rate is useful for accounting for the elevation signature in the interpolation of site-based surface air temperature to gridded data sets and is necessary for avoiding elevation bias in climate change studies.
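The bulk lapse rate defined above is simply the slope of a linear regression of station temperature differences on elevation differences; that slope can then be used to adjust interpolated values for the elevation mismatch between stations and grid cells. A minimal sketch with hypothetical station pairs (not the Chinese network data) follows.

```python
import numpy as np

# Hypothetical station pairs: temperature difference (deg C) versus elevation
# difference (km) between a target location and its contributing stations.
dz_km = np.array([0.12, 0.45, -0.30, 0.80, -0.65, 0.20])
dt_c = np.array([-0.8, -2.4, 1.9, -4.7, 3.5, -1.0])

# Bulk lapse rate = regression slope of dT on dz (about -5.6 deg C/km in the study).
slope, intercept = np.polyfit(dz_km, dt_c, 1)

# Adjust an interpolated temperature for the elevation difference between the
# contributing station and the target grid cell (values are hypothetical).
t_interp, dz_station_to_grid = 14.2, 0.35
t_adjusted = t_interp + slope * dz_station_to_grid
print(round(slope, 2), round(t_adjusted, 2))
```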
Networks of channels for self-healing composite materials
NASA Astrophysics Data System (ADS)
Bejan, A.; Lorente, S.; Wang, K.-M.
2006-08-01
This is a fundamental study of how to vascularize a self-healing composite material so that healing fluid reaches all the crack sites that may occur randomly through the material. The network of channels is built into the material and is filled with pressurized healing fluid. When a crack forms, the pressure drops at the crack site and fluid flows from the network into the crack. The objective is to discover the network configuration that is capable of delivering fluid to all the cracks the fastest. The crack site dimension and the total volume of the channels are fixed. It is argued that the network must be configured as a grid and not as a tree. Two classes of grids are considered and optimized: (i) grids with one channel diameter and regular polygonal loops (square, triangle, hexagon) and (ii) grids with two channel sizes. The best architecture of type (i) is the grid with triangular loops. The best architecture of type (ii) has a particular (optimal) ratio of diameters that departs from 1 as the crack length scale becomes smaller than the global scale of the vascularized structure from which the crack draws its healing fluid. The optimization of the ratio of channel diameters cuts in half the time of fluid delivery to the crack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D
In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of the linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time-stepping and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition, the solution produced is smoother in the far field than when extrapolation conditions are used. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Manonmani, N.; Subbiah, V.; Sivakumar, L.
2015-01-01
The key objective of wind turbine development is to ensure that output power is continuously increased. Wind turbines (WTs) are required to supply reactive power to the grid during and after faults to support the grid voltage. This paper introduces a novel heuristic-based controller module employing differential evolution and a neural network architecture to improve the low-voltage ride-through capability of grid-connected wind turbines equipped with doubly fed induction generators (DFIGs). Traditional crowbar-based systems are applied to protect the rotor-side converter during grid faults. This traditional controller does not satisfy the desired requirement, since the DFIG, while the crowbar is connected, behaves like a squirrel-cage machine and absorbs reactive power from the grid. This limitation is addressed here by introducing heuristic controllers that remove the need for a crowbar and ensure that wind turbines supply the necessary reactive power to the grid during faults. The controller is designed to support the DFIG converter during grid faults and handles fault ride-through without any additional hardware. The paper introduces a double wavelet neural network controller which is tuned using differential evolution. To validate the proposed controller module, a case study of a wind farm with 1.5 MW wind turbines connected to a 25 kV distribution system, exporting power to a 120 kV grid through a 30 km, 25 kV feeder, is carried out by simulation. PMID:26516636
High-resolution CSR GRACE RL05 mascons
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2016-10-01
The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal-area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed at a 1° resolution using Tikhonov regularization in a geodesic grid domain. The solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time-variable and does not bias or attenuate future regional signals toward past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
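Tikhonov-regularized estimation of the kind underlying such mascon solutions can be sketched generically. The example below is a toy zeroth-order Tikhonov solver, not the CSR processing chain; the design matrix, regularization operator, and damping parameter are illustrative placeholders for the GRACE partials, geodesic-grid constraint matrix, and tuned regularization weight.

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
    lhs = A.T @ A + (lam ** 2) * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Toy ill-posed problem: more unknowns than well-constrained directions
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 80))          # stand-in for the observation partials
x_true = np.zeros(80)
x_true[10:20] = 1.0                    # a localized "signal"
b = A @ x_true + 0.01 * rng.normal(size=50)
L = np.eye(80)                         # zeroth-order (identity) regularization
x_reg = tikhonov_solve(A, b, L, lam=0.5)
print(x_reg[:5])
```

A time-variable regularization, as described in the abstract, would amount to rebuilding L (or its weighting) for every solution epoch rather than reusing a single static constraint matrix.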
Spatio-temporal modelling for assessing air pollution in Santiago de Chile
NASA Astrophysics Data System (ADS)
Nicolis, Orietta; Camaño, Christian; Marín, Julio C.; Sahu, Sujit K.
2017-01-01
In this work, we propose a space-time approach for studying the PM2.5 concentration in the city of Santiago de Chile. In particular, we apply the autoregressive hierarchical model proposed by [1] using the PM2.5 observations collected by a monitoring network as a response variable and numerical weather forecasts from the Weather Research and Forecasting (WRF) model as covariate together with spatial and temporal (periodic) components. The approach is able to provide short-term spatio-temporal predictions of PM2.5 concentrations on a fine spatial grid (at 1 km × 1 km horizontal resolution).
The current study examines predictions of transference ratios and related modeled parameters for oxidized sulfur and oxidized nitrogen using five years (2002-2006) of 12-km grid cell-specific annual estimates from EPA’s Community Air Quality Model (CMAQ) for five selected sub-re...
High-resolution weather forecasting is affected by many aspects, i.e. model initial conditions, subgrid-scale cumulus convection and cloud microphysics schemes. Recent 12km grid studies using the Weather Research and Forecasting (WRF) model have identified the importance of inco...
NASA Astrophysics Data System (ADS)
Hutter, Nils; Losch, Martin; Menemenlis, Dimitris
2018-01-01
Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.
A new digital elevation model of Antarctica derived from CryoSat-2 altimetry
NASA Astrophysics Data System (ADS)
Slater, Thomas; Shepherd, Andrew; McMillan, Malcolm; Muir, Alan; Gilbert, Lin; Hogg, Anna E.; Konrad, Hannes; Parrinello, Tommaso
2018-05-01
We present a new digital elevation model (DEM) of the Antarctic ice sheet and ice shelves based on 2.5 × 10⁸ observations recorded by the CryoSat-2 satellite radar altimeter between July 2010 and July 2016. The DEM is formed from spatio-temporal fits to elevation measurements accumulated within 1, 2, and 5 km grid cells, and is posted at the modal resolution of 1 km. Altogether, 94 % of the grounded ice sheet and 98 % of the floating ice shelves are observed, and the remaining grid cells north of 88° S are interpolated using ordinary kriging. The median and root mean square difference between the DEM and 2.3 × 10⁷ airborne laser altimeter measurements acquired during NASA Operation IceBridge campaigns are -0.30 and 13.50 m, respectively. The DEM uncertainty rises in regions of high slope, especially where elevation measurements were acquired in low-resolution mode; taking this into account, we estimate the average accuracy to be 9.5 m, a value that is comparable to or better than that of other models derived from satellite radar and laser altimetry.
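The validation statistics quoted above (median and root-mean-square difference against co-located airborne laser altimetry) are straightforward to reproduce for any pair of matched elevation samples. A minimal sketch with made-up numbers, not the study's validation pipeline:

```python
import numpy as np

def dem_validation_stats(dem_elev_m, laser_elev_m):
    """Median and root-mean-square difference between DEM samples and
    coincident airborne laser altimeter elevations (both in metres)."""
    diff = np.asarray(dem_elev_m, dtype=float) - np.asarray(laser_elev_m, dtype=float)
    diff = diff[np.isfinite(diff)]          # ignore cells without valid comparisons
    return np.median(diff), np.sqrt(np.mean(diff ** 2))

median_diff, rms_diff = dem_validation_stats([102.1, 98.7, 250.4], [102.4, 99.0, 251.2])
print(f"median = {median_diff:.2f} m, RMS = {rms_diff:.2f} m")
```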
Spatial heterogeneity in the carrying capacity of sika deer in Japan
Iijima, Hayato; Ueno, Mayumi
2016-01-01
Carrying capacity is one driver of wildlife population dynamics. Although in previous studies carrying capacity was considered to be a fixed entity, it may differ among locations due to environmental variation. The factors underlying variability in carrying capacity, however, have rarely been examined. Here, we investigated spatial heterogeneity in the carrying capacity of Japanese sika deer (Cervus nippon) from 2005 to 2014 in Yamanashi Prefecture, central Japan (a mesh with grid cells of 5.5 × 4.6 km) by state-space modeling. Both carrying capacity and density dependence differed greatly among cells. Estimated carrying capacities ranged from 1.34 to 98.4 deer/km². According to the estimated population dynamics, grid cells with larger proportions of artificial grassland and deciduous forest were subject to lower density dependence and higher carrying capacity. We conclude that population dynamics of ungulates may vary spatially through spatial variation in carrying capacity and that the density level for controlling ungulate abundance should be based on the current density level relative to the carrying capacity for each area. PMID:29692470
Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, Peter E; Thornton, Michele M; Mayer, Benjamin W
More information: http://daymet.ornl.gov Presenter: Ranjeet Devarakonda, Environmental Sciences Division, Oak Ridge National Laboratory (ORNL). Daymet: Daily Surface Weather Data and Climatological Summaries provides gridded estimates of daily weather parameters for North America, including daily continuous surfaces of minimum and maximum temperature, precipitation occurrence and amount, humidity, shortwave radiation, snow water equivalent, and day length. The current data product (Version 2) covers the period January 1, 1980 to December 31, 2013 [1]. The prior product (Version 1) only covered 1980-2008. Data are available on a daily time step at a 1-km x 1-km spatial resolution in Lambert Conformal Conic projection with a spatial extent that covers the conterminous United States, Mexico, and Southern Canada as meteorological station density allows. Daymet data can be downloaded from 1) the ORNL Distributed Active Archive Center (DAAC) search and order tools (http://daac.ornl.gov/cgi-bin/cart/add2cart.pl?add=1219) or directly from the DAAC FTP site (http://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1219) and 2) the Single Pixel Tool [2] and THREDDS (Thematic Real-time Environmental Data Services) Data Server [3]. The Single Pixel Data Extraction Tool allows users to enter a single geographic point by latitude and longitude in decimal degrees. A routine is executed that translates the (lon, lat) coordinates into projected Daymet (x,y) coordinates. These coordinates are used to access the Daymet database of daily-interpolated surface weather variables. Daily data from the nearest 1 km x 1 km Daymet grid cell are extracted from the database and formatted as a table with one column for each Daymet variable and one row for each day. All daily data for selected years are returned as a single (long) table, formatted for display in the browser window. At the top of this table is a link to the same data in a simple comma-separated text format, suitable for import into a spreadsheet or other data analysis software. The Single Pixel Data Extraction Tool also provides the option to download multiple coordinates programmatically. A multiple extractor script is freely available to download at http://daymet.ornl.gov/files/daymet.zip. The ORNL DAAC's THREDDS data server (TDS) provides customized visualization and access to Daymet time series of North American mosaics. Users can subset and download Daymet data via a variety of community standards, including OPeNDAP, NetCDF Subset service, and Open Geospatial Consortium (OGC) Web Map/Coverage Service. The ORNL DAAC TDS also exposes Daymet metadata through its ncISO service to facilitate harvesting Daymet metadata records into third-party catalogs. References: [1] Thornton, P.E., M.M. Thornton, B.W. Mayer, N. Wilhelmi, Y. Wei, R. Devarakonda, and R.B. Cook. 2014. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2. Data set. Available on-line [http://daac.ornl.gov] from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, USA. [2] Devarakonda R., et al. 2012. Daymet: Single Pixel Data Extraction Tool. Available on-line [http://daymet.ornl.go/singlepixel.html]. [3] Wei Y., et al. 2014. Daymet: Thematic Real-time Environmental Data Services. Available on-line [http://daymet.ornl.gov/thredds_tiles.html].
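The coordinate-translation step of the Single Pixel Tool (longitude/latitude to a projected x/y, then snapping to a 1 km cell) can be sketched as below. The Lambert Conformal Conic parameters, grid origin, and the helper `nearest_cell` are illustrative assumptions for the sketch, not the authoritative Daymet projection definition or API.

```python
# A minimal sketch using pyproj, with placeholder LCC parameters.
from pyproj import Transformer
import numpy as np

LCC = ("+proj=lcc +lat_1=25 +lat_2=60 +lat_0=42.5 +lon_0=-100 "
       "+x_0=0 +y_0=0 +ellps=WGS84 +units=m")        # illustrative, not Daymet's exact CRS
to_lcc = Transformer.from_crs("EPSG:4326", LCC, always_xy=True)

def nearest_cell(lon, lat, cell_m=1000.0):
    """Translate (lon, lat) to projected (x, y) and return the index and
    centre of the 1 km cell containing the point on a hypothetical grid."""
    x, y = to_lcc.transform(lon, lat)
    col = int(np.floor(x / cell_m))
    row = int(np.floor(y / cell_m))
    centre = ((col + 0.5) * cell_m, (row + 0.5) * cell_m)
    return row, col, centre

print(nearest_cell(-84.0, 35.0))
```

The cell index would then be used to look up the daily-interpolated time series for that pixel, which is conceptually what the Single Pixel Tool returns as a table.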
Multiscale Approach to Small River Plumes off California
NASA Astrophysics Data System (ADS)
Basdurak, N. B.; Largier, J. L.; Nidzieko, N.
2012-12-01
While larger scale plumes have received significant attention, the dynamics of plumes associated with small rivers typical of California are little studied. Since small streams are not dominated by a momentum flux, their plumes are more susceptible to conditions in the coastal ocean such as wind and waves. In order to correctly model water transport at smaller scales, there is a need to capture larger scale processes. To do this, one-way nested grids with varying grid resolution (1 km and 10 m for the parent and the child grid respectively) were constructed. CENCOOS (Central and Northern California Ocean Observing System) model results were used as boundary conditions to the parent grid. Semi-idealized model results for Santa Rosa Creek, California are presented from an implementation of the Regional Ocean Modeling System (ROMS v3.0), a three-dimensional, free-surface, terrain-following numerical model. In these preliminary results, the interaction between tides, winds, and buoyancy forcing in plume dynamics is explored for scenarios including different strengths of freshwater flow with different modes (steady and pulsed). Seasonal changes in transport dynamics and dispersion patterns are analyzed.
Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An
2018-05-01
In recent years, with the significant increase in urban development, it has become necessary to optimize the existing air monitoring stations so that they reflect the quality of air in the environment. To highlight the spatial representativeness of the air monitoring stations, the monthly mean particulate matter (PM10) concentration in the region was calculated from Beijing's regional air monitoring station data for 2012 to 2014, and the spatial distribution of PM10 concentration over the whole region was derived using the IDW interpolation method and a spatial grid statistical method in GIS. The spatial variation across the districts of Beijing was analyzed with a gridding model (1.5 km × 1.5 km cell resolution), and the three-year spatial overlay analysis of the PM10 concentration data showed how often the PM10 concentration exceeded the standard across the region. Combining the concentration distribution of air pollutants with this spatial analysis in GIS is very important for optimizing the layout of the existing air monitoring stations.
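An inverse-distance-weighting (IDW) step of the kind used above can be sketched in a few lines. The station coordinates and PM10 values below are hypothetical, and the distance exponent of 2 is the common default rather than necessarily the value used in the study.

```python
import numpy as np

def idw(xy_stations, values, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of station values onto grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)               # avoid division by zero at station locations
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

# Monthly mean PM10 (ug/m3) at four hypothetical stations (x, y in km)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pm10 = np.array([80.0, 120.0, 95.0, 150.0])
# Cell centres of a 1.5 km x 1.5 km target grid
gx, gy = np.meshgrid(np.arange(0.75, 10, 1.5), np.arange(0.75, 10, 1.5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(idw(stations, pm10, grid).reshape(gx.shape).round(1))
```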
Electrification of the transportation sector offers limited country-wide greenhouse gas reductions
NASA Astrophysics Data System (ADS)
Meinrenken, Christoph J.; Lackner, Klaus S.
2014-03-01
Compared with conventional propulsion, plugin and hybrid vehicles may offer reductions in greenhouse gas (GHG) emissions, regional air/noise pollution, petroleum dependence, and ownership cost. Comparing only plugins and hybrids amongst themselves, and focusing on GHG, relative merits of different options have been shown to be more nuanced, depending on grid-carbon-intensity, range and thus battery manufacturing and weight, and trip patterns. We present a life-cycle framework to compare GHG emissions for three drivetrains (plugin-electricity-only, gasoline-only-hybrid, and plugin-hybrid) across driving ranges and grid-carbon-intensities, for passenger cars, vans, buses, or trucks (well-to-wheel plus storage manufacturing). Parameter and model uncertainties are quantified via sensitivity analyses. We find that owing to the interplay of range, GHG/km, and portions of country-wide kms accessible to electrification, GHG reductions achievable from plugins (whether electricity-only or hybrids) are limited even when assuming low-carbon future grids. Furthermore, for policy makers considering GHG from electricity and transportation sectors combined, plugin technology may in fact increase GHG compared to gasoline-only-hybrids, regardless of grid-carbon-intensity.
NASA Astrophysics Data System (ADS)
Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.
2017-12-01
Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression derived from a stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile, based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high-resolution maps of basic soil properties such as soil texture and bulk density, the information needed to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that gives the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data with a field-capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
Predicting the Effects of Man-Made Fishing Canals on Floodplain Inundation - A Modelling Study
NASA Astrophysics Data System (ADS)
Shastry, A. R.; Durand, M. T.; Neal, J. C.; Fernandez, A.; Hamilton, I.; Kari, S.; Laborde, S.; Mark, B. G.; Arabi, M.; Moritz, M.; Phang, S. C.
2016-12-01
The Logone floodplain in northern Cameroon is an excellent example of coupled human-natural systems because of strong couplings between the social, ecological and hydrologic systems. Overbank flow from the Logone River in September and October is essential for agriculture and fishing livelihoods. Fishers dig canals to catch fish during the flood's recession to the river in November and December by installing nets at the intersection of canals and the river. Fishing canals connect the river to natural depressions in the terrain and may serve as a man-made extension of the river drainage network. In the last four decades, there has been an exponential increase in the number of canals, which may affect flood hydraulics and the fishery. The goal of this study is to characterize the relationship between the fishing canals and flood dynamics in the Logone floodplain, specifically the parameters of flooding and recession timings and the duration of inundation. To do so, we model the Bara region (~30 km²) of the floodplain using LISFLOOD-FP, a two-dimensional hydrodynamic model with sub-grid parameterizations of canals. We use a simplified version of the hydraulic system at a grid-cell size of 30 m, using synthetic topography, parameterized fishing canals, and representing fishnets as a combination of weirs and mesh screens. The inflow at Bara is obtained from a separate, lower-resolution (1 km grid-cell) model forced by daily discharge records obtained from Katoa, located 25 km upstream of Bara. Preliminary results show that more canals lead to earlier recession of the flood and a shorter duration of flood inundation. A shorter duration of flood inundation reduces the period of fish growth and will affect fisher catch returns. Understanding the couplings within the system is important for predicting long-term dynamics and the impact of building more fishing canals.
Assessing and correcting spatial representativeness of tower eddy-covariance flux measurements
NASA Astrophysics Data System (ADS)
Metzger, S.; Xu, K.; Desai, A. R.; Taylor, J. R.; Kljun, N.; Blanken, P.; Burns, S. P.; Scott, R. L.
2014-12-01
Estimating the landscape-scale exchange of ecologically relevant trace gas and energy fluxes from tower eddy-covariance (EC) measurements is often complicated by surface heterogeneity. For example, a tower EC measurement may represent less than 1% of a grid cell resolved by mechanistic models (order 100-1000 km²). In particular for data assimilation or comparison with large-scale observations, it is hence critical to assess and correct the spatial representativeness of tower EC measurements. We present a procedure that determines from a single EC tower the spatio-temporally explicit flux field of its surroundings. The underlying principle is to extract the relationship between biophysical drivers and ecological responses from measurements under varying environmental conditions. For this purpose, high-frequency EC flux processing and source area calculations (≈60 h⁻¹) are combined with remote sensing retrievals of land surface properties and subsequent machine learning. Methodological details are provided in our companion presentation "Towards the spatial rectification of tower-based eddy-covariance flux observations". We apply the procedure to one year of data from each of four AmeriFlux sites under different climate and ecological environments: Lost Creek shrub fen wetland, Niwot Ridge subalpine conifer, Park Falls mixed forest, and Santa Rita mesquite savanna. We find that heat fluxes from the Park Falls 122-m-high EC measurement and from a surrounding 100 km² target area differ by up to 100 W m⁻², or 65%. Moreover, 85% and 24% of the EC flux observations are adequate surrogates of the mean surface-atmosphere exchange and its spatial variability across a 900 km² target area, respectively, at 5% significance and 80% representativeness levels. Alternatively, the resulting flux grids can be summarized as probability density functions and used to inform mechanistic models directly with the mean flux value and its spatial variability across a model grid cell. Lastly, for each site we evaluate the applicability of the procedure based on a full bottom-up uncertainty budget.
Learning-based Wind Estimation using Distant Soundings for Unguided Aerial Delivery
NASA Astrophysics Data System (ADS)
Plyler, M.; Cahoy, K.; Angermueller, K.; Chen, D.; Markuzon, N.
2016-12-01
Delivering unguided, parachuted payloads from aircraft requires accurate knowledge of the wind field inside an operational zone. Usually, a dropsonde released from the aircraft over the drop zone gives a more accurate wind estimate than a forecast. Mission objectives occasionally demand releasing the dropsonde away from the drop zone, but still require accuracy and precision. Barnes interpolation and many other assimilation methods do poorly when the forecast error is inconsistent in a forecast grid. A machine learning approach can better leverage non-linear relations between different weather patterns and thus provide a better wind estimate at the target drop zone when using data collected up to 100 km away. This study uses the 13 km resolution Rapid Refresh (RAP) dataset available through NOAA and subsamples to an area around Yuma, AZ and up to approximately 10 km AMSL. RAP forecast grids are updated with simulated dropsondes taken from analysis (historical weather maps). We train models using different data mining and machine learning techniques, most notably boosted regression trees, that can accurately assimilate the distant dropsonde. The model takes a forecast grid and simulated remote dropsonde data as input and produces an estimate of the wind stick over the drop zone. Using ballistic winds as a defining metric, we show our data-driven approach does better than Barnes interpolation under some conditions, most notably when the forecast error is different between the two locations, on test data previously unseen by the model. We study and evaluate the model's performance depending on the size, the time lag, the drop altitude, and the geographic location of the training set, and identify parameters most contributing to the accuracy of the wind estimation. This study demonstrates a new approach for assimilating remotely released dropsondes, based on boosted regression trees, and shows improvement in wind estimation over currently used methods.
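A boosted-regression-tree fit of the kind described can be sketched with scikit-learn. The features and target below are synthetic stand-ins for the RAP forecast and remote-dropsonde inputs and the analysed drop-zone wind; none of the numbers reflect the study's actual training data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical predictors: forecast winds at the drop zone plus remote
# dropsonde winds ~100 km away; target: analysed wind component at the drop zone.
n = 2000
X = rng.normal(size=(n, 6))
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.2 * np.tanh(X[:, 1] * X[:, 4]) + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"test RMSE: {rmse:.3f}")
```

In practice the fitted model would be evaluated against a baseline such as Barnes interpolation on held-out cases, mirroring the comparison described in the abstract.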
Mercury Slovenian soils: High, medium and low sample density geochemical maps
NASA Astrophysics Data System (ADS)
Gosar, Mateja; Šajn, Robert; Teršič, Tamara
2017-04-01
A regional geochemical survey was conducted over the whole territory of Slovenia (20,273 km²). High, medium and low sample density surveys were compared. The high sample density set consisted of the regional geochemical data supplemented by local high-density sampling data (irregular grid, n=2835). Medium-density soil sampling was performed on a 5 x 5 km grid (n=817) and the low-density geochemical survey was conducted on a 25 x 25 km sampling grid (n=54). The mercury distribution in Slovenian soils was determined with models of mercury distribution in soil built from all three data sets. A distinct Hg anomaly in the western part of Slovenia is evident in all three models. It is a consequence of 500 years of mining and ore processing at the second largest mercury mine in the world, the Idrija mine. The determined mercury concentrations revealed an important difference between the western and eastern parts of the country. For the medium-scale geochemical mapping, the median value for western Slovenia (0.151 mg/kg) is almost 2-fold higher than the median value in eastern Slovenia (0.083 mg/kg). Moreover, the Hg median for the western part of Slovenia exceeds the Hg median for European soils by a factor of 4 (Gosar et al., 2016). Comparing these sample density surveys shows that high sampling density allows the identification and characterization of anthropogenic influences on a local scale, while medium- and low-density sampling reveal general trends in the mercury spatial distribution but are not appropriate for identifying local contamination in industrial regions and urban areas. The resolution of the generated pattern is best when the high-density survey on a regional scale is supplemented with geochemical data from high-density surveys on a local scale. References: Gosar, M., Šajn, R., Teršič, T. Distribution pattern of mercury in the Slovenian soil: geochemical mapping based on multiple geochemical datasets. Journal of Geochemical Exploration, 2016, 167, 38-48.
Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei
2018-04-13
Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
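The effect of coarsening the sampling grid on summary statistics of rock exposure can be illustrated with a simple subsampling sketch; the field below is synthetic, and the statistics (mean, maximum, coefficient of variation) mirror the quantities discussed above rather than reproduce the study's values.

```python
import numpy as np

def subsample_stats(grid_values, step):
    """Subsample a square-grid field every `step` cells and report the
    mean, maximum, and coefficient of variation of the retained samples."""
    sub = grid_values[::step, ::step]
    mean, std = float(np.mean(sub)), float(np.std(sub, ddof=1))
    return mean, float(np.max(sub)), std / mean

# Synthetic rock-exposure field (%) on a 150 m base grid
rng = np.random.default_rng(42)
field = np.clip(rng.normal(35, 20, size=(60, 60)), 0, 100)
for step in (1, 2, 3, 4, 5, 6):          # 150 m, 300 m, ..., 900 m spacing
    m, mx, cv = subsample_stats(field, step)
    print(f"{150 * step:>4} m grid: mean={m:5.1f}  max={mx:5.1f}  CV={cv:.2f}")
```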
Finding the proper methodology for geodiversity assessment: a recent approach in Brazil and Portugal
NASA Astrophysics Data System (ADS)
Pereira, D.; Santos, L.; Silva, J.; Pereira, P.; Brilha, J.; França, J.; Rodrigues, C.
2012-04-01
Quantification of geodiversity is a relatively new topic. A first set of assessment methodologies has been developed over recent years, although with no fully satisfactory results. This is mainly because the concept of geodiversity tends not to be considered as a whole, but also because the results are difficult to apply in practice. Several major issues remain unresolved, including the criteria to be used, the scale factor to be dealt with, the influence of the size of the area under analysis on the type of criteria and indicators, and the graphic presentation of the results. A methodology for the quantitative assessment of geodiversity was defined and tested at various scales. It was applied to the Xingu River Basin, Amazon, Brazil (about 510,000 km²), Paraná state, Brazil (about 200,000 km²), and mainland Portugal (about 89,000 km²). This method is intended to assess all geodiversity components and to avoid overrating any particular component, such as lithology or relief, a common weakness of other methods. The method is based on the overlay of a grid over different maps at scales that vary according to the areas under analysis, with the final Geodiversity Index being the sum of five partial indexes calculated on the grid. The partial indexes represent the main components of geodiversity, namely geology (stratigraphy and lithology), geomorphology, palaeontology and soils. Another partial index covers singular occurrences of geodiversity, such as precious stones and metals, energy and industrial minerals, mineral waters and springs. Partial indexes were calculated using GIS software by counting all the occurrences present in the selected maps for each grid square. The Geodiversity Index can take the form of a GIS-generated isoline map, allowing easy interpretation by those with little or no geological background. The map can be used as a tool in land-use planning, particularly in identifying priority areas for conservation, management and use of natural resources.
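The grid-overlay calculation of a Geodiversity Index as the sum of partial indexes can be sketched as follows. The component names and per-square occurrence counts below are placeholders for the counts derived from the map layers described above.

```python
import numpy as np

def geodiversity_index(partial_counts):
    """Sum partial indexes (occurrence counts per grid square) into a total
    Geodiversity Index grid. `partial_counts` maps component name -> 2-D
    array of counts, all arrays sharing the same grid shape."""
    return np.sum(np.stack(list(partial_counts.values())), axis=0)

rng = np.random.default_rng(7)
shape = (5, 5)                            # a toy 5 x 5 grid of squares
components = {name: rng.integers(0, 4, size=shape)
              for name in ("geology", "geomorphology", "palaeontology",
                           "soils", "singular_occurrences")}
print(geodiversity_index(components))     # total index per grid square
```

Contouring this total grid (e.g. with a GIS or a plotting library) would give the isoline map of the Geodiversity Index described in the abstract.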
NASA Astrophysics Data System (ADS)
Calo, M.; Dorbath, C.; Luzio, D.; Rotolo, S. G.; D'Anna, G.
2007-12-01
The Calabrian Arc, Southern Italy, is characterised by the subduction, since the Middle Miocene, of the Ionian lithosphere beneath the Tyrrhenian basin. The related Benioff zone is seismically active to a depth > 500 km. The tomoDD code [Zhang and Thurber, 2003] was adopted to perform the tomography, using a set of 2463 earthquakes located in the window 14°30' E - 17° E and 37° N - 41° N and recorded by seismic networks of the INGV in the period 1981-2005. Several inversions were performed using different selections of absolute and differential data obtained by varying the maximum RMS and the threshold of the inter-event distance. Various synthetic and experimental tests were executed to evaluate the resolution and stability of the tomographic inversion. The inversions carried out for the synthetic and the restoration-resolution tests [Zhao et al., 1992] were repeated several times with the same procedure used in the inversion of the experimental data. The absence of bias in the models related to the different grid-node positions was tested by performing inversions with the original grid rotated, translated and deformed. To evaluate the dependence on the initial model, several inversions were also carried out using different 1D and 3D models simulating slab features. Finally, 35 models resulting from the inversions were synthesized into an average model obtained by interpolating each velocity model onto a fixed grid. Each interpolated velocity value was weighted with the corresponding DWS (Derivative Weight Sum), thus yielding a Weighted Average Velocity model. The highly resolved sections through the average Vp, Vs and Vp/Vs models allowed us to image several relevant features of the structure of the subducting Ionian slab and of the Southern Tyrrhenian mantle: the hypocenters are localized in the NW-dipping fast area (Vp > 8.2 km/s), 50-60 km thick, most likely composed of lithospheric mantle. Just below, an aseismic low-Vp zone (6.6-7.7 km/s), 20-25 km thick, is assigned to partially hydrated (serpentinized) harzburgite. The relation between the decrease of Vp and increasing serpentinization in peridotites [Christensen, 2004] suggests that a Vp of 7.0 km/s can be achieved with 30-40 vol% serpentinization. The serpentinized harzburgite, which should coincide with the inner (i.e. colder) portion of the subducting slab, disappears at a depth of 230-250 km, closely corresponding to the experimentally determined maximum pressure stability of antigorite-chlorite assemblages in hydrous peridotites [ca. 8.0 GPa, Schmidt and Poli, 1998; Fumagalli and Poli, 2005]. The vanishing of the low-velocity region with increasing depth could thus be ascribed to the dehydration of the peridotite-serpentinite to less hydrous high-pressure phases (e.g. phase A), whose seismic characteristics are akin to anhydrous lherzolite [Hacker et al., 2003]. Some other interesting features imaged in the tomography are instead related to the roots of the volcanism of the area (Aeolian Islands): two vertically elongated low-velocity areas (Vp ≤ 7.0 km/s) with high Vp/Vs ratios (>1.85) characterize the mantle domains beneath the Stromboli and Marsili volcanoes, reaching a maximum depth of 180 km. We relate these low-Vp, low-Vs and high-Vp/Vs bodies to the accumulation of significant amounts of mantle partial melts.
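The DWS-weighted averaging of the individual inversion models can be sketched generically; the array shapes, velocities, and weights below are illustrative only, not the actual tomographic output.

```python
import numpy as np

def weighted_average_model(models, dws):
    """Combine several interpolated velocity models into a single weighted
    average, weighting each node by its Derivative Weight Sum (DWS).
    `models` and `dws` have shape (n_models, nz, ny, nx)."""
    models = np.asarray(models, dtype=float)
    dws = np.asarray(dws, dtype=float)
    wsum = dws.sum(axis=0)
    # Nodes with zero total weight (no resolution in any model) become NaN
    return (models * dws).sum(axis=0) / np.where(wsum > 0, wsum, np.nan)

rng = np.random.default_rng(3)
models = 8.0 + 0.3 * rng.normal(size=(35, 4, 6, 6))   # toy Vp models in km/s
dws = rng.uniform(0, 100, size=(35, 4, 6, 6))         # toy DWS per node
print(weighted_average_model(models, dws)[0, 0])
```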
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
NASA Astrophysics Data System (ADS)
Van Der Velde, O. A.; Montanya, J.; López, J. A.
2017-12-01
A Lightning Mapping Array (LMA) maps radio pulses emitted by lightning leaders, displaying lightning flash development in the cloud in three dimensions. Over the last 10 years, about a dozen of these advanced systems have become operational in the United States and in Europe, often with the purpose of severe weather monitoring or lightning research. We introduce new methods for the analysis of complex three-dimensional lightning data produced by LMAs and illustrate them with cases of a mid-latitude severe-weather-producing thunderstorm and a tropical thunderstorm in Colombia. The method is based on the characteristics of bidirectional leader development as observed in LMA data (van der Velde and Montanyà, 2013, JGR-Atmospheres), where mapped positive leaders were found to propagate at characteristic speeds around 2 × 10⁴ m s⁻¹, while negative leaders typically propagate at speeds around 10⁵ m s⁻¹. Here, we determine leader speed for every 1.5 × 1.5 × 0.75 km grid box in 3 ms time steps, using two time intervals (e.g., 9 ms and 27 ms) and circles (4.5 km and 2.5 km wide) in which a robust Theil-Sen fitting of the slope is performed for fast and slow leaders. The two are then merged such that important speed characteristics are optimally maintained in negative and positive leaders, and labeled with positive or negative polarity according to the resulting velocity. The method also counts how often leaders from a lightning flash initiate or pass through each grid box. This "local flash rate" may be used in severe thunderstorm or NOx production studies and should be more meaningful than LMA source density, which is biased by the detection efficiency. Additionally, in each grid box the median x, y and z components of the leader propagation vectors of all flashes result in a 3D vector grid which can be compared to vectors in numerical models of leader propagation in response to cloud charge structure. Finally, the charge region altitudes, thickness and rates are summarized from vertical profiles of positive and negative leader rates where these exceed their 7-point averaged profiles. The summarized data can be used to follow charge structure evolution over time, and will be useful for climatological studies and statistical comparison against the parameters of the meteorological environment of storms.
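The robust Theil-Sen speed estimate for the sources inside one grid box can be sketched with scipy; the window length and the synthetic leader track below are illustrative, and the full method repeats such fits for every 1.5 × 1.5 × 0.75 km box at 3 ms steps and for two time intervals.

```python
import numpy as np
from scipy.stats import theilslopes

def leader_speed(times_s, xyz_km, window_s):
    """Robust leader propagation speed (m/s) from LMA sources in one grid
    box: Theil-Sen slope of along-track distance versus time over a
    trailing window of `window_s` seconds."""
    t = np.asarray(times_s, dtype=float)
    xyz = np.asarray(xyz_km, dtype=float) * 1000.0     # km -> m
    keep = t >= t.max() - window_s
    t, xyz = t[keep], xyz[keep]
    # Distance travelled, measured from the first retained source
    dist = np.linalg.norm(xyz - xyz[0], axis=1)
    slope, intercept, lo, hi = theilslopes(dist, t)
    return slope

# Synthetic negative leader: ~1e5 m/s along x with small positional scatter
t = np.linspace(0.0, 0.027, 40)                        # 27 ms window
noise = 0.02 * np.random.default_rng(0).normal(size=t.size)
xyz = np.column_stack([100.0 * t + noise, np.zeros_like(t), 6.0 + np.zeros_like(t)])  # km
print(f"{leader_speed(t, xyz, 0.027):.3g} m/s")        # approx 1e5
```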
The global coastline dataset: the observed relation between erosion and sea-level rise
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Luijendijk, A.; Hagenaars, G.
2017-12-01
Erosion of sandy coasts is considered one of the key risks of sea-level rise. Because sandy coastlines of the world are often highly populated, erosive coastline trends result in risk to populations and infrastructure. Most of our understanding of the relation between sea-level rise and coastal erosion is based on local or regional observations and generalizations of numerical and physical experiments. Until recently there was no reliable global scale assessment of the location of sandy coasts and their rate of erosion and accretion. Here we present the global coastline dataset that covers erosion indicators on a local scale with global coverage. The dataset uses our global coastline transects grid defined with an alongshore spacing of 250 m and a cross shore length extending 1 km seaward and 1 km landward. This grid matches up with pre-existing local grids where available. We present the latest results on validation of coastal-erosion trends (based on optical satellites) and classification of sandy versus non-sandy coasts. We show the relation between sea-level rise (based both on tide-gauges and multi-mission satellite altimetry) and observed erosion trends over the last decades, taking into account broken-coastline trends (for example due to nourishments). An interactive web application presents the publicly-accessible results using a backend based on Google Earth Engine. It allows both researchers and stakeholders to use objective estimates of coastline trends, particularly when authoritative sources are not available.
Sparse spikes super-resolution on thin grids II: the continuous basis pursuit
NASA Astrophysics Data System (ADS)
Duval, Vincent; Peyré, Gabriel
2017-09-01
This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP has been recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse problems regularization. One of the most well-known discretization schemes, the basis pursuit (BP, also known as …
NASA Astrophysics Data System (ADS)
Wang, Qing; Zhao, Xinyu; Ihme, Matthias
2017-11-01
Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was proposed as the closure for the SGS dispersion in a counterflow spray that was studied numerically using a finite difference method on a structured mesh. A presumed form of the LES filter is used in those simulations. In the present study, the technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The LES results are compared against experiments and direct numerical simulations (DNS).
Sensor network based solar forecasting using a local vector autoregressive ridge framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Yoo, S.; Heiser, J.
2016-04-04
The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud-induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy, but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models on a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions, and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
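A vector-autoregressive forecast with ridge regularization can be sketched as follows. The sensor series is synthetic, and the lag order and damping parameter are placeholders rather than the values tuned in the study; the study's "local" aspect would correspond to refitting such a model on a short, recent training window.

```python
import numpy as np

def fit_var_ridge(irradiance, p=2, lam=1.0):
    """Fit a lag-p vector autoregression to a (T, n_sensors) series with
    ridge-regularized least squares; returns coefficients B such that
    y_t ~= B @ [1, y_{t-1}, ..., y_{t-p}]."""
    Y = np.asarray(irradiance, dtype=float)
    T, n = Y.shape
    rows = [np.concatenate([[1.0]] + [Y[t - k] for k in range(1, p + 1)])
            for t in range(p, T)]
    X = np.array(rows)                                  # (T-p, 1 + n*p)
    target = Y[p:]                                      # (T-p, n)
    lhs = X.T @ X + lam * np.eye(X.shape[1])            # ridge-penalized normal equations
    return (np.linalg.solve(lhs, X.T @ target)).T       # (n, 1 + n*p)

def forecast_next(B, recent, p=2):
    """One-step-ahead forecast from the last p observations, shape (p, n)."""
    z = np.concatenate([[1.0]] + [recent[-k] for k in range(1, p + 1)])
    return B @ z

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(500, 5)), axis=0)   # toy 5-sensor network
B = fit_var_ridge(series, p=2, lam=5.0)
print(forecast_next(B, series[-2:], p=2))
```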
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors, including those associated with LiDAR footprint sampling over regional-global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because it reduced forest AGB sampling errors by 15-38%. Furthermore, spaceborne global-scale accuracy requirements were achieved. At least 80% of the grid cells at the 100 m, 250 m, 500 m, and 1 km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR alone, accuracy requirements were met at the 500 m and 250 m grid levels, respectively.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
A stand-alone tidal prediction application for mobile devices
NASA Astrophysics Data System (ADS)
Tsai, Cheng-Han; Fan, Ren-Ye; Yang, Yi-Chung
2017-04-01
It is essential for people conducting fishing, leisure, or research activities on the coast to have timely and handy tidal information. Although tidal information can easily be found on the internet or through mobile applications, that information typically applies only to certain specific locations, not to arbitrary points on the coast, and it requires an internet connection. We have developed an application for Android devices that allows the user to obtain hourly tidal heights anywhere on the coast for the next 24 hours without any internet connection. All the information needed for the tidal height calculation is stored in the application. To develop this application, we first simulated tides in the Taiwan Sea using the hydrodynamic model (MIKE21 HD) developed by DHI. The simulation domain covers the whole coast of Taiwan and the surrounding seas with a grid size of 1 km by 1 km, which allows us to calculate tides with high spatial resolution. The boundary conditions for the simulation domain were obtained from the Tidal Model Driver of Oregon State University, using its tidal constants for eight constituents: M2, S2, N2, K2, K1, O1, P1, and Q1. The simulation calculates tides for 183 days so that the tidal constants of these eight constituents can be extracted for each water grid cell by harmonic analysis. Using the calculated tidal constants, we can predict the tides in each grid cell of the simulation domain, which is useful when tidal information is needed for any location in the Taiwan Sea. For the mobile application, however, we store the eight tidal constants only for the coastal water grid cells. Once the user activates the application, it reads the longitude and latitude from the GPS sensor of the mobile device and finds the nearest coastal grid cell for which tidal constants are stored. The application then calculates the tidal height variation by harmonic synthesis of the stored constants. The application also allows the user to input a location and time to obtain tides for any past or future date at that location. The predicted tides have been verified against the historic tidal records of certain tidal stations, and the verification shows that the tides predicted by the application match the measured records well.
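The on-device prediction amounts to harmonic synthesis from the eight stored constituents. A minimal Python sketch is given below; the constituent speeds are standard astronomical values, while the amplitudes and phase lags are hypothetical placeholders for the values a coastal grid cell would store.

```python
import numpy as np

# name: (angular speed deg/hour, amplitude m, phase lag deg)
# Amplitudes and phases below are illustrative only.
CONSTITUENTS = {
    "M2": (28.9841042, 1.20, 310.0),
    "S2": (30.0000000, 0.35, 345.0),
    "N2": (28.4397295, 0.25, 290.0),
    "K2": (30.0821373, 0.10, 340.0),
    "K1": (15.0410686, 0.20,  60.0),
    "O1": (13.9430356, 0.15,  30.0),
    "P1": (14.9589314, 0.07,  55.0),
    "Q1": (13.3986609, 0.03,  20.0),
}

def tidal_height(hours_since_epoch):
    """Harmonic synthesis of tidal height (m above mean sea level)."""
    h = 0.0
    for speed, amp, phase in CONSTITUENTS.values():
        h += amp * np.cos(np.radians(speed * hours_since_epoch - phase))
    return h

# Hourly prediction for the next 24 hours at one grid cell
print([round(tidal_height(t), 2) for t in range(24)])
```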
NASA Astrophysics Data System (ADS)
Park, J. H.; Chi, H. C.; Lim, I. S.; Seong, Y. J.; Pak, J.
2017-12-01
During the first phase of the EEW (Earthquake Early Warning) service provided to the public by KMA (Korea Meteorological Administration) since 2015 in Korea, KIGAM (Korea Institute of Geoscience and Mineral Resources) adopted ElarmS2 of UC Berkeley BSL and modified the local magnitude relation, the travel time curves, and the association procedure with a so-called TrigDB back-filling method. The TrigDB back-filling method uses a database of station lists sorted by epicentral distance for pre-defined events located on a 1,401 × 1,601 grid (2,243,001 points) around the Korean Peninsula with a spacing of 0.05 degrees. When the version of an event is updated, the TrigDB back-filling method is invoked. First, the grid point closest to the epicenter of the event is chosen from the database, and candidate stations (stations listed for the chosen grid point that are also adjacent to the already-associated stations) are selected. Second, the directions from the chosen grid point to the associated stations are averaged to represent the direction of wave propagation, which is used as a reference for computing apparent travel times. The apparent travel times for the associated stations are computed using a P-wave velocity of 5.5 km/s from the grid point to the station positions projected onto the reference direction. The travel times for the triggered candidate stations are computed in the same way and used to obtain the difference between the apparent travel times of the associated stations and the triggered candidates. Finally, if the difference in the apparent travel times is less than the difference in the arrival times, the method forces the triggered candidate station to be associated with the event and updates the event location. This method is useful for reducing false event locations that could be generated by deep (> 500 km) and regional-distance earthquakes occurring at the Pacific plate subduction boundaries. In a case study comparing the system with TrigDB back-filling to systems without it, forced association of the neighboring stations yielded more reliable results in the early stages of version updating.
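The back-filling criterion can be sketched as follows. This Python snippet is one reading of the procedure described above (projected distances along the mean direction divided by 5.5 km/s); the per-station averaging and the Cartesian coordinate handling are assumptions, not the KIGAM implementation.

```python
import numpy as np

VP = 5.5  # km/s, P-wave velocity used for the apparent travel times

def should_backfill(grid_xy, assoc_xy, assoc_arrivals, cand_xy, cand_arrival):
    """Decide whether a triggered candidate station should be force-associated.

    grid_xy        : (x, y) of the grid point chosen as the trial epicenter (km)
    assoc_xy       : array of (x, y) of already-associated stations (km)
    assoc_arrivals : observed P arrival times at the associated stations (s)
    cand_xy        : (x, y) of the triggered candidate station (km)
    cand_arrival   : observed trigger time at the candidate station (s)
    """
    vecs = np.asarray(assoc_xy, float) - np.asarray(grid_xy, float)

    # Mean direction from the grid point to the associated stations,
    # taken as the reference direction of wave propagation.
    ref = vecs.mean(axis=0)
    ref /= np.linalg.norm(ref)

    # Apparent travel time = projected distance along the reference direction / Vp
    t_assoc = (vecs @ ref) / VP
    t_cand = ((np.asarray(cand_xy, float) - np.asarray(grid_xy, float)) @ ref) / VP

    # Force association if the apparent travel-time differences are smaller
    # than the observed arrival-time differences (averaged over stations here).
    dt_apparent = np.abs(t_cand - t_assoc)
    dt_observed = np.abs(cand_arrival - np.asarray(assoc_arrivals, float))
    return np.mean(dt_apparent) < np.mean(dt_observed)
```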
NASA Astrophysics Data System (ADS)
Halenka, T.; Huszar, P.; Belda, M.
2010-09-01
Recent studies show a considerable effect of atmospheric chemistry and aerosols on climate at regional and local scales. For the purpose of qualifying and quantifying the magnitude of climate forcing due to atmospheric chemistry/aerosols on the regional scale, development of a coupled regional climate and chemistry/aerosol model was started at the Department of Meteorology and Environmental Protection, Charles University, Prague, for the EC FP6 projects QUANTIFY and CECILIA. For this coupling, an existing regional climate model and a chemistry transport model are used at a very high resolution of 10 km: climate is calculated with RegCM, while chemistry is solved by CAMx. Experiments with the coupled system have been prepared for the EC FP7 project MEGAPOLI, assessing the impact of megacities and industrialized areas on climate. Meteorological fields generated by RegCM drive CAMx transport, chemistry and dry/wet deposition. A preprocessor utility was developed for transforming RegCM-provided fields into CAMx input fields and format. A new domain has been set up for MEGAPOLI at 10 km resolution, covering all the European "megacity" regions, i.e. the London metropolitan area, the Paris region, the industrialized Ruhr area, the Po valley, etc. A critical issue is the availability of emission inventories at 10 km resolution that include the urban hot-spots; TNO emissions at 10 km resolution are adopted for this sensitivity study, and the results are compared with a simulation based on merged TNO emissions, i.e. essentially the original EMEP emissions on a 50 km grid. A sensitivity test switching Paris-area emissions on and off is analysed as well. Preliminary results for the year 2005 are presented and discussed to reveal whether the concept of effective emission indices could help to parameterize urban plume effects in lower-resolution models. Interactive coupling is compared as well, to study the potential impact of urban air pollution on urban-area climate.
Uncertainties in estimates of mortality attributable to ambient PM2.5 in Europe
NASA Astrophysics Data System (ADS)
Kushta, Jonilda; Pozzer, Andrea; Lelieveld, Jos
2018-06-01
The assessment of health impacts associated with airborne particulate matter smaller than 2.5 μm in diameter (PM2.5) relies on aerosol concentrations derived either from monitoring networks, satellite observations, numerical models, or a combination thereof. When global chemistry-transport models are used for estimating PM2.5, their relatively coarse resolution has been implied to lead to underestimation of health impacts in densely populated and industrialized areas. In this study the role of spatial resolution and of vertical layering of a regional air quality model, used to compute PM2.5 impacts on public health and mortality, is investigated. We utilize grid spacings of 100 km and 20 km to calculate annual mean PM2.5 concentrations over Europe, which are in turn applied to the estimation of premature mortality by cardiovascular and respiratory diseases. Using model results at a 100 km grid resolution yields about 535 000 annual premature deaths over the extended European domain (242 000 within the EU-28), while numbers approximately 2.4% higher are derived by using the 20 km resolution. Using the surface (i.e. lowest) layer of the model for PM2.5 yields about 0.6% higher mortality rates compared with PM2.5 averaged over the first 200 m above ground. Further, the calculation of relative risks (RR) from PM2.5, using 0.1 μg m‑3 size resolution bins compared to the commonly used 1 μg m‑3, is associated with ±0.8% uncertainty in estimated deaths. We conclude that model uncertainties contribute a small part of the overall uncertainty expressed by the 95% confidence intervals, which are of the order of ±30%, mostly related to the RR calculations based on epidemiological data.
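For readers unfamiliar with how such mortality numbers are derived, the sketch below shows the usual attributable-fraction arithmetic. It uses a simplified log-linear relative-risk function with hypothetical inputs; the study itself relies on exposure-response functions and confidence intervals from the epidemiological literature.

```python
import numpy as np

def attributable_deaths(pm25, population, baseline_rate, rr_per_10):
    """Premature deaths attributable to PM2.5 for one grid cell.

    Minimal log-linear exposure-response sketch with a counterfactual of 0 ug/m3;
    rr_per_10 is the assumed relative risk per 10 ug/m3 (illustrative value below).
    """
    rr = np.exp(np.log(rr_per_10) / 10.0 * pm25)   # RR at the cell's concentration
    af = 1.0 - 1.0 / rr                            # attributable fraction
    return baseline_rate * population * af

# Example: 15 ug/m3, 1e6 people, baseline cause-specific mortality 0.005 per person per year
print(round(attributable_deaths(15.0, 1_000_000, 0.005, rr_per_10=1.06)))
```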
Finely Resolved On-Road PM2.5 and Estimated Premature Mortality in Central North Carolina.
Chang, Shih Ying; Vizuete, William; Serre, Marc; Vennam, Lakshmi Pradeepa; Omary, Mohammad; Isakov, Vlad; Breen, Michael; Arunachalam, Saravanan
2017-12-01
To quantify on-road PM2.5-related premature mortality at a national scale, previous approaches that estimate concentrations at a 12-km × 12-km or larger grid cell resolution may not fully characterize the concentration hotspots that occur near roadways and thus the areas of highest risk. Spatially resolved concentration estimates from on-road emissions that capture these hotspots may improve characterization of the associated risk, but they are rarely used for estimating premature mortality. In this study, we compared on-road PM2.5-related premature mortality in central North Carolina using two different concentration estimation approaches: (i) the Community Multiscale Air Quality (CMAQ) model at a coarser 36-km × 36-km grid resolution, and (ii) a hybrid of a Gaussian dispersion model, CMAQ, and a space-time interpolation technique providing annual average PM2.5 concentrations at the Census-block level (∼105,000 Census blocks). The hybrid modeling approach estimated 24% more on-road PM2.5-related premature mortality than CMAQ. The major difference comes from primary on-road PM2.5, for which the hybrid approach estimated 2.5 times more premature mortality than CMAQ due to predicted exposure hotspots near roadways that coincide with high-population areas. The results show that 72% of primary on-road PM2.5 premature mortality occurs within 1,000 m of roadways, where 50% of the total population resides, highlighting the importance of characterizing near-road primary PM2.5 and suggesting that previous studies may have underestimated premature mortality due to PM2.5 from traffic-related emissions. © 2017 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Skamarock, W. C.
2015-12-01
One of the major problems in atmospheric model applications is the representation of deep convection within the models; explicit simulation of deep convection on fine meshes performs much better than sub-grid parameterized deep convection on coarse meshes. Unfortunately, the high cost of explicit convective simulation has meant that it has only been used to down-scale global simulations in weather prediction and regional climate applications, typically using traditional one-way interactive nesting technology. We have been performing real-time weather forecast tests using a global non-hydrostatic atmospheric model (the Model for Prediction Across Scales, MPAS) that employs a variable-resolution unstructured Voronoi horizontal mesh (nominally hexagons) to span hydrostatic to nonhydrostatic scales. The smoothly varying Voronoi mesh eliminates many downscaling problems encountered with traditional one- or two-way grid nesting. Our test weather forecasts cover two periods: the 2015 Spring Forecast Experiment conducted at the NOAA Storm Prediction Center during the month of May, in which we used a 50-3 km mesh, and the PECAN field program examining nocturnal convection over the US during the months of June and July, in which we used a 15-3 km mesh. An important aspect of this modeling system is that the model physics be scale-aware, particularly the deep convection parameterization. These MPAS simulations employ the Grell-Freitas scale-aware convection scheme. Our test forecasts show that the scheme produces a gradual transition in the deep convection, from deep unstable convection handled entirely by the convection scheme in the coarse-mesh regions (dx > 15 km) to deep convection that is almost entirely explicit in the 3 km NA region of the mesh. We will present results illustrating the performance of critical aspects of the MPAS model in these tests.
NASA Technical Reports Server (NTRS)
Berndt, E. B.; Zavodsky, B. T.; Folmer, M. J.; Jedlovec, G. J.
2014-01-01
Non-convective wind events commonly occur with passing extratropical cyclones and have significant societal and economic impacts. Since non-convective winds often occur in the absence of specific phenomena such as a thunderstorm, tornado, or hurricane, the public is less likely to heed high wind warnings and continues daily activities. As a result, non-convective wind events cause as many fatalities as straight-line thunderstorm winds. One physical explanation for non-convective winds involves tropopause folds. Improved model representation of stratospheric air and associated non-convective wind events could improve non-convective wind forecasts and the associated warnings. In recent years, satellite data assimilation has improved skill in forecasting extratropical cyclones; however, errors still remain in forecasting the position and strength of extratropical cyclones as well as the tropopause folding process. The goal of this study is to determine the impact of assimilating satellite temperature and moisture retrieved profiles from hyperspectral infrared (IR) sounders (i.e. the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared and Microwave Sounding Suite (CrIMSS), and the Infrared Atmospheric Sounding Interferometer (IASI)) on the model representation of the tropopause fold and an associated high wind event that impacted the northeastern United States on 09 February 2013. Model simulations using the Advanced Research Weather Research and Forecasting Model (ARW) were conducted on a 12-km grid with cycled data assimilation mimicking the operational North American Model (NAM). The results from the satellite assimilation run are compared to a control experiment (without hyperspectral IR retrievals), the 32-km North American Regional Reanalysis (NARR) interpolated to a 12-km grid, and 13-km Rapid Refresh analyses.
NASA Astrophysics Data System (ADS)
Shaffer, S. R.
2017-12-01
Coupled land-atmosphere interactions in urban settings modeled with the Weather Research and Forecasting model (WRF) derive urban land cover from 30-meter resolution National Land Cover Database (NLCD) products. However, within urban areas the categorical NLCD loses information on non-urban classifications whenever the impervious cover within a grid cell is above 0%, and the current method for determining urban area overestimates the actual area, biasing the urban contribution. To address this bias, an investigation is conducted using a 1-meter resolution land cover product derived from the National Agriculture Imagery Program (NAIP) dataset. Scenes from 2010 for the Central Arizona Phoenix Long Term Ecological Research (CAP-LTER) study area, roughly a 120 km x 100 km area containing metropolitan Phoenix, are adapted for use within WRF to determine the areal fraction and urban fraction of each WRF urban class. A method is shown for converting these NAIP data into classes corresponding to the NLCD urban classes, and it is evaluated in comparison with the current WRF implementation using NLCD. Results are shown for comparisons of the land cover products at the level of the input data and aggregated to the model resolution (1 km). The sensitivity of WRF short-term summertime pre-monsoon predictions within metropolitan Phoenix to the land cover input product, to the method of aggregating these data to the model grid scale (1 km), and to default versus derived parameter values is examined with the Noah mosaic land surface scheme adapted to use these data. Issues with adapting the non-urban NAIP classes for use in the mosaic approach will also be discussed.
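Aggregating a categorical 1 m land-cover raster to fractional cover on a 1 km model grid can be done with a simple block average, as sketched below; the class codes and block size are placeholders, and the actual WRF/NAIP processing chain is more involved.

```python
import numpy as np

def class_fractions(landcover_1m, classes, block=1000):
    """Aggregate a 1 m categorical land-cover raster to fractional cover
    on a coarser model grid (block x block metres per cell).

    landcover_1m : 2D integer array of class codes (hypothetical codes)
    classes      : iterable of class codes to report fractions for
    """
    ny, nx = landcover_1m.shape
    ny_c, nx_c = ny // block, nx // block
    # Trim to an integer number of blocks, then tile into (cell_y, 1m_y, cell_x, 1m_x)
    lc = landcover_1m[:ny_c * block, :nx_c * block]
    tiles = lc.reshape(ny_c, block, nx_c, block)
    # Fraction of each coarse cell covered by each requested class
    return {c: (tiles == c).mean(axis=(1, 3)) for c in classes}

# Example with random placeholder classes 0 (non-urban) and 1-3 (urban intensities)
lc = np.random.randint(0, 4, size=(3000, 2000))
fracs = class_fractions(lc, classes=[1, 2, 3])
print(fracs[1].shape)   # (3, 2) coarse cells
```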
Techno-economic comparison of series hybrid, plug-in hybrid, fuel cell and regular cars
NASA Astrophysics Data System (ADS)
van Vliet, Oscar P. R.; Kruithof, Thomas; Turkenburg, Wim C.; Faaij, André P. C.
We examine the competitiveness of series hybrid cars compared with fuel cell, parallel hybrid, and regular cars. We use public domain data to determine the efficiency, fuel consumption, total costs of ownership and greenhouse gas emissions resulting from drivetrain choices. The series hybrid drivetrain can be seen both as an alternative to petrol, diesel and parallel hybrid cars and as an intermediate stage towards fully electric or fuel cell cars. We calculate the fuel consumption and costs of four diesel-fuelled series hybrid, four plug-in hybrid and four fuel cell car configurations, and compare these with three reference cars. We find that series hybrid cars may reduce fuel consumption by 34-47%, but cost €5,000-12,000 more. Well-to-wheel greenhouse gas emissions may be reduced to 89-103 g CO2 km-1, compared with reference petrol (163 g km-1) and diesel cars (156 g km-1). Series hybrid cars with wheel motors have lower weight and 7-21% lower fuel consumption than those with central electric motors. The fuel cell car remains uncompetitive even if the production costs of fuel cells come down by 90%. Plug-in hybrid cars are competitive when driving large distances on electricity and/or if the cost of batteries comes down substantially. Well-to-wheel greenhouse gas emissions may be reduced to 60-69 g CO2 km-1.
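Well-to-wheel figures of this kind follow directly from fuel consumption and the well-to-wheel carbon intensity of the fuel, as in the toy calculation below; both input numbers are illustrative assumptions rather than values taken from the paper.

```python
def wtw_gco2_per_km(litres_per_100km, wtw_gco2_per_litre):
    """Well-to-wheel CO2 emissions per km from fuel consumption.

    Both inputs are illustrative assumptions, not values from the study.
    """
    return litres_per_100km / 100.0 * wtw_gco2_per_litre

# e.g. a diesel car at 4.8 L/100 km with roughly 3240 g CO2 per litre well-to-wheel
print(round(wtw_gco2_per_km(4.8, 3240)))   # about 156 g CO2/km
```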
Application of Physically based landslide susceptibility models in Brazil
NASA Astrophysics Data System (ADS)
Carvalho Vieira, Bianca; Martins, Tiago D.
2017-04-01
Shallow landslides and floods are the processes responsible for most of the material and environmental damage in Brazil. In recent decades, some landslide events have caused a high number of deaths (e.g. over 1,000 deaths in a single event) and incalculable social and economic losses. Therefore, prediction of these processes is considered an important tool for land-use planning. Among the different methods, physically based landslide susceptibility models have been widely used in many countries, but their use in Brazil is still incipient compared with other approaches, such as statistical tools and frequency analyses. Thus, the main objective of this research was to assess the application of physically based landslide susceptibility models in Brazil, identifying their main results, the efficiency of the susceptibility mapping, the parameters used, and the limitations of the humid tropical environment. To achieve this, we evaluated applications of the SHALSTAB, SINMAP and TRIGRS models in Brazilian studies, along with the geotechnical values, scales and DEM grid resolutions used, and assessed the results based on the agreement between predicted susceptibility and landslide scar maps. Most studies in Brazil applied SHALSTAB or SINMAP and, to a lesser extent, the TRIGRS model. Most of this research is concentrated in the Serra do Mar, a system of escarpments and rugged mountains that extends for more than 1,500 km along the southern and southeastern Brazilian coast and is regularly affected by heavy rainfall that generates widespread mass movements. Most of these studies used conventional topographic maps at scales ranging from 1:2,000 to 1:50,000 and DEM grid resolutions between 2 and 20 m. Regarding geotechnical and hydrological values, only a few studies used field-collected data, which could produce better results, as indicated by the international literature. Therefore, even though these models have enormous potential for susceptibility mapping, including for comparisons between different areas, studies in Brazil require more careful treatment of the topographic and geotechnical input parameters.
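SHALSTAB, SINMAP and TRIGRS differ in how they treat hydrology and transience, but all build on infinite-slope stability. The sketch below shows that shared core with illustrative soil parameters; it is not a reproduction of any of the three models.

```python
import numpy as np

def infinite_slope_fs(slope_deg, soil_depth, wet_frac,
                      cohesion=5e3, phi_deg=32.0,
                      gamma_soil=18e3, gamma_water=9.81e3):
    """Factor of safety from the infinite-slope model.

    slope_deg  : terrain slope (degrees), e.g. from the DEM
    soil_depth : vertical soil depth z (m)
    wet_frac   : saturated fraction of the soil column (h/z), 0..1
    cohesion   : effective soil cohesion (Pa)        -- illustrative value
    phi_deg    : effective friction angle (degrees)  -- illustrative value
    gamma_*    : unit weights of soil and water (N/m3)
    """
    theta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    z, m = soil_depth, wet_frac
    resisting = cohesion + (gamma_soil * z - gamma_water * m * z) * np.cos(theta) ** 2 * np.tan(phi)
    driving = gamma_soil * z * np.sin(theta) * np.cos(theta)
    return resisting / driving

# A 35-degree slope with 1 m of soil, half saturated (FS < 1 would indicate instability)
print(round(infinite_slope_fs(35.0, 1.0, 0.5), 2))
```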
Inference and Biogeochemical Response of Vertical Velocities inside a Mode Water Eddy
NASA Astrophysics Data System (ADS)
Barceló-Llull, B.; Pallas Sanz, E.; Sangrà, P.
2016-02-01
With the aim of studying the modulation of biogeochemical fluxes by the ageostrophic secondary circulation in anticyclonic mesoscale eddies, a typical eddy of the Canary Eddy Corridor was surveyed in an interdisciplinary manner in September 2014 in the framework of the PUMP project. The eddy was elliptical in shape, 4 months old, 110 km in diameter and 400 m deep. It was of intrathermocline type, often also referred to as a mode water eddy. We inferred the mesoscale vertical velocity field by solving a generalized omega equation using the 3D density and ADCP velocity fields from a regular CTD-SeaSoar grid, sampled over five days and centered on the eddy. The grid transects were 10 nautical miles apart. Although complex, on average the inferred omega vertical velocity field (hereafter w) shows a dipolar structure, with downwelling upstream of the propagation path (west) and upwelling downstream. The w at the eddy center was zero and maximum values were located at the periphery, reaching ca. 6 m day-1. Coinciding with these vertical velocity cells, a noticeable enhancement of phytoplankton biomass was observed at the eddy periphery with respect to the far field. A corresponding upward diapycnal flux of nutrients was also observed at the periphery. As minimum velocities were found at the eddy center, a linear Ekman pumping mechanism was discarded. Minimum values of phytoplankton biomass were also observed at the eddy center. The possible mechanisms for such a dipolar w cell are still being investigated, but an analysis of the generalized omega equation forcing terms suggests that it may be a combination of horizontal deformation and advection of vorticity by the ageostrophic current (related to nonlinear Ekman pumping). As expected for the trade winds, the wind was rather constant and uniform, with a speed of ca. 5 m s-1. The diagnosed nonlinear Ekman pumping also led to a dipolar cell that mirrors the omega-based w dipole.
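The linear and vorticity-modified (nonlinear) Ekman pumping diagnostics mentioned above can be computed from gridded wind stress and surface vorticity as sketched below; uniform density is assumed and the grid variables are placeholders, so this is only an illustration of the diagnostic, not the analysis performed in the study.

```python
import numpy as np

def ekman_pumping(taux, tauy, zeta, f, dx, dy, rho=1025.0):
    """Linear and nonlinear (vorticity-modified) Ekman pumping velocities.

    taux, tauy : wind-stress components on a regular grid (N/m2)
    zeta       : relative vorticity of the surface current (1/s)
    f          : Coriolis parameter (1/s); dx, dy : grid spacing (m)
    Returns (w_linear, w_nonlinear) in m/s, positive upward.
    """
    def curl(u, v):
        dvdx = np.gradient(v, dx, axis=1)
        dudy = np.gradient(u, dy, axis=0)
        return dvdx - dudy

    w_lin = curl(taux, tauy) / (rho * f)                       # classical Ekman pumping
    w_nl = curl(taux / (f + zeta), tauy / (f + zeta)) / rho    # Stern-type nonlinear form
    return w_lin, w_nl
```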
Fine resolution 3D temperature fields off Kerguelen from instrumented penguins
NASA Astrophysics Data System (ADS)
Charrassin, Jean-Benoît; Park, Young-Hyang; Le Maho, Yvon; Bost, Charles-André
2004-12-01
The use of diving animals as autonomous vectors of oceanographic instruments is rapidly increasing, because this approach yields cost-efficient new information and can be used in previously poorly sampled areas. However, methods for analyzing the collected data are still under development. In particular, difficulties may arise from the heterogeneous data distribution linked to the animals' behavior. Here we show how raw temperature data collected by penguin-borne loggers were transformed into a regular gridded dataset that provided new information on the local circulation off Kerguelen. A total of 16 king penguins (Aptenodytes patagonicus) were equipped with satellite-positioning transmitters and with temperature-time-depth recorders (TTDRs) to record dive depth and sea temperature. The penguins' foraging trips, recorded over five summers, ranged from 140 to 600 km from the colony, and 11,000 dives deeper than 100 m were recorded. Temperature measurements recorded during diving were used to produce detailed 3D temperature fields of the area (0-200 m). The data treatment included dive location, determination of the vertical profile for each dive, averaging and gridding of those profiles onto 0.1°×0.1° cells, and optimal interpolation in both the horizontal and vertical using an objective analysis. Horizontal fields of temperature at the surface and at 100 m are presented, as well as a vertical section along the main foraging direction of the penguins. Compared with conventional temperature databases (the Levitus World Ocean Atlas and historical stations available in the area), the penguin-derived 3D temperature fields are much more finely resolved, by about one order of magnitude. Although the TTDRs were less accurate than conventional instruments, the high spatial resolution of the penguin-derived data provided unprecedentedly detailed information on the upper-level circulation pattern east of Kerguelen, as well as on the iron-enrichment mechanism leading to high primary production over the Kerguelen Plateau.
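The horizontal binning step of the data treatment (averaging profiles onto 0.1° × 0.1° cells) can be sketched as below for a single depth level; the vertical interpolation of each dive profile and the objective analysis are not shown, and the input arrays are placeholders.

```python
import numpy as np

def grid_profiles(lon, lat, temp, cell=0.1):
    """Average irregularly located temperature values onto a regular
    cell x cell degree grid (one depth level at a time).

    lon, lat, temp : 1D arrays of dive positions and temperatures at this level
    Returns the grid-cell edges and the array of cell-mean temperatures (NaN where empty).
    """
    lon = np.asarray(lon); lat = np.asarray(lat); temp = np.asarray(temp, float)
    lon_edges = np.arange(lon.min(), lon.max() + cell, cell)
    lat_edges = np.arange(lat.min(), lat.max() + cell, cell)
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    sums, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges], weights=temp)
    means = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)
    return lon_edges, lat_edges, means
```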
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making such methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing noisy data from a non-uniform grid onto a specified uniform grid is proposed. First, the denoising is performed on every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients are calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation of non-uniformly sampled data compared with the conventional FDCT method and the wavelet transform.
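The scale-dependent thresholding step can be illustrated generically as below. The snippet applies a MAD-based soft threshold to per-scale coefficient arrays; it does not reproduce the NFFT-based curvelet transform itself, and whether the authors use hard or soft thresholding, or how the local factors are set, is not specified here.

```python
import numpy as np

def threshold_by_scale(coeffs_per_scale, k=3.0):
    """Scale-dependent soft thresholding of transform coefficients.

    coeffs_per_scale : list of 2D coefficient arrays, one per decomposition scale
                       (generic arrays here; the paper uses curvelet coefficients
                       obtained through the NFDCT, which is not reproduced)
    k                : threshold factor in units of the estimated noise level
    """
    out = []
    for c in coeffs_per_scale:
        sigma = np.median(np.abs(c)) / 0.6745                    # MAD noise estimate per scale
        t = k * sigma
        out.append(np.sign(c) * np.maximum(np.abs(c) - t, 0.0))  # soft threshold
    return out
```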
NASA Astrophysics Data System (ADS)
Wilde-Piorko, M.; Polkowski, M.
2016-12-01
Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient is travel time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases a 1D model is not sufficient to account for differences in local and regional structure. Whenever possible, travel times through a 3D velocity model have to be calculated. This can be achieved using ray tracing or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid in either Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
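Since pySeismicFMM itself had not yet been released at the time of this abstract, the sketch below uses the scikit-fmm package to illustrate the same grid-based first-arrival computation on a regular Cartesian velocity grid; the grid size, layering and source position are arbitrary.

```python
import numpy as np
import skfmm  # scikit-fmm, a generic Fast Marching implementation (not pySeismicFMM)

# Regular 2D Cartesian grid: 200 x 200 cells, 1 km spacing
nx, nz, dx = 200, 200, 1.0
velocity = np.full((nz, nx), 5.5)          # km/s, uniform upper layer
velocity[100:, :] = 6.5                    # faster lower layer

# phi is negative only at the source cell; its zero contour marks the source position
phi = np.ones((nz, nx))
phi[0, 100] = -1.0                         # source at the surface, x = 100 km

# First-arrival travel time (s) from the source to every grid cell
tt = skfmm.travel_time(phi, velocity, dx=dx)
print(tt[150, 50])                         # travel time at one "receiver" cell
```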