Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major contribution relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmosphere Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range, high-resolution mesoscale simulations with limited-area atmospheric models, when applied to downscale regional analysis fields over large spatial domains, can provide valuable information for many applications, including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations of the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values, leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
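To make the spectral nudging idea concrete, the sketch below relaxes only the large scales (wavelengths longer than a cutoff) of a 2-D model field toward a driving field assumed to be already interpolated to the model grid. The function name, the sharp spectral cutoff, and the single relaxation timescale are illustrative assumptions, not the specific formulation used in the study, which also varies the nudging with height and time.

```python
import numpy as np

def spectral_nudge(model, driver, dx_km, cutoff_km, tau_s, dt_s):
    """Relax the large scales (wavelengths > cutoff_km) of a 2-D model field
    toward the driving field; sharp cutoff and constant relaxation timescale
    are illustrative simplifications."""
    ny, nx = model.shape
    # Radial wavenumber magnitude (cycles per km) for each Fourier coefficient
    kx = np.fft.fftfreq(nx, d=dx_km)
    ky = np.fft.fftfreq(ny, d=dx_km)
    kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    keep = kk < 1.0 / cutoff_km  # low-wavenumber (large-scale) mask

    # Large-scale parts of the model and driving fields
    large_model = np.fft.ifft2(np.fft.fft2(model) * keep).real
    large_driver = np.fft.ifft2(np.fft.fft2(driver) * keep).real

    # Newtonian relaxation of the large scales only
    return model + (dt_s / tau_s) * (large_driver - large_model)
```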
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
2017-01-01
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separates the GPS trajectory into segments, finds the shortest path for each segment, and ultimately generates a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
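As a rough illustration of a longest-common-subsequence (LCS) similarity score between a trajectory segment and a candidate matched path, both can be represented as ordered lists of road-link IDs. The normalization by the longer sequence below is an assumption for the sketch, not necessarily the paper's exact scoring system.

```python
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming longest common subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(segment_links, path_links):
    """Score in [0, 1]: fraction of the longer sequence shared as an ordered
    common subsequence (illustrative normalization)."""
    if not segment_links or not path_links:
        return 0.0
    return lcs_length(segment_links, path_links) / max(len(segment_links), len(path_links))

# Example: similarity(["e12", "e13", "e20"], ["e12", "e14", "e13", "e20"]) -> 0.75
```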
Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; ...
2015-01-20
We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term, and high spatial resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also faces this problem: long run times limit applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
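The speedup comes from treating gridded HRUs as independent units of work within each time step. SWATGP does this with OpenMP threads inside the compiled model; the Python sketch below only illustrates the same decomposition with a process pool, and the toy water-balance function is a placeholder, not SWAT's actual per-HRU computation.

```python
from multiprocessing import Pool

def hru_water_balance(state):
    """Toy per-HRU update (storage + rain - simple loss term); stands in for
    the real per-HRU SWAT hydrologic computations."""
    storage, rain = state
    return storage + rain - 0.1 * storage

def run_time_step(hru_states, workers=15):
    """Process independent gridded HRUs in parallel for one time step."""
    with Pool(workers) as pool:
        return pool.map(hru_water_balance, hru_states)

# Example: run_time_step([(10.0, 2.5), (4.0, 0.0), (7.5, 1.2)])
```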
NASA Technical Reports Server (NTRS)
Dominguez, Anthony; Kleissl, Jan P.; Luvall, Jeffrey C.
2011-01-01
Large-eddy Simulation (LES) was used to study convective boundary layer (CBL) flow through suburban regions with both large and small scale heterogeneities in surface temperature. Constant remotely sensed surface temperatures were applied at the surface boundary at resolutions of 10 m, 90 m, 200 m, and 1 km. Increasing the surface resolution from 1 km to 200 m had the most significant impact on the mean and turbulent flow characteristics as the larger scale heterogeneities became resolved. While previous studies concluded that scales of heterogeneity much smaller than the CBL inversion height have little impact on the CBL characteristics, we found that further increasing the surface resolution (resolving smaller scale heterogeneities) results in an increase in mean surface heat flux, thermal blending height, and potential temperature profile. The results of this study will help to better inform sub-grid parameterization for meso-scale meteorological models. The simulation tool developed through this study (combining LES and high resolution remotely sensed surface conditions) is a significant step towards future studies on the micro-scale meteorology in urban areas.
NASA Astrophysics Data System (ADS)
Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan
2017-10-01
Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuate these effects by maintaining flow connectivity and flow capacity between floodplains at varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
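The Nash-Sutcliffe efficiency (NSE) used to compare the SDC and MDC runs is a standard hydrological skill score; a minimal reference implementation is shown below.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the
    observed mean, negative = worse than the observed mean."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```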
Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.
Hotta, H; Rempel, M; Yokoyama, T
2016-03-25
The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible to generate fine scale soil moisture fields across large extents, based on coarse scale observations. This approach is likely to be applicable for generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
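A minimal sketch of the CDA nudging construction described above: the model tendency is augmented with a term proportional to the misfit between a coarse-grid interpolant of the model state and the coarse observations. The block-mean interpolant, explicit time stepping, and 1-D setting are simplifying assumptions made here; the study applies the idea with the HYDRUS vadose-zone model, which is not shown.

```python
import numpy as np

def coarse_interpolant(u, block):
    """I_h: block means of a fine-grid field, held constant over each block.
    Assumes len(u) is a multiple of `block`."""
    means = u.reshape(-1, block).mean(axis=1)
    return np.repeat(means, block)

def cda_step(u, obs_interp, rhs, mu, dt, block):
    """One explicit step of the nudged model (sketch):
    du/dt = rhs(u) - mu * (I_h(u) - I_h(u_obs)),
    where obs_interp holds the interpolant of the coarse observations."""
    misfit = coarse_interpolant(u, block) - obs_interp
    return u + dt * (rhs(u) - mu * misfit)
```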
Scaling an in situ network for high resolution modeling during SMAPVEX15
USDA-ARS?s Scientific Manuscript database
Among the greatest challenges within the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling, by coupling in si...
NASA Astrophysics Data System (ADS)
Rasera, L. G.; Mariethoz, G.; Lane, S. N.
2017-12-01
Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
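Spatial and temporal decimation by simple local averaging, as described above, is straightforward to implement; the sketch below downsamples a (time, y, x) movie by integer factors. Truncating dimensions to whole blocks is an implementation convenience for the sketch, not part of the published method.

```python
import numpy as np

def decimate(video, t_factor, s_factor):
    """Decimate a (time, y, x) calcium-imaging movie by local averaging:
    t_factor frames in time, s_factor x s_factor pixels in space."""
    t, y, x = video.shape
    t = (t // t_factor) * t_factor
    y = (y // s_factor) * s_factor
    x = (x // s_factor) * s_factor
    v = video[:t, :y, :x].reshape(
        t // t_factor, t_factor, y // s_factor, s_factor, x // s_factor, s_factor
    )
    return v.mean(axis=(1, 3, 5))
```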
NASA Astrophysics Data System (ADS)
Hendrickx, J. M. H.; Allen, R. G.; Myint, S. W.; Ogden, F. L.
2015-12-01
Large scale mapping of evapotranspiration and root zone soil moisture is only possible when satellite images are used. The spatial resolution of this imagery typically depends on its temporal resolution or the satellite overpass time. For example, the Landsat satellite acquires images at 30 m resolution every 16 days while the MODIS satellite acquires images at 250 m resolution every day. In this study we deal with optical/thermal imagery, which is impacted by cloudiness, in contrast to radar imagery, which penetrates through clouds. Due to cloudiness, the temporal resolution of Landsat drops from 16 days to about one clear sky Landsat image per month in the southwestern USA and about one every ten years in the humid tropics of Panama. Only by launching additional satellites can the temporal resolution be improved. Since this is too costly, an alternative is found by using ground measurements with high temporal resolution (from minutes to days) but poor spatial resolution. The challenge for large-scale evapotranspiration and root zone soil moisture mapping is to construct a layer stack consisting of N time layers covering the period of interest, each containing M pixels covering the region of interest. We will present examples of the Phoenix Active Management Area in AZ (14,600 km2), Green River Basin in WY (44,000 km2), the Kishwaukee Watershed in IL (3,150 km2), the area covered by Landsat Path 28/Row 35 in OK (30,000 km2) and the Agua Salud Watershed in Panama (200 km2). In these regions we used Landsat or MODIS imagery for mapping evapotranspiration and root zone soil moisture using the algorithm Mapping EvapoTranspiration at high Resolution with Internalized Calibration (METRIC) together with meteorological measurements and sometimes either Large Aperture Scintillometers (LAS) or Eddy Covariance (EC). We conclude with lessons learned for future large-scale hydrological studies.
Roughness of stylolites: implications of 3D high resolution topography measurements.
Schmittbuhl, J; Renard, F; Gratier, J P; Toussaint, R
2004-12-03
Stylolites are natural pressure-dissolution surfaces in sedimentary rocks. We present 3D high resolution measurements at laboratory scales of their complex roughness. The topography is shown to be described by a self-affine scaling invariance. At large scales, the Hurst exponent is ζ1 ≈ 0.5, very different from that at small scales, where ζ2 ≈ 1.2. A crossover length scale at around Lc = 1 mm is well characterized. Measurements are consistent with a Langevin equation that describes the growth of a stylolitic interface as a competition between stabilizing long range elastic interactions at large scales or local surface tension effects at small scales and a destabilizing quenched material disorder.
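A self-affine surface has RMS height increments that grow as a power of the lag, w(d) ~ d^ζ, so a Hurst exponent can be estimated from the slope of a log-log fit. The 1-D profile version below is a sketch; restricting the lags to below or above the roughly 1 mm crossover would give separate estimates of ζ2 and ζ1.

```python
import numpy as np

def hurst_exponent(profile, lags):
    """Estimate a Hurst exponent from a 1-D height profile by fitting the
    scaling of RMS height increments w(d) ~ d**zeta (log-log least squares)."""
    w = [np.sqrt(np.mean((profile[d:] - profile[:-d]) ** 2)) for d in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(w), 1)
    return slope
```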
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Gang
Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large scale circulation will be quantified both on the basis of the well tested preferred circulation regime approach and on very recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large scale measures as indicators of the probability of occurrence of the finer scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, for a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. Accuracy of the best model including spatial effects was 90 Mg.ha-1 at plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with errors below 50 Mg.ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combined weak autocorrelation and weak environmental effect limit the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. Until then, using large forest inventories with low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution. PMID:26402522
Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset
NASA Astrophysics Data System (ADS)
Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.
2017-12-01
Here we present the first national scale flood risk analyses using high resolution Facebook Connectivity Lab population data and data from a hyper resolution flood hazard model. In recent years the field of large scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms, and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local scale features. Global flood frameworks are enabling flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution, representing a resolution increase over previous countrywide data sets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
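Integrating modelled water depths with a gridded population dataset reduces, in its simplest form, to counting people in cells where the simulated depth exceeds a threshold. The sketch below assumes the two rasters have already been aligned to a common grid (in practice the 5 m population data and 90 m hazard data must be resampled or aggregated first); the names and the zero-depth threshold are illustrative.

```python
import numpy as np

def exposed_population(depth_m, population, threshold_m=0.0):
    """Sum of people located where simulated water depth exceeds a threshold.
    Both arrays are assumed to be co-registered on the same grid."""
    flooded = np.nan_to_num(depth_m) > threshold_m
    return float(population[flooded].sum())
```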
Double inflation - A possible resolution of the large-scale structure problem
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman
1987-01-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.
Potential for geophysical experiments in large scale tests.
Dieterich, J.H.
1981-01-01
Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special purpose low stress (100 MPa) ...
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction
NASA Technical Reports Server (NTRS)
Li, Zhijin; Chao, Yi; Li, P. Peggy
2012-01-01
A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows the large scales and model bias to be constrained effectively by assimilating sparse vertical profiles, and the small scales by assimilating high-resolution surface measurements. The MS-3DVAR thus enhances the capability of the traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and particularly maximizes the extraction of information from limited numbers of vertical profile observations.
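A schematic of the sequential two-scale decomposition of the 3DVAR cost function described above, written in generic incremental notation (B_L and B_S are the scale-dependent background error covariances, H the observation operator, d the innovation vector, R the observation error covariance). This is a sketch of the idea, not the exact operational formulation.

```latex
J_L(\mathbf{x}_L) = \tfrac{1}{2}\,\mathbf{x}_L^{T}\mathbf{B}_L^{-1}\mathbf{x}_L
  + \tfrac{1}{2}\,(\mathbf{H}\mathbf{x}_L-\mathbf{d})^{T}\mathbf{R}^{-1}(\mathbf{H}\mathbf{x}_L-\mathbf{d})
\qquad
J_S(\mathbf{x}_S) = \tfrac{1}{2}\,\mathbf{x}_S^{T}\mathbf{B}_S^{-1}\mathbf{x}_S
  + \tfrac{1}{2}\,(\mathbf{H}\mathbf{x}_S-\mathbf{d}_L)^{T}\mathbf{R}^{-1}(\mathbf{H}\mathbf{x}_S-\mathbf{d}_L),
\quad \mathbf{d}_L = \mathbf{d}-\mathbf{H}\mathbf{x}_L^{a}
```

Here the small-scale increment is fitted to the residual innovations d_L left after the large-scale analysis x_L^a.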
Multiscale/multiresolution landslides susceptibility mapping
NASA Astrophysics Data System (ADS)
Grozavu, Adrian; Cătălin Stanga, Iulian; Valeriu Patriche, Cristian; Toader Juravle, Doru
2014-05-01
Within the European strategies, landslides are considered an important threat that requires detailed studies to identify areas where these processes could occur in the future and to design scientific and technical plans for landslide risk mitigation. With this in mind, assessing and mapping landslide susceptibility is an important preliminary step. Generally, landslide susceptibility at small scale (for large regions) can be assessed through a qualitative approach (expert judgement) based on a few variables, while studies at medium and large scales require a quantitative approach (e.g. multivariate statistics), a larger set of variables and, necessarily, a landslide inventory. Obviously, the results vary more or less from one scale to another, depending on the available input data but also on the applied methodology. Since it is almost impossible to have a complete landslide inventory over large regions (e.g. at the continental level), it is very important to verify the compatibility and validity of results obtained at different scales, identifying the differences and fixing the inherent errors. This paper aims at assessing and mapping landslide susceptibility at the regional level through a multiscale-multiresolution approach, from small scale and low resolution to large scale and high resolution of data and results, comparing the compatibility of the results. While the former could be used for studies at the European and national levels, the latter allow results validation, including through field surveys. The test area, namely the Barlad Plateau (more than 9000 sq.km), is located in Eastern Romania, covering a region where both the natural environment and the human factor create a causal context that favors these processes. The landslide predictors were initially derived from various databases available at the pan-European level and progressively completed and/or enhanced together with the scale and the resolution: the topography (from SRTM at 90 meters to digital elevation models based on topographical maps, 1:25,000 and 1:5,000), the lithology (from geological maps, 1:200,000), land cover and land use (from CLC 2006 to maps derived from orthorectified aerial images, 0.5 meters resolution), rainfall (from Worldclim and ECAD to our own data), the seismicity (the seismic zonation of Romania) etc. The landslide inventory was created as polygonal data based on aerial images (resolution 0.5 meters), the information being considered at the county level (NUTS 3) and, eventually, at the communal level (LAU2). The methodological framework is based on logistic regression as a quantitative method and the analytic hierarchy process as a semi-qualitative method, both being applied once identically for all scales and once recalibrated for each scale and resolution (from 1:1,000,000 and one km pixel resolution to 1:25,000 and ten meters resolution). The predictive performance of the two models was assessed using the ROC (Receiver Operating Characteristic) curve and the AUC (Area Under Curve) parameter, and the results indicate a good correspondence between the susceptibility estimated for the test samples (0.855-0.890) and for the validation samples (0.830-0.865). Finally, the results were compared in pairs in order to fix the errors at small scale and low resolution and to optimize the methodology for landslide susceptibility mapping over large areas.
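A minimal sketch of the quantitative branch of such a workflow: fit a logistic-regression susceptibility model on mapping-unit predictors and score it with the AUC of the ROC curve. The predictor names, the 50/50 split, and the scikit-learn defaults are assumptions for illustration, not the study's calibration.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def susceptibility_auc(X, y):
    """X: one row per mapping unit (pixel/slope unit) with predictor columns
    such as slope, lithology class, land cover, rainfall; y: 1 if a mapped
    landslide is present, 0 otherwise (placeholder data layout)."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]  # susceptibility in [0, 1]
    return model, roc_auc_score(y_test, scores)
```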
Shields, Christine A.; Kiehl, Jeffrey T.; Meehl, Gerald A.
2016-06-02
The global fully coupled half-degree Community Climate System Model Version 4 (CCSM4) was integrated for a suite of climate change ensemble simulations including five historical runs, five Representative Concentration Pathway 8.5 (RCP8.5) runs, and a long Pre-Industrial control run. This study focuses on precipitation at regional scales and its sensitivity to horizontal resolution. The half-degree historical CCSM4 simulations are compared to observations, where relevant, and to the standard 1° CCSM4. Both the half-degree and 1° resolutions are coupled to a nominal 1° ocean. North American and South Asian/Indian monsoon regimes are highlighted because these regimes demonstrate improvements due to higher resolution, primarily because of better-resolved topography. Agriculturally sensitive areas are analyzed and include the Southwest, Central, and Southeast U.S., Southern Europe, and Australia. Both mean and extreme precipitation is discussed for convective and large-scale precipitation processes. Convective precipitation tends to decrease with increasing resolution and large-scale precipitation tends to increase. Improvements for the half-degree agricultural regions can be found for mean and extreme precipitation in the Southeast U.S., Southern Europe, and Australian regions. Climate change responses differ between the model resolutions for the U.S. Southwest/Central regions and are seasonally dependent in the Southeast and Australian regions. Both resolutions project a clear drying signal across Southern Europe due to increased greenhouse warming. As a result, differences between resolutions tied to the representation of convective and large-scale precipitation play an important role in the character of the climate change and depend on regional influences.
Bridging the scales in atmospheric composition simulations using a nudging technique
NASA Astrophysics Data System (ADS)
D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco
2010-05-01
Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse and fine scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Then another coarse resolution run (run C) is performed, in which the high resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between coarse resolution (with and without nudging) and fine resolution simulations.
Improving crop condition monitoring at field scale by using optimal Landsat and MODIS images
USDA-ARS?s Scientific Manuscript database
Satellite remote sensing data at coarse resolution (kilometers) have been widely used in monitoring crop condition for decades. However, crop condition monitoring at field scale requires high resolution data in both time and space. Although a large number of remote sensing instruments with different...
Atmospheric gravity waves with small vertical-to-horizontal wavelength ratios
NASA Astrophysics Data System (ADS)
Song, I. S.; Jee, G.; Kim, Y. H.; Chun, H. Y.
2017-12-01
Gravity wave modes with small vertical-to-horizontal wavelength ratios of order 10^-3 are investigated through the systematic scale analysis of governing equations for gravity wave perturbations embedded in the quasi-geostrophic large-scale flow. These waves can be categorized as acoustic gravity wave modes because their total energy is given by the sum of kinetic, potential, and elastic parts. It is found that these waves can be forced by density fluctuations multiplied by the horizontal gradients of the large-scale pressure (geopotential) fields. These theoretical findings are evaluated using the results of a high-resolution global model (Specified Chemistry WACCM with horizontal resolution of 25 km and vertical resolution of 600 m) by computing the density-related gravity-wave forcing terms from the modeling results.
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D-MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, but occurs along loop-shaped structures and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution, model.
High-Resolution Climate Data Visualization through GIS- and Web-based Data Portals
NASA Astrophysics Data System (ADS)
WANG, X.; Huang, G.
2017-12-01
Sound decisions on climate change adaptation rely on an in-depth assessment of potential climate change impacts at regional and local scales, which usually requires finer resolution climate projections at both spatial and temporal scales. However, effective downscaling of global climate projections is practically difficult due to the lack of computational resources and/or long-term reference data. Although a large volume of downscaled climate data has been made available to the public, how to understand and interpret the large-volume climate data and how to make use of the data to drive impact assessment and adaptation studies are still challenging for both impact researchers and decision makers. Such difficulties have become major barriers preventing informed climate change adaptation planning at regional scales. Therefore, this research will explore new GIS- and web-based technologies to help visualize large-volume regional climate data with high spatiotemporal resolutions. A user-friendly public data portal, named the Climate Change Data Portal (CCDP, http://ccdp.network), will be established to allow intuitive and open access to high-resolution regional climate projections at local scales. The CCDP offers functions of visual representation through geospatial maps and data downloading for a variety of climate variables (e.g., temperature, precipitation, relative humidity, solar radiation, and wind) at multiple spatial resolutions (i.e., 25 - 50 km) and temporal resolutions (i.e., annual, seasonal, monthly, daily, and hourly). The vast amount of information the CCDP encompasses can provide a crucial basis for assessing impacts of climate change on local communities and ecosystems and for supporting better decision making under a changing climate.
Huang, Yongjun; Flores, Jaime Gonzalo Flor; Cai, Ziqiang; Yu, Mingbin; Kwong, Dim-Lee; Wen, Guangjun; Churchill, Layne; Wong, Chee Wei
2017-06-29
For sensitive high-resolution force- and field-sensing applications, large-mass microelectromechanical system (MEMS) and optomechanical cavity devices have been proposed to realize sub-aN/Hz^(1/2) resolution levels. In view of the optomechanical cavity-based force- and field-sensors, the optomechanical coupling is the key parameter for achieving high sensitivity and resolution. Here we demonstrate a chip-scale optomechanical cavity with large mass which operates at an ≈77.7 kHz fundamental mode and intrinsically exhibits a large optomechanical coupling of 44 GHz/nm or more for both optical resonance modes. A mechanical stiffening range of ≈58 kHz and more than 100th-order harmonics are obtained, with which the free-running frequency instability is lower than 10^-6 at 100 ms integration time. Such results can be applied to further improve the sensing performance of optomechanically inspired chip-scale sensors.
Spatial resolution requirements for urban land cover mapping from space
NASA Technical Reports Server (NTRS)
Todd, William J.; Wrigley, Robert C.
1986-01-01
Very low resolution (VLR) satellite data (Advanced Very High Resolution Radiometer, DMSP Operational Linescan System), low resolution (LR) data (Landsat MSS), medium resolution (MR) data (Landsat TM), and high resolution (HR) satellite data (Spot HRV, Large Format Camera) were evaluated and compared for interpretability at differing spatial resolutions. VLR data (500 m - 1.0 km) is useful for Level I (urban/rural distinction) mapping at 1:1,000,000 scale. Feature tone/color is utilized to distinguish generalized urban land cover using LR data (80 m) for 1:250,000 scale mapping. Advancing to MR data (30 m) and 1:100,000 scale mapping, confidence in land cover mapping is greatly increased, owing to the element of texture/pattern which is now evident in the imagery. Shape and shadow make detailed Level II/III urban land use mapping possible if the interpreter can use HR (10-15 m) satellite data; mapping scales can be 1:25,000 - 1:50,000.
Scale criticality in estimating ecosystem carbon dynamics
Zhao, Shuqing; Liu, Shuguang
2014-01-01
Scaling is central to ecology and Earth system sciences. However, the importance of scale (i.e. resolution and extent) for understanding carbon dynamics across scales is poorly understood and quantified. We simulated carbon dynamics under a wide range of combinations of resolution (nine spatial resolutions of 250 m, 500 m, 1 km, 2 km, 5 km, 10 km, 20 km, 50 km, and 100 km) and extent (57 geospatial extents ranging from 108 to 1 247 034 km2) in the southeastern United States to explore the existence of scale dependence of the simulated regional carbon balance. Results clearly show the existence of a critical threshold resolution for estimating carbon sequestration within a given extent and an error limit. Furthermore, an invariant power law scaling relationship was found between the critical resolution and the spatial extent as the critical resolution is proportional to A^n (n is a constant, and A is the extent). Scale criticality and the power law relationship might be driven by the power law probability distributions of land surface and ecological quantities including disturbances at landscape to regional scales. The current overwhelming practices without considering scale criticality might have largely contributed to difficulties in balancing carbon budgets at regional and global scales.
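The reported relationship, critical resolution proportional to A^n, can be recovered from paired (extent, critical resolution) samples by a least-squares fit in log-log space; the sketch below is a toy illustration and its variable names are assumptions.

```python
import numpy as np

def fit_power_law(extent_km2, critical_resolution_m):
    """Fit r_c = c * A**n by ordinary least squares in log-log space
    and return (n, c)."""
    n, log_c = np.polyfit(np.log(extent_km2), np.log(critical_resolution_m), 1)
    return n, np.exp(log_c)
```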
A Bayesian Nonparametric Approach to Image Super-Resolution.
Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid
2015-02-01
Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1982-01-01
An examination of the possibilities of using Landsat data to simulate NOAA-6 Advanced Very High Resolution Radiometer (AVHRR) data on two channels, as well as using actual NOAA-6 imagery, for large-scale hydrological studies is presented. A running average of 18 consecutive pixels was used to approximate 1 km resolution; the data taken by the Landsat scanners were scaled up to 8-bit data and investigated for different gray levels. AVHRR data comprising five channels of 10-bit, band-interleaved information covering 10 deg latitude were analyzed and a suitable pixel grid was chosen for comparison with the Landsat data in a supervised classification format, an unsupervised mode, and with ground truth. Landcover delineation was explored by removing snow, water, and cloud features from the cluster analysis, and resulted in less than 10% difference. Low resolution, large-scale data were determined to be useful for characterizing some landcover features if weekly and/or monthly updates are maintained.
Solar Wind Turbulence from MHD to Sub-ion Scales: High-resolution Hybrid Simulations
NASA Astrophysics Data System (ADS)
Franci, Luca; Verdini, Andrea; Matteini, Lorenzo; Landi, Simone; Hellinger, Petr
2015-05-01
We present results from a high-resolution and large-scale hybrid (fluid electrons and particle-in-cell protons) two-dimensional numerical simulation of decaying turbulence. Two distinct spectral regions (separated by a smooth break at proton scales) develop with clear power-law scaling, each one occupying about a decade in wavenumbers. The simulation results simultaneously exhibit several properties of the observed solar wind fluctuations: spectral indices of the magnetic, kinetic, and residual energy spectra in the magnetohydrodynamic (MHD) inertial range along with a flattening of the electric field spectrum, an increase in magnetic compressibility, and a strong coupling of the cascade with the density and the parallel component of the magnetic fluctuations at sub-proton scales. Our findings support the interpretation that in the solar wind, large-scale MHD fluctuations naturally evolve beyond proton scales into a turbulent regime that is governed by the generalized Ohm’s law.
The spatial and temporal domains of modern ecology.
Estes, Lyndon; Elsen, Paul R; Treuer, Timothy; Ahmed, Labeeb; Caylor, Kelly; Chang, Jason; Choi, Jonathan J; Ellis, Erle C
2018-05-01
To understand ecological phenomena, it is necessary to observe their behaviour across multiple spatial and temporal scales. Since this need was first highlighted in the 1980s, technology has opened previously inaccessible scales to observation. To help to determine whether there have been corresponding changes in the scales observed by modern ecologists, we analysed the resolution, extent, interval and duration of observations (excluding experiments) in 348 studies that have been published between 2004 and 2014. We found that observational scales were generally narrow, because ecologists still primarily use conventional field techniques. In the spatial domain, most observations had resolutions ≤1 m2 and extents ≤10,000 ha. In the temporal domain, most observations were either unreplicated or infrequently repeated (>1 month interval) and ≤1 year in duration. Compared with studies conducted before 2004, observational durations and resolutions appear largely unchanged, but intervals have become finer and extents larger. We also found a large gulf between the scales at which phenomena are actually observed and the scales those observations ostensibly represent, raising concerns about observational comprehensiveness. Furthermore, most studies did not clearly report scale, suggesting that it remains a minor concern. Ecologists can better understand the scales represented by observations by incorporating autocorrelation measures, while journals can promote attentiveness to scale by implementing scale-reporting standards.
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification; to our knowledge, this is the first time persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
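As a rough illustration of the resolution-tunable density underlying this approach (not the authors' exact flexibility-rigidity index), the sketch below builds a Gaussian-kernel density from atom positions, with the kernel width acting as the resolution parameter; a sub-level-set filtration of such a density could then be passed to a persistent homology library such as GUDHI or Ripser. All names and parameters are illustrative.

```python
import numpy as np

def rigidity_density(atoms, grid_points, eta):
    """Sum of Gaussian kernels centred on atoms, evaluated at grid points.

    The kernel width `eta` plays the role of the tunable resolution:
    a large eta blurs out fine features, a small eta retains them.
    """
    # pairwise squared distances between grid points and atoms
    d2 = ((grid_points[:, None, :] - atoms[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * eta ** 2)).sum(axis=1)

# Toy example: 50 random "atoms" in a unit cube, density sampled on a coarse grid.
rng = np.random.default_rng(0)
atoms = rng.random((50, 3))
g = np.linspace(0.0, 1.0, 10)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
coarse = rigidity_density(atoms, grid, eta=0.3)   # large-scale features
fine = rigidity_density(atoms, grid, eta=0.05)    # small-scale features
# A filtration of these density fields would then feed a persistence computation.
print(coarse.shape, fine.shape)
```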
Joseph St. Peter; John Hogland; Nathaniel Anderson; Jason Drake; Paul Medley
2018-01-01
Land cover classification provides valuable information for prioritizing management and conservation operations across large landscapes. Current regional scale land cover geospatial products within the United States have a spatial resolution that is too coarse to provide the necessary information for operations at the local and project scales. This paper describes a...
Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.
Downscaling of water withdrawals from the regional/national scale to the local scale is a fundamental step, and a common problem, when integrating large-scale economic and integrated assessment models with high-resolution, detailed sectoral models. Tethys, an open-access software package written in Python, implements statistical downscaling algorithms to spatially and temporally downscale water withdrawal data to a finer scale. The spatial resolution is downscaled from the region/basin scale to a 0.5 geographic degree grid, and the temporal resolution from annual to monthly. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).
Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals
Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.; ...
2018-02-09
Downscaling of water withdrawals from the regional/national scale to the local scale is a fundamental step, and a common problem, when integrating large-scale economic and integrated assessment models with high-resolution, detailed sectoral models. Tethys, an open-access software package written in Python, implements statistical downscaling algorithms to spatially and temporally downscale water withdrawal data to a finer scale. The spatial resolution is downscaled from the region/basin scale to a 0.5 geographic degree grid, and the temporal resolution from annual to monthly. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).
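A minimal sketch of the two disaggregation steps described above, under the assumption that a regional annual total is split across grid cells by a proxy weight (e.g., population) and across months by a seasonal profile; the function and variable names are illustrative and do not reflect the actual Tethys API.

```python
import numpy as np

def downscale_withdrawal(annual_total, cell_proxy, monthly_profile):
    """Allocate one region's annual withdrawal to grid cells and months.

    cell_proxy      : per-cell weights within the region (e.g. population)
    monthly_profile : 12 weights describing the seasonal demand pattern
    Returns an array of shape (n_cells, 12).
    """
    spatial = annual_total * cell_proxy / cell_proxy.sum()          # cell totals per year
    temporal = np.asarray(monthly_profile, dtype=float)
    temporal = temporal / temporal.sum()
    return spatial[:, None] * temporal[None, :]

# Toy example: 4 grid cells, irrigation-like summer-peaked seasonality.
cells = np.array([120.0, 30.0, 5.0, 845.0])            # proxy weights (e.g. population)
season = [1, 1, 2, 4, 8, 12, 14, 12, 8, 4, 2, 1]        # relative monthly demand
gridded = downscale_withdrawal(10.0, cells, season)
print(round(gridded.sum(), 6))   # 10.0, i.e. mass is conserved by the disaggregation
```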
Large-scale Density Structures in Magneto-rotational Disk Turbulence
NASA Astrophysics Data System (ADS)
Youdin, Andrew; Johansen, A.; Klahr, H.
2009-01-01
Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm-wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.
NASA Astrophysics Data System (ADS)
Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson
2017-03-01
Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.
Kooperman, Gabriel J.; Pritchard, Michael S.; O'Brien, Travis A.; ...
2018-04-01
Deficiencies in the parameterizations of convection used in global climate models often lead to a distorted representation of the simulated rainfall intensity distribution (i.e., too much rainfall from weak rain rates). While encouraging improvements in high percentile rainfall intensity have been found as the horizontal resolution of the Community Atmosphere Model is increased to ~25 km, we demonstrate no corresponding improvement in the moderate rain rates that generate the majority of accumulated rainfall. Using a statistical framework designed to emphasize links between precipitation intensity and accumulated rainfall beyond just the frequency distribution, we show that CAM cannot realistically simulate moderate rain rates, and cannot capture their intensification with climate change, even as resolution is increased. However, by separating the parameterized convective and large-scale resolved contributions to total rainfall, we find that the intensity, geographic pattern, and climate change response of CAM's large-scale rain rates are more consistent with observations (TRMM 3B42), superparameterization, and theoretical expectations, despite issues with parameterized convection. Increasing CAM's horizontal resolution does improve the representation of total rainfall intensity, but not due to changes in the intensity of large-scale rain rates, which are surprisingly insensitive to horizontal resolution. Rather, improvements occur through an increase in the relative contribution of the large-scale component to the total amount of accumulated rainfall. Analysis of sensitivities to convective timescale and entrainment rate confirm the importance of these parameters in the possible development of scale-aware parameterizations, but also reveal unrecognized trade-offs from the entanglement of precipitation frequency and total amount.
NASA Astrophysics Data System (ADS)
Kooperman, Gabriel J.; Pritchard, Michael S.; O'Brien, Travis A.; Timmermans, Ben W.
2018-04-01
Deficiencies in the parameterizations of convection used in global climate models often lead to a distorted representation of the simulated rainfall intensity distribution (i.e., too much rainfall from weak rain rates). While encouraging improvements in high percentile rainfall intensity have been found as the horizontal resolution of the Community Atmosphere Model is increased to ˜25 km, we demonstrate no corresponding improvement in the moderate rain rates that generate the majority of accumulated rainfall. Using a statistical framework designed to emphasize links between precipitation intensity and accumulated rainfall beyond just the frequency distribution, we show that CAM cannot realistically simulate moderate rain rates, and cannot capture their intensification with climate change, even as resolution is increased. However, by separating the parameterized convective and large-scale resolved contributions to total rainfall, we find that the intensity, geographic pattern, and climate change response of CAM's large-scale rain rates are more consistent with observations (TRMM 3B42), superparameterization, and theoretical expectations, despite issues with parameterized convection. Increasing CAM's horizontal resolution does improve the representation of total rainfall intensity, but not due to changes in the intensity of large-scale rain rates, which are surprisingly insensitive to horizontal resolution. Rather, improvements occur through an increase in the relative contribution of the large-scale component to the total amount of accumulated rainfall. Analysis of sensitivities to convective timescale and entrainment rate confirm the importance of these parameters in the possible development of scale-aware parameterizations, but also reveal unrecognized trade-offs from the entanglement of precipitation frequency and total amount.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kooperman, Gabriel J.; Pritchard, Michael S.; O'Brien, Travis A.
Deficiencies in the parameterizations of convection used in global climate models often lead to a distorted representation of the simulated rainfall intensity distribution (i.e., too much rainfall from weak rain rates). While encouraging improvements in high percentile rainfall intensity have been found as the horizontal resolution of the Community Atmosphere Model is increased to ~25 km, we demonstrate no corresponding improvement in the moderate rain rates that generate the majority of accumulated rainfall. Using a statistical framework designed to emphasize links between precipitation intensity and accumulated rainfall beyond just the frequency distribution, we show that CAM cannot realistically simulate moderate rain rates, and cannot capture their intensification with climate change, even as resolution is increased. However, by separating the parameterized convective and large-scale resolved contributions to total rainfall, we find that the intensity, geographic pattern, and climate change response of CAM's large-scale rain rates are more consistent with observations (TRMM 3B42), superparameterization, and theoretical expectations, despite issues with parameterized convection. Increasing CAM's horizontal resolution does improve the representation of total rainfall intensity, but not due to changes in the intensity of large-scale rain rates, which are surprisingly insensitive to horizontal resolution. Rather, improvements occur through an increase in the relative contribution of the large-scale component to the total amount of accumulated rainfall. Analysis of sensitivities to convective timescale and entrainment rate confirm the importance of these parameters in the possible development of scale-aware parameterizations, but also reveal unrecognized trade-offs from the entanglement of precipitation frequency and total amount.
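The distinction drawn here between the frequency of rain rates and the amount of rain they deliver can be illustrated with a generic sketch (not the authors' exact statistical framework): the same rain-rate samples are binned once by count and once weighted by the rate itself. The synthetic rates and bin edges below are assumptions for illustration only.

```python
import numpy as np

def rain_distributions(rates, bins):
    """Frequency and amount distributions of rain rates (e.g. mm/day).

    freq[i]   : fraction of binned samples falling in bin i
    amount[i] : fraction of total accumulated rain delivered by bin i
    """
    freq, _ = np.histogram(rates, bins=bins)
    amt, _ = np.histogram(rates, bins=bins, weights=rates)
    return freq / freq.sum(), amt / amt.sum()

# Toy example: many weak drizzle rates plus a few intense events.
rng = np.random.default_rng(1)
rates = np.concatenate([rng.exponential(2.0, 9000),      # frequent light rain
                        rng.exponential(40.0, 1000)])    # rare heavy rain
bins = np.array([0.1, 1, 5, 10, 20, 50, 100, 300])
freq, amount = rain_distributions(rates, bins)
# Heavy bins are rare in `freq` but dominate `amount`, which is the contrast
# between intensity frequency and accumulated rainfall drawn in the abstract.
print(np.round(freq, 3), np.round(amount, 3))
```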
Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data
NASA Astrophysics Data System (ADS)
Fries, K. J.; Kerkez, B.
2017-12-01
We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, there remain many questions about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who may have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask the question: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Approximately one third of these sites have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to most benefit from flood forecasts of the NWM.
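A minimal sketch of the coupling idea, with a simple lagged linear regression standing in for the Dynamical System Identification used in the study: modeled discharge at a gage is mapped to observed stage, and the fitted transfer model is then applied to new discharge forecasts. All data and names below are synthetic assumptions.

```python
import numpy as np

def fit_stage_model(discharge, stage, lags=3):
    """Least-squares fit of stage(t) from current and lagged discharge."""
    X = np.column_stack([discharge[lags - k: len(discharge) - k] for k in range(lags)])
    X = np.column_stack([np.ones(len(X)), X])     # intercept column
    coef, *_ = np.linalg.lstsq(X, stage[lags:], rcond=None)
    return coef

def predict_stage(coef, discharge, lags=3):
    X = np.column_stack([discharge[lags - k: len(discharge) - k] for k in range(lags)])
    X = np.column_stack([np.ones(len(X)), X])
    return X @ coef

# Toy example: synthetic discharge series and a stage record that lags it.
rng = np.random.default_rng(2)
q = np.abs(np.cumsum(rng.normal(0, 1, 500))) + 5.0          # modeled discharge forecast
h = 0.2 + 0.05 * np.roll(q, 1) + rng.normal(0, 0.02, 500)   # observed sensor stage
coef = fit_stage_model(q[:400], h[:400])                     # train on the first part
pred = predict_stage(coef, q[400:])                          # forecast stage afterwards
print(np.round(coef, 3), pred.shape)
```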
Nested high-resolution large-eddy simulations in WRF to support wind power
NASA Astrophysics Data System (ADS)
Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.
2009-12-01
The WRF model's grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is computation of realistic inflow boundary conditions for large-eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied at GCM to mesoscale resolutions, the WRF model's nesting behavior at the high-resolution (Δx < 1000 m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of turbulent inflows encountered by turbines. We investigate WRF's grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-416482).
Towards large-scale, human-based, mesoscopic neurotechnologies.
Chang, Edward F
2015-04-08
Direct human brain recordings have transformed the scope of neuroscience in the past decade. Progress has relied upon currently available neurophysiological approaches in the context of patients undergoing neurosurgical procedures for medical treatment. While this setting has provided precious opportunities for scientific research, it also has presented significant constraints on the development of new neurotechnologies. A major challenge now is how to achieve high-resolution spatiotemporal neural recordings at a large scale. By narrowing the gap between current approaches, new directions tailored to the mesoscopic (intermediate) scale of resolution may overcome the barriers towards safe and reliable human-based neurotechnology development, with major implications for advancing both basic research and clinical translation. Copyright © 2015 Elsevier Inc. All rights reserved.
USDA-ARS?s Scientific Manuscript database
In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...
Extended-Range High-Resolution Dynamical Downscaling over a Continental-Scale Domain
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
High-resolution mesoscale simulations, when applied for downscaling meteorological fields over large spatial domains and for extended time periods, can provide valuable information for many practical application scenarios including the weather-dependent renewable energy industry. In the present study, a strategy has been proposed to dynamically downscale coarse-resolution meteorological fields from Environment Canada's regional analyses for a period of multiple years over the entire Canadian territory. The study demonstrates that a continuous mesoscale simulation over the entire domain is the most suitable approach in this regard. Large-scale deviations in the different meteorological fields pose the biggest challenge for extended-range simulations over continental-scale domains, and the enforcement of the lateral boundary conditions is not sufficient to restrict such deviations. A scheme has therefore been developed to spectrally nudge the simulated high-resolution meteorological fields at the different model vertical levels towards those embedded in the coarse-resolution driving fields derived from the regional analyses. A series of experiments were carried out to determine the optimal nudging strategy, including the appropriate nudging length scales, nudging vertical profile and temporal relaxation. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, towards their expected values obtained from a high-resolution offline surface scheme was also devised to limit any considerable deviation in the evolving surface fields due to extended-range temporal integrations. The study shows that ensuring large-scale atmospheric similarities helps to deliver near-surface statistical scores for temperature, dew point temperature and horizontal wind speed that are better than or comparable to the operational regional forecasts issued by Environment Canada. Furthermore, the meteorological fields resulting from the proposed downscaling strategy have significantly improved spatiotemporal variance compared to those from the operational forecasts, and any time series generated from the downscaled fields do not suffer from discontinuities due to switching between the consecutive forecasts.
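A one-dimensional illustration of the spectral nudging concept described above (not the operational implementation): the low-wavenumber part of the high-resolution field is relaxed toward the driving field with a chosen relaxation time, while smaller scales evolve freely. The cutoff wavenumber, time step, and relaxation time below are illustrative assumptions.

```python
import numpy as np

def spectral_nudge(field_hr, field_drive, n_keep, dt, tau):
    """Relax the lowest `n_keep` Fourier modes of a 1-D periodic field
    toward the driving field, with relaxation time `tau` (same units as dt).
    """
    f_hat = np.fft.rfft(field_hr)
    d_hat = np.fft.rfft(field_drive)
    nudged = f_hat.copy()
    # Only the large scales (low wavenumbers) are nudged toward the driver.
    nudged[:n_keep] += (dt / tau) * (d_hat[:n_keep] - f_hat[:n_keep])
    return np.fft.irfft(nudged, n=field_hr.size)

# Toy example: a high-resolution field that has drifted from a smooth driver.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
drive = np.sin(x)                                # coarse driving analysis
model = np.sin(x + 0.4) + 0.1 * np.sin(20 * x)   # drifted field with small-scale detail
for _ in range(200):
    model = spectral_nudge(model, drive, n_keep=4, dt=60.0, tau=3600.0)
# The large scale is pulled back toward the driver; the 20th harmonic is untouched.
print(np.round(np.abs(np.fft.rfft(model))[[1, 20]] / 128, 3))
```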
Comparing two ground-cover measurement methodologies for semiarid rangelands
USDA-ARS?s Scientific Manuscript database
The limited field-of-view (FOV) associated with single-resolution very-large scale aerial (VLSA) imagery requires users to balance FOV and resolution needs. This balance varies by the specific questions being asked of the data. Here, we tested a FOV-resolution question by comparing ground-cover meas...
A Large Scale Code Resolution Service Network in the Internet of Things
Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan
2012-01-01
In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network meets the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207
A large scale code resolution service network in the Internet of Things.
Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan
2012-11-07
In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network meets the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.
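A toy sketch of the uniform-namespace idea behind code resolution (not the SkipNet-OCRS design): each service owns a namespace prefix within a single code namespace, and a query is routed to the service with the longest matching prefix. The prefixes, class names, and URLs are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ResolutionService:
    """One code resolution service, responsible for a namespace prefix."""
    prefix: str
    records: dict  # code -> list of resource URLs

    def resolve(self, code):
        return self.records.get(code, [])

class ResolutionNetwork:
    """Routes a code to the service owning the longest matching prefix."""
    def __init__(self, services):
        self.services = sorted(services, key=lambda s: len(s.prefix), reverse=True)

    def resolve(self, code):
        for svc in self.services:
            if code.startswith(svc.prefix):
                return svc.resolve(code)
        raise LookupError(f"no service owns code {code!r}")

# Toy example: two independent services sharing one uniform namespace.
net = ResolutionNetwork([
    ResolutionService("urn:ex:manufA:", {"urn:ex:manufA:0001": ["https://a.example/0001"]}),
    ResolutionService("urn:ex:manufB:", {"urn:ex:manufB:0042": ["https://b.example/0042"]}),
])
print(net.resolve("urn:ex:manufA:0001"))
```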
NASA Astrophysics Data System (ADS)
Langford, Z. L.; Kumar, J.; Hoffman, F. M.
2015-12-01
Observations indicate that over the past several decades, landscape processes in the Arctic have been changing or intensifying. A dynamic Arctic landscape has the potential to alter ecosystems across a broad range of scales. Accurate characterization is useful for understanding the properties and organization of the landscape, for optimal sampling network design, for measurement and process upscaling, and for establishing a landscape-based framework for multi-scale modeling of ecosystem processes. This study seeks to delineate the landscape at the Seward Peninsula of Alaska into ecoregions using large volumes (terabytes) of high spatial resolution satellite remote-sensing data. Defining high-resolution ecoregion boundaries is difficult because many ecosystem processes in Arctic ecosystems occur at small local to regional scales, which are often not resolved by coarse-resolution satellites (e.g., MODIS). We seek to use data-fusion techniques and data analytics algorithms applied to Phased Array type L-band Synthetic Aperture Radar (PALSAR), Interferometric Synthetic Aperture Radar (IFSAR), Satellite for Observation of Earth (SPOT), WorldView-2, WorldView-3, and QuickBird-2 to develop high-resolution (~5 m) ecoregion maps for multiple time periods. Traditional analysis methods and algorithms are insufficient for analyzing and synthesizing such large geospatial data sets, and those algorithms rarely scale out onto large distributed-memory parallel computer systems. We seek to develop computationally efficient algorithms and techniques using high-performance computing for characterization of Arctic landscapes. We will apply a variety of data analytics algorithms, such as cluster analysis, complex object-based image analysis (COBIA), and neural networks. We also propose to use representativeness analysis within the Seward Peninsula domain to determine optimal sampling locations for fine-scale measurements. This methodology should provide an initial framework for analyzing dynamic landscape trends in Arctic ecosystems, such as shrubification and disturbances, and integration of ecoregions into multi-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
NASA Astrophysics Data System (ADS)
Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela
2016-10-01
Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.
NASA Technical Reports Server (NTRS)
Myneni, Ranga
2003-01-01
The problem of how the scale, or spatial resolution, of reflectance data impacts retrievals of vegetation leaf area index (LAI) and the fraction of absorbed photosynthetically active radiation (FPAR) has been investigated. We define the goal of scaling as the process by which it is established that LAI and FPAR values derived from coarse-resolution sensor data equal the arithmetic average of values derived independently from fine-resolution sensor data. The increasing probability of land cover mixtures with decreasing resolution is defined as heterogeneity, which is a key concept in scaling studies. The effect of pixel heterogeneity on spectral reflectances and LAI/FPAR retrievals is investigated with 1 km Advanced Very High Resolution Radiometer (AVHRR) data aggregated to different coarse spatial resolutions. It is shown that LAI retrieval errors at coarse resolution are inversely related to the proportion of the dominant land cover in such pixels. Further, large errors in LAI retrievals are incurred when forests are minority biomes in non-forest pixels compared to when forest biomes are mixed with one another, and vice versa. A physically based technique for scaling with an explicit spatial-resolution-dependent radiative transfer formulation is developed. The successful application of this theory to scaling LAI retrievals from AVHRR data of different resolutions is demonstrated.
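The scaling goal stated here, that coarse-resolution retrievals should equal the average of fine-resolution retrievals, can be checked numerically with a toy example: because retrieval functions are nonlinear, retrieving from averaged reflectance diverges from averaging fine-pixel retrievals as pixel heterogeneity grows. The NDVI-to-LAI curve below is a made-up monotone function, not the actual AVHRR algorithm.

```python
import numpy as np

def lai_from_ndvi(ndvi):
    """Made-up nonlinear NDVI -> LAI retrieval (illustrative only)."""
    return -2.0 * np.log(np.clip(1.0 - ndvi, 1e-3, None))

rng = np.random.default_rng(3)
# One coarse pixel made of 16 fine pixels: homogeneous forest vs forest/grass mixture.
homogeneous = rng.normal(0.80, 0.02, 16)
mixed = np.concatenate([rng.normal(0.80, 0.02, 8), rng.normal(0.30, 0.02, 8)])

for name, fine_ndvi in [("homogeneous", homogeneous), ("mixed", mixed)]:
    fine_mean_lai = lai_from_ndvi(fine_ndvi).mean()    # average of fine retrievals
    coarse_lai = lai_from_ndvi(fine_ndvi.mean())        # retrieval on averaged signal
    print(name, round(fine_mean_lai - coarse_lai, 2))
# The mismatch is near zero for the homogeneous pixel and large for the mixture,
# which is the heterogeneity effect the scaling theory is designed to correct.
```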
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Stefanova, Lydia B.; Chan, Steven C.; Schubert, Siegfried D.; O'Brien, James J.
2010-01-01
This study assesses the regional-scale summer precipitation produced by the dynamical downscaling of analyzed large-scale fields. The main goal of this study is to investigate how much the regional model adds smaller-scale precipitation information that the large-scale fields do not resolve. The modeling region for this study covers the southeastern United States (Florida, Georgia, Alabama, South Carolina, and North Carolina), where the summer climate is subtropical in nature, with a heavy influence of regional-scale convection. The coarse-resolution (2.5° latitude/longitude) large-scale atmospheric variables from the National Centers for Environmental Prediction (NCEP)/DOE reanalysis (R2) are downscaled using the NCEP Environmental Climate Prediction Center regional spectral model (RSM) to produce precipitation at 20 km resolution for 16 summer seasons (1990-2005). The RSM produces realistic details in the regional summer precipitation at 20 km resolution. Compared to R2, the RSM-produced monthly precipitation shows better agreement with observations. There is a reduced wet bias and a more realistic spatial pattern of the precipitation climatology compared with the interpolated R2 values. The root mean square errors of the monthly R2 precipitation are reduced over 93% (1,697) of the 1,821 grid points in the five states. The temporal correlation also improves over 92% (1,675) of all grid points, such that the domain-averaged correlation increases from 0.38 (R2) to 0.55 (RSM). The RSM accurately reproduces the first two observed eigenmodes, compared with the R2 product for which the second mode is not properly reproduced. The spatial patterns for wet versus dry summer years are also successfully simulated in the RSM. For shorter time scales, the RSM resolves heavy rainfall events and their frequency better than R2. Correlation and categorical classification (above/near/below average) of the monthly frequency of heavy precipitation days are also significantly improved by the RSM.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored to urban areas over restricted extents but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
NASA Astrophysics Data System (ADS)
Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.
2014-12-01
The effect of clouds on large-scale dynamics is represented in climate models through parameterization of various processes, of which the parameterizations of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface, and which defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on a O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100 m) grid resolution one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterizations have a sounder theoretical foundation. Substantial initiatives have been taken internationally to approach this threshold. For example, Miura et al., 2007 and Mirakawa et al., 2014 approach this threshold by doing global simulations, with (gradually) decreasing grid resolution, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m) and gradually increasing the domain size. We believe that breaking this threshold would greatly help in improving the parameterization schemes and reducing the uncertainty in climate predictions. To take this forward, the German Federal Ministry of Education and Research has initiated the HD(CP)2 project, which aims for a limited-area LES at a resolution of O(100 m) using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results from the HD(CP)2 evaluation simulation will be shown, targeting a high-resolution simulation over a small domain around Jülich, Germany. This site is chosen because the high-resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1.04.2013 to 31.05.2013, in order to critically evaluate the model. The nesting capabilities of ICON are used to gradually increase the resolution from the outermost domain, which is forced from the COSMO-DE data, to the innermost and finest-resolution domain centered around Jülich (see Fig. 1, top panel). Furthermore, detailed analyses of the simulation results against the observation data will be presented. A representative figure showing the time series of column-integrated water vapor (IWV) for both model and observation on 24.04.2013 is shown in the bottom panel of Fig. 1.
Regional turbulence patterns driven by meso- and submesoscale processes in the Caribbean Sea
NASA Astrophysics Data System (ADS)
C. Pérez, Juan G.; R. Calil, Paulo H.
2017-09-01
The surface ocean circulation in the Caribbean Sea is characterized by the interaction between anticyclonic eddies and the Caribbean Upwelling System (CUS). These interactions lead to instabilities that modulate the transfer of kinetic energy up- or down-cascade. The interaction of North Brazil Current rings with the islands leads to the formation of submesoscale vorticity filaments leeward of the Lesser Antilles, thus transferring kinetic energy from large to small scales. Within the Caribbean, the upper-ocean dynamics range from large-scale currents to coastal upwelling filaments, allowing the vertical exchange of physical properties and supplying kinetic energy (KE) to larger scales. In this study, we use a regional model with different spatial resolutions (6, 3, and 1 km), focusing on the Guajira Peninsula and the Lesser Antilles in the Caribbean Sea, in order to evaluate the impact of submesoscale processes on the regional KE cascade. Ageostrophic velocities emerge as the Rossby number becomes O(1). As model resolution is increased, submesoscale motions are more energetic, as seen by the flatter KE spectra when compared to the lower-resolution run. KE injection at the large scales is greater in the Guajira region than in the other regions, and is more effectively transferred to smaller scales, showing that submesoscale dynamics is key in modulating eddy kinetic energy and the energy cascade within the Caribbean Sea.
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly resolved instantaneous field measurements in unconfined turbulent plumes, useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated by employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e., statistics at the sub-grid scale) and on interrogation windows (i.e., statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance of large- and small-scale turbulence phenomena while investigating specific scales of the turbulent flow. First-order data sets start to collapse at a resolution of 0.3D*, while for second- and higher-order statistical moments the interrogation window size drops to 0.2D*.
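A sketch of the virtual interrogation-window procedure on a synthetic field (not the experimental data): a high-resolution scalar field is block-averaged at several window sizes and the resolved-scale statistics are tracked as the window grows, which is how a convergence threshold such as the L/D* criterion would be identified. Field, window sizes, and statistics below are illustrative assumptions.

```python
import numpy as np

def window_means(field, win):
    """Average a 2-D field over non-overlapping win x win interrogation windows."""
    n = (field.shape[0] // win) * win
    f = field[:n, :n]
    return f.reshape(n // win, win, n // win, win).mean(axis=(1, 3))

rng = np.random.default_rng(4)
# Synthetic "plume" field: a smooth large-scale profile plus small-scale fluctuations.
y, x = np.mgrid[0:512, 0:512] / 512.0
field = np.exp(-((x - 0.5) ** 2) / 0.02) * (0.5 + y) + 0.2 * rng.normal(size=(512, 512))

for win in (4, 8, 16, 32, 64):
    filtered = window_means(field, win)
    # Resolved-scale (filtered) statistics; in the experiment the window size
    # would be reported as a fraction of the plume length scale D*.
    print(win, round(filtered.mean(), 4), round(filtered.std(), 4))
```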
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification; to our knowledge, this is the first time persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes with a spatial resolution of >20 km. The optics and filters are emphasized.
Imaging mouse cerebellum with serial optical coherence scanner (Conference Presentation)
NASA Astrophysics Data System (ADS)
Liu, Chao J.; Williams, Kristen; Orr, Harry; Taner, Akkin
2017-02-01
We present the serial optical coherence scanner (SOCS), which consists of a polarization-sensitive optical coherence tomography system and a vibratome with associated controls for serial imaging, to visualize the cerebellum and adjacent brainstem of the mouse. The cerebellar cortical layers and white matter are distinguished by using intrinsic optical contrasts. Images from serial scans reveal the large-scale anatomy in detail and map the nerve fiber pathways in the cerebellum and adjacent brainstem. The optical system, which has 5.5 μm axial resolution, utilizes a scan lens or a water-immersion microscope objective, resulting in 10 μm or 4 μm lateral resolution, respectively. Large-scale brain imaging at high resolution requires an efficient way to collect large datasets, and it is important to improve the SOCS system to handle large-scale and large numbers of samples in a reasonable time. The imaging and slicing procedure for a section took about 4 minutes because the vibratome blade is run at low speed to maintain slicing quality. SOCS has potential to investigate pathological changes and monitor the effects of therapeutic drugs in cerebellar diseases such as spinocerebellar ataxia 1 (SCA1). SCA1 is a neurodegenerative disease characterized by atrophy and eventual loss of Purkinje cells from the cerebellar cortex, and the optical contrasts provided by SOCS are being evaluated as biomarkers of the disease.
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
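A minimal pixel-level starting point for this kind of analysis (the dissertation goes well beyond it with feature-based and topological metrics): the per-pixel spectral change magnitude between two co-registered multispectral epochs, followed by a crude threshold. Band counts, thresholds, and the synthetic scene are illustrative assumptions.

```python
import numpy as np

def change_magnitude(img_t0, img_t1):
    """Per-pixel Euclidean distance in spectral space between two epochs.

    Both inputs have shape (rows, cols, bands) and are assumed to be
    co-registered and radiometrically comparable.
    """
    diff = img_t1.astype(float) - img_t0.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy example: 4-band scene where one small block changes between dates.
rng = np.random.default_rng(5)
t0 = rng.normal(100, 5, (64, 64, 4))
t1 = t0 + rng.normal(0, 2, t0.shape)
t1[20:30, 40:55, :] += 40.0               # simulated land-cover change
mag = change_magnitude(t0, t1)
mask = mag > mag.mean() + 3 * mag.std()   # crude threshold; real work uses richer metrics
print(int(mask.sum()), "pixels flagged of", mask.size)
```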
Optical mapping and its potential for large-scale sequencing projects.
Aston, C; Mishra, B; Schwartz, D C
1999-07-01
Physical mapping has been rediscovered as an important component of large-scale sequencing projects. Restriction maps provide landmark sequences at defined intervals, and high-resolution restriction maps can be assembled from ensembles of single molecules by optical means. Such optical maps can be constructed from both large-insert clones and genomic DNA, and are used as a scaffold for accurately aligning sequence contigs generated by shotgun sequencing.
A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis
NASA Astrophysics Data System (ADS)
Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.
2006-12-01
Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on large-scale circulation and tropical modes. The model was developed based on the NCAR Weather Research and Forecasting Model (WRF), configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains that WRF is typically applied over, the channel mode alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel applied at 36 km horizontal resolution for 5 years from 1996 to 2000, with the large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed to include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection in the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.
Iraq: Recent Developments in Reconstruction Assistance
2005-05-12
Summary: Large-scale reconstruction assistance programs are being undertaken by the United States following the war with ... assistance programs, the Coalition Provisional Authority (CPA) dissolved, and sovereignty was returned to Iraq. Security Council Resolution 1546 of June ...
The scale dependence of optical diversity in a prairie ecosystem
NASA Astrophysics Data System (ADS)
Gamon, J. A.; Wang, R.; Stilwell, A.; Zygielbaum, A. I.; Cavender-Bares, J.; Townsend, P. A.
2015-12-01
Biodiversity loss, one of the most crucial challenges of our time, endangers ecosystem services that maintain human wellbeing. Traditional methods of measuring biodiversity require extensive and costly field sampling by biologists with extensive experience in species identification. Remote sensing can be used for such assessment based upon patterns of optical variation, providing an efficient and cost-effective means to determine ecosystem diversity at different scales and over large areas. Sampling scale has been described as a "fundamental conceptual problem" in ecology, and is an important practical consideration in both remote sensing and traditional biodiversity studies. On the one hand, with decreasing spatial and spectral resolution, the differences among optical types may become weak or even disappear. On the other hand, high spatial and/or spectral resolution may introduce redundant or contradictory information. For example, at high resolution, the variation within optical types (e.g., between leaves on a single plant canopy) may add complexity unrelated to species richness. We studied the scale dependence of optical diversity in a prairie ecosystem at Cedar Creek Ecosystem Science Reserve, Minnesota, USA using a variety of spectrometers from several platforms on the ground and in the air. Using the coefficient of variation (CV) of spectra as an indicator of optical diversity, we found that high-richness plots generally have a higher coefficient of variation. High-resolution imaging spectrometer data (1 mm pixels) showed the highest sensitivity to richness level. With decreasing spatial resolution, the difference in CV between richness levels decreased, but remained significant. These findings can be used to guide airborne studies of biodiversity and develop more effective large-scale biodiversity sampling methods.
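A sketch of the coefficient-of-variation indicator and its scale dependence (synthetic spectra, not the Cedar Creek data): the CV of reflectance across pixels is computed at native resolution and again after aggregating pixels to coarser resolution, which lowers the CV more strongly in the species-rich case. All names and numbers below are illustrative.

```python
import numpy as np

def optical_cv(pixels):
    """Mean across wavelengths of (std across pixels / mean across pixels).

    `pixels` has shape (n_pixels, n_bands); higher values indicate more
    spectral variation among pixels, used here as a proxy for diversity.
    """
    return (pixels.std(axis=0) / pixels.mean(axis=0)).mean()

def aggregate(pixels, factor):
    """Average consecutive groups of `factor` pixels (coarser resolution)."""
    n = (pixels.shape[0] // factor) * factor
    return pixels[:n].reshape(-1, factor, pixels.shape[1]).mean(axis=1)

rng = np.random.default_rng(6)
n_bands = 50
# High-richness plot: pixels drawn from several distinct "species" spectra.
species = rng.uniform(0.1, 0.6, (6, n_bands))
rich = species[rng.integers(0, 6, 600)] + rng.normal(0, 0.01, (600, n_bands))
# Low-richness plot: a single species plus noise.
poor = species[0] + rng.normal(0, 0.01, (600, n_bands))

for name, plot in [("rich", rich), ("poor", poor)]:
    print(name, [round(optical_cv(aggregate(plot, f)), 3) for f in (1, 4, 16)])
# CV is higher for the species-rich plot and shrinks as pixels are aggregated,
# mirroring the scale dependence reported in the abstract.
```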
NASA Astrophysics Data System (ADS)
Sankey, T.; Donald, J.; McVay, J.
2015-12-01
High-resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated with the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.
High-resolution mapping of forest carbon stocks in the Colombian Amazon
NASA Astrophysics Data System (ADS)
Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Yepes Quintero, A. P.; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.
2012-07-01
High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or light detection and ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high-resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (> 40%) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon maps have 14% uncertainty at 1 ha resolution, and the regional map based on stratification has 28% uncertainty in any given hectare. High-resolution approaches with quantifiable pixel-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.
High-resolution Mapping of Forest Carbon Stocks in the Colombian Amazon
NASA Astrophysics Data System (ADS)
Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.
2012-03-01
High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or Light Detection and Ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (>40 %) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon mapping samples had 14.6 % uncertainty at 1 ha resolution, and regional maps based on stratification and regression approaches had 25.6 % and 29.6 % uncertainty, respectively, in any given hectare. High-resolution approaches with reported local-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision-makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.
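A sketch of the regression-based upscaling step described in both versions of this abstract, with entirely synthetic data and coefficients: carbon density at LiDAR-sampled hectares is regressed on wall-to-wall satellite covariates (elevation, canopy cover, ruggedness), and the fit is applied to every hectare in the region.

```python
import numpy as np

rng = np.random.default_rng(7)
n_region, n_lidar = 20000, 800

# Wall-to-wall satellite covariates for every hectare in the region (synthetic).
elevation = rng.uniform(80, 400, n_region)       # m
canopy_cover = rng.uniform(0.3, 1.0, n_region)   # fraction
ruggedness = rng.uniform(0, 30, n_region)        # m

# Synthetic "true" carbon density (Mg C / ha) and a LiDAR-sampled subset.
carbon = (20 + 0.05 * elevation + 60 * canopy_cover - 0.5 * ruggedness
          + rng.normal(0, 8, n_region))
sample = rng.choice(n_region, n_lidar, replace=False)

# Fit on the LiDAR sample, then predict everywhere (the upscaling step).
X = np.column_stack([np.ones(n_region), elevation, canopy_cover, ruggedness])
coef, *_ = np.linalg.lstsq(X[sample], carbon[sample], rcond=None)
carbon_map = X @ coef
rmse = np.sqrt(np.mean((carbon_map - carbon) ** 2))
print(np.round(coef, 2), round(rmse, 1))   # residual spread is close to the noise level
```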
NASA Astrophysics Data System (ADS)
Wylezalek, Dominika; Schnorr Müller, Allan; Zakamska, Nadia L.; Storchi-Bergmann, Thaisa; Greene, Jenny E.; Müller-Sánchez, Francisco; Kelly, Michael; Liu, Guilin; Law, David R.; Barrera-Ballesteros, Jorge K.; Riffel, Rogemar A.; Thomas, Daniel
2017-05-01
Ionized gas outflows driven by active galactic nuclei (AGN) are ubiquitous in high-luminosity AGN with outflow speeds apparently correlated with the total bolometric luminosity of the AGN. This empirical relation and theoretical work suggest that in the range Lbol ~ 10^43-10^45 erg s^-1 there must exist a threshold luminosity above which the AGN becomes powerful enough to launch winds that will be able to escape the galaxy potential. In this paper, we present pilot observations of two AGN in this transitional range that were taken with the Gemini North Multi-Object Spectrograph integral field unit (IFU). Both sources have also previously been observed within the Sloan Digital Sky Survey-IV (SDSS) Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey. While the MaNGA IFU maps probe the gas fields on galaxy-wide scales and show that some regions are dominated by AGN ionization, the new Gemini IFU data zoom into the centre with four times better spatial resolution. In the object with the lower Lbol we find evidence of a young or stalled biconical AGN-driven outflow where none was obvious at the MaNGA resolution. In the object with the higher Lbol we trace the large-scale biconical outflow into the nuclear region and connect the outflow from small to large scales. These observations suggest that AGN luminosity and galaxy potential are crucial in shaping wind launching and propagation in low-luminosity AGN. The transition from small and young outflows to galaxy-wide feedback can only be understood by combining large-scale IFU data that trace the galaxy velocity field with higher resolution, small-scale IFU maps.
NASA Astrophysics Data System (ADS)
Lewis, Q. W.; Rhoads, B. L.
2017-12-01
The merging of rivers at confluences results in complex three-dimensional flow patterns that influence sediment transport, bed morphology, downstream mixing, and physical habitat conditions. The capacity to comprehensively characterize flow at confluences using traditional sensors, such as acoustic Doppler velocimeters and profilers, is limited by the restricted spatial resolution of these sensors and difficulties in measuring velocities simultaneously at many locations within a confluence. This study assesses two-dimensional surficial patterns of flow structure at a small stream confluence in Illinois, USA, using large scale particle image velocimetry (LSPIV) derived from videos captured by unmanned aerial systems (UAS). The method captures surface velocity patterns at high spatial and temporal resolution over multiple scales, ranging from the entire confluence to details of flow within the confluence mixing interface. Flow patterns at high momentum ratio are compared to flow patterns when the two incoming flows have nearly equal momentum flux. Mean surface flow patterns during the two types of events provide details on mean patterns of surface flow in different hydrodynamic regions of the confluence and on changes in these patterns with changing momentum flux ratio. LSPIV data derived from the highest resolution imagery also reveal general characteristics of large-scale vortices that form along the shear layer between the flows during the high-momentum ratio event. The results indicate that the use of LSPIV and UAS is well-suited for capturing in detail mean surface patterns of flow at small confluences, but that characterization of evolving turbulent structures is limited by scale considerations related to structure size, image resolution, and camera instability. Complementary methods, including camera platforms mounted at fixed positions close to the water surface, provide opportunities to accurately characterize evolving turbulent flow structures in confluences.
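The core of an LSPIV analysis is a windowed cross-correlation between successive frames. The sketch below is a minimal, hedged illustration of that step, not the processing chain used in the study: it assumes stabilized, orthorectified greyscale frames, an assumed frame interval and ground sample distance, and uses OpenCV's phase correlation to estimate per-window surface displacements.

# Minimal LSPIV-style sketch (assumed workflow): estimate surface velocity by
# cross-correlating interrogation windows between two consecutive video frames.
import numpy as np
import cv2

def lspiv_displacements(frame_a, frame_b, win=64, step=32):
    """Return pixel displacements (dx, dy) on a grid of interrogation windows."""
    h, w = frame_a.shape
    rows, cols, disps = [], [], []
    for y in range(0, h - win, step):
        for x in range(0, w - win, step):
            a = frame_a[y:y + win, x:x + win].astype(np.float32)
            b = frame_b[y:y + win, x:x + win].astype(np.float32)
            # Phase correlation returns the sub-pixel shift that best aligns a and b.
            (dx, dy), _ = cv2.phaseCorrelate(a, b)
            rows.append(y); cols.append(x); disps.append((dx, dy))
    return np.array(rows), np.array(cols), np.array(disps)

# Hypothetical frames; in practice these come from UAS video (greyscale, stabilized).
fa = (np.random.rand(480, 640) * 255).astype(np.uint8)
fb = np.roll(fa, shift=3, axis=1)          # fake 3-pixel surface drift to the right

_, _, d = lspiv_displacements(fa, fb)
dt, px_size = 1.0 / 30.0, 0.01             # assumed frame interval (s), ground sample distance (m/pixel)
u = d[:, 0] * px_size / dt                 # streamwise surface velocity per window
print(f"median surface speed ~ {np.median(np.abs(u)):.2f} m/s")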
Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui
2018-01-01
Matching of large-size, high-resolution (HR) satellite images is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large-size HR satellite image registration, based on a coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale-restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block-based matching overcomes the memory problem posed by large images. In geometric SIFT, area constraints help validate candidate matches and reduce search complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensed image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
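A minimal sketch of the coarse-to-fine idea, under stated assumptions: SIFT matching on downsampled images yields a global affine estimate, which then restricts where full-resolution blocks are matched. The SR-SIFT and geometric-SIFT details, the TIN rectification and the OpenMP parallelization are not reproduced; the function names, the 0.75 ratio test and the block size are illustrative choices built on OpenCV.

# Coarse-to-fine matching sketch in the spirit of the abstract (simplified; OpenCV assumed).
import cv2
import numpy as np

def coarse_transform(ref, sen, scale=0.25):
    """Coarse step: SIFT matching at low resolution -> global affine estimate."""
    small_ref = cv2.resize(ref, None, fx=scale, fy=scale)
    small_sen = cv2.resize(sen, None, fx=scale, fy=scale)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(small_ref, None)
    k2, d2 = sift.detectAndCompute(small_sen, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good]) / scale
    p2 = np.float32([k2[m.trainIdx].pt for m in good]) / scale
    A, _ = cv2.estimateAffinePartial2D(p2, p1, method=cv2.RANSAC)
    return A  # maps sensed-image coordinates to reference-image coordinates

def fine_match_block(ref, sen, A, x, y, size=512):
    """Fine step: match one full-resolution block of the sensed image, searching
    only near the location predicted by the coarse affine transform."""
    cx, cy = A @ np.array([x + size / 2, y + size / 2, 1.0])
    x0, y0 = int(cx - size / 2), int(cy - size / 2)
    ref_blk = ref[max(y0, 0):y0 + size, max(x0, 0):x0 + size]
    sen_blk = sen[y:y + size, x:x + size]
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_blk, None)
    k2, d2 = sift.detectAndCompute(sen_blk, None)
    if d1 is None or d2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    return [m for m, n in matcher.knnMatch(d2, d1, k=2) if m.distance < 0.75 * n.distance]

# Example usage (hypothetical file names):
# ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
# sen = cv2.imread("sensed.tif", cv2.IMREAD_GRAYSCALE)
# A = coarse_transform(ref, sen)
# matches = fine_match_block(ref, sen, A, x=0, y=0)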
Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.
NASA Astrophysics Data System (ADS)
Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric
2016-04-01
SMAP sensors collect moisture information in the top soil at spatial resolutions of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications for human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U.S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near-real-time VIC simulation is intended to provide a source of large-scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBlocks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
Advances in DNA sequencing technologies for high resolution HLA typing.
Cereb, Nezih; Kim, Hwa Ran; Ryu, Jaejun; Yang, Soo Young
2015-12-01
This communication describes our experience in large-scale G group-level high-resolution HLA typing using three different DNA sequencing platforms - ABI 3730xl, Illumina MiSeq and PacBio RS II. Recent advances in DNA sequencing technologies, so-called next generation sequencing (NGS), have brought breakthroughs in deciphering the genetic information in all living species at a large scale and at an affordable level. The NGS DNA indexing system allows sequencing of multiple genes for a large number of individuals in a single run. Our laboratory has adopted and used these technologies for HLA molecular testing services. We found that each sequencing technology has its own strengths and weaknesses, and their sequencing performances complement each other. HLA genes are highly complex and genotyping them is quite challenging. Using these three sequencing platforms, we were able to meet all requirements for G group-level high-resolution, high-volume HLA typing.
Measuring large-scale vertical motion in the atmosphere with dropsondes
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn
2017-04-01
Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and the uncertainty of measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
Large scale tracking algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NASA Astrophysics Data System (ADS)
Toigo, Anthony D.; Lee, Christopher; Newman, Claire E.; Richardson, Mark I.
2012-09-01
We investigate the sensitivity of the circulation and thermal structure of the martian atmosphere to numerical model resolution in a general circulation model (GCM) using the martian implementation (MarsWRF) of the planetWRF atmospheric model. We provide a description of the MarsWRF GCM and use it to study the global atmosphere at horizontal resolutions from 7.5° × 9° to 0.5° × 0.5°, encompassing the range from standard Mars GCMs to global mesoscale modeling. We find that while most of the gross-scale features of the circulation (the rough location of jets, the qualitative thermal structure, and the major large-scale features of the surface level winds) are insensitive to horizontal resolution over this range, several major features of the circulation are sensitive in detail. The northern winter polar circulation shows the greatest sensitivity, with a continuous transition from a smooth polar winter jet at low resolution to a distinct vertically “split” jet as resolution increases. The separation of the lower and middle atmosphere polar jet occurs at roughly 10 Pa, with the split jet structure developing in concert with the intensification of meridional jets at roughly 10 Pa and above 0.1 Pa. These meridional jets appear to represent the separation of lower and middle atmosphere mean overturning circulations (with the former being consistent with the usual concept of the “Hadley cell”). Further, the transition in polar jet structure is more sensitive to changes in zonal than meridional horizontal resolution, suggesting that representation of small-scale wave-mean flow interactions is more important than fine-scale representation of the meridional thermal gradient across the polar front. Increasing the horizontal resolution improves the match between the modeled thermal structure and the Mars Climate Sounder retrievals for northern winter high latitudes. While increased horizontal resolution also improves the simulation of the northern high latitudes at equinox, even the lowest model resolution considered here appears to do a good job for the southern winter and southern equinoctial pole (although in detail some discrepancies remain). These results suggest that studies of the northern winter jet (e.g., transient waves and cyclogenesis) will be more sensitive to global model resolution than those of the south (e.g., the confining dynamics of the southern polar vortex relevant to studies of argon transport). For surface winds, the major effect of increased horizontal resolution is in the superposition of circulations forced by local-scale topography upon the large-scale surface wind patterns. While passive predictions of dust lifting are generally insensitive to model horizontal resolution when no lifting threshold is considered, increasing the stress threshold produces significantly more lifting in higher resolution simulations with the generation of finer-scale, higher-stress winds due primarily to better-resolved topography. Considering the positive feedbacks expected for radiatively active dust lifting, we expect this bias to increase when such feedbacks are permitted.
Large-Scale, High-Resolution Neurophysiological Maps Underlying fMRI of Macaque Temporal Lobe
Papanastassiou, Alex M.; DiCarlo, James J.
2013-01-01
Maps obtained by functional magnetic resonance imaging (fMRI) are thought to reflect the underlying spatial layout of neural activity. However, previous studies have not been able to directly compare fMRI maps to high-resolution neurophysiological maps, particularly in higher level visual areas. Here, we used a novel stereo microfocal x-ray system to localize thousands of neural recordings across monkey inferior temporal cortex (IT), construct large-scale maps of neuronal object selectivity at subvoxel resolution, and compare those neurophysiology maps with fMRI maps from the same subjects. While neurophysiology maps contained reliable structure at the sub-millimeter scale, fMRI maps of object selectivity contained information at larger scales (>2.5 mm) and were only partly correlated with raw neurophysiology maps collected in the same subjects. However, spatial smoothing of neurophysiology maps more than doubled that correlation, while a variety of alternative transforms led to no significant improvement. Furthermore, raw spiking signals, once spatially smoothed, were as predictive of fMRI maps as local field potential signals. Thus, fMRI of the inferior temporal lobe reflects a spatially low-passed version of neurophysiology signals. These findings strongly validate the widespread use of fMRI for detecting large (>2.5 mm) neuronal domains of object selectivity but show that a complete understanding of even the most pure domains (e.g., faces vs nonface objects) requires investigation at fine scales that can currently only be obtained with invasive neurophysiological methods. PMID:24048850
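The comparison described above hinges on correlating a spatially smoothed fine-scale map with a coarse-scale map. The snippet below is only an illustrative sketch of that operation, showing the qualitative effect rather than the study's result: the synthetic arrays, the assumed 0.5 mm grid spacing and the ~2.5 mm Gaussian kernel stand in for the actual maps and smoothing procedure.

# Illustrative sketch: smooth a fine-scale "neurophysiology" map and correlate it
# with a coarse "fMRI" map on the same grid. All data and parameters are invented.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
mm_per_px = 0.5                      # assumed map grid spacing (mm per pixel)
sigma_px = 2.5 / mm_per_px           # ~2.5 mm Gaussian smoothing kernel

# Hypothetical maps: a broad organisation plus strong fine-scale structure.
broad = gaussian_filter(rng.normal(size=(128, 128)), sigma_px) * 20.0
neuro = broad + 3.0 * rng.normal(size=(128, 128))   # fine-scale neurophysiology map
fmri = broad + 0.5 * rng.normal(size=(128, 128))    # coarse map sees only the broad part

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print("raw neurophysiology vs fMRI:      r =", round(corr(neuro, fmri), 2))
smoothed = gaussian_filter(neuro, sigma_px)
print("smoothed neurophysiology vs fMRI: r =", round(corr(smoothed, fmri), 2))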
Relaxation in two dimensions and the 'sinh-Poisson' equation
NASA Technical Reports Server (NTRS)
Montgomery, D.; Matthaeus, W. H.; Stribling, W. T.; Martinez, D.; Oughton, S.
1992-01-01
Long-time states of a turbulent, decaying, two-dimensional, Navier-Stokes flow are shown numerically to relax toward maximum-entropy configurations, as defined by the "sinh-Poisson" equation. The large-scale Reynolds number is about 14,000, the spatial resolution is 512^2, the boundary conditions are spatially periodic, and the evolution takes place over nearly 400 large-scale eddy-turnover times.
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead time performances notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics
NASA Astrophysics Data System (ADS)
Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.
2016-12-01
The resolution of NASA climate and weather simulations has grown dramatically over the past few years with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS) that combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes of storage. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.
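As a point of reference only (this is not the NCCS/DASS software), the kind of large-scale climate analytic described here - a monthly climatology computed over many NetCDF files - can be sketched with xarray and dask so that reads and reductions run over parallel chunks. The file pattern and the variable name "T2M" are assumptions for illustration.

# Hedged sketch of a chunked, parallel climate analytic over NetCDF files.
import xarray as xr

ds = xr.open_mfdataset("merra2_t2m_*.nc4", combine="by_coords",
                       chunks={"time": 248}, parallel=True)
climatology = ds["T2M"].groupby("time.month").mean("time")
result = climatology.compute()      # triggers the parallel reduction over all chunks
print(result)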
Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure
NASA Astrophysics Data System (ADS)
Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.
2017-12-01
The terra incognita (TI), or gray zone, in simulations is a range of grid spacing comparable to the most energetic eddy diameter. Spacing in mesoscale simulations is much larger than the eddies, and turbulence is parameterized with one-dimensional vertical mixing. Large eddy simulations (LES) have grid spacing much smaller than the energetic eddies, and use three-dimensional models of turbulence. Studies of convective weather use convection-permitting resolutions, which are in the TI. Neither mesoscale-turbulence nor LES models are designed for the TI, so TI turbulence parameterization needs to be discussed. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures, i.e., the Smagorinsky model, the Deardorff-type TKE model and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, use of these turbulence models did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When model resolutions are coarsened, the Smagorinsky and TKE models overestimate cloud ice and produce large-amplitude downward heat flux in the middle troposphere (not found in the high-resolution simulations). This error is a result of unrealistically large eddy diffusivities, i.e., the eddy diffusivity of the DRM is on the order of 1 for the coarse resolution simulations, while the eddy diffusivity of the Smagorinsky and TKE models is on the order of 100. Splitting the eddy viscosity/diffusivity scalars into vertical and horizontal components by using different length scales and strain rate components helps to reduce the errors, but does not completely remedy the problem. In contrast, the coarse resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
Scale and modeling issues in water resources planning
Lins, H.F.; Wolock, D.M.; McCabe, G.J.
1997-01-01
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models - the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.
2017-01-01
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351
Photogrammetric portrayal of Mars topography.
Wu, S.S.C.
1979-01-01
Special photogrammetric techniques have been developed to portray Mars topography, using Mariner and Viking imaging and nonimaging topographic information and earth-based radar data. Topography is represented by the compilation of maps at three scales: global, intermediate, and very large scale. The global map is a synthesis of topographic information obtained from Mariner 9 and earth-based radar, compiled at a scale of 1:25,000,000 with a contour interval of 1 km; it gives a broad quantitative view of the planet. At intermediate scales, Viking Orbiter photographs of various resolutions are used to compile detailed contour maps of a broad spectrum of prominent geologic features; a contour interval as small as 20 m has been obtained from very high resolution orbital photography. Imagery from the Viking lander facsimile cameras permits construction of detailed, very large scale (1:10) topographic maps of the terrain surrounding the two landers; these maps have a contour interval of 1 cm. This paper presents several new detailed topographic maps of Mars.
Photogrammetric portrayal of Mars topography
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1979-01-01
Special photogrammetric techniques have been developed to portray Mars topography, using Mariner and Viking imaging and nonimaging topographic information and earth-based radar data. Topography is represented by the compilation of maps at three scales: global, intermediate, and very large scale. The global map is a synthesis of topographic information obtained from Mariner 9 and earth-based radar, compiled at a scale of 1:25,000,000 with a contour interval of 1 km; it gives a broad quantitative view of the planet. At intermediate scales, Viking Orbiter photographs of various resolutions are used to compile detailed contour maps of a broad spectrum of prominent geologic features; a contour interval as small as 20 m has been obtained from very high resolution orbital photography. Imagery from the Viking lander facsimile cameras permits construction of detailed, very large scale (1:10) topographic maps of the terrain surrounding the two landers; these maps have a contour interval of 1 cm. This paper presents several new detailed topographic maps of Mars.
Van de Kamer, J B; Lagendijk, J J W
2002-05-21
SAR distributions in a healthy female adult head as a result of a radiating vertical dipole antenna (frequency 915 MHz) representing a hand-held mobile phone have been computed for three different resolutions: 2 mm, 1 mm and 0.4 mm. The extremely high resolution of 0.4 mm was obtained with our quasistatic zooming technique, which is briefly described in this paper. For an effectively transmitted power of 0.25 W, the maximum averaged SAR values in both cubic- and arbitrary-shaped volumes are, respectively, about 1.72 and 2.55 W kg^-1 for 1 g and 0.98 and 1.73 W kg^-1 for 10 g of tissue. These numbers do not vary much (<8%) for the different resolutions, indicating that SAR computations at a resolution of 2 mm are sufficiently accurate to describe the large-scale distribution. However, considering the detailed SAR pattern in the head, large differences may occur if high-resolution computations are performed rather than low-resolution ones. These deviations are caused by both increased modelling accuracy and improved anatomical description in higher resolution simulations. For example, the SAR profile across a boundary between tissues with high dielectric contrast is much more accurately described at higher resolutions. Furthermore, low-resolution dielectric geometries may suffer from loss of anatomical detail, which greatly affects small-scale SAR distributions. Thus, for strongly inhomogeneous regions, high-resolution SAR modelling is an absolute necessity.
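The quoted 1 g and 10 g values are mass-averaged SAR over small tissue volumes. The sketch below is an assumed, simplified cube-averaging routine - real averaging procedures (e.g. the IEEE/IEC rules) handle tissue boundaries and air voxels more carefully - and the SAR field, tissue density and voxel size are invented for illustration.

# Simplified cube-averaged SAR sketch (not the paper's method).
import numpy as np

def cube_averaged_sar(sar, density, voxel_mm, center, target_g):
    """Grow a cube centred on `center` until it encloses `target_g` grams of
    tissue, then return the mass-averaged SAR inside it (W/kg)."""
    voxel_vol_cm3 = (voxel_mm / 10.0) ** 3
    mass = density * voxel_vol_cm3           # grams per voxel (density in g/cm^3)
    i, j, k = center
    for half in range(1, min(sar.shape) // 2):
        sl = (slice(i - half, i + half + 1),
              slice(j - half, j + half + 1),
              slice(k - half, k + half + 1))
        m = mass[sl].sum()
        if m >= target_g:
            return (sar[sl] * mass[sl]).sum() / m
    return np.nan

rng = np.random.default_rng(2)
sar = rng.exponential(0.5, size=(60, 60, 60))       # hypothetical local SAR field, W/kg
density = np.full_like(sar, 1.04)                   # ~soft-tissue density, g/cm^3
print("1 g cube-averaged SAR :", round(cube_averaged_sar(sar, density, 1.0, (30, 30, 30), 1.0), 2))
print("10 g cube-averaged SAR:", round(cube_averaged_sar(sar, density, 1.0, (30, 30, 30), 10.0), 2))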
Kalkan, Fatih; Zaum, Christopher; Morgenstern, Karina
2012-10-01
A beetle type stage and a flexure scanning stage are combined to form a two stages scanning tunneling microscope (STM). It operates at room temperature in ultrahigh vacuum and is capable of scanning areas up to 300 μm × 450 μm down to resolution on the nanometer scale. This multi-scale STM has been designed and constructed in order to investigate prestructured metallic or semiconducting micro- and nano-structures in real space from atomic-sized structures up to the large-scale environment. The principle of the instrument is demonstrated on two different systems. Gallium nitride based micropillars demonstrate scan areas up to hundreds of micrometers; a Au(111) surface demonstrates nanometer resolution.
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.
2018-05-01
Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
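To make the scaling-exponent idea concrete, the sketch below synthesizes a 1-D transect with a known spectral slope and recovers beta from a log-log fit to its power spectrum. This is a simplified stand-in for the variable-sized circular-area Monte Carlo method described above; the sample spacing, record length and fitting range are assumptions.

# Sketch: recover a power-law scaling exponent beta from synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n, dx_km = 4096, 13.5                        # assumed along-track sampling

# Synthesize a transect with P(k) ~ k^beta_true.
beta_true = -2.0
k = np.fft.rfftfreq(n, d=dx_km)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (beta_true / 2.0)
phase = np.exp(2j * np.pi * rng.random(k.size))
signal = np.fft.irfft(amp * phase, n)

# Estimate beta: slope of log power vs log wavenumber over 55-500 km scales.
power = np.abs(np.fft.rfft(signal)) ** 2
keep = (k > 1.0 / 500.0) & (k < 1.0 / 55.0)
beta_hat, _ = np.polyfit(np.log(k[keep]), np.log(power[keep]), 1)
print(f"true beta = {beta_true}, estimated beta = {beta_hat:.2f}")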
Soil organic carbon - a large scale paired catchment assessment
NASA Astrophysics Data System (ADS)
Kunkel, V.; Hancock, G. R.; Wells, T.
2016-12-01
Soil organic carbon (SOC) concentration can vary both spatially and temporally, driven by differences in soil properties, topography and climate. However most studies have focused on point scale data sets with a paucity of studies examining larger scale catchments. Here we examine the spatial and temporal distribution of SOC for two large catchments, the Krui (575 km2) and Merriwa River (675 km2) catchments (New South Wales, Australia). Both have similar shape, soils, topography and orientation. We show that SOC distribution is very similar for both catchments and that elevation (and associated increase in soil moisture) is a major influence on SOC. We also show that there is little change in SOC from the initial assessment in 2006 to 2015 despite a major drought from 2003 to 2010 and extreme rainfall events in 2007 and 2010 - therefore SOC concentration appears robust. However, we found significant relationships between erosion and deposition patterns (as quantified using 137Cs) and SOC for both catchments, again demonstrating a strong geomorphic relationship. Vegetation across the catchments was assessed using remote sensing (Landsat and MODIS). Vegetation patterns were temporally consistent with above ground biomass increasing with elevation. SOC could be predicted using both these low and high resolution remote sensing platforms. Results indicate that, although moderate resolution (250 m) allows for reasonable prediction of the spatial distribution of SOC, the higher resolution (30 m) improved the strength of the SOC-NDVI relationship. The relationship between SOC and 137Cs, as a surrogate for the erosion and deposition of SOC, suggested that sediment transport and deposition influences the distribution of SOC within the catchment. The findings demonstrate that over the large catchment scale and at the decadal time scale SOC is relatively constant and can largely be predicted by topography.
Incorporating human-water dynamics in a hyper-resolution land surface model
NASA Astrophysics Data System (ADS)
Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.
2017-12-01
The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models; this in turn leads to an inability to capture extreme events signatures and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and interactions with the human impacts at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs); and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy. By incorporating human-water dynamics in a hyper-resolution LSM this work allows for progress on hydrological monitoring and predictions, as well as drought preparedness and water impact assessments at relevant decision-making scales.
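The abstract does not spell out the allocation algorithm, so the following is only a hedged sketch of one plausible reading of a relative-cost supply-demand scheme: each sector's demand is satisfied from the cheapest remaining source first. Sector names, volumes and costs are invented for illustration and the function mutates the `supplies` dictionary it is given.

# Hedged sketch of a greedy relative-cost supply-demand water allocation.
def allocate(demands, supplies, costs):
    """demands: {sector: volume}; supplies: {source: volume};
    costs: {(sector, source): relative cost}. Returns {(sector, source): volume}."""
    allocation = {}
    for sector, need in sorted(demands.items()):
        for source in sorted(supplies, key=lambda s: costs.get((sector, s), float("inf"))):
            if need <= 0:
                break
            take = min(need, supplies[source])
            if take > 0:
                allocation[(sector, source)] = take
                supplies[source] -= take
                need -= take
    return allocation

demands = {"agriculture": 80.0, "domestic": 20.0, "industry": 15.0}   # hypothetical volumes
supplies = {"surface_water": 70.0, "groundwater": 60.0}
costs = {("agriculture", "surface_water"): 1.0, ("agriculture", "groundwater"): 3.0,
         ("domestic", "surface_water"): 1.5, ("domestic", "groundwater"): 2.0,
         ("industry", "surface_water"): 2.0, ("industry", "groundwater"): 2.5}

print(allocate(demands, supplies, costs))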
Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks
Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy; ...
2018-02-07
The organic and mineralogical heterogeneity in shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy, utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model to generate a molecular distribution map across scales with a spatial span of 1000 times, which enables high-throughput geochemical characterization in greater detail across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas phase sorbents. Finally, this characterization is fundamental to understanding mineral and organic compositions affecting the behavior of shales.
Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy
The organic and mineralogical heterogeneity in shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy, utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model to generate a molecular distribution map across scales with a spatial span of 1000 times, which enables high-throughput geochemical characterization in greater detail across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas phase sorbents. Finally, this characterization is fundamental to understanding mineral and organic compositions affecting the behavior of shales.
NASA Astrophysics Data System (ADS)
Martel, J. L.; Brissette, F.; Mailhot, A.; Wood, R. R.; Ludwig, R.; Frigon, A.; Leduc, M.; Turcotte, R.
2017-12-01
Recent studies indicate that the frequency and intensity of extreme precipitation will increase in future climate due to global warming. In this study, we compare annual maxima precipitation series from three large ensembles of climate simulations at various spatial and temporal resolutions. The first two are at the global scale: the Canadian Earth System Model (CanESM2) 50-member large ensemble (CanESM2-LE) at a 2.8° resolution and the Community Earth System Model (CESM1) 40-member large ensemble (CESM1-LE) at a 1° resolution. The third ensemble is at the regional scale over both Eastern North America and Europe: the Canadian Regional Climate Model (CRCM5) 50-member large ensemble (CRCM5-LE) at a 0.11° resolution, driven at its boundaries by the CanESM-LE. The CRCM5-LE is a new ensemble issued from the ClimEx project (http://www.climex-project.org), a Québec-Bavaria collaboration. Using these three large ensembles, changes in extreme precipitation over the historical (1980-2010) and future (2070-2100) periods are investigated. This results in 1500 (30 years x 50 members for CanESM2-LE and CRCM5-LE) and 1200 (30 years x 40 members for CESM1-LE) simulated years over both the historical and future periods. Using these large datasets, the empirical daily (and sub-daily for CRCM5-LE) extreme precipitation quantiles for large return periods ranging from 2 to 100 years are computed. Results indicate that daily extreme precipitation will generally increase over most land grid points of both domains according to the three large ensembles. Regarding the CRCM5-LE, the increase in sub-daily extreme precipitation will be even larger than that in daily extreme precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and for public safety.
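The empirical-quantile step is straightforward once the ensemble years are pooled: a T-year return level is the (1 - 1/T) quantile of the pooled annual maxima. The sketch below illustrates this with synthetic Gumbel-distributed annual maxima standing in for the ensemble output; all numbers are invented.

# Sketch: empirical return levels from pooled large-ensemble annual maxima.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical annual-maximum daily precipitation (mm) for one grid point.
hist = rng.gumbel(loc=40.0, scale=10.0, size=(50, 30))   # 1980-2010, 50 members
futr = rng.gumbel(loc=46.0, scale=12.0, size=(50, 30))   # 2070-2100, 50 members

for T in (2, 20, 100):
    q = 1.0 - 1.0 / T
    rl_hist = np.quantile(hist.ravel(), q)
    rl_futr = np.quantile(futr.ravel(), q)
    change = 100.0 * (rl_futr / rl_hist - 1.0)
    print(f"{T:>3}-yr return level: {rl_hist:.1f} -> {rl_futr:.1f} mm ({change:+.0f}%)")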
NASA Astrophysics Data System (ADS)
Khandelwal, A.; Karpatne, A.; Kumar, V.
2017-12-01
In this paper, we present novel methods for producing surface water maps at 30 meter spatial resolution at a daily temporal resolution. These new methods will make use of the MODIS spectral data from Terra (available daily since 2000) to produce daily maps at 250 meter and 500 meter resolution, and then refine them using the relative elevation ordering of pixels at 30 meter resolution. The key component of these methods is the use of the elevation structure (relative elevation ordering) of a water body. Elevation structure is not explicitly available at the desired resolution for most water bodies in the world and hence it will be estimated using our previous work that uses the history of imperfect labels. In this paper, we will present a new technique that uses elevation structure (unlike existing pixel based methods) to enforce temporal consistency in surface water extents (lake area on nearby dates is likely to be very similar). This will greatly improve the quality of the MODIS-scale land/water labels since daily MODIS data can have a large amount of missing (or poor quality) data due to clouds and other factors. The quality of these maps will be further improved using an elevation based resolution refinement approach that will make use of the elevation structure estimated at Landsat scale. With the assumption that elevation structure does not change over time, it provides a very effective way to transfer information between datasets even when they are not observed concurrently. In this work, we will derive elevation structure at Landsat scale from monthly water extent maps spanning 1984-2015, publicly available through a joint effort of Google Earth Engine and the European Commission's Joint Research Centre (JRC). This elevation structure will then be used to refine the spatial resolution of MODIS-scale maps from 2000 onwards. We will present the analysis of these methods on a large and diverse set of water bodies across the world.
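One way to read the refinement step is: water fills the lowest-ranked fine pixels first, so a coarse-scale areal fraction can be mapped onto a fine-scale mask via the elevation ordering. The sketch below is a hedged illustration of that reading, not the authors' implementation; the synthetic elevations and the 37% coarse fraction are invented.

# Sketch: downscale a coarse water fraction using relative elevation ordering.
import numpy as np

def refine_water_mask(elev_rank, water_fraction):
    """elev_rank: fine-grid array ranking pixels from lowest (0) to highest;
    water_fraction: fraction of the footprint flagged as water at coarse scale."""
    n_water = int(round(water_fraction * elev_rank.size))
    threshold_rank = np.sort(elev_rank.ravel())[max(n_water - 1, 0)]
    return (elev_rank <= threshold_rank) if n_water > 0 else np.zeros_like(elev_rank, bool)

rng = np.random.default_rng(5)
elevation = rng.random((64, 64))                        # hypothetical 30 m elevations
rank = elevation.ravel().argsort().argsort().reshape(elevation.shape)

# A coarse (e.g. MODIS-scale) observation says 37% of this footprint is water.
mask = refine_water_mask(rank, 0.37)
print("fine-scale water pixels:", mask.sum(), "of", mask.size)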
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.
Scale-Resolving simulations (SRS): How much resolution do we really need?
NASA Astrophysics Data System (ADS)
Pereira, Filipe M. S.; Girimaji, Sharath
2017-11-01
Scale-resolving simulations (SRS) are emerging as the computational approach of choice for many engineering flows with coherent structures. The SRS methods seek to resolve only the most important features of the coherent structures and model the remainder of the flow field with canonical closures. With reference to a typical Large-Eddy Simulation (LES), practical SRS methods aim to resolve a considerably narrower range of scales (reduced physical resolution) to achieve an adequate degree of accuracy at reasonable computational effort. While the objective of SRS is well-founded, the criteria for establishing the optimal degree of resolution required to achieve an acceptable level of accuracy are not clear. This study considers the canonical case of the flow around a circular cylinder to address the issue of `optimal' resolution. Two important criteria are developed. The first condition addresses the issue of adequate resolution of the flow field. The second guideline provides an assessment of whether the modeled field is canonical (stochastic) turbulence amenable to closure-based computations.
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Lecci, Rita; Mossa, Michele
2017-01-01
SANIFS (Southern Adriatic Northern Ionian coastal Forecasting System) is a coastal-ocean operational system based on the unstructured grid finite-element three-dimensional hydrodynamic SHYFEM model, providing short-term forecasts. The operational chain is based on a downscaling approach starting from the large-scale system for the entire Mediterranean Basin (MFS, Mediterranean Forecasting System), which provides initial and boundary condition fields to the nested system. The model is configured to provide hydrodynamics and active tracer forecasts both in open ocean and coastal waters of southeastern Italy using a variable horizontal resolution from the open sea (3-4 km) to coastal areas (50-500 m). Given that the coastal fields are driven by a combination of both local (also known as coastal) and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified both in forecast and simulation mode, first (i) on the large and shelf-coastal scales by comparing with a large-scale survey CTD (conductivity-temperature-depth) in the Gulf of Taranto and then (ii) on the coastal-harbour scale (Mar Grande of Taranto) by comparison with CTD, ADCP (acoustic doppler current profiler) and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolution (12.5 and 6.5 km). The SANIFS forecasts at a lead time of 1 day were compared with the MFS forecasts, highlighting that SANIFS is able to retain the large-scale dynamics of MFS. The large-scale dynamics of MFS are correctly propagated to the shelf-coastal scale, improving the forecast accuracy (+17 % for temperature and +6 % for salinity compared to MFS). Moreover, the added value of SANIFS was assessed on the coastal-harbour scale, which is not covered by the coarse resolution of MFS, where the fields forecasted by SANIFS reproduced the observations well (temperature RMSE equal to 0.11 °C). Furthermore, SANIFS simulations were compared with hourly time series of temperature, sea level and velocity measured on the coastal-harbour scale, showing a good agreement. Simulations in the Gulf of Taranto described a circulation mainly characterized by an anticyclonic gyre with the presence of cyclonic vortexes in shelf-coastal areas. A surface water inflow from the open sea to Mar Grande characterizes the coastal-harbour scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.
As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each. Keywords: Semantic Web; Visualization; Ontology; Multi-resolution Data Mining.
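A minimal sketch of the extant-ontology idea as described here, under stated assumptions: instance-level triples are rolled up to class level and the (subject class, predicate, object class) patterns are counted, optionally coarsened one level up a class hierarchy to mimic ontological scaling. The tiny triple set, type map and hierarchy are invented for illustration.

# Sketch: statistical class-level summary ("extant ontology") of an instance graph.
from collections import Counter

triples = [                     # (subject, predicate, object) instance-level edges
    ("alice", "worksFor", "ornl"),
    ("bob", "worksFor", "ornl"),
    ("ornl", "locatedIn", "tennessee"),
    ("alice", "knows", "bob"),
]
rdf_type = {"alice": "Person", "bob": "Person", "ornl": "Laboratory",
            "tennessee": "State"}
parent = {"Person": "Agent", "Laboratory": "Organization", "State": "Place"}

def class_of(entity, level):
    """Map an entity to its class, optionally coarsened one level up the hierarchy."""
    cls = rdf_type.get(entity, "Thing")
    return parent.get(cls, cls) if level >= 1 else cls

for level in (0, 1):
    extant = Counter((class_of(s, level), p, class_of(o, level)) for s, p, o in triples)
    print(f"resolution level {level}: {dict(extant)}")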
Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate
Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.
2015-01-01
Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1994-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth-order central differences through fast Fourier transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large scale features, such as the total circulation around the roll-up region, are adequately resolved.
NASA Technical Reports Server (NTRS)
Weinan, E.; Shu, Chi-Wang
1992-01-01
High order essentially non-oscillatory (ENO) schemes, originally designed for compressible flow and in general for hyperbolic conservation laws, are applied to incompressible Euler and Navier-Stokes equations with periodic boundary conditions. The projection to divergence-free velocity fields is achieved by fourth order central differences through Fast Fourier Transforms (FFT) and a mild high-order filtering. The objective of this work is to assess the resolution of ENO schemes for large scale features of the flow when a coarse grid is used and small scale features of the flow, such as shears and roll-ups, are not fully resolved. It is found that high-order ENO schemes remain stable under such situations and quantities related to large-scale features, such as the total circulation around the roll-up region, are adequately resolved.
Measuring Large-Scale Social Networks with High Resolution
Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune
2014-01-01
This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years—the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1,000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection. PMID:24770359
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Wilcox, J. M.; Svalgaard, L.; Scherrer, P. H.; Mcintosh, P. S.
1977-01-01
Two methods of observing the neutral line of the large-scale photospheric magnetic field are compared: neutral line positions inferred from H-alpha photographs (McIntosh and Nolte, 1975) and observations of the photospheric magnetic field made with low spatial resolution (three minutes) and high sensitivity using the Stanford magnetograph. The comparison is found to be very favorable.
Large and small-scale structures in Saturn's rings
NASA Astrophysics Data System (ADS)
Albers, N.; Rehnberg, M. E.; Brown, Z. L.; Sremcevic, M.; Esposito, L. W.
2017-09-01
Observations made by the Cassini spacecraft have revealed both large and small scale structures in Saturn's rings in unprecedented detail. Analysis of high-resolution measurements by the Cassini Ultraviolet Spectrograph (UVIS) High Speed Photometer (HSP) and the Imaging Science Subsystem (ISS) shows an abundance of intrinsic small-scale structures (or clumping) seen across the entire ring system. These include self-gravity wakes (50-100 m), sub-km structure at the A and B ring edges, and "straw"/"ropy" structures (1-3 km).
NASA Astrophysics Data System (ADS)
Zarzycki, C. M.; Gettelman, A.; Callaghan, P.
2017-12-01
Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full-physics configurations to evaluate variable-mesh simulations against uniform high and uniform low resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full-physics model produces a climate comparable to the global low resolution mesh and similar high frequency statistics in the high resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high resolution results, making them an effective tool for regional climate studies; these capabilities are available in CESM2.
Towards a New Assessment of Urban Areas from Local to Global Scales
NASA Astrophysics Data System (ADS)
Bhaduri, B. L.; Roy Chowdhury, P. K.; McKee, J.; Weaver, J.; Bright, E.; Weber, E.
2015-12-01
Since the early 2000s, starting with NASA MODIS, satellite-based remote sensing has facilitated collection of imagery with medium spatial resolution but high temporal resolution (daily). This trend continues with an increasing number of sensors and data products. Increasing spatial and temporal resolutions of remotely sensed data archives, from both public and commercial sources, have significantly enhanced the quality of mapping and change data products. However, even with automation of such analysis on evolving computing platforms, rates of data processing have been suboptimal, largely because of the ever-increasing pixel-to-processor ratio coupled with limitations of the computing architectures. Novel approaches utilizing spatiotemporal data mining techniques and computational architectures have emerged that demonstrate the potential for sustained and geographically scalable landscape monitoring to become operational. We exemplify this challenge with two broad research initiatives on High Performance Geocomputation at Oak Ridge National Laboratory: (a) mapping global settlement distribution; (b) developing national critical infrastructure databases. Our present effort, on large GPU-based architectures, to exploit high-resolution (1 m or less) satellite and airborne imagery for extracting settlements at global scale is yielding an understanding of human settlement patterns and urban areas at unprecedented resolution. Comparison of such an urban land cover database with existing national and global land cover products, at various geographic scales in selected parts of the world, is revealing intriguing patterns and insights for urban assessment. Early results from the USA, Taiwan, and Egypt indicate closer agreement (5-10%) in urban area assessments among databases at larger, aggregated geographic extents. However, spatial variability at local scales can differ significantly (over 50% disagreement).
The impact of Lyman-α radiative transfer on large-scale clustering in the Illustris simulation
NASA Astrophysics Data System (ADS)
Behrens, C.; Byrohl, C.; Saito, S.; Niemeyer, J. C.
2018-06-01
Context. Lyman-α emitters (LAEs) are a promising probe of the large-scale structure at high redshift, z ≳ 2. In particular, the Hobby-Eberly Telescope Dark Energy Experiment aims at observing LAEs at 1.9 < z < 3.5 to measure the baryon acoustic oscillation (BAO) scale and the redshift-space distortion (RSD). However, it has been pointed out that the complicated radiative transfer (RT) of the resonant Lyman-α emission line generates an anisotropic selection bias in the LAE clustering on large scales, s ≳ 10 Mpc. This effect could potentially induce a systematic error in the BAO and RSD measurements. Also, there exists a recent claim to have observational evidence of the effect in the Lyman-α intensity map, albeit statistically insignificant. Aims: We aim at quantifying the impact of the Lyman-α RT on the large-scale galaxy clustering in detail. For this purpose, we study the correlations between the large-scale environment and the ratio of an apparent Lyman-α luminosity to an intrinsic one, which we call the "observed fraction", at 2 < z < 6. Methods: We apply our Lyman-α RT code by post-processing the full Illustris simulations. We simply assume that the intrinsic luminosity of the Lyman-α emission is proportional to the star formation rate of galaxies in Illustris, yielding a sufficiently large sample of LAEs to measure the anisotropic selection bias. Results: We find little correlation between large-scale environment and the observed fraction induced by the RT, and hence a smaller anisotropic selection bias than has previously been claimed. We argue that the anisotropy was overestimated in previous work due to insufficient spatial resolution; it is important to keep the resolution such that it resolves the high-density region down to the scale of the interstellar medium, that is, 1 physical kpc. We also find that the correlation can be further enhanced by assumptions in modeling intrinsic Lyman-α emission.
Demonstration of nanoimprinted hyperlens array for high-throughput sub-diffraction imaging
NASA Astrophysics Data System (ADS)
Byun, Minsueop; Lee, Dasol; Kim, Minkyung; Kim, Yangdoo; Kim, Kwan; Ok, Jong G.; Rho, Junsuk; Lee, Heon
2017-04-01
Overcoming the resolution limit of conventional optics is regarded as the most important issue in optical imaging science and technology. Although hyperlenses, super-resolution imaging devices based on highly anisotropic dispersion relations that allow access to high-wavevector components, have recently achieved far-field sub-diffraction imaging in real time, the previously demonstrated devices have suffered from extreme difficulties in both the fabrication process and the placement of non-artificial objects. This places restrictions on the practical applications of hyperlens devices. While implementing large-scale hyperlens arrays in conventional microscopy is desirable to solve such issues, it has not been feasible to fabricate such large-scale hyperlens arrays with the previously used nanofabrication methods. Here, we suggest a scalable and reliable fabrication process for a large-scale hyperlens device based on direct pattern transfer techniques. We fabricate a 5 cm × 5 cm hyperlens array and experimentally demonstrate that it can resolve sub-diffraction features down to 160 nm under 410 nm wavelength visible light. The array-based hyperlens device will provide a simple solution for much more practical far-field and real-time super-resolution imaging, which can be widely used in optics, biology, medical science, nanotechnology and other closely related interdisciplinary fields.
Photosynthesis in high definition
NASA Astrophysics Data System (ADS)
Hilton, Timothy W.
2018-01-01
Photosynthesis is the foundation for almost all known life, but quantifying it at scales above a single plant is difficult. A new satellite illuminates plants' molecular machinery at much-improved spatial resolution, taking us one step closer to combined `inside-outside' insights into large-scale photosynthesis.
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by convolution with a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference, and we perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparing these simulations and the role that small-scale fields play in the large-scale fields through the SGS terms in the LES.
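As an illustration of the horizontal filtering step, the sketch below applies an isotropic Gaussian-like damping to spherical-harmonic coefficients. It is a simplified stand-in, not the Calypso implementation, and the heat-kernel weight is an assumed form; the paper itself follows the Jekeli (1981) formulation.

```python
# Illustrative sketch: low-pass filter spherical-harmonic coefficients by
# damping each degree l with a Gaussian-like (heat-kernel) weight.
import numpy as np

def filter_sph_coeffs(coeffs, sigma):
    """coeffs maps (l, m) -> complex coefficient; sigma sets the filter width (radians)."""
    filtered = {}
    for (l, m), c in coeffs.items():
        w_l = np.exp(-l * (l + 1) * sigma**2 / 4.0)   # damping grows with degree l
        filtered[(l, m)] = w_l * c
    return filtered
```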
NASA Astrophysics Data System (ADS)
Welle, Paul D.; Mauter, Meagan S.
2017-09-01
This work introduces a generalizable approach for estimating field-scale agricultural yield losses due to soil salinization. When integrated with regional data on crop yields and prices, this model provides high-resolution estimates of revenue losses over large agricultural regions. These methods account for the uncertainty inherent in model inputs derived from satellites, experimental field data, and interpreted model results. We apply this method to estimate the effect of soil salinity on agricultural outputs in California, performing the analysis with both high-resolution (i.e., field-scale) and low-resolution (i.e., county-scale) data sources to highlight the importance of spatial resolution in agricultural analysis. We estimate that soil salinity reduced agricultural revenues by $3.7 billion ($1.7-7.0 billion) in 2014, amounting to 8.0 million tons of lost production relative to soil salinities below the crop-specific thresholds. When using low-resolution data sources, we find that the costs of salinization are underestimated by a factor of three. These results highlight the need for high-resolution data in agro-environmental assessment as well as the challenges associated with their integration.
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are irreplaceable national treasures of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries covering more than several million square kilometers. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The use of many data sources and the integration of these data to form a high-accuracy, quality-checked product were required. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, covering map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different geographic locations across the country.
Pfeil, Thomas; Potjans, Tobias C.; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists. PMID:22822388
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally provided at a much lower frequency than the model time step (e.g., 6-hourly analyses), with only basic interpolation between fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
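For concreteness, spectral nudging can be sketched as a Newtonian relaxation applied only to the low-wavenumber Fourier modes of the limited-area field. The toy Python example below, with an assumed wavenumber cutoff k_max and relaxation time tau, illustrates the idea on a doubly periodic 2-D field; it is not the quasi-geostrophic model code.

```python
# Toy sketch of spectral nudging: relax only the large scales (|k| <= k_max)
# of `field` toward `driver` with time scale tau over one step dt.
import numpy as np

def spectral_nudge(field, driver, dt, tau, k_max):
    fh = np.fft.fft2(field)
    dh = np.fft.fft2(driver)
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx) * nx                     # integer wavenumbers in x
    ky = np.fft.fftfreq(ny) * ny                     # integer wavenumbers in y
    KX, KY = np.meshgrid(kx, ky)
    mask = np.sqrt(KX**2 + KY**2) <= k_max           # large-scale modes only
    fh[mask] += dt / tau * (dh[mask] - fh[mask])     # Newtonian relaxation in spectral space
    return np.fft.ifft2(fh).real
```

Indiscriminate (grid-point) nudging corresponds to dropping the mask and relaxing every mode, which is why its optimum relaxation time behaves differently in the experiments described above.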
Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,
1985-10-07
2012-10-01
[Excerpt; surrounding text garbled in extraction:] Simulations were run using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov). The commercial […] parameters are proprietary and cannot be ported to the LAMMPS simulation code. Molecular dynamics simulations were performed at the atomistic resolution. Abbreviations: IBI, iterative Boltzmann inversion; MAPS, Materials Processes and Simulations.
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Relative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
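The semivariogram workflow described above can be illustrated with a short sketch: estimate the empirical semivariogram from a transect of image values, then fit an exponential model whose range parameter plays the role of the correlation length. The function form and starting values below are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: empirical semivariogram of a 1-D transect of CT attenuation
# values, followed by an exponential-model fit to estimate a correlation length.
import numpy as np
from scipy.optimize import curve_fit

def empirical_semivariogram(values, max_lag):
    """gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] for integer lags 1..max_lag (in voxels)."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags])
    return lags, gamma

def exponential_model(h, sill, a):
    return sill * (1.0 - np.exp(-h / a))             # 'a' sets the correlation length (range)

# Usage sketch:
# lags, gamma = empirical_semivariogram(profile, 40)
# (sill, a), _ = curve_fit(exponential_model, lags, gamma, p0=[gamma.max(), 5.0])
```

Repeating the fit on the same sample at different voxel resolutions is what exposes the linear scaling effect between voxel volume and the fitted range discussed above.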
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
The accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large areas abroad or with large volumes of imagery. In this paper, considering the geometric features of optical satellite imagery and building on a widely used constrained-optimization method, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results show that the horizontal and vertical accuracies of multi-coverage and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
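For readers unfamiliar with ADMM, the alternating-update structure the method builds on looks as follows on a toy ℓ1-regularized least-squares problem. This is a generic sketch, not the RFM block-adjustment equations or the GISIBA implementation; the penalty parameter and iteration count are arbitrary assumptions.

```python
# Generic ADMM sketch on a toy lasso problem: minimize 0.5*||Ax - b||^2 + lam*||z||_1
# subject to x = z, illustrating the x-update / z-update / dual-update structure.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))      # factor reused every iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))                 # x-update: ridge-like solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update: soft threshold
        u = u + x - z                                            # dual-variable update
    return z
```

Because the x-update decouples into independent least-squares solves when the problem is split by image blocks, this structure is what makes the block adjustment easy to parallelize.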
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structure and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating a resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implications for congenital heart diseases.
Approximate registration of point clouds with large scale differences
NASA Astrophysics Data System (ADS)
Novak, D.; Schindler, K.
2013-10-01
3D reconstruction of objects is a basic task in many fields, including surveying, engineering, entertainment and cultural heritage. The task is nowadays often accomplished with a laser scanner, which produces dense point clouds, but lacks accurate colour information, and lacks per-point accuracy measures. An obvious solution is to combine laser scanning with photogrammetric recording. In that context, the problem arises to register the two datasets, which feature large scale, translation and rotation differences. The absence of approximate registration parameters (3D translation, 3D rotation and scale) precludes the use of fine-registration methods such as ICP. Here, we present a method to register realistic photogrammetric and laser point clouds in a fully automated fashion. The proposed method decomposes the registration into a sequence of simpler steps: first, two rotation angles are determined by finding dominant surface normal directions, then the remaining parameters are found with RANSAC followed by ICP and scale refinement. These two steps are carried out at low resolution, before computing a precise final registration at higher resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun
Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of Community Atmosphere Model version 4 (CAM4) found notable resolution dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutions ranging from 240 km to 30 km and global variable resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity. With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS that displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
NASA Astrophysics Data System (ADS)
Mesinger, F.
The traditional views hold that high-resolution limited area models (LAMs) downscale large-scale lateral boundary information, and that the predictability of small scales is short. Inspection of various rms fits/errors has contributed to these views. It would follow that the skill of LAMs should visibly deteriorate compared to that of their driver models at more extended forecast times. The limited-area Eta Model at NCEP has the additional handicap of being driven by LBCs from the previous Avn global model run, which at 0000 and 1200 UTC is estimated to amount to about an 8-h loss in accuracy. This should make its skill relative to that of the Avn deteriorate even faster. These views are challenged by various Eta results, including rms fits to raobs out to 84 h. It is argued that it is the largest scales that contribute the most to the skill of the Eta relative to that of the Avn.
Global-scale patterns of forest fragmentation
Kurt H. Riitters; James D. Wickham; R. O' Neill; B. Jones; E. Smith
2000-01-01
We report an analysis of forest fragmentation based on 1-km resolution land-cover maps for the globe. Measurements in analysis windows from 81 km² (9 × 9 pixels, "small" scale) to 59,049 km² (243 × 243 pixels, "large" scale) were used to characterize the fragmentation around each forested pixel. We identified six categories of fragmentation (...
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
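The storage-distribution step can be illustrated with a short sketch that computes the TOPMODEL topographic index and bins cells into index classes, as TRG does when distributing average storage. The sketch assumes the upslope contributing area and slope grids have already been derived from the DEM (e.g., from HydroSHEDS); the number of classes is an arbitrary assumption.

```python
# Minimal sketch: TOPMODEL topographic index TI = ln(a / tan(beta)) and binning
# of cells into index classes; slope is assumed to be in radians.
import numpy as np

def topographic_index(upslope_area, slope, cellsize, n_classes=30):
    a = upslope_area / cellsize                       # specific catchment area per unit contour length
    tan_beta = np.maximum(np.tan(slope), 1e-6)        # avoid division by zero on flat cells
    ti = np.log(a / tan_beta)
    edges = np.quantile(ti[np.isfinite(ti)], np.linspace(0, 1, n_classes + 1))
    classes = np.digitize(ti, edges[1:-1])            # class labels 0 .. n_classes-1
    return ti, classes
```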
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2010-09-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
Scaling of surface energy fluxes using remotely sensed data
NASA Astrophysics Data System (ADS)
French, Andrew Nichols
Accurate estimates of evapotranspiration (ET) across multiple terrains would greatly ease challenges faced by hydrologists, climate modelers, and agronomists as they attempt to apply theoretical models to real-world situations. One ET estimation approach uses an energy balance model to interpret a combination of meteorological observations taken at the surface and data captured by remote sensors. However, results of this approach have not been accurate because of poor understanding of the relationship between surface energy flux and land cover heterogeneity, combined with limits in available resolution of remote sensors. The purpose of this study was to determine how land cover and image resolution affect ET estimates. Using remotely sensed data collected over El Reno, Oklahoma, during four days in June and July 1997, scale effects on the estimation of spatially distributed ET were investigated. Instantaneous estimates of latent and sensible heat flux were calculated using a two-source surface energy balance model driven by thermal infrared, visible-near infrared, and meteorological data. The heat flux estimates were verified by comparison to independent eddy-covariance observations. Outcomes of observations taken at coarser resolutions were simulated by aggregating remote sensor data and estimated surface energy balance components from the finest sensor resolution (12 meter) to hypothetical resolutions as coarse as one kilometer. Estimated surface energy flux components were found to be significantly dependent on observation scale. For example, average evaporative fraction varied from 0.79, using 12-m resolution data, to 0.93, using 1-km resolution data. Resolution effects upon flux estimates were related to a measure of landscape heterogeneity known as operational scale, reflecting the size of dominant landscape features. Energy flux estimates based on data at resolutions less than 100 m and much greater than 400 m showed a scale-dependent bias. But estimates derived from data taken at about 400-m resolution (the operational scale at El Reno) were susceptible to large error due to mixing of surface types. The El Reno experiments show that accurate instantaneous estimates of ET require precise image alignment and image resolutions finer than landscape operational scale. These findings are valuable for the design of sensors and experiments to quantify spatially-varying hydrologic processes.
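The aggregation experiment described above can be sketched in a few lines: block-average the fine-resolution latent and sensible heat flux maps to a coarser grid and compare the evaporative fraction across scales. The aggregation factor and the variable names below are illustrative assumptions, not the dissertation's processing chain.

```python
# Illustrative sketch: aggregate latent (LE) and sensible (H) heat flux maps to a
# coarser resolution and compare the evaporative fraction EF = LE / (LE + H).
import numpy as np

def block_mean(field, factor):
    ny, nx = field.shape
    return field[:ny - ny % factor, :nx - nx % factor] \
        .reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def evaporative_fraction(le, h):
    return le / np.maximum(le + h, 1e-6)

# Usage sketch, e.g. comparing 12-m fluxes with fluxes first aggregated toward 1 km:
# ef_fine = evaporative_fraction(le, h).mean()
# ef_coarse = evaporative_fraction(block_mean(le, 83), block_mean(h, 83)).mean()
```

Because the evaporative fraction is a nonlinear function of the fluxes, averaging the inputs before computing it does not give the same result as averaging the fine-scale fractions, which is the scale-dependent bias discussed above.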
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
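The simplified surface energy balance idea behind SSEBop is commonly written as an ET fraction between a hot/dry and a cold/wet reference temperature, scaled by a reference ET. The sketch below shows that calculation; the clipping bounds and the scaling factor k are illustrative assumptions and should be checked against the operational SSEBop documentation.

```python
# Hedged sketch of an SSEBop-style ET estimate: Ts is the observed land surface
# temperature, Tc and Th the cold and hot reference temperatures, ETo a reference ET.
import numpy as np

def ssebop_eta(ts, tc, th, eto, k=1.0):
    """ETa = clip((Th - Ts) / (Th - Tc), 0, 1) * k * ETo (k is an assumed scaling factor)."""
    etf = np.clip((th - ts) / np.maximum(th - tc, 1e-6), 0.0, 1.0)
    return etf * k * eto
```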
NASA Astrophysics Data System (ADS)
Fischer, P. D.; Brown, M. E.; Trumbo, S. K.; Hand, K. P.
2017-01-01
We present spatially resolved spectroscopic observations of Europa’s surface at 3-4 μm obtained with the near-infrared spectrograph and adaptive optics system on the Keck II telescope. These are the highest quality spatially resolved reflectance spectra of Europa’s surface at 3-4 μm. The observations spatially resolve Europa’s large-scale compositional units at a resolution of several hundred kilometers. The spectra show distinct features and geographic variations associated with known compositional units; in particular, large-scale leading hemisphere chaos shows a characteristic longward shift in peak reflectance near 3.7 μm compared to icy regions. These observations complement previous spectra of large-scale chaos, and can aid efforts to identify the endogenous non-ice species.
NASA Astrophysics Data System (ADS)
Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin
2015-12-01
The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management. Of particular interest are the requirements of data management for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which generates coarsened vector drainage networks from the originals iteratively. The method is based on the Horton-Strahler's (H-S) order schema. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas that are presented in this study. The method was applied to the original drainage networks of a watershed in the Huangfuchuan River basin extracted from a 1-m-resolution airborne LiDAR DEM and applied to the full Yangtze River basin in China, which was extracted from a 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, and its data were extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
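One coarsening step of the H-S-order pyramid can be sketched on a toy reach tree: compute Strahler orders, then prune the reaches with the lowest order (their sub-basins would be merged into the downstream reaches). The data structure below is an illustrative assumption, not the paper's implementation.

```python
# Toy sketch of one pyramid coarsening step: Horton-Strahler ordering on a reach
# tree, then pruning of the lowest-order reaches.
def strahler_orders(upstream):
    """upstream maps reach id -> list of upstream reach ids; returns reach id -> Strahler order."""
    order = {}
    def visit(r):
        ups = upstream.get(r, [])
        if not ups:
            order[r] = 1                              # headwater reach
            return 1
        child_orders = [visit(u) for u in ups]
        top = max(child_orders)
        order[r] = top + 1 if child_orders.count(top) >= 2 else top
        return order[r]
    for reach in upstream:                            # memoization omitted for brevity
        if reach not in order:
            visit(reach)
    return order

def prune_lowest_order(upstream, order):
    """Drop all reaches with the minimum Strahler order and return the coarsened tree."""
    keep = {r for r, o in order.items() if o > min(order.values())}
    return {r: [u for u in ups if u in keep] for r, ups in upstream.items() if r in keep}
```

Applying the pruning step repeatedly yields the pyramid of progressively coarser networks, with topological connections and topographical parameters inherited from the previous level as described above.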
Light sheet theta microscopy for rapid high-resolution imaging of large biological samples.
Migliori, Bianca; Datta, Malika S; Dupre, Christophe; Apak, Mehmet C; Asano, Shoh; Gao, Ruixuan; Boyden, Edward S; Hermanson, Ola; Yuste, Rafael; Tomer, Raju
2018-05-29
Advances in tissue clearing and molecular labeling methods are enabling unprecedented optical access to large intact biological systems. These developments fuel the need for high-speed microscopy approaches to image large samples quantitatively and at high resolution. While light sheet microscopy (LSM), with its high planar imaging speed and low photo-bleaching, can be effective, scaling up to larger imaging volumes has been hindered by the use of orthogonal light sheet illumination. To address this fundamental limitation, we have developed light sheet theta microscopy (LSTM), which uniformly illuminates samples from the same side as the detection objective, thereby eliminating limits on lateral dimensions without sacrificing the imaging resolution, depth, and speed. We present a detailed characterization of LSTM, and demonstrate its complementary advantages over LSM for rapid high-resolution quantitative imaging of large intact samples with high uniform quality. The reported LSTM approach is a significant step for the rapid high-resolution quantitative mapping of the structure and function of very large biological systems, such as a clarified thick coronal slab of human brain and uniformly expanded tissues, and also for rapid volumetric calcium imaging of highly motile animals, such as Hydra, undergoing non-isomorphic body shape changes.
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, present new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
A fast image simulation algorithm for scanning transmission electron microscopy
Ophus, Colin
2017-05-10
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
A High Resolution View of Galactic Centers: Arp 220 and M31
NASA Astrophysics Data System (ADS)
Lockhart, Kelly E.
The centers of galaxies are small in size and yet incredibly complex. They play host to supermassive black holes and nuclear star clusters (NSCs) and are subject to large gas inflows, nuclear starbursts, and active galactic nucleus (AGN) activity. They can also be the launching site for large-scale galactic outflows. However, though these systems are quite important to galactic evolution, observations are quite difficult due to their small size. Using high spatial resolution narrowband imaging with HST/WFC3 of Arp 220, a late-stage galaxy merger, I discover an ionized gas bubble feature (r = 600 pc) just off the nucleus. The bubble is aligned with both the western nucleus and with the large-scale galactic outflow. Using energetics arguments, I link the bubble with a young, obscured AGN or with an intense nuclear starburst. Given its alignment along the large-scale outflow axis, I argue that the bubble presents evidence for a link between the galactic center and the large-scale outflow. I also present new observations of the NSC in M31, the closest large spiral galaxy to our own. Using the OSIRIS near-infrared integral field spectrograph (IFS) on Keck, I map the kinematics of the old stellar population in the eccentric disk of the NSC. I compare the observations to models to derive a precession speed of the disk of 0 ± 5 km s⁻¹ pc⁻¹, and hence confirm that winds from the old stellar population may be the source of gas needed to form the young stellar population in the NSC. Studies of galactic centers are dependent on high spatial resolution observations. In particular, IFSs are ideal instruments for these studies as they provide two-dimensional spectroscopy of the field of view, enabling 2D kinematic studies. I report on work to characterize and improve the data reduction pipeline of the OSIRIS IFS, and discuss implications for future generations of IFS instrumentation.
Nagy, Szilvia; Pipek, János
2015-12-21
In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to calculate with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution coefficients in a very economic way. In the present contribution, we would like to determine whether the method can be applied for predicting not only the first, but also the other, higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.
NASA Technical Reports Server (NTRS)
Stoller, Ray A.; Wedding, Donald K.; Friedman, Peter S.
1993-01-01
A development status evaluation is presented for gas plasma display technology, noting how tradeoffs among the parameters of size, resolution, speed, portability, color, and image quality can yield cost-effective solutions for medical imaging, CAD, teleconferencing, multimedia, and both civil and military applications. Attention is given to plasma-based large-area displays' suitability for radar, sonar, and IR, due to their lack of EM susceptibility. Both monochrome and color displays are available.
NASA Astrophysics Data System (ADS)
Jiang, H.; Lin, T.
2017-12-01
Rain-fed corn production systems are subject to sub-seasonal variations of precipitation and temperature during the growing season. Because each growth phase involves different physiological processes, plants require different optimal environmental conditions during each phase. However, this temporal heterogeneity in the response to climate variability over the crop lifecycle is often simplified to constant responses in large-scale statistical modeling. To capture the time-variant growing requirements in large-scale statistical analysis, we develop and compare statistical models at various spatial and temporal resolutions to quantify the relationship between corn yield and weather factors for 12 Corn Belt states from 1981 to 2016. The study compares three spatial resolutions (county, agricultural district, and state scale) and three temporal resolutions (crop growth phase, monthly, and growing season) to characterize the effects of spatial and temporal variability. Our results show that the agricultural-district model with growth-phase resolution can explain 52% of the variation in corn yield caused by temperature and precipitation variability. It provides a practical model structure that balances the overfitting problem of county-specific models against the weak explanatory power of state-specific models. In the US Corn Belt, precipitation has a positive impact on corn yield throughout the growing season except for the vegetative stage, while sensitivity to extreme heat is highest from the silking to dough phases. The results also show that the northern counties of the Corn Belt are less affected by extreme heat but more vulnerable to water deficits.
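A minimal sketch of the kind of fixed-resolution regression compared in the study is shown below, with phase-specific precipitation and an extreme-heat term plus district fixed effects. The input file and column names are hypothetical; they are not the study's dataset or exact specification.

```python
# Hedged sketch: OLS model of county corn yield on phase-specific weather terms
# with agricultural-district fixed effects (illustrative columns, hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input table with columns: yield_bu_ac, year, district,
# precip_veg, precip_silking, precip_grainfill, edd_silking (extreme degree days).
df = pd.read_csv("county_yield_weather.csv")

model = smf.ols(
    "yield_bu_ac ~ precip_veg + precip_silking + precip_grainfill"
    " + edd_silking + C(district) + year",
    data=df,
).fit()
print(model.summary())
```

Re-estimating the same specification with county- or state-level aggregates, or with monthly or whole-season weather sums, is the kind of resolution comparison the study reports.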
Turbulent kinetics of a large wind farm and their impact in the neutral boundary layer
Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo; ...
2015-12-28
High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, a high-performance-computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference between turbines, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of the vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% in parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that account for interference within the wind farm should be used in wind farm parameterizations for larger-scale models to better describe sub-grid-scale turbulent processes.
NASA Astrophysics Data System (ADS)
Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.
2017-12-01
The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and by near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations to the fine scales at which cold-regions hydrological processes operate, in order to assess their spatial variability in complex terrain and to quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models, are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering the high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR, with its very low computational cost, was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well for precipitation and air temperature in the Canadian Rockies, while all three models show fair performance in simulating wind and humidity fields. The representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold-regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.
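A minimal sketch of the kind of station-based verification used to compare downscaled model output against meteorological measurements is given below; the scores (bias, RMSE, correlation) are standard choices, and the series names and synthetic data are illustrative only, not the study's actual evaluation code.

```python
# Illustrative verification of a model series against a station series.
import numpy as np

def verify(model_series: np.ndarray, station_series: np.ndarray) -> dict:
    """Return simple verification scores for one variable at one station."""
    diff = model_series - station_series
    return {
        "bias": float(np.mean(diff)),
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "corr": float(np.corrcoef(model_series, station_series)[0, 1]),
    }

# Example with synthetic hourly air temperature (deg C)
rng = np.random.default_rng(1)
obs = 5.0 + 8.0 * np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 1.0, 1000)
model = obs + rng.normal(0.3, 1.5, 1000)   # stand-in for an ICAR-like series
print(verify(model, obs))
```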
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time is approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
NASA Astrophysics Data System (ADS)
Xu, Z.; Rhoades, A.; Johansen, H.; Ullrich, P. A.; Collins, W. D.
2017-12-01
Dynamical downscaling is widely used to properly characterize the regional surface heterogeneities that shape the local hydroclimatology. However, the factors in dynamical downscaling, including the refinement of model horizontal resolution, the large-scale forcing datasets, and the dynamical cores, have not been fully evaluated. Two cutting-edge global-to-regional downscaling methods are used to assess these factors, specifically the variable-resolution Community Earth System Model (VR-CESM) and the Weather Research & Forecasting (WRF) regional climate model, under different horizontal resolutions (28, 14, and 7 km). Two groups of WRF simulations are driven by either the NCEP reanalysis dataset (WRF_NCEP) or VR-CESM outputs (WRF_VRCESM) to evaluate the effects of the large-scale forcing datasets. The impacts of the dynamical core are assessed by comparing the VR-CESM simulations to the coupled WRF_VRCESM simulations with the same physical parameterizations and similar grid domains. The simulated hydroclimatology (i.e., total precipitation, snow cover, snow water equivalent (SWE), and surface temperature) is compared with the reference datasets. The large-scale forcing datasets are critical to the WRF simulations for accurately simulating total precipitation, SWE, and snow cover, but not surface temperature. Both the WRF and VR-CESM results highlight that no significant benefit in the simulated hydroclimatology is found from simply refining the horizontal resolution from 28 to 7 km. Simulated surface temperature is sensitive to the choice of dynamical core. WRF generally simulates higher temperatures than VR-CESM, alleviating the systematic cold bias of DJF temperatures over the California mountain region but overestimating JJA temperatures in California's Central Valley.
NASA Astrophysics Data System (ADS)
Burney, J. A.; Goldblatt, R.
2016-12-01
Understanding drivers of land use change - and in particular, levels of ecosystem degradation - in semi-arid regions is of critical importance because these agroecosystems (1) are home to the world's poorest populations, almost all of whom depend on agriculture for their livelihoods, (2) play a critical role in the global carbon and climate cycles, and (3) have in many cases seen dramatic changes in temperature and precipitation, relative to global averages, over the past several decades. However, assessing ecosystem health (or, conversely, degradation) presents a difficult measurement problem. Established methods are very labor intensive and rest on detailed questionnaires and field assessments. High-resolution satellite imagery has a unique role in semi-arid ecosystem assessment in that it can be used for rapid (or repeated) and very simple measurements of tree and shrub density, an excellent overall indicator of dryland ecosystem health. Because trees and large shrubs are sparser in semi-arid regions, sub-meter resolution imagery in conjunction with automated image analysis can be used to assess density differences at high spatial resolution without expensive and time-consuming ground-truthing. This could be used down to the farm level, for example, to better assess the larger-scale ecosystem impacts of different management practices, to assess compliance with REDD+ carbon offset protocols, or to evaluate implementation of conservation goals. Here we present results comparing spatial and spectral remote sensing methods for semi-arid ecosystem assessment across new data sources, using the Brazilian Sertão as an example, and discuss the implications for large-scale use in semi-arid ecosystem science.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC method is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
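The following sketch illustrates the generic two-level multilevel Monte Carlo idea that the abstract builds on: many cheap low-fidelity samples estimate the bulk of the expectation, and a few coupled high-fidelity samples estimate the correction. The toy low- and high-fidelity functions are placeholders, not the flow-and-transport solvers or the rMLMC rescaling used in the study.

```python
# Hedged sketch of a two-level multilevel Monte Carlo estimator with toy models.
import numpy as np

rng = np.random.default_rng(42)

def low_fidelity(x):   # cheap, biased surrogate (illustrative)
    return np.sin(x) + 0.05 * x

def high_fidelity(x):  # expensive reference model (illustrative)
    return np.sin(x) + 0.05 * x + 0.02 * np.cos(5 * x)

# Many cheap samples estimate the low-fidelity mean ...
x_coarse = rng.uniform(0, np.pi, 10000)
mean_low = np.mean(low_fidelity(x_coarse))

# ... and a few expensive samples, evaluated on the same inputs for both
# models, estimate the correction E[high - low].
x_fine = rng.uniform(0, np.pi, 100)
mean_corr = np.mean(high_fidelity(x_fine) - low_fidelity(x_fine))

mlmc_estimate = mean_low + mean_corr
print("two-level estimate of E[Q]:", round(mlmc_estimate, 4))
```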
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim
2016-04-01
The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TBytes of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic sector as resolution increases. More specifically, the well-known negative bias in atmospheric blocking over Europe is effectively resolved. High resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - compared with the low resolution simulations. Including stochastic parameterization in the low resolution runs is shown to further improve some aspects of the MJO propagation. These findings show the importance of representing the impact of small scale processes on the large scale climate variability either explicitly (with high resolution simulations) or stochastically (in low resolution simulations).
Rebling, Johannes; Estrada, Héctor; Gottschalk, Sven; Sela, Gali; Zwack, Michael; Wissmeyer, Georg; Ntziachristos, Vasilis; Razansky, Daniel
2018-04-19
A critical link exists between pathological changes of cerebral vasculature and diseases affecting brain function. Microscopic techniques have played an indispensable role in the study of neurovascular anatomy and functions. Yet, investigations are often hindered by suboptimal trade-offs between the spatiotemporal resolution, field-of-view (FOV) and type of contrast offered by the existing optical microscopy techniques. We present a hybrid dual-wavelength optoacoustic (OA) biomicroscope capable of rapid transcranial visualization of large-scale cerebral vascular networks. The system offers 3-dimensional views of the morphology and oxygenation status of the cerebral vasculature with single capillary resolution and a FOV exceeding 6 × 8 mm2, thus covering the entire cortical vasculature in mice. The large-scale OA imaging capacity is complemented by simultaneously acquired pulse-echo ultrasound (US) biomicroscopy scans of the mouse skull. The new approach holds great potential to provide better insights into cerebrovascular function and facilitate efficient studies into neurological and vascular abnormalities of the brain. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling
USDA-ARS?s Scientific Manuscript database
We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...
NASA Astrophysics Data System (ADS)
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that the LES data generally agree very well with the filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties for any fixed LES resolution.
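As a simple illustration of the structure-function diagnostics referred to above, the sketch below estimates longitudinal structure functions S_p(r) = <|u(x+r) - u(x)|^p> from a one-dimensional synthetic signal. The actual analysis operates on full LES/DNS fields and on correlations with the SGS stress, which are not reproduced here.

```python
# Illustrative computation of p-th order structure functions on a 1-D signal.
import numpy as np

rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=4096))   # synthetic rough signal standing in for u(x)
u -= u.mean()

def structure_function(u, r, p):
    """p-th order structure function at separation r (in grid points)."""
    du = u[r:] - u[:-r]
    return np.mean(np.abs(du) ** p)

separations = np.array([1, 2, 4, 8, 16, 32, 64])
for p in (2, 4):
    s = [structure_function(u, r, p) for r in separations]
    print(f"S_{p}(r):", np.round(s, 3))
```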
NASA Astrophysics Data System (ADS)
Turner, Alexander J.; Jacob, Daniel J.; Benmergui, Joshua; Brandman, Jeremy; White, Laurent; Randles, Cynthia A.
2018-06-01
Anthropogenic methane emissions originate from a large number of fine-scale and often transient point sources. Satellite observations of atmospheric methane columns are an attractive approach for monitoring these emissions but have limitations from instrument precision, pixel resolution, and measurement frequency. Dense observations will soon be available in both low-Earth and geostationary orbits, but the extent to which they can provide fine-scale information on methane sources has yet to be explored. Here we present an observation system simulation experiment (OSSE) to assess the capabilities of different satellite observing system configurations. We conduct a 1-week WRF-STILT simulation to generate methane column footprints at 1.3 × 1.3 km2 spatial resolution and hourly temporal resolution over a 290 × 235 km2 domain in the Barnett Shale, a major oil and gas field in Texas with a large number of point sources. We sub-sample these footprints to match the observing characteristics of the recently launched TROPOMI instrument (7 × 7 km2 pixels, 11 ppb precision, daily frequency), the planned GeoCARB instrument (2.7 × 3.0 km2 pixels, 4 ppb precision, nominal twice-daily frequency), and other proposed observing configurations. The information content of the various observing systems is evaluated using the Fisher information matrix and its eigenvalues. We find that a week of TROPOMI observations should provide information on temporally invariant emissions at ~30 km spatial resolution. GeoCARB should provide information on temporally invariant emissions at ~2-7 km spatial resolution, depending on sampling frequency (hourly to daily). Improvements to the instrument precision yield greater increases in information content than improved sampling frequency. A precision better than 6 ppb is critical for GeoCARB to achieve fine resolution of emissions. Transient emissions would be missed with either TROPOMI or GeoCARB. An aspirational high-resolution geostationary instrument with 1.3 × 1.3 km2 pixel resolution, hourly return time, and 1 ppb precision would effectively constrain the temporally invariant emissions in the Barnett Shale at the kilometer scale and provide some information on the hourly variability of sources.
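A minimal sketch of the information-content calculation for a linear Gaussian observing system is shown below: the Fisher information matrix F = K^T S_o^{-1} K is formed from a Jacobian K and the observation-error covariance, and its eigenvalues count the well-constrained emission modes. The random Jacobian and the precision value are placeholders; in the study K comes from the WRF-STILT column footprints.

```python
# Sketch of Fisher-information evaluation for a linear Gaussian observing system.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_state = 500, 50                           # observations, emission elements
K = rng.normal(0.0, 1.0, size=(n_obs, n_state))    # footprints (ppb per unit emission)

precision_ppb = 7.0                                # instrument precision (assumed value)
S_o_inv = np.eye(n_obs) / precision_ppb**2         # inverse observation-error covariance

F = K.T @ S_o_inv @ K                              # Fisher information matrix
eigvals = np.linalg.eigvalsh(F)[::-1]              # sorted, largest first
print("largest eigenvalues:", np.round(eigvals[:5], 2))
print("number of well-constrained modes (eig > 1):", int(np.sum(eigvals > 1.0)))
```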
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Dual-axis confocal microscope for high-resolution in vivo imaging
Wang, Thomas D.; Mandella, Michael J.; Contag, Christopher H.; Kino, Gordon S.
2007-01-01
We describe a novel confocal microscope that uses separate low-numerical-aperture objectives with the illumination and collection axes crossed at angle θ from the midline. This architecture collects images in scattering media with high transverse and axial resolution, long working distance, large field of view, and reduced noise from scattered light. We measured transverse and axial (FWHM) resolution of 1.3 and 2.1 μm, respectively, in free space, and confirm subcellular resolution in excised esophageal mucosa. The optics may be scaled to millimeter dimensions and fiber coupled for collection of high-resolution images in vivo. PMID:12659264
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere, developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - representing large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) mixed-precision models to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when the small-scale variables are resolved in half precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
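The sketch below gives a hedged illustration of the underlying idea using the standard two-tier Lorenz '96 system, with the small-scale variables stored in half precision (numpy float16) while the large scales remain in double precision. The paper's actual system adds a third tier and emulates inexact hardware more carefully; parameter values here are conventional choices, not those of the study.

```python
# Two-tier Lorenz '96 with half-precision small-scale variables (illustrative).
import numpy as np

K, J = 8, 32                 # number of large-scale and (per-X) small-scale variables
F, h, c, b = 10.0, 1.0, 10.0, 10.0

def tendencies(X, Y):
    # Large-scale tendency with coupling to the sum of the associated Y block
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
          - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    # Small-scale tendency (fast, strongly damped variables)
    dY = (-c * b * np.roll(Y, -1) * (np.roll(Y, -2) - np.roll(Y, 1)) - c * Y
          + (h * c / b) * np.repeat(X, J))
    return dX, dY

rng = np.random.default_rng(3)
X = rng.normal(0, 1, K)                             # double-precision large scales
Y = rng.normal(0, 0.1, K * J).astype(np.float16)    # half-precision small scales

dt = 0.001
for _ in range(5000):                               # simple forward-Euler integration
    dX, dY = tendencies(X, Y.astype(np.float64))
    X = X + dt * dX
    Y = (Y.astype(np.float64) + dt * dY).astype(np.float16)

print("large-scale state:", np.round(X, 2))
```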
Validation of Satellite Retrieved Land Surface Variables
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1999-01-01
The effective use of satellite observations of the land surface is limited by the lack of high spatial resolution ground data sets for validation of satellite products. Recent large-scale field experiments include FIFE, HAPEX-Sahel, and BOREAS, which provide data sets with large spatial and long temporal coverage. The objective of this paper is to characterize the difference between the satellite estimates and the ground observations. This study and others along similar lines will help us utilize satellite-retrieved data in large-scale modeling studies.
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid dynamic experiments. The maturing of tomographic PIV allows its application in medium to large scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity. A high power laser with 800 mJ per pulse is therefore used together with low-noise sCMOS cameras, mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras operate in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously achieved. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomographic PIV in wind tunnels. Supported by EU project no. 265695.
Regional climate model sensitivity to domain size
NASA Astrophysics Data System (ADS)
Leduc, Martin; Laprise, René
2009-05-01
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are then used to drive a set of four simulations (LBs, for Little Brothers) with the same model but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) as the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows the detection of important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
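As an illustration of the "degrade then re-drive" step of the perfect-model approach, the sketch below applies a spectral low-pass filter to a two-dimensional field to emulate coarse-resolution driving data. The cutoff wavenumber and the synthetic field are placeholders rather than the filter actually used to produce the FBB data.

```python
# Spectral low-pass filter emulating coarse-resolution driving fields (sketch).
import numpy as np

def lowpass_2d(field: np.ndarray, k_cut: int) -> np.ndarray:
    """Remove all Fourier modes with total wavenumber above k_cut."""
    ny, nx = field.shape
    fk = np.fft.fft2(field)
    ky = np.fft.fftfreq(ny) * ny
    kx = np.fft.fftfreq(nx) * nx
    ktot = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    fk[ktot > k_cut] = 0.0
    return np.real(np.fft.ifft2(fk))

rng = np.random.default_rng(5)
hires_field = rng.normal(size=(128, 128))      # stand-in for a big-brother field
filtered = lowpass_2d(hires_field, k_cut=8)    # emulated coarse driving data
print("variance before/after filtering:",
      round(hires_field.var(), 3), round(filtered.var(), 3))
```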
Physical basis for river segmentation from water surface observables
NASA Astrophysics Data System (ADS)
Samine Montazem, A.; Garambois, P. A.; Calmant, S.; Moreira, D. M.; Monnier, J.; Biancamaria, S.
2017-12-01
With the advent of satellite missions such as SWOT, we will have access to high resolution estimates of the elevation, slope, and width of the free surface. A segmentation strategy is required in order to sub-sample the data set into reach master points for further hydraulic analyses and inverse modelling. The question that arises is: what node repartition strategy best preserves the hydraulic properties of river flow? The concept of hydraulic visibility introduced by Garambois et al. (2016) is investigated in order to highlight and characterize the spatio-temporal variations of water surface slope and curvature for different flow regimes and reach geometries. We show that free surface curvature is a powerful proxy for characterizing the hydraulic behavior of a reach, since the concavity of the water surface is driven by variations in channel geometry that impact the hydraulic properties of the flow. We evaluated the performance of three segmentation strategies by means of a well-documented case, that of the Garonne river in France. We conclude that local extrema of free surface curvature are the best candidates for locating segment boundaries for an optimal hydraulic representation of the segmented river. We show that different segmentation scales are possible for a given river: from a fine-scale segmentation driven by fine-scale hydraulics to a large-scale segmentation driven by large-scale geomorphology. The segmentation technique is then applied to high resolution GPS profiles of free surface elevation collected on the Negro river basin, a major contributor to the Amazon river. We propose two segmentations: a low-resolution one that can be used for basin hydrology and a higher resolution one better suited for local hydrodynamic studies.
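A minimal sketch of the curvature-based segmentation idea follows: the free-surface slope and curvature are computed along a longitudinal profile, and local extrema of the curvature are taken as candidate segment boundaries. The synthetic water-surface profile is a placeholder for SWOT-like or GPS elevation data.

```python
# Locate candidate segment boundaries at local extrema of water-surface curvature.
import numpy as np

x = np.linspace(0.0, 100e3, 2001)                          # downstream distance (m)
z = 50.0 - 2e-4 * x + 0.5 * np.sin(2 * np.pi * x / 20e3)   # synthetic water surface (m)

slope = np.gradient(z, x)            # free-surface slope
curv = np.gradient(slope, x)         # free-surface curvature (proxy for concavity)

# Local extrema of curvature: sign changes of its first difference
dcurv = np.diff(curv)
extrema = np.where(np.sign(dcurv[:-1]) != np.sign(dcurv[1:]))[0] + 1

print("number of candidate segment boundaries:", extrema.size)
print("first few boundary locations (km):", np.round(x[extrema][:5] / 1e3, 1))
```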
NASA Astrophysics Data System (ADS)
Deo, Ram K.; Domke, Grant M.; Russell, Matthew B.; Woodall, Christopher W.; Andersen, Hans-Erik
2018-05-01
Aboveground biomass (AGB) estimates for regional-scale forest planning have become cost-effective with free access to satellite data from sensors such as Landsat and MODIS. However, the accuracy of AGB predictions based on passive optical data depends on the spatial resolution and the spatial extent of the target area, as fine resolution (small pixel) data are associated with smaller coverage and longer repeat cycles than coarse resolution data. This study evaluated the effect of various spatial resolutions of Landsat-derived predictors on the accuracy of regional AGB models at three different sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We combined national forest inventory data with Landsat-derived predictors at spatial resolutions ranging from 30 to 1000 m to understand the optimal spatial resolution of optical data for large-area (regional) AGB estimation. Ten generic models were developed using the data collected in 2014, 2015 and 2016, and the predictions were evaluated (i) at the county level against the estimates of the USFS Forest Inventory and Analysis Program, which relied on the EVALIDator tool and national forest inventory data from the 2009–2013 cycle, and (ii) within a large number of strips (~1 km wide) predicted via LiDAR metrics at 30 m spatial resolution. The county-level estimates by the EVALIDator and the Landsat models were highly related (R2 > 0.66), although the R2 varied significantly across sites and resolutions of predictors. The mean and standard deviation of county-level estimates followed increasing and decreasing trends, respectively, with models of coarser resolution. The Landsat-based total AGB estimates were larger than the LiDAR-based total estimates within the strips; however, the mean LiDAR-based AGB predictions were mostly within one standard deviation of the mean predictions obtained from the Landsat-based model at any of the resolutions. We conclude that satellite data at resolutions up to 1000 m provide acceptable accuracy for continental-scale analysis of AGB.
Atomic-scale imaging of DNA using scanning tunnelling microscopy.
Driscoll, R J; Youngquist, M G; Baldeschwieler, J D
1990-07-19
The scanning tunnelling microscope (STM) has been used to visualize DNA under water, under oil and in air. Images of single-stranded DNA have shown that submolecular resolution is possible. Here we describe atomic-resolution imaging of duplex DNA. Topographic STM images of uncoated duplex DNA on a graphite substrate obtained in ultra-high vacuum are presented that show double-helical structure, base pairs, and atomic-scale substructure. Experimental STM profiles show excellent correlation with atomic contours of the van der Waals surface of A-form DNA derived from X-ray crystallography. A comparison of variations in the barrier to quantum mechanical tunnelling (barrier-height) with atomic-scale topography shows correlation over the phosphate-sugar backbone but anticorrelation over the base pairs. This relationship may be due to the different chemical characteristics of parts of the molecule. Further investigation of this phenomenon should lead to a better understanding of the physics of imaging adsorbates with the STM and may prove useful in sequencing DNA. The improved resolution compared with previously published STM images of DNA may be attributable to ultra-high vacuum, high data-pixel density, slow scan rate, a fortuitously clean and sharp tip and/or a relatively dilute and extremely clean sample solution. This work demonstrates the potential of the STM for characterization of large biomolecular structures, but additional development will be required to make such high resolution imaging of DNA and other large molecules routine.
Stochastic Downscaling of Digital Elevation Models
NASA Astrophysics Data System (ADS)
Rasera, Luiz Gustavo; Mariethoz, Gregoire; Lane, Stuart N.
2016-04-01
High-resolution digital elevation models (HR-DEMs) are extremely important for the understanding of small-scale geomorphic processes in Alpine environments. In the last decade, remote sensing techniques have undergone a major technological evolution, enabling fast and precise acquisition of HR-DEMs. However, sensors designed to measure elevation data still differ in their spatial resolution and coverage capabilities. Terrestrial altimetry allows the acquisition of HR-DEMs with centimeter- to millimeter-level precision, but only within small spatial extents and often with dead-ground problems. Conversely, satellite radiometric sensors are able to gather elevation measurements over large areas but with limited spatial resolution. In the present study, we propose an algorithm to downscale low-resolution satellite-based DEMs using topographic patterns extracted from HR-DEMs derived, for example, from ground-based and airborne altimetry. The method is a multiple-point geostatistical simulation technique able to generate high-resolution elevation data from low-resolution digital elevation models (LR-DEMs). Initially, two collocated DEMs with different spatial resolutions serve as input to construct a database of topographic patterns, which is also used to infer the statistical relationships between the two scales. High-resolution elevation patterns are then retrieved from the database to downscale a LR-DEM through a stochastic simulation process. The outputs of the simulations are multiple, equally probable DEMs with higher spatial resolution that also depict the large-scale geomorphic structures present in the original LR-DEM. As these multiple models reflect the uncertainty related to the downscaling, they can be employed to quantify the uncertainty of phenomena that are dependent on fine topography, such as catchment hydrological processes. The proposed methodology is illustrated for a case study in the Swiss Alps. A swissALTI3D HR-DEM (with 5 m resolution) and an SRTM-derived LR-DEM from the Western Alps are used to downscale an SRTM-based LR-DEM from the eastern part of the Alps. The results show that the method is capable of generating multiple high-resolution synthetic DEMs that reproduce the spatial structure and statistics of the original DEM.
NASA Astrophysics Data System (ADS)
Deo, R. K.; Domke, G. M.; Russell, M.; Woodall, C. W.
2017-12-01
Landsat data have been widely used to support strategic forest inventory and management decisions despite the limited success of passive optical remote sensing for accurate estimation of aboveground biomass (AGB). The archive of publicly available Landsat data, available at 30 m spatial resolution since 1984, has been a valuable resource for cost-effective large-area estimation of AGB to inform national requirements such as the US national greenhouse gas inventory (NGHGI). In addition, other optical satellite data, such as MODIS imagery with wider spatial coverage and higher temporal resolution, are enriching the domain of spatial predictors for regional-scale mapping of AGB. Because NGHGIs require national-scale AGB information and there are tradeoffs between the prediction accuracy and the operational efficiency of Landsat, this study evaluated the impact of various resolutions of Landsat predictors on the accuracy of regional AGB models across three different sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We used recent national forest inventory (NFI) data with numerous Landsat-derived predictors at ten different spatial resolutions, ranging from 30 to 1000 m, to understand the optimal spatial resolution of the optical data for enhanced spatial inventory of AGB for NGHGI reporting. Ten generic spatial models at different spatial resolutions were developed for all sites, and large-area estimates were evaluated (i) at the county level against independent design-based estimates via the US NFI EVALIDator tool and (ii) within a large number of strips (about 1 km wide) predicted via LiDAR metrics at high spatial resolution. The county-level estimates by the EVALIDator and the Landsat models were statistically equivalent and produced coefficients of determination (R2) above 0.85 that varied with sites and resolution of predictors. The mean and standard deviation of county-level estimates followed increasing and decreasing trends, respectively, with models at coarser resolutions. The Landsat-based total AGB estimates within the strips did not differ significantly from the totals obtained using LiDAR metrics and were within ±15 Mg/ha for each of the sites. We conclude that optical satellite data at resolutions up to 1000 m provide acceptable accuracy for the US NGHGI.
NASA Astrophysics Data System (ADS)
Hewitt, Helene T.; Bell, Michael J.; Chassignet, Eric P.; Czaja, Arnaud; Ferreira, David; Griffies, Stephen M.; Hyder, Pat; McClean, Julie L.; New, Adrian L.; Roberts, Malcolm J.
2017-12-01
As the importance of the ocean in the weather and climate system is increasingly recognised, operational systems are now moving towards coupled prediction not only for seasonal to climate timescales but also for short-range forecasts. A three-way tension exists between the allocation of computing resources to refine model resolution, the expansion of model complexity/capability, and the increase of ensemble size. Here we review evidence for the benefits of increased ocean resolution in global coupled models, where the ocean component explicitly represents transient mesoscale eddies and narrow boundary currents. We consider lessons learned from forced ocean/sea-ice simulations; from studies concerning the SST resolution required to impact atmospheric simulations; and from coupled predictions. Impacts of the mesoscale ocean in western boundary current regions on the large-scale atmospheric state have been identified. Understanding of air-sea feedback in western boundary currents is modifying our view of the dynamics in these key regions. It remains unclear whether variability associated with open ocean mesoscale eddies is equally important to the large-scale atmospheric state. We include a discussion of what processes can presently be parameterised in coupled models with coarse resolution non-eddying ocean models, and where parameterizations may fall short. We discuss the benefits of resolution and identify gaps in the current literature that leave important questions unanswered.
The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation
NASA Astrophysics Data System (ADS)
Noh, Yookyung
The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large, deep surveys in multi-wavelength bands become possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer us complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.
Strategy for large-scale isolation of enantiomers in drug discovery.
Leek, Hanna; Thunberg, Linda; Jonson, Anna C; Öhlén, Kristina; Klarqvist, Magnus
2017-01-01
A strategy for large-scale chiral resolution is illustrated by the isolation of a pure enantiomer from a 5 kg batch. Results from supercritical fluid chromatography are presented and compared with normal phase liquid chromatography. The solubility of the compound in the supercritical mobile phase was shown to be the limiting factor. To circumvent this, extraction injection was used but was shown not to be efficient for this compound. Finally, a method for chiral resolution by crystallization was developed and applied to give a diastereomeric salt with an enantiomeric excess of 99% at a 91% yield. Direct access to a diverse separation toolbox is shown to be essential for solving separation problems in the most cost- and time-efficient way. Copyright © 2016 Elsevier Ltd. All rights reserved.
High resolution modeling of reservoir storage and extent dynamics at the continental scale
NASA Astrophysics Data System (ADS)
Shin, S.; Pokhrel, Y. N.
2017-12-01
Over the past decade, significant progress has been made in developing reservoir schemes in large scale hydrological models to better simulate hydrological fluxes and storages in highly managed river basins. These schemes have been successfully used to study the impact of reservoir operation on global river basins. However, improvements to the existing schemes are needed for hydrological fluxes and storages, especially at the spatial resolutions to be used in hyper-resolution hydrological modeling. In this study, we developed a reservoir routing scheme with explicit representation of reservoir storage and extent at a grid scale of 5 km or less. Instead of setting the reservoir area to a fixed value or diagnosing it using an area-storage equation, which is the commonly used approach in existing reservoir schemes, we explicitly simulate the inundated storage and area for all grid cells that lie within the reservoir extent. This approach enables a better simulation of river-floodplain-reservoir storage by considering both natural flooding and man-made reservoir storage. Results for the seasonal dynamics of reservoir storage, river discharge downstream of dams, and the reservoir inundation extent are evaluated against various datasets from ground observations and satellite measurements. The new model captures the dynamics of these variables with good accuracy for most of the large reservoirs in the western United States. It is expected that the incorporation of the newly developed reservoir scheme in large-scale land surface models (LSMs) will lead to improved simulation of river flow and terrestrial water storage in highly managed river basins.
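The sketch below illustrates, under stated assumptions, the "explicit" treatment described above: given a gridded bed elevation for the reservoir area and a water surface elevation, storage and inundated extent are summed cell by cell rather than read from a lumped area-storage curve. The synthetic bed elevation and the 5 km cell size are placeholders, not the model's actual configuration.

```python
# Per-cell reservoir storage and inundated extent from a bed-elevation grid.
import numpy as np

rng = np.random.default_rng(4)
cell_area = (5e3) ** 2                                  # 5 km grid cell (m^2)
bed = 300.0 + 20.0 * rng.random((40, 40))               # synthetic bed elevation (m)

def storage_and_extent(water_level: float):
    """Explicit per-cell storage (m^3) and inundated area (m^2)."""
    depth = np.clip(water_level - bed, 0.0, None)
    wet = depth > 0.0
    return float(np.sum(depth) * cell_area), float(np.sum(wet) * cell_area)

for level in (305.0, 310.0, 318.0):
    s, a = storage_and_extent(level)
    print(f"level {level:.0f} m: storage = {s/1e9:.2f} km^3, area = {a/1e6:.0f} km^2")
```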
NASA Astrophysics Data System (ADS)
Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel
2017-03-01
A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of both cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that the convection parameterizations are scale-aware. Even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of the convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.
High-resolution Observations of Hα Spectra with a Subtractive Double Pass
NASA Astrophysics Data System (ADS)
Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.
2018-02-01
High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with a diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a similar performance as FPI-based systems, with high spatial and moderate spectral resolution across a FOV of 100'' × 100'' and a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine structure in the line-core intensity of Hα at spatial scales of about 0.5'' exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence that is comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.
Satellite-based peatland mapping: potential of the MODIS sensor.
D. Pflugmacher; O.N. Krankina; W.B. Cohen
2006-01-01
Peatlands play a major role in the global carbon cycle but are largely overlooked in current large-scale vegetation mapping efforts. In this study, we investigated the potential of the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to capture extent and distribution of peatlands in the St. Petersburg region of Russia.
NASA Astrophysics Data System (ADS)
Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin
2017-10-01
Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected the enhanced vegetation index (EVI, 250 m) from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple spatial resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of the spring season (SOS) at the various spatial resolutions. Further, the SOS variation across scales was examined for each coarse resolution grid (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influence of satellite spatial resolution on SOS retrievals and the related impact factors. Specifically, SOS varied significantly, linearly or logarithmically, across scales, although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m over large ecosystem regions were generally similar, with differences of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in northeastern than in southern and western regions. The SOS scaling effect was significantly associated with the heterogeneity of vegetation properties characterized using landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, with the latter being the most important impact factor.
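As an illustration of the multi-resolution analysis described above, the sketch below block-averages a synthetic fine-resolution EVI time series to several coarser grids and retrieves a start-of-season date at each scale with a simple half-amplitude threshold. The threshold rule, the logistic green-up curve, and all values are placeholders standing in for the MOD13Q1 data and the retrieval method actually used.

```python
# Block-average an EVI time series to coarser grids and compare SOS retrievals.
import numpy as np

def block_average(arr, n):
    """Aggregate the last two axes of arr to n x n blocks."""
    ny, nx = arr.shape[-2:]
    a = arr[..., : ny - ny % n, : nx - nx % n]
    return a.reshape(*arr.shape[:-2], ny // n, n, -1, n).mean(axis=(-3, -1))

def sos_from_series(evi):
    """First time step at which EVI exceeds half of its seasonal amplitude."""
    thresh = evi.min(axis=0) + 0.5 * (evi.max(axis=0) - evi.min(axis=0))
    return np.argmax(evi >= thresh, axis=0)

rng = np.random.default_rng(9)
days = np.arange(0, 365, 8)                             # 8-day composite dates
onset = rng.normal(120, 15, size=(128, 128))            # spatially varying green-up day
evi = 0.2 + 0.5 / (1 + np.exp(-(days[:, None, None] - onset) / 10.0))

for n in (1, 4, 16, 32):
    sos = days[sos_from_series(block_average(evi, n))]
    print(f"{n * 250:>5} m grid: mean SOS = {sos.mean():.1f} (day of year)")
```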
Towards a Fine-Resolution Global Coupled Climate System for Prediction on Decadal/Centennial Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Julie L.
The over-arching goal of this project was to contribute to the realization of a fully coupled fine-resolution Earth System Model simulation in which a weather-scale atmosphere is coupled to an ocean in which mesoscale eddies are largely resolved. Both a prototype fine-resolution fully coupled ESM simulation and a first-ever multi-decadal forced fine-resolution global coupled ocean/ice simulation were configured, tested, run, and analyzed as part of this grant. Science questions focused on the gains from the use of high horizontal resolution, particularly in the ocean and sea ice, with respect to climatically important processes. Both these fine-resolution coupled ocean/sea-ice and fully coupled simulations and the precedent stand-alone eddy-resolving ocean and eddy-permitting coupled ocean/ice simulations were used to explore the high resolution regime. Overall, these studies showed that the presence of mesoscale eddies significantly impacted mixing processes and the global meridional overturning circulation in the ocean simulations. Fourteen refereed publications and a Ph.D. dissertation resulted from this grant.
Map Scale, Proportion, and Google[TM] Earth
ERIC Educational Resources Information Center
Roberge, Martin C.; Cooper, Linda L.
2010-01-01
Aerial imagery has a great capacity to engage and maintain student interest while providing a contextual setting to strengthen their ability to reason proportionally. Free, on-demand, high-resolution, large-scale aerial photography provides both a bird's eye view of the world and a new perspective on one's own community. This article presents an…
Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography
Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.
1972-01-01
Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.
The scientific targets of the SCOPE mission
NASA Astrophysics Data System (ADS)
Fujimoto, M.; Saito, Y.; Tsuda, Y.; Shinohara, I.; Kojima, H.
The future Japanese magnetospheric mission "SCOPE" is now under study (planned to be launched in 2012). The main purpose of this mission is to investigate the dynamic behaviors of plasmas in the Earth's magnetosphere from the viewpoint of cross-scale coupling. Dynamical collisionless space plasma phenomena, be they large scale as a whole, are characterized by coupling over various time and spatial scales. The best example is the magnetic reconnection process, which is a large scale energy conversion process but has a small key region at the heart of its engine. Inside the key region, electron-scale dynamics plays the key role in liberating the frozen-in constraint, by which reconnection is allowed to proceed. The SCOPE mission is composed of one large mother satellite and four small daughter satellites. The mother spacecraft will be equipped with an electron detector that has 10 msec time resolution, so that scales down to the electron scale will be resolved. Three of the four daughter satellites surround the mother satellite three-dimensionally, with mutual distances between several km and several thousand km, which are varied during the mission. Plasma measurements on these spacecraft will have 1 sec resolution and will provide information on meso-scale plasma structure. The fourth daughter satellite stays near the mother satellite at a distance of less than 100 km. By correlation between the two plasma wave instruments on the daughter and mother spacecraft, the propagation of the waves and information on the electron-scale dynamics will be obtained. With this strategy, both meso- and micro-scale information on the dynamics is obtained, which will enable us to investigate the physics of space plasma from the cross-scale coupling point of view.
Targeted carbon conservation at national scales with high-resolution monitoring
Asner, Gregory P.; Knapp, David E.; Martin, Roberta E.; Tupayachi, Raul; Anderson, Christopher B.; Mascaro, Joseph; Sinca, Felipe; Chadwick, K. Dana; Higgins, Mark; Farfan, William; Llactayo, William; Silman, Miles R.
2014-01-01
Terrestrial carbon conservation can provide critical environmental, social, and climate benefits. Yet, the geographically complex mosaic of threats to, and opportunities for, conserving carbon in landscapes remain largely unresolved at national scales. Using a new high-resolution carbon mapping approach applied to Perú, a megadiverse country undergoing rapid land use change, we found that at least 0.8 Pg of aboveground carbon stocks are at imminent risk of emission from land use activities. Map-based information on the natural controls over carbon density, as well as current ecosystem threats and protections, revealed three biogeographically explicit strategies that fully offset forthcoming land-use emissions. High-resolution carbon mapping affords targeted interventions to reduce greenhouse gas emissions in rapidly developing tropical nations. PMID:25385593
A multimodel intercomparison of resolution effects on precipitation: simulations and theory
NASA Astrophysics Data System (ADS)
Rauscher, Sara A.; O'Brien, Travis A.; Piani, Claudio; Coppola, Erika; Giorgi, Filippo; Collins, William D.; Lawston, Patricia M.
2016-10-01
An ensemble of six pairs of RCM experiments performed at 25 and 50 km for the period 1961-2000 over a large European domain is examined in order to evaluate the effects of resolution on the simulation of daily precipitation statistics. Application of the non-parametric two-sample Kolmogorov-Smirnov test, which tests for differences in the location and shape of the probability distributions of two samples, shows that the distribution of daily precipitation differs between the pairs of simulations over most land areas in both summer and winter, with the strongest signal over southern Europe. Two-dimensional histograms reveal that precipitation intensity increases with resolution over almost the entire domain in both winter and summer. In addition, the 25 km simulations have more dry days than the 50 km simulations. The increase in dry days with resolution is indicative of an improvement in model performance at higher resolution, while the more intense precipitation exceeds observed values. The systematic increase in precipitation extremes with resolution across all models suggests that this response is fundamental to model formulation. Simple theoretical arguments suggest that fluid continuity, combined with the emergent scaling properties of the horizontal wind field, results in an increase in resolved vertical transport as grid spacing decreases. This increase in resolution-dependent vertical mass flux then drives an intensification of convergence and resolvable-scale precipitation as grid spacing decreases. This theoretical result could help explain the increasingly, and often anomalously, large stratiform contribution to total rainfall observed with increasing resolution in many regional and global models.
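A minimal sketch (not the study's code) of the non-parametric two-sample Kolmogorov-Smirnov test described above, applied to two daily precipitation series; the gamma-distributed stand-in data and array names are assumptions.

```python
# Compare daily precipitation distributions from a 25 km and a 50 km simulation
# at one grid cell with the two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
precip_25km = rng.gamma(shape=0.6, scale=8.0, size=14600)  # stand-in 40-yr daily series (mm/day)
precip_50km = rng.gamma(shape=0.7, scale=6.0, size=14600)

stat, p_value = ks_2samp(precip_25km, precip_50km)
if p_value < 0.05:
    print(f"Distributions differ (D={stat:.3f}, p={p_value:.2e})")
else:
    print(f"No significant difference (D={stat:.3f}, p={p_value:.2e})")
```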
Stable clustering and the resolution of dissipationless cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco
2017-10-01
The determination of the resolution of cosmological N-body simulations, i.e., the range of scales over which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically, the association of the two scales is in fact simple to understand: stable clustering fails to be a good approximation when there are strong interactions of structures (in particular merging), and it is precisely such non-linear processes which are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short-distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works are the result of over-optimistic assumptions about the scales resolved accurately by the measured power spectra, and the reliance on Fourier space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.
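For illustration of the configuration-space statistic emphasized above, here is a hedged sketch of a simple two-point correlation function estimator from particle positions via pair counting (the natural estimator DD/RR - 1, ignoring periodicity); the inputs, bin choices, and helper name are hypothetical, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_of_r(pos, box_size, r_bins, n_random=None):
    """Estimate xi(r) for particles at positions pos (n, 3) in a cubic box."""
    n = len(pos)
    n_random = n_random or n
    rng = np.random.default_rng(1)
    rand = rng.uniform(0.0, box_size, size=(n_random, 3))

    tree_d, tree_r = cKDTree(pos), cKDTree(rand)
    # cumulative pair counts within each radius, differenced into radial shells
    dd = np.diff(tree_d.count_neighbors(tree_d, r_bins).astype(float))
    rr = np.diff(tree_r.count_neighbors(tree_r, r_bins).astype(float))
    rr *= (n * n) / (n_random * n_random)   # normalise random counts to the data density
    return dd / rr - 1.0                    # natural estimator DD/RR - 1

# usage: xi = xi_of_r(positions, box_size=100.0, r_bins=np.logspace(-1, 1, 20))
```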
The impact of mesoscale convective systems on global precipitation: A modeling study
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo
2017-04-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) is conducted and results in both reduced surface rainfall and evaporation.
Terrestrial photography as a complementary measurement in weather stations for snow monitoring
NASA Astrophysics Data System (ADS)
Pimentel, Rafael; José Pérez-Palazón, María; Herrero, Javier; José Polo, María
2015-04-01
Snow monitoring is essential for understanding snow behaviour and evolution, which have particular features in semiarid regions (e.g., very strong spatiotemporal variability and the occurrence of several accumulation-melt cycles throughout the year). On one hand, traditional snow observations, such as snow surveys and snow pillows, suffer from limited accessibility during the snow season and cannot cover vast areas. On the other hand, satellite remote sensing techniques, largely employed in medium- to large-scale regional studies, have the disadvantage of fixed spatial and temporal resolutions, which in some cases cannot reproduce snow processes at small scales. An economical alternative is terrestrial photography, whose scales can be adapted to the problem under study; at microscale resolution it permits continuous monitoring of snow, adapting the resolution of the observation to the scales of the processes. Besides its use as a raw observation dataset to calibrate and validate model results, terrestrial photography constitutes valuable information to complement weather station observations. It allows possible errors in meteorological observations (e.g., overestimation of rain measurements) to be identified and enables a better understanding of snow behaviour under certain weather agents (e.g., blowing snow). Thus, terrestrial photography is a feasible and convenient technique to be included in weather monitoring stations in mountainous areas of semiarid regions.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual perception. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
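A hedged sketch of the peak signal-to-noise ratio (PSNR) metric used for the comparison above, computed between a ground-truth and a super-resolved sub-aperture image; the 8-bit grayscale inputs and function name are assumptions, not the paper's evaluation code.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """PSNR in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```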
NASA Astrophysics Data System (ADS)
Mendoza, Pablo A.; Mizukami, Naoki; Ikeda, Kyoko; Clark, Martyn P.; Gutmann, Ethan D.; Arnold, Jeffrey R.; Brekke, Levi D.; Rajagopalan, Balaji
2016-10-01
We examine the effects of regional climate model (RCM) horizontal resolution and forcing scaling (i.e., spatial aggregation of meteorological datasets) on the portrayal of climate change impacts. Specifically, we assess how the above decisions affect: (i) historical simulation of signature measures of hydrologic behavior, and (ii) projected changes in terms of annual water balance and hydrologic signature measures. To this end, we conduct our study in three catchments located in the headwaters of the Colorado River basin. Meteorological forcings for current and a future climate projection are obtained at three spatial resolutions (4-, 12- and 36-km) from dynamical downscaling with the Weather Research and Forecasting (WRF) regional climate model, and hydrologic changes are computed using four different hydrologic model structures. These projected changes are compared to those obtained from running hydrologic simulations with current and future 4-km WRF climate outputs re-scaled to 12- and 36-km. The results show that the horizontal resolution of WRF simulations heavily affects basin-averaged precipitation amounts, propagating into large differences in simulated signature measures across model structures. The implications of re-scaled forcing datasets on historical performance were primarily observed on simulated runoff seasonality. We also found that the effects of WRF grid resolution on projected changes in mean annual runoff and evapotranspiration may be larger than the effects of hydrologic model choice, which surpasses the effects from re-scaled forcings. Scaling effects on projected variations in hydrologic signature measures were found to be generally smaller than those coming from WRF resolution; however, forcing aggregation in many cases reversed the direction of projected changes in hydrologic behavior.
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.
NASA Astrophysics Data System (ADS)
Huisman, J. A.; Brogi, C.; Pätzold, S.; Weihermueller, L.; von Hebel, C.; Van Der Kruk, J.; Vereecken, H.
2017-12-01
Subsurface structures of the vadose zone can play a key role in crop yield potential, especially during water stress periods. Geophysical techniques like electromagnetic induction (EMI) can provide information about dominant shallow subsurface features. However, previous studies with EMI have typically not reached beyond the field scale. We used high-resolution, large-scale, multi-configuration EMI measurements to characterize patterns of soil structural organization (layering and texture) and their impact on crop productivity at the km2 scale. We collected EMI data on an agricultural area of 1 km2 (102 ha) near Selhausen (NRW, Germany). The area consists of 51 agricultural fields cropped in rotation; therefore, measurements were collected between April and December 2016, preferably within a few days after harvest. EMI data were automatically filtered, temperature corrected, and interpolated onto a common grid of 1 m resolution. Inspecting the apparent electrical conductivity (ECa) maps, we identified three main sub-areas with different subsurface heterogeneity. We also identified small-scale geomorphological structures as well as signs of anthropogenic activity such as soil management and buried drainage networks. To identify areas with similar subsurface structures, we applied image classification techniques: we fused the ECa maps obtained with different coil distances into a multiband image and applied supervised and unsupervised classification methodologies. Both showed good results in reconstructing the observed patterns in plant productivity and the subsurface structures associated with them; however, the supervised methodology proved more efficient in classifying the whole study area. In a second step, we selected one hundred locations within the study area and obtained a soil profile description with the type, depth, and thickness of the soil horizons. Using these ground truth data, it was possible to assign a typical soil profile to each of the main classes obtained from the classification. The proposed methodology was effective in producing a high-resolution subsurface model in a large and complex study area that extends well beyond the field scale.
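A minimal sketch of the unsupervised branch only of the classification idea described above (the study also used a supervised classifier): k-means clustering of co-registered ECa maps stacked as a multiband image. The band inputs, number of classes, and function name are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_eca(eca_bands, n_classes=3):
    """eca_bands: list of 2-D ECa arrays (one per coil separation), same shape."""
    stack = np.dstack(eca_bands)                      # (rows, cols, n_bands)
    rows, cols, n_bands = stack.shape
    pixels = stack.reshape(-1, n_bands)
    valid = ~np.isnan(pixels).any(axis=1)             # skip gaps in the interpolated grid
    labels = np.full(rows * cols, -1, dtype=int)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
    labels[valid] = km.fit_predict(pixels[valid])
    return labels.reshape(rows, cols)                 # class map of subsurface units
```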
Large-area Soil Moisture Surveys Using a Cosmic-ray Rover: Approaches and Results from Australia
NASA Astrophysics Data System (ADS)
Hawdon, A. A.; McJannet, D. L.; Renzullo, L. J.; Baker, B.; Searle, R.
2017-12-01
Recent improvements in satellite instrumentation have increased the resolution and frequency of soil moisture observations, and this in turn has supported the development of higher-resolution land surface process models. Calibration and validation of these products is restricted by the mismatch of scales between remotely sensed and contemporary ground-based observations. Although the cosmic-ray neutron soil moisture probe can provide estimates of soil moisture at a scale useful for calibration and validation purposes, it is spatially limited to a single, fixed location. This scaling issue has been addressed with the development of mobile soil moisture monitoring systems that utilize the cosmic-ray neutron method, typically referred to as a 'rover'. This manuscript describes a project designed to develop approaches for undertaking rover surveys to produce soil moisture estimates at scales comparable to satellite observations and land surface process models. A custom-designed, trailer-mounted rover was used to conduct repeat surveys at two scales in the Mallee region of Victoria, Australia. A broad-scale survey was conducted over 36 x 36 km, covering the area of a standard SMAP pixel, and an intensive-scale survey was conducted over a 10 x 10 km portion of the broad-scale survey, which is at a scale equivalent to that used for national water balance modelling. We describe the design of the rover, the methods used for converting neutron counts into soil moisture, and the factors controlling soil moisture variability. We found that the intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution and the broad-scale surveys at 9 km resolution. We conclude that these products are well suited for future analysis of satellite soil moisture retrievals and finer-scale soil moisture models.
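For the conversion step mentioned above, here is a hedged sketch of the Desilets et al. (2010) calibration function commonly used to turn corrected neutron counts into soil moisture; whether this survey's processing used exactly this form is an assumption, and N0 (the count rate over dry soil) and bulk density are site-specific calibration values.

```python
def neutron_to_soil_moisture(N, N0, bulk_density=1.4,
                             a0=0.0808, a1=0.372, a2=0.115):
    """Volumetric soil moisture (m3/m3) from a corrected neutron count rate N."""
    gravimetric = a0 / (N / N0 - a1) - a2      # g water per g dry soil (standard shape parameters)
    return gravimetric * bulk_density          # convert to volumetric with soil bulk density
```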
Azmy, Muna Maryam; Hashim, Mazlan; Numata, Shinya; Hosaka, Tetsuro; Noor, Nur Supardi Md.; Fletcher, Christine
2016-01-01
General flowering (GF) is a unique phenomenon wherein, at irregular intervals, taxonomically diverse trees in Southeast Asian dipterocarp forests synchronize their reproduction at the community level. Triggers of GF, including drought and low minimum temperatures a few months previously, have only rarely been observed across large regional scales due to the lack of meteorological stations. Here, we aim to identify the climatic conditions that trigger large-scale GF in Peninsular Malaysia using satellite sensors, the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometer (MODIS), to evaluate the climatic conditions of focal forests. We observed antecedent drought, low temperature, and high photosynthetic radiation conditions before large-scale GF events, suggesting that large-scale GF events could be triggered by these factors. In contrast, we found higher-magnitude GF in forests where lower precipitation preceded large-scale GF events. GF magnitude was also negatively influenced by land surface temperature (LST) for a large-scale GF event. Therefore, we suggest that the spatial extent of drought may be related to that of GF forests, and that the spatial pattern of LST may be related to that of GF occurrence. With significant new findings, and other results consistent with previous research, we clarify the complicated environmental correlates of the GF phenomenon. PMID:27561887
Azmy, Muna Maryam; Hashim, Mazlan; Numata, Shinya; Hosaka, Tetsuro; Noor, Nur Supardi Md; Fletcher, Christine
2016-08-26
General flowering (GF) is a unique phenomenon wherein, at irregular intervals, taxonomically diverse trees in Southeast Asian dipterocarp forests synchronize their reproduction at the community level. Triggers of GF, including drought and low minimum temperatures a few months previously, have only rarely been observed across large regional scales due to the lack of meteorological stations. Here, we aim to identify the climatic conditions that trigger large-scale GF in Peninsular Malaysia using satellite sensors, the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometer (MODIS), to evaluate the climatic conditions of focal forests. We observed antecedent drought, low temperature, and high photosynthetic radiation conditions before large-scale GF events, suggesting that large-scale GF events could be triggered by these factors. In contrast, we found higher-magnitude GF in forests where lower precipitation preceded large-scale GF events. GF magnitude was also negatively influenced by land surface temperature (LST) for a large-scale GF event. Therefore, we suggest that the spatial extent of drought may be related to that of GF forests, and that the spatial pattern of LST may be related to that of GF occurrence. With significant new findings, and other results consistent with previous research, we clarify the complicated environmental correlates of the GF phenomenon.
2013-09-30
flow models, such as Delft3D, with our developed Boussinesq-type model. The vision of this project is to develop an operational tool for the... situ measurements or large-scale wave models. This information will be used to drive the offshore wave boundary condition. • Execute the Boussinesq... model to match with the Boussinesq-type theory would be one which can simulate sheared and stratified currents due to large-scale (non-wave) forcings
Clumpy filaments of the Chamaeleon I cloud: C18O mapping with the SEST
NASA Astrophysics Data System (ADS)
Haikala, L. K.; Harju, J.; Mattila, K.; Toriseva, M.
2005-02-01
The Chamaeleon I dark cloud (Cha I) has been mapped in C18O with an angular resolution of 1 arcmin using the SEST telescope. The large-scale structures previously observed in the cloud at lower spatial resolution resolve into a network of clumpy filaments. The automatic Clumpfind routine developed by Williams et al. (1994) is used to identify individual clumps in a consistent way. Altogether 71 clumps were found, and the total mass of these clumps is 230 M⊙. The dense "cores" detected with the NANTEN telescope (Mizuno et al. 1999) and the very cold cores detected in the ISOPHOT serendipity survey (Tóth et al. 2000) form parts of these filaments but decompose into numerous "clumps". The filaments are preferentially oriented at right angles to the large-scale magnetic field in the region. We discuss the cloud structure, the physical characteristics of the clumps, and the distribution of young stars. The observed clump mass spectrum is compared with the predictions of the turbulent fragmentation model of Padoan & Nordlund (2002). Agreement is found if fragmentation has been driven by very large-scale hypersonic turbulence, and if by now it has had time to dissipate into modestly supersonic turbulence in the interclump gas. According to numerical simulations, large-scale turbulence should have resulted in filamentary structures as seen in Cha I. The well-oriented magnetic field does not, however, support this picture, but suggests magnetically steered large-scale collapse. The origin of the filaments and clumps in Cha I is thus controversial. A possible solution is that the characterization of the driving turbulence fails and that in fact different processes have been effective on small and large scales in this cloud. Based on observations collected at the European Southern Observatory, La Silla, Chile. FITS files are only available in electronic form at http://www.edpsciences.org
Introducing CGOLS: The Cholla Galactic Outflow Simulation Suite
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2018-06-01
We present the Cholla Galactic OutfLow Simulations (CGOLS) suite, a set of extremely high resolution global simulations of isolated disk galaxies designed to clarify the nature of multiphase structure in galactic winds. Using the GPU-based code Cholla, we achieve unprecedented resolution in these simulations, modeling galaxies over a 20 kpc region at a constant resolution of 5 pc. The simulations include a feedback model designed to test the effects of different mass- and energy-loading factors on galactic outflows over kiloparsec scales. In addition to describing the simulation methodology in detail, we also present the results from an adiabatic simulation that tests the frequently adopted analytic galactic wind model of Chevalier & Clegg. Our results indicate that the Chevalier & Clegg model is a good fit to nuclear starburst winds in the nonradiative region of parameter space. Finally, we investigate the role of resolution and convergence in large-scale simulations of multiphase galactic winds. While our largest-scale simulations show convergence of observable features like soft X-ray emission, our tests demonstrate that simulations of this kind with resolutions greater than 10 pc are not yet converged, confirming the need for extreme resolution in order to study the structure of winds and their effects on the circumgalactic medium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guba, O.; Taylor, M. A.; Ullrich, P. A.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.
Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...
2014-06-25
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.
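A conceptual sketch only (not the CAM-SE implementation) of the idea described above: an anisotropic hyperviscosity tensor whose coefficients follow the eigen-directions and eigenvalues of a local element metric tensor, reducing to a constant coefficient where resolution is uniform. The scaling exponent and reference values are assumptions for illustration.

```python
import numpy as np

def tensor_hyperviscosity(metric, nu_ref, dx_ref, exponent=3.2):
    """metric: 2x2 symmetric local metric tensor; returns a 2x2 coefficient tensor."""
    eigvals, eigvecs = np.linalg.eigh(metric)
    dx = np.sqrt(eigvals)                          # local grid spacing along each eigen-direction
    nu = nu_ref * (dx / dx_ref) ** exponent        # resolution-dependent coefficient per direction
    return eigvecs @ np.diag(nu) @ eigvecs.T       # rotate back to model coordinates
```

Where the element is undistorted and its spacing equals dx_ref, the tensor collapses to nu_ref times the identity, i.e. the traditional constant-coefficient case.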
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution; therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with the magnitude of the bias varying widely across models. We therefore evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution; therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with the magnitude of the bias varying widely across models. We therefore evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.
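A short sketch of one common definition of the relative mean absolute error (rMAE) between yields simulated with aggregated inputs and the high-resolution reference; the paper's exact normalisation is not stated in the abstract, so treat this form as an assumption.

```python
import numpy as np

def rmae(y_aggregated, y_reference):
    """Relative mean absolute error, in percent of the mean reference yield."""
    y_aggregated = np.asarray(y_aggregated, dtype=float)
    y_reference = np.asarray(y_reference, dtype=float)
    return 100.0 * np.mean(np.abs(y_aggregated - y_reference)) / np.mean(y_reference)
```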
Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations
NASA Astrophysics Data System (ADS)
Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik
2017-02-01
The utility and accuracy of computational modeling often requires direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex, physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.
NASA Astrophysics Data System (ADS)
Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas
2017-06-01
The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional- or global-scale studies of metre- and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle, and polygonised ground). A grid of squares (e.g., 20 km by 20 km) is created over the mapping area. The basemap data are then systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms, and the presence or absence of the selected landforms is recorded in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problem of mapping small landforms over large areas by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, the simplicity of the approach, and the divisibility of tasks open up the possibility of citizen science, in which crowdsourcing could be applied to large grid-based mapping areas.
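A minimal sketch of the "tick box" bookkeeping behind grid-based mapping: one boolean grid per landform type, where a cell is marked present if any mapped occurrence falls inside it. The point inputs, grid origin, and 20 km cell size are hypothetical.

```python
import numpy as np

def grid_presence(points_xy, xmin, ymin, nx, ny, cell=20_000.0):
    """points_xy: (n, 2) array of mapped landform locations in map units (m)."""
    grid = np.zeros((ny, nx), dtype=bool)
    cols = ((points_xy[:, 0] - xmin) // cell).astype(int)
    rows = ((points_xy[:, 1] - ymin) // cell).astype(int)
    inside = (cols >= 0) & (cols < nx) & (rows >= 0) & (rows < ny)
    grid[rows[inside], cols[inside]] = True   # tick the box for every occupied grid-square
    return grid

# one such grid per landform (e.g. viscous flow features, polygonised ground) can be
# overlain on context maps to form a coarse digital landform map
```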
NASA Technical Reports Server (NTRS)
Baker, V. R. (Principal Investigator); Holz, R. K.; Hulke, S. D.; Patton, P. C.; Penteado, M. M.
1975-01-01
The author has identified the following significant results. Development of a quantitative hydrogeomorphic approach to flood hazard evaluation was hindered by (1) problems of resolution and definition of the morphometric parameters which have hydrologic significance, and (2) mechanical difficulties in creating the necessary volume of data for meaningful analysis. Measures of network resolution such as drainage density and basin Shreve magnitude indicated that large scale topographic maps offered greater resolution than small scale suborbital imagery and orbital imagery. The disparity in network resolution capabilities between orbital and suborbital imagery formats depends on factors such as rock type, vegetation, and land use. The problem of morphometric data analysis was approached by developing a computer-assisted method for network analysis. The system allows rapid identification of network properties which can then be related to measures of flood response.
NASA Astrophysics Data System (ADS)
Ji, X.; Shen, C.
2017-12-01
Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term, watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. The model applies a semi-implicit semi-Lagrangian (SISL) scheme to solve the dynamic wave equations and, with the assistance of the multi-mesh method, adaptively uses the dynamic wave equation only in areas of deep inundation. The model therefore achieves a balance between accuracy and computational cost.
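A hedged sketch of the CFL-type time-step restriction referred to above for an explicit 2-D shallow-water (dynamic wave) solver; the SISL scheme in the model relaxes this limit, which is part of the motivation for the adaptive multi-resolution meshes. The function name and Courant number are assumptions.

```python
import numpy as np

def explicit_dt_limit(u, v, h, dx, g=9.81, courant=0.7):
    """Largest stable explicit time step given velocities u, v (m/s) and depth h (m)."""
    wave_speed = np.abs(u) + np.abs(v) + np.sqrt(g * np.maximum(h, 0.0))
    return courant * dx / wave_speed.max()   # small dx or fast flow forces a small dt
```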
NASA Astrophysics Data System (ADS)
Tourigny, E.; Nobre, C.; Cardoso, M. F.
2012-12-01
Deforestation of tropical forests for logging and agriculture, associated with slash-and-burn practices, is a major source of CO2 emissions, both immediate, due to biomass burning, and future, due to the elimination of a potential CO2 sink. Feedbacks between climate change and LUCC (Land-Use and Land-Cover Change) can potentially increase the loss of tropical forests and the rate of CO2 emissions, through mechanisms such as land and soil degradation and increases in wildfire occurrence and severity. However, the processes of fire in tropical forests (including ignition, spread, and consequences) and their climate feedbacks are poorly understood and need further research. As the processes of LUCC and associated fires occur at local scales, linking them to large-scale atmospheric processes requires a means of up-scaling higher-resolution processes to lower resolutions. Our approach is to couple models which operate at various spatial and temporal scales: a Global Climate Model (GCM), a Dynamic Global Vegetation Model (DGVM), and a local-scale LUCC and fire spread model. The climate model resolves large-scale atmospheric processes and forcings, which are imposed on the surface DGVM and fed back to the climate. Higher-resolution processes such as deforestation, land use management, and associated (as well as natural) fires are resolved at the local level. A dynamic tiling scheme allows local-scale heterogeneity to be represented while maintaining the computational efficiency of the land surface model, compared to traditional landscape models. Fire behavior is modeled at the regional scale (~500 m) to represent the detailed landscape using a semi-empirical fire spread model. This relatively coarse scale (compared to other fire spread models) is necessary due to the paucity of detailed land-cover information and fire history (particularly in the tropics and developing countries). This work presents initial results of a spatially explicit fire spread model coupled to the IBIS DGVM. Our area of study comprises selected regions in and near the Brazilian "arc of deforestation". For model training and evaluation, several areas have been mapped using high-resolution imagery from the Landsat TM/ETM+ sensors (Figure 1). These high-resolution reference data are used for local-scale simulations and also to evaluate the accuracy of the global MCD45 burned area product, which will be used in future studies covering the entire "arc of deforestation". [Figure 1 caption: Area of study along the arc of deforestation and cerrado, showing the Landsat scenes used and burned area (2010) from the MCD45 product.]
Spatial resolution requirements for automated cartographic road extraction
Benjamin, S.; Gaydos, L.
1990-01-01
Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method. -Authors
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
Pattern-based, multi-scale segmentation and regionalization of EOSD land cover
NASA Astrophysics Data System (ADS)
Niesterowicz, Jacek; Stepinski, Tomasz F.
2017-10-01
The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution, the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization of the EOSD as new methods for analyzing it on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; we delineated from 727 to 91,885 FLUs within the spatial extent of the EOSD, depending on the selected scale of a pattern. The pattern of EOSD categories within each FLU is described by 1037 landscape metrics. A shapefile containing the boundaries of all FLUs, together with an attribute table listing the landscape metrics, makes up an SQL-searchable spatial database providing detailed information on the composition and pattern of land cover types in Canadian forests. The shapefile format and the extensive attribute table pertaining to the entire legend of the EOSD are designed to facilitate a broad range of investigations in which an assessment of the composition and pattern of forest over large areas is needed. We calculated four such databases using different spatial scales of pattern. We illustrate the use of the FLU database for producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad-scale variability of forest at the spatial scale of the entire province. We also demonstrate how the FLU database can be used to map the variability of landscape metrics, and thus the character of the landscape, over all of Canada.
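An illustrative sketch (not the study's 1037-metric pipeline) of two simple landscape metrics, class composition and Shannon diversity, computed for the EOSD pixels falling inside one forest land unit; the label array, FLU mask, and function name are hypothetical.

```python
import numpy as np

def composition_and_diversity(landcover, flu_mask):
    """landcover: 2-D array of EOSD class codes; flu_mask: boolean array of the same shape."""
    classes, counts = np.unique(landcover[flu_mask], return_counts=True)
    proportions = counts / counts.sum()
    shannon = -np.sum(proportions * np.log(proportions))     # Shannon diversity index
    return dict(zip(classes.tolist(), proportions.tolist())), shannon
```

Metrics like these, computed per FLU and stored as attribute-table columns, are what make the segment database SQL-searchable.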
Imprint of non-linear effects on HI intensity mapping on large scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna, E-mail: umeobinna@gmail.com
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as being due to an effective bias parameter and effective shot noise.
Imprint of non-linear effects on HI intensity mapping on large scales
NASA Astrophysics Data System (ADS)
Umeh, Obinna
2017-06-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as being due to an effective bias parameter and effective shot noise.
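A schematic large-scale form consistent with the conclusion above, written as an assumption rather than the paper's full third-order expression: mode coupling renormalises the bias into an effective bias and contributes an effective shot-noise-like term.

```latex
% Schematic: large-scale HI brightness-temperature power spectrum with
% renormalised (effective) bias and an effective shot-noise contribution.
\[
  P_{\Delta T_{\mathrm{HI}}}(k,z) \simeq \bar{T}_{\mathrm{HI}}^2(z)
  \left[\, b_{\mathrm{eff}}^2(z)\, P_m(k,z) + N_{\mathrm{eff}}(z) \,\right]
\]
```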
Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2013-01-01
Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887
xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.
McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus
2014-09-01
X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.
NASA Astrophysics Data System (ADS)
Chirayath, V.
2014-12-01
Fluid Lensing is a theoretical model and algorithm I present for fluid-optical interactions in turbulent flows and at two-fluid surface boundaries. When coupled with a unique computer vision and image-processing pipeline, it may be used to significantly enhance the angular resolution of a remote sensing optical system, with applicability to high-resolution 3D imaging of subaqueous regions and imaging through turbulent fluid flows. This novel remote sensing technology has recently been implemented on a quadcopter-based UAS for imaging shallow benthic systems, creating the first dataset of a biosphere with unprecedented sub-cm-level imagery in 3D over areas as large as 15 square kilometers. Perturbed two-fluid boundaries with different refractive indices, such as the surface between the ocean and air, may be exploited for use as lensing elements for imaging targets on either side of the interface with enhanced angular resolution. I present the theoretical developments behind Fluid Lensing and experimental results from its recent implementation for the Reactive Reefs project to image shallow reef ecosystems at cm scales. Preliminary results from petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk coral reefs in American Samoa (August 2013) show broad applicability to large-scale automated species identification, morphology studies, and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, which are of crucial importance to understanding climate change's impact on coastal zones, global oxygen production, and carbon sequestration.
High-resolution observations of combustion in heterogeneous surface fuels
E. Louise Loudermilk; Gary L. Achtemeier; Joseph J. O' Brien; J. Kevin Hiers; Benjamin S. Hornsby
2014-01-01
In ecosystems with frequent surface fires, fire and fuel heterogeneity at relevant scales have been largely ignored. This could be because complete burns give an impression of homogeneity, or due to the difficulty in capturing fine-scale variation in fuel characteristics and fire behaviour. Fire movement between patches of fuel can have implications for modelling fire...
High-resolution hybrid simulations of turbulence from inertial to sub-proton scales
NASA Astrophysics Data System (ADS)
Franci, Luca; Hellinger, Petr; Landi, Simone; Matteini, Lorenzo; Verdini, Andrea
2015-04-01
We investigate the properties of turbulence from MHD scales to ion scales by means of two-dimensional, large-scale, high-resolution hybrid particle-in-cell simulations, which to our knowledge constitute the most accurate hybrid simulations of ion-scale turbulence presented so far. We impose an initial ambient magnetic field perpendicular to the simulation box, and we add a spectrum of large-scale, linearly polarized Alfvén waves, balanced and Alfvénically equipartitioned on average. When turbulence is fully developed, we observe an inertial range in which the power spectrum of perpendicular magnetic field fluctuations follows a Kolmogorov law with a spectral index close to -5/3, while the proton bulk velocity fluctuations exhibit a shallower slope with an index close to -3/2. Both these trends hold over a full decade. A definite transition is observed at a scale of the order of the proton inertial length, above which both spectra steepen, with the perpendicular magnetic field still exhibiting a power law with a spectral index of about -3 over another full decade. The spectrum of perpendicular electric fluctuations follows that of the proton bulk velocity at MHD scales and reaches a sort of plateau at small scales. The turbulent nature of our data is also supported by the presence of intermittency, revealed by the non-Gaussianity of the probability distribution functions of MHD primitive variables, which increases as kinetic scales are approached. All these features are in good agreement with solar wind observations.
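A short sketch (hypothetical inputs, not the simulation pipeline) of how a spectral index such as the quoted -5/3 or -3/2 is typically estimated: a least-squares fit of log P against log k over a chosen inertial-range interval.

```python
import numpy as np

def spectral_index(k, power, k_min, k_max):
    """Fit P(k) ~ k^alpha over [k_min, k_max] and return alpha."""
    sel = (k >= k_min) & (k <= k_max) & (power > 0)
    alpha, _intercept = np.polyfit(np.log(k[sel]), np.log(power[sel]), deg=1)
    return alpha
```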
Leempoel, Kevin; Parisod, Christian; Geiser, Céline; Joost, Stéphane
2018-02-01
Plant species are known to adapt locally to their environment, particularly in mountainous areas where conditions can vary drastically over short distances. Because the climate of such landscapes is largely influenced by topography, using fine-scale models to evaluate environmental heterogeneity may help detect adaptation to micro-habitats. Here, we applied a multiscale landscape genomic approach to detect evidence of local adaptation in the alpine plant Biscutella laevigata. The two gene pools identified, experiencing limited gene flow along a 1-km ridge, differed in regard to several habitat features derived from a very high resolution (VHR) digital elevation model (DEM). A correlative approach detected signatures of selection along environmental gradients such as altitude, wind exposure, and solar radiation, indicating adaptive pressures likely driven by fine-scale topography. Using a large panel of DEM-derived variables as ecologically relevant proxies, our results highlighted the critical role of spatial resolution. These high-resolution multiscale variables indeed indicate that the robustness of associations between genetic loci and environmental features depends on spatial parameters that are poorly documented. We argue that the scale issue is critical in landscape genomics and that multiscale ecological variables are key to improving our understanding of local adaptation in highly heterogeneous landscapes.
Does lower Omega allow a resolution of the large-scale structure problem?
NASA Technical Reports Server (NTRS)
Silk, Joseph; Vittorio, Nicola
1987-01-01
The intermediate angular scale anisotropy of the cosmic microwave background, peculiar velocities, density correlations, and mass fluctuations for both neutrino- and baryon-dominated universes with Omega less than one are evaluated. The large coherence length associated with a low-Omega, hot dark matter-dominated universe provides substantial density fluctuations on scales up to 100 Mpc: there is a range of acceptable models, with Omega roughly 0.3, that are capable of producing large voids and superclusters of galaxies and the clustering of galaxy clusters without violating any observational constraint. Low-Omega, cold dark matter-dominated cosmologies are also examined. All of these models may be reconciled with the inflationary requirement of a flat universe by introducing a cosmological constant contribution of 1 - Omega.
NASA Astrophysics Data System (ADS)
Beers, A.; Ray, C.
2015-12-01
Climate change is likely to affect mountainous areas unevenly due to the complex interactions between topography, vegetation, and the accumulation of snow and ice. This heterogeneity will complicate relationships between species presence and large-scale drivers such as precipitation, and will make predicting habitat extent and connectivity much more difficult. We studied the potential for fine-scale variation in climate and habitat use throughout the year in the American pika (Ochotona princeps), a talus specialist of mountainous western North America known for strong microhabitat affiliation. Not all areas of talus are likely to be equally hospitable, which may reduce connectivity more than predicted by large-scale occupancy drivers. We used high-resolution remotely sensed data to create metrics of the terrain and land cover in the Niwot Ridge (NWT) LTER site in Colorado. We hypothesized that pikas preferentially use heterogeneous terrain, as it might foster greater snow accumulation, and tested this using radio telemetry on radio-collared pikas. Pikas used heterogeneous terrain during snow-covered periods and less heterogeneous areas during the summer. This suggests that not all areas of talus habitat are equally suitable as shelter from extreme conditions, but also that pikas need more than just shelter from winter cold. Using those results, we created a predictive map of the extent of suitable habitat across the NWT area based on the same habitat metrics. These strong effects of terrain on pika habitat use and territory occupancy show the great utility that high-resolution remotely sensed data can have in ecological applications. With the increasing effects of climate change in mountainous regions, this modeling approach is crucial for quantifying habitat connectivity at both small and large scales and for identifying potential refugia for threatened or isolated species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo
High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, a high-performance-computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference between turbines, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% in parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that account for interference in the wind farm should be used in wind farm parameterizations for larger-scale models to better describe sub-grid scale turbulent processes.
Wang, Haipeng; Yang, Yushuang; Yang, Jianli; Nie, Yihang; Jia, Jing; Wang, Yudan
2015-01-01
Multiscale nondestructive characterization of coal microscopic physical structure can provide important information for coal conversion and coal-bed methane extraction. In this study, the physical structure of a coal sample was investigated by synchrotron-based multiple-energy X-ray CT at three beam energies and two different spatial resolutions. A data-constrained modeling (DCM) approach was used to quantitatively characterize the multiscale compositional distributions at the two resolutions. The volume fractions of each voxel for four different composition groups were obtained at the two resolutions. Between the two resolutions, the difference in DCM-computed volume fractions of coal matrix and pores is less than 0.3%, and the difference for mineral composition groups is less than 0.17%. This demonstrates that the DCM approach can account for compositions beyond the X-ray CT imaging resolution with adequate accuracy. By using DCM, it is possible to characterize a relatively large coal sample at a relatively low spatial resolution with minimal loss of the effects of subpixel fine-length-scale structures.
A multimodel intercomparison of resolution effects on precipitation: simulations and theory
Rauscher, Sara A.; O'Brien, Travis A.; Piani, Claudio; ...
2016-02-27
An ensemble of six pairs of RCM experiments performed at 25 and 50 km for the period 1961–2000 over a large European domain is examined in order to evaluate the effects of resolution on the simulation of daily precipitation statistics. Application of the non-parametric two-sample Kolmogorov–Smirnov test, which tests for differences in the location and shape of the probability distributions of two samples, shows that the distribution of daily precipitation differs between the pairs of simulations over most land areas in both summer and winter, with the strongest signal over southern Europe. Two-dimensional histograms reveal that precipitation intensity increases with resolution over almost the entire domain in both winter and summer. In addition, the 25 km simulations have more dry days than the 50 km simulations. The increase in dry days with resolution is indicative of an improvement in model performance at higher resolution, while the more intense precipitation exceeds observed values. The systematic increase in precipitation extremes with resolution across all models suggests that this response is fundamental to model formulation. Simple theoretical arguments suggest that fluid continuity, combined with the emergent scaling properties of the horizontal wind field, results in an increase in resolved vertical transport as grid spacing decreases. This increase in resolution-dependent vertical mass flux then drives an intensification of convergence and resolvable-scale precipitation as grid spacing decreases. In conclusion, this theoretical result could help explain the increasingly, and often anomalously, large stratiform contribution to total rainfall observed with increasing resolution in many regional and global models.
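A minimal sketch of the grid-cell-wise two-sample Kolmogorov–Smirnov comparison described above, using SciPy's ks_2samp; the two daily-precipitation samples standing in for the 25 km and 50 km runs at one land point are synthetic placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Hypothetical daily precipitation (mm/day) at one grid cell, one season, 40 years.
precip_50km = rng.gamma(shape=0.6, scale=4.0, size=3600)
precip_25km = rng.gamma(shape=0.5, scale=6.0, size=3600)

# Test for differences in location and shape of the two distributions.
stat, pval = ks_2samp(precip_25km, precip_50km)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3g}")
if pval < 0.05:
    print("daily precipitation distributions differ at this grid cell")
```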
Can limited area NWP and/or RCM models improve on large scales inside their domain?
NASA Astrophysics Data System (ADS)
Mesinger, Fedor; Veljovic, Katarina
2017-04-01
In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable the development of large scales improved compared to those of the driver model. This would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model and are driven by ECMWF 32-day ensemble members, initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme. Accuracy in forecasting the position of jet stream winds, chosen to be those with speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales. Average rms wind difference at 250 hPa compared to ECMWF analyses was used as another verification measure. With 21 members run, at about the same resolution for the driver global and the nested Eta during the first 10 days of the experiment, both verification measures generally demonstrate an advantage of the Eta, in particular during and after the time of a deep upper-tropospheric trough crossing the Rockies in the first 2-6 days of the experiment. Rerunning the Eta ensemble switched to use sigma (Eta/sigma) showed this advantage of the Eta to come to a considerable degree, but not entirely, from its use of the eta coordinate. Compared to cumulative scores of the ensembles run, this is demonstrated to an even greater degree by the number of "wins" of one model vs. another. Thus, at the 4.5-day time, when the trough had just about crossed the Rockies, all 21 Eta/eta members have better ETSa scores than their ECMWF driver members. Eta/sigma has 19 members improving upon ECMWF, but loses to Eta/eta by a score of as much as 20 to 1. ECMWF members do better on rms scores, losing to Eta/eta by 18 vs. 3, but winning over Eta/sigma by 12 to 9. Examples of wind plots behind these results are shown, and additional reasons possibly helping or not helping the results summarized are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 - σ in order to unify the parameterization for the full range of model resolutions so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 - σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
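For reference, the scaling at issue can be written with the standard top-hat (updraft/environment) decomposition; the expression below is a sketch of that familiar form of the convective eddy flux, not a quotation from AW13 or the note.

```latex
% sigma: convective (updraft) area fraction; w_c, psi_c: updraft values;
% w_e, psi_e: environment values of vertical velocity and a transported variable psi.
\overline{w'\psi'} \;=\; \sigma\,(1-\sigma)\,\bigl(w_c - w_e\bigr)\bigl(\psi_c - \psi_e\bigr)
```

In the conventional small-area limit σ << 1 the prefactor σ(1-σ) reduces to σ; the 1 - σ factor discussed above is what keeps the transport bounded as the convective fraction approaches unity.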
A novel representation of groundwater dynamics in large-scale land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan
2017-04-01
Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSMs) generally consider isolated soil columns with a free-drainage lower boundary condition for simulating hydrology. This is mainly because incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale (e.g., continental to global) applications. Yet these simplifications neglect the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach for representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental scale.
Bridging the scales in a eulerian air quality model to assess megacity export of pollution
NASA Astrophysics Data System (ADS)
Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.
2013-08-01
In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for addressing the hot topic of megacities within their continental environment.
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental-scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using a Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised at BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
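A minimal sketch of the auto-calibration idea described above: wrap the river-system model in an objective function and hand it to a global optimiser. SciPy does not ship SCE-UA, so differential evolution is used here as a stand-in for the Shuffled Complex Evolution optimiser; run_river_model, the observed flows, and the parameter bounds are all hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def run_river_model(params, forcing):
    """Hypothetical stand-in for one node-link river-system simulation."""
    k, x = params                      # e.g. routing storage constant and weighting
    return forcing * x + np.roll(forcing, 1) * (1.0 - x) / max(k, 1e-6)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common user-defined objective."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
forcing = rng.gamma(2.0, 5.0, size=1000)                      # hypothetical inflow series
observed = run_river_model((3.0, 0.4), forcing) + rng.normal(0, 0.5, 1000)

# Maximise NSE by minimising its negative over the parameter bounds.
result = differential_evolution(
    lambda p: -nse(observed, run_river_model(p, forcing)),
    bounds=[(0.1, 10.0), (0.0, 1.0)],
    seed=2, maxiter=50)
print("calibrated parameters:", result.x)
```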
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina
2014-11-01
We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ∼ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ∼2 h^-1 Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h^-1 Mpc × 14 h^-1 Mpc in the transverse plane and 230 h^-1 Mpc along the line of sight with a spatial resolution of ≈3.5 h^-1 Mpc, and is the first high-fidelity map of large-scale structure on ∼Mpc scales at z > 2. Our map reveals significant structures with ≳ 10 h^-1 Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ∼1000 SFGs over ∼1 deg^2 of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ∼ 2.3 over a large volume (100 h^-1 Mpc)^3.
High-resolution simulation of deep pencil beam surveys - analysis of quasi-periodicity
NASA Astrophysics Data System (ADS)
Weiss, A. G.; Buchert, T.
1993-07-01
We carry out pencil beam constructions in a high-resolution simulation of the large-scale structure of galaxies. The initial density fluctuations are taken to have a truncated power spectrum. All the models have Ω = 1. As an example we present results for "Hot Dark Matter" (HDM) initial conditions with a scale-free n = 1 power index on large scales, as a representative of models with sufficient large-scale power. We use an analytic approximation for particle trajectories of a self-gravitating dust continuum and apply a local dynamical biasing of volume elements to identify luminous matter in the model. Using this method, we are able to formally resolve a simulation box of 1200 h^-1 Mpc (e.g. for HDM initial conditions) down to the scale of galactic halos using 2160^3 particles. We consider this the minimal resolution necessary for a sensible simulation of deep pencil beam data. Pencil beam probes are taken for a given epoch using the parameters of observed beams. In particular, our analysis concentrates on the detection of quasi-periodicity in the beam probes using several different methods. The resulting beam ensembles are analyzed statistically using number distributions, pair-count histograms, unnormalized pair-counts, power spectrum analysis and trial-period folding. Periodicities are classified according to their significance level in the power spectrum of the beams. The simulation is designed for application to parameter studies which prepare future observational projects. We find that a large percentage of the beams show quasi-periodicities with periods that cluster at a certain length scale. The periods found range between one and eight times the cutoff length in the initial fluctuation spectrum. At significance levels similar to those of the data of Broadhurst et al. (1990), we find that about 15% of the pencil beams show periodicities, about 30% of which are around the mean separation of rich clusters, while the distribution of scales reaches values of more than 200 h^-1 Mpc. The detection of periodicities larger than the typical void size need not be due to missing "walls" (like the so-called "Great Wall" seen in the CfA catalogue of galaxies), but can arise from different clustering properties of galaxies along the beams.
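A hedged sketch of one of the periodicity tests listed above, a power-spectrum analysis of galaxy counts binned along a single beam; the synthetic beam (clumps spaced roughly 128 h^-1 Mpc apart plus a uniform background) is a hypothetical stand-in for a simulated pencil-beam probe.

```python
import numpy as np

def beam_power_spectrum(positions, length, nbins=256):
    """1-D power spectrum of galaxy counts binned along a pencil beam."""
    counts, _ = np.histogram(positions, bins=nbins, range=(0.0, length))
    delta = counts - counts.mean()
    power = np.abs(np.fft.rfft(delta)) ** 2 / nbins
    k = np.fft.rfftfreq(nbins, d=length / nbins) * 2 * np.pi
    return k[1:], power[1:]                      # drop the k = 0 mode

# Hypothetical beam: 1000 h^-1 Mpc deep, "walls" every ~128 h^-1 Mpc plus noise.
rng = np.random.default_rng(3)
walls = np.concatenate([rng.normal(c, 8.0, 30) for c in np.arange(64, 1000, 128)])
background = rng.uniform(0, 1000, 200)
k, power = beam_power_spectrum(np.concatenate([walls, background]), 1000.0)
print("dominant period ~ %.0f h^-1 Mpc" % (2 * np.pi / k[np.argmax(power)]))
```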
The FRIGG project: From intermediate galactic scales to self-gravitating cores
NASA Astrophysics Data System (ADS)
Hennebelle, Patrick
2018-03-01
Context. Understanding the detailed structure of the interstellar gas is essential for our knowledge of the star formation process. Aims. The small-scale structure of the interstellar medium (ISM) is a direct consequence of the galactic scales, and making the link between the two is essential. Methods: We perform adaptive mesh simulations that aim to bridge the gap between the intermediate galactic scales and the self-gravitating prestellar cores. For this purpose we use stratified, supernova-regulated ISM magneto-hydrodynamical simulations at the kpc scale to set up the initial conditions. We then zoom in, performing a series of concentric uniform refinements and then refining on the Jeans length for the last levels. This allows us to reach a spatial resolution of a few 10^-3 pc. The cores are identified using a clump finder and various criteria based on virial analysis. Their most relevant properties are computed and, owing to the large number of objects formed in the simulations, reliable statistics are obtained. Results: The cores' properties show encouraging agreement with observations. The mass spectrum presents a clear power law at high masses with an exponent close to ≃-1.3 and a peak at about 1-2 M⊙. The velocity dispersion and the angular momentum distributions are respectively a few times the local sound speed and a few 10^-2 pc km s^-1. We also find that the distribution of thermally supercritical cores presents a range of magnetic mass-to-flux over critical mass-to-flux ratios, typically between ≃0.3 and 3, indicating that they are significantly magnetized. Investigating the time and spatial dependence of these statistical properties, we conclude that they are not significantly affected by the zooming procedure and that they do not present very large fluctuations. The most severe issue appears to be the dependence of the core mass function (CMF) on the numerical resolution. While the core definition process may possibly introduce some biases, the peak tends to shift to smaller values when the resolution improves. Conclusions: Our simulations, which use self-consistently generated initial conditions at the kpc scale, produce a large number of prestellar cores from which reliable statistics can be inferred. Preliminary comparisons with observations show encouraging agreement. In particular, the inferred CMFs resemble those inferred from recent observations. We stress, however, a possible issue with the peak position shifting with numerical resolution.
Landscape-scale forest disturbance regimes in southern Peruvian Amazonia.
Boyd, Doreen S; Hill, Ross A; Hopkinson, Chris; Baker, Timothy R
2013-10-01
Landscape-scale gap-size frequency distributions in tropical forests are a poorly studied but key ecological variable. Currently, a scale gap exists between local-scale field-based studies and those employing regional-scale medium-resolution satellite data. Data at landscape scales but of fine resolution would, however, facilitate investigation of a range of ecological questions relating to gap dynamics. These include whether canopy disturbances captured in permanent sample plots (PSPs) are representative of those in their surrounding landscape, and whether disturbance regimes vary with forest type. Here, therefore, we employ airborne LiDAR data captured over 142.5 km2 of mature, swamp, and regenerating forests in southeast Peru to assess landscape-scale disturbance at a sampling resolution of up to 2 m. We find that this landscape is characterized by large numbers of small gaps; large disturbance events are insignificant and infrequent. Of the total number of gaps that are 2 m2 or larger in area, just 0.45% were larger than 100 m2, with a power-law exponent (alpha) value of the gap-size frequency distribution of 2.22. However, differences in disturbance regimes are seen among forest types, with a significant difference in the alpha value of the gap-size frequency distribution observed for the swamp/regenerating forests compared with the mature forests at higher elevations. Although a relatively small area of the total forest of this region was investigated here, this study presents an unprecedented assessment of this landscape with respect to its gap dynamics. This is particularly pertinent given the range of forest types present in the landscape and the differences observed. The coupling of detailed insights into forest properties and growth provided by PSPs with the broader statistics of disturbance events from remote sensing is recommended as a strong basis for scaling up estimates of landscape- and regional-scale carbon balance.
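A minimal sketch of how a gap-size frequency exponent such as the alpha = 2.22 quoted above can be estimated, using the standard maximum-likelihood estimator for a continuous power law above a minimum gap area (Clauset et al. 2009 form); the gap areas here are synthetic rather than LiDAR-derived.

```python
import numpy as np

def powerlaw_alpha(sizes, xmin):
    """MLE exponent for p(x) ~ x^(-alpha), x >= xmin, with its standard error."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= xmin]
    alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
    stderr = (alpha - 1.0) / np.sqrt(x.size)
    return alpha, stderr

# Hypothetical canopy-gap areas in m^2 (in reality derived from a LiDAR canopy model).
rng = np.random.default_rng(4)
gaps = 2.0 * (1.0 - rng.uniform(size=5000)) ** (-1.0 / (2.2 - 1.0))  # alpha ~ 2.2
print("alpha = %.2f +/- %.2f" % powerlaw_alpha(gaps, xmin=2.0))
```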
A high-resolution European dataset for hydrologic modeling
NASA Astrophysics Data System (ADS)
Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta
2013-04-01
There is an increasing demand for large-scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large-scale models need to be calibrated and verified against large amounts of observations in order to judge their capability to predict the future. However, the creation of large-scale datasets is challenging, for it requires collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim of driving a large-scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated using a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, and evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years. The dataset variables are used as inputs to the hydrological calibration and validation of EFAS as well as for establishing long-term discharge "proxy" climatologies which can in turn be used for statistical analysis to derive return periods or other time series derivatives. In addition, this dataset will be used to assess climatological trends in Europe. Unfortunately, to date no baseline dataset at the European scale exists to test the quality of the data presented here, so a comparison against other existing datasets can only be an indication of data quality. Due to availability, a comparison was made for precipitation and temperature only, arguably the most important meteorological drivers for hydrologic models. A variety of analyses was undertaken at country scale against data reported to EUROSTAT and the E-OBS dataset. The comparison revealed that while the datasets showed overall similar temporal and spatial patterns, there were some differences in magnitude, especially for precipitation. It is not straightforward to define the specific cause for these differences; however, in most cases the comparatively low observation station density appears to be the principal reason for the differences in magnitude.
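A hedged sketch of a daily Penman-Monteith calculation of the kind used for the potential evapotranspiration layer described above, written in the FAO-56 reference form; the exact EFAS-Meteo formulation may differ, and all inputs below are hypothetical single-cell values.

```python
import numpy as np

def fao56_reference_et(t_mean, rn, u2, ea, elev=100.0, g=0.0):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    t_mean: air temperature (degC), rn: net radiation (MJ m-2 day-1),
    u2: wind speed at 2 m (m/s), ea: actual vapour pressure (kPa)."""
    p = 101.3 * ((293.0 - 0.0065 * elev) / 293.0) ** 5.26    # atmospheric pressure (kPa)
    gamma = 0.000665 * p                                     # psychrometric constant
    es = 0.6108 * np.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2              # slope of the es curve
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Hypothetical grid-cell values for one day.
print("ET0 = %.2f mm/day" % fao56_reference_et(t_mean=18.0, rn=14.0, u2=2.5, ea=1.4))
```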
DMI's Baltic Sea Coastal operational forecasting system
NASA Astrophysics Data System (ADS)
Murawski, Jens; Berg, Per; Weismann Poulsen, Jacob
2017-04-01
Operational forecasting is challenged with bridging the gap between the large scales of the driving weather systems and the local, human scales of the model applications. The limit of what can be represented by a local model has been continuously shifted to higher and higher spatial resolution, with the aim of better resolving the local dynamics and making it possible to describe processes that could only be parameterised in older versions, with the ultimate goal of improving the quality of the forecast. Current hardware trends demand a stronger focus on the development of efficient, highly parallelised software and require a refactoring of the code with a solid focus on portable performance. The gained performance can be used for running high-resolution models with larger coverage. Together with the development of efficient two-way nesting routines, this has made it possible to approach the near-coastal zone with model applications that can run in a time-effective way. Denmark's Meteorological Institute uses the HBM(1) ocean circulation model for applications that cover the entire Baltic Sea and North Sea with an integrated model set-up that spans the range of horizontal resolution from 1 nm for the entire Baltic Sea to approximately 200 m resolution in local fjords (Limfjord). For the next model generation, the high-resolution set-ups are going to be extended, and new high-resolution domains in coastal zones are either implemented or tested for operational use. For the first time it will be possible to cover large stretches of the Baltic coastal zone with sufficiently high resolution to model the local hydrodynamics adequately. (1) HBM stands for HIROMB-BOOS Model, where HIROMB stands for "High Resolution Model for the Baltic Sea" and BOOS stands for "Baltic Operational Oceanography System".
The Role of Jet Adjustment Processes in Subtropical Dust Storms
NASA Astrophysics Data System (ADS)
Pokharel, Ashok Kumar; Kaplan, Michael L.; Fiedler, Stephanie
2017-11-01
Meso-α/β/γ-scale atmospheric processes of jet dynamics responsible for generating Harmattan, Saudi Arabian, and Bodélé Depression dust storms are analyzed with observations and high-resolution modeling. The analysis of the role of jet adjustment processes in each dust storm shows the following similarities: (1) the presence of a well-organized baroclinic synoptic-scale system, (2) cross-mountain flows that produced a leeside inversion layer prior to the large-scale dust storm, (3) the presence of thermal wind imbalance in the exit region of the midtropospheric jet streak in the lee of the respective mountains shortly after the time of the inversion formation, (4) dust storm formation accompanied by large-magnitude ageostrophic isallobaric low-level winds as part of the meso-β-scale adjustment process, (5) substantial low-level turbulence kinetic energy (TKE), and (6) emission and uplift of mineral dust in the lee of nearby mountains. The thermally forced meso-γ-scale adjustment processes, which occurred in the canyons and small valleys, may have been the cause of numerous observed dust streaks leading to the entry of dust into the atmosphere, owing to the presence of significant vertical motion and TKE generation. This study points to the importance of meso-β- to meso-γ-scale adjustment processes at low atmospheric levels, due to an imbalance within the exit region of an upper-level jet streak, for the formation of severe dust storms. The low-level TKE, which is one of the prerequisites for deflating dust from the surface, cannot be detected with low-resolution data sets, so our results show that a high spatial resolution is required to better represent TKE as a proxy for dust emission.
Unstructured-grid coastal ocean modelling in Southern Adriatic and Northern Ionian Seas
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo
2016-04-01
The Southern Adriatic Northern Ionian coastal Forecasting System (SANIFS) is a short-term forecasting system based on an unstructured-grid approach. The model component is built on the SHYFEM finite element three-dimensional hydrodynamic model. The operational chain exploits a downscaling approach starting from the Mediterranean oceanographic-scale model MFS (Mediterranean Forecasting System, operated by INGV). The implementation has been designed to provide accurate hydrodynamics and active tracer processes in the coastal waters of southeastern Italy (Apulia, Basilicata and Calabria regions), where the model resolution varies in the range of 50-500 m. The horizontal resolution is also high in open-sea areas, where the element size is approximately 3 km. The model is forced: (i) at the lateral open boundaries through a full nesting strategy directly with the MFS (temperature, salinity, non-tidal sea surface height and currents) and OTPS (tidal forcing) fields; and (ii) at the surface through two alternative atmospheric forcing datasets (ECMWF and COSMO-ME) via MFS bulk formulae. Given that the coastal fields are driven by a combination of both local/coastal and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified first (i) at the large and shelf-coastal scales by comparison with a large-scale CTD survey and then (ii) at the coastal-harbour scale by comparison with CTD, ADCP and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolutions (12.5 and 7 km). The present work highlights how downscaling can improve the simulation of the flow field, going from typical open-ocean scales of the order of several kilometres to coastal (and harbour) scales of tens to hundreds of metres.
NASA Technical Reports Server (NTRS)
Strom, Stephen; Sargent, Wallace L. W.; Wolff, Sidney; Ahearn, Michael F.; Angel, J. Roger; Beckwith, Steven V. W.; Carney, Bruce W.; Conti, Peter S.; Edwards, Suzan; Grasdalen, Gary
1991-01-01
Optical/infrared (O/IR) astronomy in the 1990s is reviewed. The following subject areas are included: research environment; science opportunities; technical developments of the 1980s and opportunities for the 1990s; and ground-based O/IR astronomy outside the U.S. Recommendations are presented for: (1) large-scale programs (Priority 1: a coordinated program for large O/IR telescopes); (2) medium-scale programs (Priority 1: a coordinated program for high angular resolution; Priority 2: a new generation of 4-m class telescopes); (3) small-scale programs (Priority 1: near-IR and optical all-sky surveys; Priority 2: a National Astrometric Facility); and (4) infrastructure issues (develop, purchase, and distribute optical CCDs and infrared arrays; a program to support large optics technology; a new generation of large filled-aperture telescopes; a program to archive and disseminate astronomical databases; and a program for training new instrumentalists).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, P. D.; Brown, M. E.; Trumbo, S. K.
2017-01-01
We present spatially resolved spectroscopic observations of Europa's surface at 3-4 μm obtained with the near-infrared spectrograph and adaptive optics system on the Keck II telescope. These are the highest quality spatially resolved reflectance spectra of Europa's surface at 3-4 μm. The observations spatially resolve Europa's large-scale compositional units at a resolution of several hundred kilometers. The spectra show distinct features and geographic variations associated with known compositional units; in particular, large-scale leading hemisphere chaos shows a characteristic longward shift in peak reflectance near 3.7 μm compared to icy regions. These observations complement previous spectra of large-scale chaos, and can aid efforts to identify the endogenous non-ice species.
Large-scale horizontal flows from SOUP observations of solar granulation
NASA Technical Reports Server (NTRS)
November, L. J.; Simon, G. W.; Tarbell, T. D.; Title, A. M.; Ferguson, S. H.
1987-01-01
Using high-resolution time-sequence photographs of solar granulation from the SOUP experiment on Spacelab 2, large-scale horizontal flows were observed on the solar surface. The measurement method is based upon a local spatial cross-correlation analysis. The horizontal motions have amplitudes in the range 300 to 1000 m/s. Radial outflow of granulation from a sunspot penumbra into the surrounding photosphere is a striking new discovery. Both the supergranulation pattern and cellular structures having the scale of mesogranulation are seen. The vertical flows inferred by continuity of mass from these observed horizontal flows have larger upflow amplitudes in cell centers than downflow amplitudes at cell boundaries.
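A minimal sketch of the local spatial cross-correlation idea behind such flow maps: for each subfield, find the displacement that maximizes the correlation between two frames separated in time, then convert pixels per cadence into a horizontal velocity. The window size, pixel scale, cadence, and synthetic frames are all hypothetical.

```python
import numpy as np

def local_shift(win1, win2, max_shift=4):
    """Displacement (dy, dx), in pixels, of features from win1 to win2."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(win1, dy, axis=0), dx, axis=1)
            score = np.sum((shifted - shifted.mean()) * (win2 - win2.mean()))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def flow_map(frame1, frame2, win=32, dt=60.0, scale_km=0.4):
    """Horizontal velocities (km/s) on a grid of windows, frames dt seconds apart."""
    ny, nx = frame1.shape
    flows = {}
    for y in range(0, ny - win, win):
        for x in range(0, nx - win, win):
            dy, dx = local_shift(frame1[y:y + win, x:x + win],
                                 frame2[y:y + win, x:x + win])
            flows[(y, x)] = (dy * scale_km / dt, dx * scale_km / dt)
    return flows

# Hypothetical pair of granulation frames: the second is the first shifted by 2 px in x.
rng = np.random.default_rng(5)
frame1 = rng.standard_normal((128, 128))
frame2 = np.roll(frame1, 2, axis=1)
print(list(flow_map(frame1, frame2).items())[0])
```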
Li, Quanfeng; Wang, Qi; Hou, Yubin; Lu, Qingyou
2012-04-01
We present a home-built 18/20 T high-magnetic-field scanning tunneling microscope (STM) featuring fully low-voltage (lower than ±15 V) operability at low temperatures, large-scale searching ability, and 20 fA high current resolution (measured by using a 100 GOhm dummy resistor to replace the tip-sample junction) with a bandwidth of 3.03 kHz. To accomplish low-voltage operation, which is important for achieving high precision, low noise, and low interference with the strong magnetic field, the coarse approach is implemented with an inertial slider driven by the lateral bending of a piezoelectric scanner tube (PST) whose inner electrode is axially split into two for enhanced bending per volt. The PST can also drive the same sliding piece to slide inertially in the other bending direction (along the sample surface) of the PST, which realizes the large-area searching ability. The STM head is housed in a three-segment tubular chamber, which is detachable near the STM head for the convenience of sample and tip changes. Atomic resolution images of a graphite sample taken at 17.6 T and 18.0001 T are presented to demonstrate its performance. © 2012 American Institute of Physics
Using an SLR inversion to measure the mass balance of Greenland before and during GRACE
NASA Astrophysics Data System (ADS)
Bonin, Jennifer
2016-04-01
The GRACE mission has done an admirable job of measuring large-scale mass changes over Greenland since its launch in 2002. Before that time, however, measurements of large-scale ice mass balance were few and far between, leading to a lack of baseline knowledge. High-quality Satellite Laser Ranging (SLR) data existed a decade earlier, but their spatial resolution is normally too low to be used for this purpose. I demonstrate that a least squares inversion technique can reconstitute the SLR data and use it to measure ice loss over Greenland. To do so, I first simulate the problem by degrading today's GRACE data to a level comparable with SLR, and then demonstrate that the inversion can re-localize Greenland's contribution to the low-resolution signal, giving an accurate time series of mass change over all of Greenland that compares well with the full-resolution GRACE estimates. I then apply that method to the actual SLR data, resulting in an independent 1994-2014 time series of mass change over Greenland. I find favorable agreement between the pure-SLR inverted results and the 2012 Ice-sheet Mass Balance Inter-comparison Exercise (IMBIE) results, which are largely based on the "input-output" modeling method before GRACE's launch.
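A hedged sketch of the least-squares inversion concept: low-resolution observations are modelled as a known smoothing operator acting on a few regional mass concentrations, and the regional amplitudes are recovered with a linear least-squares solve. The 1-D geometry, Gaussian smoothing kernel, and noise level are hypothetical stand-ins for the spherical-harmonic machinery used with real SLR and GRACE fields.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 1-D "globe": 3 regions (e.g. Greenland, Antarctica, ocean) on 60 cells.
ncells = 60
regions = np.zeros((ncells, 3))
regions[5:15, 0] = 1.0       # region 0 ("Greenland")
regions[30:45, 1] = 1.0      # region 1
regions[45:60, 2] = 1.0      # region 2
true_mass = np.array([-200.0, 50.0, 10.0])           # Gt per region

# Low-resolution observation operator: heavy Gaussian smoothing of the mass field.
x = np.arange(ncells)
kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2)
kernel /= kernel.sum(axis=1, keepdims=True)
observed = kernel @ (regions @ true_mass) + rng.normal(0, 2.0, ncells)

# Inversion: fit regional amplitudes whose smoothed patterns best match the data.
design = kernel @ regions
recovered, *_ = np.linalg.lstsq(design, observed, rcond=None)
print("true:", true_mass, " recovered:", np.round(recovered, 1))
```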
NASA Astrophysics Data System (ADS)
Maxwell, R. M.; Condon, L. E.; Kollet, S. J.
2013-12-01
Groundwater is an important component of the hydrologic cycle, yet its importance is often overlooked. Aquifers are a critical water resource, particularly for irrigation, and groundwater also helps moderate the land-energy balance over the so-called critical zone of 2-10 m in water table depth. Yet the scaling behavior of groundwater is not well known. Here, we present the results of a fully integrated hydrologic model run over a 6.3 million km2 domain that covers much of North America, focused on the continental United States. This model encompasses both the Mississippi and Colorado River watersheds in their entirety at 1 km resolution and is constructed using the fully integrated groundwater-vadose zone-surface water-land surface model ParFlow. Results from this work are compared to observations (of both surface water flow and groundwater depths), and approaches are presented for observing these integrated systems. Furthermore, the results are used to understand the scaling behavior of groundwater over the continent at high resolution. Implications for understanding dominant hydrological processes at large scales will be discussed.
Effect of Spatial Resolution for Characterizing Soil Properties from Imaging Spectrometer Data
NASA Astrophysics Data System (ADS)
Dutta, D.; Kumar, P.; Greenberg, J. A.
2015-12-01
The feasibility of quantifying soil constituents over large areas using airborne hyperspectral data [0.35 - 2.5 μm] in an ensemble bootstrapping lasso algorithmic framework has been demonstrated previously [1]. However, the effects of coarsening the spatial resolution of hyperspectral data on the quantification of soil constituents are unknown. We use Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data collected at 7.6 m resolution over the Birds Point New Madrid (BPNM) floodway for up-scaling, generating multiple coarser-resolution datasets including 60 m Hyperspectral Infrared Imager (HyspIRI)-like data. HyspIRI is a proposed visible shortwave/thermal infrared mission, which will provide global data over a spectral range of 0.35 - 2.5 μm at a spatial resolution of 60 m. Our results show that the lasso method, which is based on point-scale observational data, is scalable. We found consistently good model performance (0.79 < R2 < 0.82) and correct classifications according to USDA soil texture classes at multiple spatial resolutions. The results further demonstrate that the attributes of the pdf for different soil constituents across the landscape and the within-pixel variance are well preserved across scales. Our analysis provides a methodological framework with a sufficient set of metrics for assessing the performance of scaling up analyses from point-scale observational data and may be relevant for other similar remote sensing studies. [1] Dutta, D.; Goodwell, A.E.; Kumar, P.; Garvey, J.E.; Darmody, R.G.; Berretta, D.P.; Greenberg, J.A., "On the Feasibility of Characterizing Soil Properties From AVIRIS Data," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 9, pp. 5133-5147, Sept. 2015. doi: 10.1109/TGRS.2015.2417547.
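A minimal sketch of an ensemble (bootstrapped) lasso regression of a soil property on reflectance spectra, in the spirit of the framework cited in [1]; the spectra, informative bands, and soil-property values are synthetic placeholders, and scikit-learn's Lasso stands in for whatever solver the original study used.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

# Hypothetical training set: 200 field samples x 180 spectral bands, one soil property.
n_samples, n_bands = 200, 180
spectra = rng.normal(size=(n_samples, n_bands))
true_coef = np.zeros(n_bands)
true_coef[[20, 75, 140]] = [0.8, -0.5, 0.3]          # a few informative bands
soil_property = spectra @ true_coef + rng.normal(0, 0.1, n_samples)

def bootstrap_lasso(X, y, n_boot=100, alpha=0.01):
    """Fit a lasso on bootstrap resamples; return the mean coefficient vector."""
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))        # sample rows with replacement
        model = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx])
        coefs.append(model.coef_)
    return np.mean(coefs, axis=0)

coef = bootstrap_lasso(spectra, soil_property)
print("bands retained (|coef| > 0.05):", np.where(np.abs(coef) > 0.05)[0])
```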
Analysis of Radar and Optical Space Borne Data for Large Scale Topographical Mapping
NASA Astrophysics Data System (ADS)
Tampubolon, W.; Reinhardt, W.
2015-03-01
Normally, in order to provide high-resolution three-dimensional (3D) geospatial data, large-scale topographical mapping requires input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e. security clearance from the military/defense ministry. This often causes additional time delays, besides technical constraints such as weather and limited aircraft availability for airborne campaigns. Of course, geospatial data quality is an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high-resolution datasets as well as a sufficient level of accuracy. Therefore, an integration of different technologies is required in many cases to attain the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term "Rapid Mapping". In this paper we present first results of ongoing research to integrate different data sources such as spaceborne radar and optical platforms. Initially, the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been carried out as a continuous process alongside DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil the geometrical accuracy requirements. In addition, this research aims at providing a suitable processing algorithm for spaceborne data for large-scale topographical mapping, as described in section 3.2. Recently, radar spaceborne data have been used for medium-scale topographical mapping, e.g. for the 1:50.000 map scale in Indonesian territories. The goal of this ongoing research is to increase the accuracy of remote sensing data through different activities, e.g. the integration of different data sources (optical and radar) or the use of GCPs in both the optical and the radar satellite data processing. Finally, these results will be used in the future as a reference for further geospatial data acquisition to support topographical mapping at even larger scales, up to the 1:10.000 map scale.
Pushing the limits of spatial resolution with the Kuiper Airborne observatory
NASA Technical Reports Server (NTRS)
Lester, Daniel
1994-01-01
Achieving high spatial resolution in the far-IR is one of the most serious limitations to our work at these wavelengths, which carry information about the luminosity of dusty and obscured sources. At IR wavelengths shorter than 30 microns, ground-based telescopes with large apertures at superb sites achieve diffraction-limited performance close to the seeing limit in the optical. At millimeter wavelengths, ground-based interferometers achieve resolution that is close to this. The inaccessibility of the far-IR from the ground, however, makes it difficult to achieve complementary resolution in the far-IR. The 1983 IRAS survey, while extraordinarily sensitive, provides us with a sky map whose spatial resolution is limited by detector size to a scale far coarser than that available at other wavelengths from the ground. The survey resolution is of order 4 arcmin in the 100 micron bandpass and 2 arcmin at 60 microns (IRAS Explanatory Supplement, 1988). Information on a scale of 1' is available for some sources from the CPC. Deconvolution and image resolution using this database is one of the subjects of this workshop.
NASA Astrophysics Data System (ADS)
Ba, Yu Tao; xian Liu, Bao; Sun, Feng; Wang, Li hua; Tang, Yu jia; Zhang, Da wei
2017-04-01
High-resolution mapping of PM2.5 is a prerequisite for precise analytics and subsequent anti-pollution interventions. Considering the large variance of particulate distributions, urban-scale mapping is challenging whether with ground-based fixed stations, with satellites, or via models. In this study, a dynamic fusion method between a high-density sensor network and MODIS Aerosol Optical Depth (AOD) was introduced. The sensor network was deployed in Beijing (> 1000 fixed monitors across a 16000 km2 area) to provide raw observations with high temporal resolution (sampling interval < 1 hour), high spatial resolution in flat areas (< 1 km), and low spatial resolution in mountainous areas (> 5 km). The MODIS AOD was calibrated to provide a distribution map with low temporal resolution (daily) and moderate spatial resolution (= 3 km). By encoding the data quality and defects (e.g. cloud, reflectance, abnormal values), a hybrid interpolation procedure with cross-validation generated PM2.5 distributions with both high temporal and high spatial resolution. Several pollution-free and high-pollution periods were used to validate the proposed fusion method for capturing the instantaneous patterns of PM2.5 emission.
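A hedged sketch of one possible form of the fusion step described above: calibrate the daily AOD map to PM2.5 with a simple linear fit against the sensor network, then spread the stations' residuals with inverse-distance weighting so the fused field keeps the network's fine detail where monitors are dense. The station layout, AOD surface, and calibration model are hypothetical; the operational method is described only qualitatively in the abstract.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical domain (100 x 100 km), AOD grid, and sensor network.
grid_x, grid_y = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
aod = 0.3 + 0.002 * grid_x + rng.normal(0, 0.01, grid_x.shape)       # daily AOD map
stn_xy = rng.uniform(0, 100, size=(200, 2))                           # station coords
stn_pm = 40 + 120 * (0.3 + 0.002 * stn_xy[:, 0]) + rng.normal(0, 5, 200)

# 1. Calibrate AOD -> PM2.5 with a linear fit at station locations.
ix = (stn_xy[:, 0] / 100 * 49).astype(int)
iy = (stn_xy[:, 1] / 100 * 49).astype(int)
aod_at_stn = aod[iy, ix]
slope, intercept = np.polyfit(aod_at_stn, stn_pm, 1)
pm_from_aod = intercept + slope * aod

# 2. Spread station residuals with inverse-distance weighting (power 2).
residual = stn_pm - (intercept + slope * aod_at_stn)
d2 = (grid_x[..., None] - stn_xy[:, 0]) ** 2 + (grid_y[..., None] - stn_xy[:, 1]) ** 2
w = 1.0 / np.maximum(d2, 1e-6)
fused = pm_from_aod + (w * residual).sum(axis=-1) / w.sum(axis=-1)
print("fused PM2.5 range: %.1f - %.1f ug/m3" % (fused.min(), fused.max()))
```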
4D electron microscopy: principles and applications.
Flannigan, David J; Zewail, Ahmed H
2012-10-16
The transmission electron microscope (TEM) is a powerful tool enabling the visualization of atoms with length scales smaller than the Bohr radius at a factor of only 20 larger than the relativistic electron wavelength of 2.5 pm at 200 keV. The ability to visualize matter at these scales in a TEM is largely due to the efforts made in correcting for the imperfections in the lens systems which introduce aberrations and ultimately limit the achievable spatial resolution. In addition to the progress made in increasing the spatial resolution, the TEM has become an all-in-one characterization tool. Indeed, most of the properties of a material can be directly mapped in the TEM, including the composition, structure, bonding, morphology, and defects. The scope of applications spans essentially all of the physical sciences and includes biology. Until recently, however, high resolution visualization of structural changes occurring on sub-millisecond time scales was not possible. In order to reach the ultrashort temporal domain within which fundamental atomic motions take place, while simultaneously retaining high spatial resolution, an entirely new approach from that of millisecond-limited TEM cameras had to be conceived. As shown below, the approach is also different from that of nanosecond-limited TEM, whose resolution cannot offer the ultrafast regimes of dynamics. For this reason "ultrafast electron microscopy" is reserved for the field which is concerned with femtosecond to picosecond resolution capability of structural dynamics. In conventional TEMs, electrons are produced by heating a source or by applying a strong extraction field. Both methods result in the stochastic emission of electrons, with no control over temporal spacing or relative arrival time at the specimen. The timing issue can be overcome by exploiting the photoelectric effect and using pulsed lasers to generate precisely timed electron packets of ultrashort duration. The spatial and temporal resolutions achievable with short intense pulses containing a large number of electrons, however, are limited to tens of nanometers and nanoseconds, respectively. This is because Coulomb repulsion is significant in such a pulse, and the electrons spread in space and time, thus limiting the beam coherence. It is therefore not possible to image the ultrafast elementary dynamics of complex transformations. The challenge was to retain the high spatial resolution of a conventional TEM while simultaneously enabling the temporal resolution required to visualize atomic-scale motions. In this Account, we discuss the development of four-dimensional ultrafast electron microscopy (4D UEM) and summarize techniques and applications that illustrate the power of the approach. In UEM, images are obtained either stroboscopically with coherent single-electron packets or with a single electron bunch. Coulomb repulsion is absent under the single-electron condition, thus permitting imaging, diffraction, and spectroscopy, all with high spatiotemporal resolution, the atomic scale (sub-nanometer and femtosecond). The time resolution is limited only by the laser pulse duration and energy carried by the electron packets; the CCD camera has no bearing on the temporal resolution. In the regime of single pulses of electrons, the temporal resolution of picoseconds can be attained when hundreds of electrons are in the bunch. 
The applications given here are selected to highlight phenomena of different length and time scales, from atomic motions during structural dynamics to phase transitions and nanomechanical oscillations. We conclude with a brief discussion of emerging methods, which include scanning ultrafast electron microscopy (S-UEM), scanning transmission ultrafast electron microscopy (ST-UEM) with convergent beams, and time-resolved imaging of biological structures at ambient conditions with environmental cells.
Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D
2018-01-01
Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and sampling design that, once temporal sampling is incorporated, will be useful to detect changes in marine benthic communities across multiple spatial and temporal scales.
Simulation of the Atmospheric Boundary Layer for Wind Energy Applications
NASA Astrophysics Data System (ADS)
Marjanovic, Nikola
Energy production from wind is an increasingly important component of overall global power generation, and will likely continue to gain an even greater share of electricity production as world governments attempt to mitigate climate change and wind energy production costs decrease. Wind energy generation depends on wind speed, which is greatly influenced by local and synoptic environmental forcings. Synoptic forcing, such as a cold frontal passage, exists on a large spatial scale while local forcing manifests itself on a much smaller scale and could result from topographic effects or land-surface heat fluxes. Synoptic forcing, if strong enough, may suppress the effects of generally weaker local forcing. At the even smaller scale of a wind farm, upstream turbines generate wakes that decrease the wind speed and increase the atmospheric turbulence at the downwind turbines, thereby reducing power production and increasing fatigue loading that may damage turbine components, respectively. Simulation of atmospheric processes that span a considerable range of spatial and temporal scales is essential to improve wind energy forecasting, wind turbine siting, turbine maintenance scheduling, and wind turbine design. Mesoscale atmospheric models predict atmospheric conditions using observed data, for a wide range of meteorological applications across scales from thousands of kilometers to hundreds of meters. Mesoscale models include parameterizations for the major atmospheric physical processes that modulate wind speed and turbulence dynamics, such as cloud evolution and surface-atmosphere interactions. The Weather Research and Forecasting (WRF) model is used in this dissertation to investigate the effects of model parameters on wind energy forecasting. WRF is used for case study simulations at two West Coast North American wind farms, one with simple and one with complex terrain, during both synoptically and locally-driven weather events. The model's performance with different grid nesting configurations, turbulence closures, and grid resolutions is evaluated by comparison to observation data. Improvement to simulation results from the use of more computationally expensive high resolution simulations is only found for the complex terrain simulation during the locally-driven event. Physical parameters, such as soil moisture, have a large effect on locally-forced events, and prognostic turbulence kinetic energy (TKE) schemes are found to perform better than non-local eddy viscosity turbulence closure schemes. Mesoscale models, however, do not resolve turbulence directly, which is important at finer grid resolutions capable of resolving wind turbine components and their interactions with atmospheric turbulence. Large-eddy simulation (LES) is a numerical approach that resolves the largest scales of turbulence directly by separating large-scale, energetically important eddies from smaller scales with the application of a spatial filter. LES allows higher fidelity representation of the wind speed and turbulence intensity at the scale of a wind turbine which parameterizations have difficulty representing. Use of high-resolution LES enables the implementation of more sophisticated wind turbine parameterizations to create a robust model for wind energy applications using grid spacing small enough to resolve individual elements of a turbine such as its rotor blades or rotation area. 
Generalized actuator disk (GAD) and line (GAL) parameterizations are integrated into WRF to complement its real-world weather modeling capabilities and better represent wind turbine airflow interactions, including wake effects. The GAD parameterization represents the wind turbine as a two-dimensional disk swept out by the rotation of the turbine blades. Forces on the atmosphere are computed along each blade and distributed over rotating, annular rings intersecting the disk. While typical LES resolution (10-20 m) is normally sufficient to resolve the GAD, the GAL parameterization requires significantly higher resolution (1-3 m) because it does not distribute the forces from the blades over annular elements, but instead applies them along lines representing the individual blades. In this dissertation, the GAL is implemented into WRF and evaluated against the GAD parameterization using data from two field campaigns that measured the inflow and near-wake regions of a single turbine. The datasets are chosen to allow validation under the weakly convective and weakly stable conditions characterizing most turbine operations. The parameterizations are evaluated with respect to their ability to represent wake wind speed, variance, and vorticity by comparing fine-resolution GAD and GAL simulations along with coarse-resolution GAD simulations. Coarse-resolution GAD simulations produce aggregated wake characteristics similar to both fine-resolution GAD and GAL simulations (saving on computational cost), while the GAL parameterization resolves near-wake physics (such as vorticity shedding and wake expansion) for high-fidelity applications.
Agent-based large-scale emergency evacuation using real-time open government data.
DOT National Transportation Integrated Search
2014-01-01
The open government initiatives have provided tremendous data resources for the transportation system and emergency services in urban areas. This paper proposes a traffic simulation framework using high temporal resolution demographic data and ...
NASA Technical Reports Server (NTRS)
Gaskell, R. W.; Synnott, S. P.
1987-01-01
To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.
Rodríguez, José-Rodrigo; Turégano-López, Marta; DeFelipe, Javier; Merchán-Pérez, Angel
2018-01-01
Semithin sections are commonly used to examine large areas of tissue with an optical microscope, in order to locate and trim the regions that will later be studied with the electron microscope. Ideally, the observation of semithin sections would be from mesoscopic to nanoscopic scales directly, instead of using light microscopy and then electron microscopy (EM). Here we propose a method that makes it possible to obtain high-resolution scanning EM images of large areas of the brain in the millimeter to nanometer range. Since our method is compatible with light microscopy, it is also feasible to generate hybrid light and electron microscopic maps. Additionally, the same tissue blocks that have been used to obtain semithin sections can later be used, if necessary, for transmission EM, or for focused ion beam milling and scanning electron microscopy (FIB-SEM).
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations arising from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured and simulated values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
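As a rough illustration of the two-point idea (not the HIEST data reduction itself), assume each accelerometer reads the rigid-body acceleration plus one natural-vibration mode scaled by an assumed mode-shape value at its location; the two readings can then be combined to cancel the vibratory term and recover the drag force from F = m·a. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical example: recover the rigid-body (drag) acceleration from two
# accelerometers when a single bending mode contaminates both signals.
# a_i(t) = a_rigid(t) + phi_i * qdd(t); phi1, phi2 are assumed mode-shape
# amplitudes at the two sensor locations (they would come from a modal test).
m = 250.0                  # model mass [kg] (illustrative)
phi1, phi2 = 1.0, -0.6     # assumed mode-shape values at sensors 1 and 2

t = np.linspace(0.0, 1.0e-3, 2001)             # 1 ms test window
a_rigid = 40.0 * np.ones_like(t)               # true drag-induced acceleration [m/s^2]
qdd = 15.0 * np.sin(2 * np.pi * 800.0 * t)     # natural-vibration contribution

a1 = a_rigid + phi1 * qdd                      # accelerometer 1 reading
a2 = a_rigid + phi2 * qdd                      # accelerometer 2 reading

# Solve the 2x2 system [[1, phi1], [1, phi2]] @ [a_rigid, qdd] = [a1, a2]
# at every time step; the first component is the vibration-free acceleration.
A = np.array([[1.0, phi1], [1.0, phi2]])
recovered = np.linalg.solve(A, np.vstack([a1, a2]))[0]

drag_force = m * recovered                     # F = m * a_rigid
print(drag_force.mean())                       # ~10 kN for these made-up numbers
```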
NASA Technical Reports Server (NTRS)
Campbell, W. J.; Josberger, E. G.; Gloersen, P.; Johannessen, O. M.; Guest, P. S.
1987-01-01
The data acquired during the summer 1984 Marginal Ice Zone Experiment in the Fram Strait-Greenland Sea marginal ice zone, using airborne active and passive microwave sensors and the Nimbus 7 SMMR, were analyzed to compile a sequential description of the mesoscale and large-scale ice morphology variations during the period of June 6 - July 16, 1984. Throughout the experiment, the long ice edge between northwest Svalbard and central Greenland meandered; eddies were repeatedly formed, moved, and disappeared but the ice edge remained within a 100-km-wide zone. The ice pack behind this alternately diffuse and compact edge underwent rapid and pronounced variations in ice concentration over a 200-km-wide zone. The high-resolution ice concentration distributions obtained in the aircraft images agree well with the low-resolution distributions of SMMR images.
D.J. Hayes; W.B. Cohen
2006-01-01
This article describes the development of a methodology for scaling observations of changes in tropical forest cover to large areas at high temporal frequency from coarse-resolution satellite imagery. The approach for estimating proportional forest cover change as a continuous variable is based on a regression model that relates multispectral, multitemporal Moderate...
Large-area super-resolution optical imaging by using core-shell microfibers
NASA Astrophysics Data System (ADS)
Liu, Cheng-Yang; Lo, Wei-Chieh
2017-09-01
We first numerically and experimentally report large-area super-resolution optical imaging achieved by using core-shell microfibers. The particular spatial electromagnetic waves for different core-shell microfibers are studied by using finite-difference time-domain and ray tracing calculations. The focusing properties of photonic nanojets are evaluated in terms of intensity profile and full width at half-maximum along propagation and transversal directions. In experiment, the general optical fiber is chemically etched down to 6 μm diameter and coated with different metallic thin films by using glancing angle deposition. The direct imaging of photonic nanojets for different core-shell microfibers is performed with a scanning optical microscope system. We show that the intensity distribution of a photonic nanojet is highly related to the metallic shell due to the surface plasmon polaritons. Furthermore, large-area super-resolution optical imaging is performed by using different core-shell microfibers placed over the nano-scale grating with 150 nm line width. The core-shell microfiber-assisted imaging is achieved with super-resolution and hundreds of times the field-of-view in contrast to microspheres. The possible applications of these core-shell optical microfibers include real-time large-area micro-fluidics and nano-structure inspections.
Full-scale high-speed ``Edgerton'' retroreflective shadowgraphy of gunshots
NASA Astrophysics Data System (ADS)
Settles, Gary
2005-11-01
Almost 1/2 century ago, H. E. ``Doc'' Edgerton demonstrated a simple and elegant direct-shadowgraph technique for imaging large-scale events like explosions and gunshots. Only a retroreflective screen, flashlamp illumination, and an ordinary view camera were required. Retroreflective shadowgraphy has seen occasional use since then, but its unique combination of large scale, simplicity and portability has barely been tapped. It functions well in environments hostile to most optical diagnostics, such as full-scale outdoor daylight ballistics and explosives testing. Here, shadowgrams cast upon a 2.4 m square retroreflective screen are imaged by a Photron Fastcam APX-RS digital camera that is capable of megapixel image resolution at 3000 frames/sec up to 250,000 frames/sec at lower resolution. Microsecond frame exposures are used to examine the external ballistics of several firearms, including a high-powered rifle, an AK-47 submachine gun, and several pistols and revolvers. Muzzle blast phenomena and the mechanism of gunpowder residue deposition on the shooter's hands are clearly visualized. In particular, observing the firing of a pistol with and without a silencer (suppressor) suggests that some of the muzzle blast energy is converted by the silencer into supersonic jet noise.
Downscaling modelling system for multi-scale air quality forecasting
NASA Astrophysics Data System (ADS)
Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.
2010-09-01
Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales with nesting of higher resolution models into larger scale lower resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, there is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered. They are chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, a building effects parameterisation is used. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ε linear eddy-viscosity model, a k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with a corresponding mass-conserving interpolation. At the boundaries a Dirichlet-type condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for the meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the main meteorological variables and chemical species are represented as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine-resolution model.
A framework for global river flood risk assessment
NASA Astrophysics Data System (ADS)
Winsemius, H. C.; Van Beek, L. P. H.; Bouwman, A.; Ward, P. J.; Jongman, B.
2012-04-01
There is an increasing need for strategic global assessments of flood risks. Such assessments may be required by: (a) International Financing Institutes and Disaster Management Agencies to evaluate where, when, and which investments in flood risk mitigation are most required; (b) (re-)insurers, who need to determine their required coverage capital; and (c) large companies to account for risks of regional investments. In this contribution, we propose a framework for global river flood risk assessment. The framework combines coarse-resolution hazard probability distributions, derived from global hydrological model runs (typically about 0.5° resolution), with high-resolution estimates of exposure indicators. The high resolution is required because floods typically occur at a much smaller scale than the typical resolution of global hydrological models, and exposure indicators such as population, land use and economic value generally vary strongly in space and time. The framework therefore estimates hazard at a high resolution (~1 km²) by using a) global forcing datasets of the current (or, in scenario mode, future) climate; b) a global hydrological model; c) a global flood routing model; and d), importantly, a flood spatial downscaling routine. This results in probability distributions of annual flood extremes as an indicator of flood hazard, at the appropriate resolution. A second component of the framework combines the hazard probability distribution with classical flood impact models (e.g. damage, affected GDP, affected population) to establish indicators for flood risk. The framework can be applied with a large number of datasets and models, and the sensitivities of such choices can be evaluated by the user. The framework is applied using the global hydrological model PCR-GLOBWB, combined with a global flood routing model. Downscaling of the hazard probability distributions to ~1 km² resolution is performed with a new downscaling algorithm, applied to a number of target regions. We demonstrate the use of impact models in these regions based on global GDP, population, and land use maps. In this application, we show sensitivities of the estimated risks with regard to the use of different climate input datasets, decisions made in the downscaling algorithm, and different approaches to establish distributed estimates of GDP and asset exposure to flooding.
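A minimal sketch of the risk-combination step described above, assuming illustrative return periods, inundation depths, exposure values and a linear depth-damage curve (none of which are taken from the paper): expected annual damage is the integral of damage over annual exceedance probability.

```python
import numpy as np

return_periods = np.array([2.0, 10.0, 50.0, 100.0, 500.0])   # years (illustrative)
exceed_prob = 1.0 / return_periods                            # annual exceedance probability

# depth[i, j]: inundation depth (m) for return period i at grid cell j (synthetic)
depth = np.array([[0.0, 0.1, 0.3],
                  [0.2, 0.5, 0.9],
                  [0.6, 1.1, 1.6],
                  [0.9, 1.5, 2.1],
                  [1.4, 2.3, 3.0]])

exposure = np.array([5.0e6, 1.2e7, 8.0e5])   # asset value per cell (hypothetical units)

def damage_fraction(d):
    """Illustrative depth-damage curve: linear up to 3 m of water, then total loss."""
    return np.clip(d / 3.0, 0.0, 1.0)

damage = damage_fraction(depth) * exposure    # damage per return period and cell

# Expected annual damage: integrate damage over annual exceedance probability.
idx = np.argsort(exceed_prob)                 # ascending probability
ead_per_cell = np.trapz(damage[idx], exceed_prob[idx], axis=0)
print(ead_per_cell, ead_per_cell.sum())
```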
Observational constraints on earthquake source scaling: Understanding the limits in resolution
Hough, S.E.
1996-01-01
I examine the resolution of the type of stress drop estimates that have been used to place observational constraints on the scaling of earthquake source processes. I first show that apparent stress and Brune stress drop are equivalent to within a constant given any source spectral decay between ω^-1.5 and ω^-3 (i.e., any plausible value), and so consistent scaling is expected for the two estimates. I then discuss the resolution and scaling of Brune stress drop estimates in the context of empirical Green's function results from recent earthquake sequences, including the 1992 Joshua Tree, California, mainshock and its aftershocks. I show that no definitive scaling of stress drop with moment is revealed over the moment range 10^19-10^25; within this sequence, however, there is a tendency for moderate-sized (M 4-5) events to be characterized by high stress drops. However, well-resolved results for recent M > 6 events are inconsistent with any extrapolated stress increase with moment for the aftershocks. Focusing on corner frequency estimates for smaller (M < 3.5) events, I show that resolution is extremely limited even after empirical Green's function deconvolutions. A fundamental limitation to resolution is the paucity of good signal-to-noise at frequencies above 60 Hz, a limitation that will affect nearly all surficial recordings of ground motion in California and many other regions. Thus, while the best available observational results support a constant stress drop for moderate- to large-sized events, very little robust observational evidence exists to constrain the quantities that bear most critically on our understanding of source processes: stress drop values and stress drop scaling for small events.
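For reference, the Brune-type estimates discussed here follow the standard relations between stress drop Δσ, seismic moment M₀, source radius r and corner frequency f_c (with shear-wave speed β); these are textbook formulas, not values or derivations quoted from this study:

```latex
\Delta\sigma \;=\; \frac{7}{16}\,\frac{M_0}{r^{3}},
\qquad
r \;=\; \frac{2.34\,\beta}{2\pi f_c}
\quad\Longrightarrow\quad
\Delta\sigma \;\propto\; M_0\,f_c^{3},
\qquad
\sigma_a \;=\; \mu\,\frac{E_s}{M_0}.
```

Here σ_a is the apparent stress, with rigidity μ and radiated seismic energy E_s; the proportionality of the two measures under a fixed spectral decay is what underlies the equivalence noted in the abstract.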
Obscuring and Feeding Supermassive Black Holes with Evolving Nuclear Star Clusters
NASA Astrophysics Data System (ADS)
Schartmann, M.; Burkert, A.; Krause, M.; Camenzind, M.; Meisenheimer, K.; Davies, R. I.
2010-05-01
Recently, high-resolution observations made with the help of the near-infrared adaptive optics integral field spectrograph SINFONI at the VLT proved the existence of massive and young nuclear star clusters in the centers of a sample of Seyfert galaxies. With the help of high-resolution hydrodynamical simulations with the pluto code, we follow the evolution of such clusters, especially focusing on mass and energy feedback from young stars. This leads to a filamentary inflow of gas on large scales (tens of parsecs), whereas a turbulent and very dense disk builds up on the parsec scale. Here we concentrate on the long-term evolution of the nuclear disk in NGC 1068 with the help of an effective viscous disk model, using the mass input from the large-scale simulations and accounting for star formation in the disk. This two-stage modeling enables us to connect the tens-of-parsecs scale region (observable with SINFONI) with the parsec-scale environment (MIDI observations). At the current age of the nuclear star cluster, our simulations predict disk sizes of order 0.8 to 0.9 pc, gas masses of order 10⁶ M⊙, and mass transfer rates through the inner boundary of order 0.025 M⊙ yr⁻¹, in good agreement with values derived from observations.
Shafer, Sarah L; Bartlein, Patrick J; Gray, Elizabeth M; Pelltier, Richard T
2015-01-01
Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0-58.0°N latitude by 136.6-103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070-2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
Wen, C; Ma, Y J
2018-03-01
The determination of atomic structures and further quantitative information such as chemical compositions at atomic scale for semiconductor defects or heteroepitaxial interfaces can provide direct evidence to understand their formation, modification, and/or effects on the properties of semiconductor films. The commonly used method, high-resolution transmission electron microscopy (HRTEM), suffers from difficulty in acquiring images that correctly show the crystal structure at atomic resolution, because of the limitation in microscope resolution or deviation from the Scherzer-defocus conditions. In this study, an image processing method, image deconvolution, was used to achieve atomic-resolution (∼1.0 Å) structure images of small lattice-mismatch (∼1.0%) AlN/6H-SiC (0001) and large lattice-mismatch (∼8.5%) AlSb/GaAs (001) heteroepitaxial interfaces using simulated HRTEM images of a conventional 300-kV field-emission-gun transmission electron microscope under non-Scherzer-defocus conditions. Then, atomic-scale chemical compositions at the interface were determined for the atomic intermixing and Lomer dislocation with an atomic step by analyzing the deconvoluted image contrast. Furthermore, the effect of dynamical scattering on contrast analysis was also evaluated for differently weighted atomic columns in the compositions.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; ...
2016-11-22
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
Monitoring Termite-Mediated Ecosystem Processes Using Moderate and High Resolution Satellite Imagery
NASA Astrophysics Data System (ADS)
Lind, B. M.; Hanan, N. P.
2016-12-01
Termites are considered dominant decomposers and prominent ecosystem engineers in the global tropics and they build some of the largest and architecturally most complex non-human-made structures in the world. Termite mounds significantly alter soil texture, structure, and nutrients, and have major implications for local hydrological dynamics, vegetation characteristics, and biological diversity. An understanding of how these processes change across large scales has been limited by our ability to detect termite mounds at high spatial resolutions. Our research develops methods to detect large termite mounds in savannas across extensive geographic areas using moderate and high resolution satellite imagery. We also investigate the effect of termite mounds on vegetation productivity using Landsat-8 maximum composite NDVI data as a proxy for production. Large termite mounds in arid and semi-arid Senegal generate highly reflective 'mound scars' with diameters ranging from 10 m at minimum to greater than 30 m. As Sentinel-2 has several bands with 10 m resolution and Landsat-8 has improved calibration, higher radiometric resolution, 15 m spatial resolution (pansharpened), and improved contrast between vegetated and bare surfaces compared to previous Landsat missions, we found that the largest and most influential mounds in the landscape can be detected. Because mounds as small as 4 m in diameter are easily detected in high resolution imagery, we used these data to validate detection results and quantify omission errors for smaller mounds.
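A minimal sketch of the NDVI maximum-value compositing used above as a productivity proxy, with synthetic reflectance arrays standing in for the Landsat-8 red and near-infrared bands (the band handling and the mound mask are assumptions for illustration):

```python
import numpy as np

# Minimal sketch: NDVI from red/NIR surface reflectance and a per-pixel
# maximum-value composite over a stack of acquisition dates. Arrays are
# synthetic stand-ins for Landsat-8 scenes.
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.25, size=(6, 200, 200))   # (date, row, col)
nir = rng.uniform(0.15, 0.45, size=(6, 200, 200))

ndvi = (nir - red) / (nir + red + 1e-9)              # per-date NDVI
max_composite = ndvi.max(axis=0)                     # maximum-value composite

# e.g. compare mean composite NDVI on vs. off mapped mound scars
mound_mask = np.zeros((200, 200), dtype=bool)
mound_mask[90:110, 90:110] = True                    # hypothetical mound footprint
print(max_composite[mound_mask].mean(), max_composite[~mound_mask].mean())
```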
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
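The core memory-saving idea can be sketched with a randomized SVD of the dictionary followed by matching in the compressed subspace; the sizes, rank, oversampling and matching rule below are illustrative assumptions, not the authors' exact pipeline (which additionally fits the compressed dictionary with low-degree polynomials):

```python
import numpy as np

# Sketch of a randomized SVD compression of an MRF dictionary D (time points x
# dictionary entries), followed by matching in the rank-k subspace.
rng = np.random.default_rng(0)
n_t, n_atoms, k = 1000, 20000, 25
D = rng.standard_normal((n_t, n_atoms)).astype(np.float32)   # stand-in dictionary

# Randomized range finder: project onto k (+ oversampling) random directions.
p = 10
Y = D @ rng.standard_normal((n_atoms, k + p)).astype(np.float32)
Q, _ = np.linalg.qr(Y)                      # orthonormal basis for the range of D

B = Q.T @ D                                 # small (k+p) x n_atoms matrix
Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
U = (Q @ Ub)[:, :k]                         # approximate left singular vectors of D

D_small = U.T @ D                           # compressed dictionary, k x n_atoms

# Match a measured signal in the compressed space (inner products with unit-norm atoms).
signal = D[:, 1234] + 0.05 * rng.standard_normal(n_t).astype(np.float32)
scores = (U.T @ signal) @ (D_small / np.linalg.norm(D_small, axis=0))
print(int(np.argmax(scores)))               # index of the best-matching dictionary atom
```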
Temporal and spatial scaling impacts on extreme precipitation
NASA Astrophysics Data System (ADS)
Eggert, B.; Berg, P.; Haerter, J. O.; Jacob, D.; Moseley, C.
2015-01-01
Both in the current climate and in the light of climate change, understanding of the causes and risk of precipitation extremes is essential for protection of human life and adequate design of infrastructure. Precipitation extreme events depend qualitatively on the temporal and spatial scales at which they are measured, in part due to the distinct types of rain formation processes that dominate extremes at different scales. To capture these differences, we first filter large datasets of high-resolution radar measurements over Germany (5 min temporally and 1 km spatially) using synoptic cloud observations, to distinguish convective and stratiform rain events. In a second step, for each precipitation type, the observed data are aggregated over a sequence of time intervals and spatial areas. The resulting matrix allows a detailed investigation of the resolutions at which convective or stratiform events are expected to contribute most to the extremes. We analyze where the statistics of the two types differ and discuss at which resolutions transitions occur between dominance of either of the two precipitation types. We characterize the scales at which the convective or stratiform events will dominate the statistics. For both types, we further develop a mapping between pairs of spatially and temporally aggregated statistics. The resulting curve is relevant when deciding on data resolutions where statistical information in space and time is balanced. Our study may hence also serve as a practical guide for modelers, and for planning the space-time layout of measurement campaigns. We also describe a mapping between different pairs of resolutions, possibly relevant when working with mismatched model and observational resolutions, such as in statistical bias correction.
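A minimal sketch of the space-time aggregation matrix described above, using a synthetic stand-in for the 5-min/1-km radar field and a high quantile as the extreme statistic (the aggregation steps and quantile are illustrative choices):

```python
import numpy as np

# Average a high-resolution precipitation field (time, y, x) over blocks of
# increasing temporal and spatial size and record an extreme quantile for each
# (dt, dx) combination. The random field is a synthetic stand-in for radar data.
rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.1, scale=2.0, size=(288, 64, 64))   # one day, 5-min, 1-km

def aggregate(field, dt, dx):
    """Block-average over dt time steps and dx x dx pixels."""
    nt, ny, nx = field.shape
    f = field[:nt - nt % dt, :ny - ny % dx, :nx - nx % dx]
    f = f.reshape(f.shape[0] // dt, dt, f.shape[1] // dx, dx, f.shape[2] // dx, dx)
    return f.mean(axis=(1, 3, 5))

temporal = [1, 3, 12]        # 5 min, 15 min, 1 h
spatial = [1, 4, 16]         # 1 km, 4 km, 16 km
extremes = np.array([[np.quantile(aggregate(rain, dt, dx), 0.999)
                      for dx in spatial] for dt in temporal])
print(extremes)              # rows: temporal aggregation, columns: spatial aggregation
```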
NASA Astrophysics Data System (ADS)
McClain, Bobbi J.; Porter, William F.
2000-11-01
Satellite imagery is a useful tool for large-scale habitat analysis; however, its limitations need to be tested. We tested these limitations by varying the methods of a habitat evaluation for white-tailed deer (Odocoileus virginianus) in the Adirondack Park, New York, USA, utilizing harvest data to create and validate the assessment models. We used two classified images, one with a large minimum mapping unit but high accuracy and one with no minimum mapping unit but slightly lower accuracy, to test the sensitivity of the evaluation to these differences. We tested the utility of two methods of assessment: habitat suitability index modeling and pattern recognition modeling. We varied the scale at which the models were applied by using five separate sizes of analysis windows. Results showed that the presence of a large minimum mapping unit eliminates important details of the habitat. Window size is relatively unimportant if the data are averaged to a large resolution (i.e., township), but if the data are used at the smaller resolution, then the window size is an important consideration. In the Adirondacks, the proportion of hardwood and softwood in an area is most important to the spatial dynamics of deer populations. The low occurrence of open area in all parts of the park either limits the effect of this cover type on the population or limits our ability to detect the effect. The arrangement and interspersion of cover types were not significant to deer populations.
StePS: Stereographically Projected Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-05-01
StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N²) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The N² force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
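A minimal sketch of a naive O(N²) direct force summation (with gravitational softening and G = 1 units chosen for illustration); it is exactly this kind of pairwise loop that maps naturally onto graphics cards:

```python
import numpy as np

# Naive O(N^2) direct-summation gravity. Masses, softening and G = 1 units are
# illustrative only; this is not the StePS implementation itself.
def direct_forces(pos, mass, eps=0.01):
    """Return the softened gravitational acceleration on every particle."""
    dx = pos[None, :, :] - pos[:, None, :]          # (N, N, 3) pairwise separations
    r2 = (dx ** 2).sum(axis=-1) + eps ** 2          # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # no self-force
    return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(1024, 3))
mass = np.full(1024, 1.0 / 1024)
acc = direct_forces(pos, mass)
print(acc.shape)                                    # (1024, 3)
```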
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction The process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, the knowledge about their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated due to the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. Therefore, the objective of this study was to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties considering scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA). Materials and Methods Time series covering the years 1982-2011 were spatially averaged from 1 to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data of coarser resolutions were disaggregated to 1 km. Results and Conclusions Analysing the effect of spatial averaging on multifractal properties, we observed that spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids, and MS parameters were biased by -29.1 % (precipitation; width of MS) up to >4 % (minimum temperature and radiation; asymmetry of MS). Also, the spatial variability of MS parameters was strongly affected at the highest aggregation (100 km). The obtained results confirm that spatial data aggregation may strongly affect temporal scaling properties. This should be taken into account when upscaling for large-scale studies. Acknowledgements The study was conducted within FACCE MACSUR. Please see Baranowski et al. (2015) for details on funding. References Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., Zhao, G., Van Bussel, L.G.J. et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei, E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.
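For orientation, a compact sketch of the MFDFA fluctuation function on a synthetic series (window sizes, detrending order and q values are illustrative, not the study's configuration): the cumulative profile is detrended segment-wise, the q-th order fluctuation function F_q(s) is formed from the segment variances, and generalized Hurst exponents follow from the slopes of log F_q versus log s.

```python
import numpy as np

def mfdfa(x, scales, q_values, order=2):
    """Minimal MFDFA: returns F_q(s) for each q (rows) and scale (columns)."""
    profile = np.cumsum(x - np.mean(x))
    Fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # variance of residuals around a local polynomial trend in each segment
        var = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                        for seg in segs])
        for i, q in enumerate(q_values):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                Fq[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    return Fq

x = np.random.default_rng(0).standard_normal(4096)        # stand-in daily series
scales = np.array([16, 32, 64, 128, 256])
q_values = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
Fq = mfdfa(x, scales, q_values)
h = np.polyfit(np.log(scales), np.log(Fq).T, 1)[0]         # generalized Hurst exponents h(q)
print(h)
```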
Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model
NASA Astrophysics Data System (ADS)
Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.
2017-12-01
This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June-through-September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium Range Weather Forecast (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.
Large-area mapping of biodiversity
Scott, J.M.; Jennings, M.D.
1998-01-01
The age of discovery, description, and classification of biodiversity is entering a new phase. In responding to the conservation imperative, we can now supplement the essential work of systematics with spatially explicit information on species and assemblages of species. This is possible because of recent conceptual, technical, and organizational progress in generating synoptic views of the earth's surface and a great deal of its biological content, at multiple scales of thematic as well as geographic resolution. The development of extensive spatial data on species distributions and vegetation types provides us with a framework for: (a) assessing what we know and where we know it at meso-scales, and (b) stratifying the biological universe so that higher-resolution surveys can be more efficiently implemented, covering, for example, geographic adequacy of specimen collections, population abundance, reproductive success, and genetic dynamics. The land areas involved are very large, and the questions, such as resolution, scale, classification, and accuracy, are complex. In this paper, we provide examples from the United States Gap Analysis Program on the advantages and limitations of mapping the occurrence of terrestrial vertebrate species and dominant land-cover types over large areas as joint ventures and in multi-organizational partnerships, and how these cooperative efforts can be designed to implement results from data development and analyses as on-the-ground actions. Clearly, new frameworks for thinking about biogeographic information as well as organizational cooperation are needed if we are to have any hope of documenting the full range of species occurrences and ecological processes in ways meaningful to their management. The Gap Analysis experience provides one model for achieving these new frameworks.
NASA Technical Reports Server (NTRS)
Goldsmith, Paul F.
2012-01-01
Surveys of all different types provide basic data using different tracers. Molecular clouds have structure over a very wide range of scales. Thus, "high resolution" surveys and studies of selected nearby clouds add critical information. The combination of large area and high resolution allows increased spatial dynamic range, which in turn enables detection of new and perhaps critical morphology (e.g. filaments). Theoretical modeling has made major progress, and suggests that multiple forces are at work. Galactic-scale modeling is also progressing and indicates that stellar feedback is required. Models must strive to reproduce observed cloud structure at all scales. Astrochemical observations are not unrelated to questions of cloud evolution and star formation, but we are still learning how to use this capability.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in the three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
A unified large/small-scale dynamo in helical turbulence
NASA Astrophysics Data System (ADS)
Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel
2016-09-01
We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number Re_M. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the Re_M-dependent quenching of B̄/B_rms. These results are obtained for 1024³ DNS with magnetic Prandtl numbers of Pr_M = 0.1 and 10. For Pr_M = 0.1, B̄/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For Pr_M = 10, B̄/B_rms grows from much less than 0.01 to values of order 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.
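The magnetic integral scale referred to above is conventionally defined from the magnetic energy spectrum E_M(k); the expression below is the standard spectral definition rather than a formula quoted from the paper:

```latex
\ell_{\mathrm{int}} \;=\; \frac{\displaystyle\int k^{-1}\, E_{\mathrm{M}}(k)\,\mathrm{d}k}{\displaystyle\int E_{\mathrm{M}}(k)\,\mathrm{d}k},
```

so that a shift of spectral power toward smaller k during saturation directly shows up as growth of the integral scale.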
Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; ...
2016-04-20
Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. As a result, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (101) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.
2013-01-01
Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. Especially, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint-feature learning, but at the cost of higher sensitivity to missing data.
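A toy sketch of the early- versus late-fusion contrast (in PyTorch): early fusion stacks the Lidar-derived layer with the optical channels before a single network, whereas late fusion keeps separate branches and merges their predictions. The tiny convolutional blocks stand in for full fully convolutional encoder-decoders; channel counts, class count and the averaging rule for late fusion are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy contrast between early and late fusion for semantic labeling.
def block(in_ch, n_classes):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, n_classes, 1),
    )

class EarlyFusion(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.net = block(3 + 1, n_classes)           # optical + Lidar stacked as channels
    def forward(self, optical, lidar):
        return self.net(torch.cat([optical, lidar], dim=1))

class LateFusion(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.opt = block(3, n_classes)               # optical branch
        self.lid = block(1, n_classes)               # Lidar branch
    def forward(self, optical, lidar):
        return 0.5 * (self.opt(optical) + self.lid(lidar))   # average the two predictions

optical = torch.randn(2, 3, 128, 128)                # multispectral patch
lidar = torch.randn(2, 1, 128, 128)                  # Lidar-derived (e.g. nDSM) patch
print(EarlyFusion()(optical, lidar).shape, LateFusion()(optical, lidar).shape)
```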
A Large Scale, High Resolution Agent-Based Insurgency Model
2013-09-30
CUDA is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU) (NVIDIA CUDA Programming Guide 2.0 [Online]).
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...
2015-06-19
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. The simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as the tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-01-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain spanning several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Macander, M. J.; Frost, G. V., Jr.
2015-12-01
Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery to general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that can provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.
High-resolution RCMs as pioneers for future GCMs
NASA Astrophysics Data System (ADS)
Schar, C.; Ban, N.; Arteaga, A.; Charpilloz, C.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Leutwyler, D.; Lüthi, D.; Piaget, N.; Ruedisuehli, S.; Schlemmer, L.; Schulthess, T. C.; Wernli, H.
2017-12-01
Currently large efforts are underway to refine the horizontal resolution of global and regional climate models to O(1 km), with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement will move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. High resolution is particularly attractive in order to better represent critical cloud feedback processes (e.g. related to global climate sensitivity and extratropical summer convection) and extreme events (such as heavy precipitation events, floods, and hurricanes). The presentation will be illustrated using decade-long simulations at 2 km horizontal grid spacing, some of these covering the European continent on a computational mesh with 1536x1536x60 grid points. To accomplish such simulations, use is made of emerging heterogeneous supercomputing architectures, using a version of the COSMO limited-area weather and climate model that is able to run entirely on GPUs. Results show that kilometer-scale resolution dramatically improves the simulation of precipitation in terms of the diurnal cycle and short-term extremes. The modeling framework is used to address changes of precipitation scaling with climate change. It is argued that already today, modern supercomputers would in principle enable global atmospheric convection-resolving climate simulations, provided appropriately refactored codes were available, and provided solutions were found to cope with the rapidly growing output volume. A discussion will be provided of key challenges affecting the design of future high-resolution climate models. It is suggested that km-scale RCMs should be exploited to pioneer this terrain, at a time when GCMs are not yet available at such resolutions. Areas of interest include the development of new parameterization schemes adequate for km-scale resolution, the exploration of new validation methodologies and data sets, the assessment of regional-scale climate feedback processes, and the development of alternative output analysis methodologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroy, Adam K.; Hughes, Annie; Schruba, Andreas
2016-11-01
The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect to measure these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.
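One way to turn such fixed-scale measurements into a boundedness estimate is sketched below, using the virial parameter of a uniform sphere as the statistic; this is a common formulation but not necessarily the exact definition adopted in the paper, and the intensities, conversion factor, and beam size are placeholder values.

    import numpy as np

    # illustrative per-sightline quantities measured from a CO cube at a fixed
    # beam scale (values and conversion factor are assumptions, not from the paper)
    G = 4.301e-3             # gravitational constant in pc Msun^-1 (km/s)^2
    alpha_co = 4.35          # Msun pc^-2 (K km/s)^-1, Galactic CO-to-H2 conversion
    beam_radius_pc = 30.0    # half of a 60 pc measurement scale

    integrated_intensity = np.array([20.0, 55.0, 140.0])   # K km/s
    line_width = np.array([3.0, 5.5, 9.0])                 # km/s (1D dispersion)

    surface_density = alpha_co * integrated_intensity      # Msun pc^-2
    # virial parameter for a uniform sphere of radius R: alpha = 5 sigma^2 R / (G M)
    mass = surface_density * np.pi * beam_radius_pc ** 2
    alpha_vir = 5.0 * line_width ** 2 * beam_radius_pc / (G * mass)
    print(alpha_vir)   # values near ~1-2 suggest roughly bound or virialized gas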
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow user-interaction-based, real-time customization of graphics. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions on a single screen. SLIDE is publicly available under the BSD license, both as an online version and as a stand-alone version, at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
NASA Astrophysics Data System (ADS)
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study showed that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
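The extended viewshed quantity described above (the angle of the target above the local horizon along the line of sight) can be sketched directly on a gridded surface model. The function below walks the profile between observer and target and returns the target angle minus the highest intervening horizon angle; the tiny DEM, cell size, and observer height are illustrative assumptions, and a production GIS implementation would add earth curvature, refraction, and efficient sweep algorithms.

    import numpy as np

    def line_of_sight_angle(dem, cell_size, obs, tgt, obs_height=1.7):
        """Vertical angle (radians) from observer to target minus the highest
        intervening horizon angle; positive means the target is visible.
        `dem` is a 2D elevation array, `obs`/`tgt` are (row, col) indices."""
        (r0, c0), (r1, c1) = obs, tgt
        n = int(max(abs(r1 - r0), abs(c1 - c0)))
        rows = np.linspace(r0, r1, n + 1).round().astype(int)
        cols = np.linspace(c0, c1, n + 1).round().astype(int)
        z0 = dem[r0, c0] + obs_height
        dist = np.hypot(rows - r0, cols - c0) * cell_size
        angles = np.arctan2(dem[rows, cols] - z0, np.where(dist > 0, dist, 1e-9))
        horizon = np.max(angles[1:-1]) if n > 1 else -np.inf
        return angles[-1] - horizon

    dem = np.array([[100, 101, 102, 103],
                    [100, 105, 101, 100],
                    [100, 100, 100, 110]], dtype=float)
    diff = line_of_sight_angle(dem, cell_size=10.0, obs=(0, 0), tgt=(2, 3))
    print("visible" if diff > 0 else "hidden",
          f"(angle above local horizon: {np.degrees(diff):.2f} deg)")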
NASA Astrophysics Data System (ADS)
Noh, S. J.; Kim, S.; Habibi, H.; Seo, D. J.; Welles, E.; Philips, B.; Adams, E.; Smith, M. B.; Wells, E.
2017-12-01
With the development of the National Water Model (NWM), the NWS has made a step-change advance in operational water forecasting by enabling high-resolution hydrologic modeling across the US. As a part of a separate initiative to enhance flash flood forecasting and inundation mapping capacity, the NWS has been mandated to provide forecasts at even finer spatiotemporal resolutions when and where such information is demanded. In this presentation, we describe implementation of the NWM at a hyper resolution over a nested domain. We use WRF-Hydro as the core model but at significantly higher resolutions with scale-commensurate model parameters. The demonstration domain is multiple urban catchments within the Cities of Arlington and Grand Prairie in the Dallas-Fort Worth Metroplex. This area is susceptible to urban flooding due to the hydroclimatology coupled with large impervious cover. The nested model is based on hyper-resolution terrain data to resolve significant land surface features such as streets and large man-made structures, and forced by the high-resolution radar-based quantitative precipitation information. In this presentation, we summarize progress and preliminary results and share issues and challenges.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Kang, In-Sik; Reale, Oreste
2009-01-01
This talk gives an update on the progress and further plans for a coordinated project to carry out and analyze high-resolution simulations of tropical storm activity with a number of state-of-the-art global climate models. Issues addressed include the mechanisms by which SSTs control tropical storm activity on interannual and longer time scales, the modulation of that activity by the Madden-Julian Oscillation on sub-seasonal time scales, as well as the sensitivity of the results to model formulation. The project also encourages companion coarser resolution runs to help assess resolution dependence, and the ability of the models to capture the large-scale and long-term changes in the parameters important for hurricane development. Addressing the above science questions is critical to understanding the nature of the variability of the Asian-Australian monsoon and its regional impacts, and thus CLIVAR RAMP fully endorses the proposed tropical storm simulation activity. The project is open to all interested organizations and investigators, and the results from the runs will be shared among the participants, as well as made available to the broader scientific community for analysis.
High Resolution IRAS Maps and IR Emission of M31 -- II. Diffuse Component and Interstellar Dust
NASA Technical Reports Server (NTRS)
Xu, C.; Helou, G.
1995-01-01
Large-scale dust heating and cooling in the diffuse medium of M31 is studied using the high resolution (HiRes) IRAS maps in conjunction with UV, optical (UBV), and HI maps. A dust heating/cooling model is developed based on a radiative transfer model that assumes a 'Sandwich' configuration of dust and stars and takes account of the effect of dust grain scattering.
Midekisa, Alemayehu; Holl, Felix; Savory, David J; Andrade-Pacheco, Ricardo; Gething, Peter W; Bennett, Adam; Sturrock, Hugh J W
2017-01-01
Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth's land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources.
NASA Technical Reports Server (NTRS)
Waugh, Darryn W.; Plumb, R. Alan
1994-01-01
We present a trajectory technique, contour advection with surgery (CAS), for tracing the evolution of material contours in a specified (including observed) evolving flow. CAS uses the algorithms developed by Dritschel for contour dynamics/surgery to trace the evolution of specified contours. The contours are represented by a series of particles, which are advected by a specified, gridded wind distribution. The resolution of the contours is preserved by continually adjusting the number of particles, and finescale features are produced that are not present in the input data (and cannot easily be generated using standard trajectory techniques). The reliability of the CAS procedure, and its dependence on the spatial and temporal resolution of the wind field, is examined by comparisons with high-resolution numerical data (from contour dynamics calculations and from a general circulation model), and with routine stratospheric analyses. These comparisons show that the large-scale motions dominate the deformation field and that CAS can accurately reproduce small scales from low-resolution wind fields. The CAS technique therefore enables examination of atmospheric tracer transport at previously unattainable resolution.
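A stripped-down version of the contour advection step is sketched below: contour nodes are advected by an interpolated gridded wind, and new nodes are inserted whenever neighbouring particles separate beyond a tolerance, which is how contour resolution is preserved. The wind field, time step, and tolerance are illustrative assumptions; the surgery step (removal of very thin filaments) and the higher-order time stepping used in practice are omitted.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # idealized gridded wind (a simple horizontal shear); stand-in for analysed winds
    x = y = np.linspace(-1.0, 1.0, 41)
    X, Y = np.meshgrid(x, y, indexing="ij")
    u_interp = RegularGridInterpolator((x, y), Y)              # u = y (shear flow)
    v_interp = RegularGridInterpolator((x, y), np.zeros_like(X))

    # material contour represented as an ordered ring of particles
    theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
    pts = np.column_stack([0.4 * np.cos(theta), 0.2 * np.sin(theta)])

    dt, max_sep = 0.05, 0.05
    for _ in range(60):
        # advect nodes with forward Euler (a higher-order scheme would be used in practice)
        uv = np.column_stack([u_interp(pts), v_interp(pts)])
        pts = pts + dt * uv
        # node insertion: preserve contour resolution by splitting long segments
        nxt = np.roll(pts, -1, axis=0)
        seg = np.linalg.norm(nxt - pts, axis=1)
        if np.any(seg > max_sep):
            mid = 0.5 * (pts + nxt)
            pts = np.vstack([np.vstack([p, m]) if s > max_sep else p[None, :]
                             for p, m, s in zip(pts, mid, seg)])
    print("final node count:", len(pts))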
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between the image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between the differences in ground elevation and the RMS error.
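The point-based fit can be illustrated with the eight-parameter affine form that is commonly used to approximate the parallel projection model; whether this matches the exact formulation in the article is an assumption, and the control points below are synthetic.

    import numpy as np

    def fit_parallel_projection(ground, image):
        """Least-squares fit of the 8-parameter affine (parallel projection) model
        x = a1*X + a2*Y + a3*Z + a4,  y = a5*X + a6*Y + a7*Z + a8
        from ground control points. `ground` is (n,3), `image` is (n,2)."""
        n = ground.shape[0]
        A = np.zeros((2 * n, 8))
        A[0::2, 0:3] = ground
        A[0::2, 3] = 1.0
        A[1::2, 4:7] = ground
        A[1::2, 7] = 1.0
        b = image.reshape(-1)                       # interleaved x0, y0, x1, y1, ...
        params = np.linalg.lstsq(A, b, rcond=None)[0]
        rms = np.sqrt(np.mean((A @ params - b) ** 2))
        return params, rms

    # synthetic control points (ground X, Y, Z in metres; image x, y in pixels)
    rng = np.random.default_rng(0)
    ground = rng.uniform([0, 0, 100], [5000, 5000, 400], size=(12, 3))
    true = np.array([0.5, 0.02, 0.1, 200.0, -0.01, 0.5, 0.05, 150.0])
    image = np.column_stack([ground @ true[0:3] + true[3],
                             ground @ true[4:7] + true[7]])
    image += rng.normal(scale=0.3, size=image.shape)   # ~0.3-pixel measurement noise
    params, rms = fit_parallel_projection(ground, image)
    print(params.round(4), "RMS (pixels):", round(rms, 3))

A line-based variant would replace the point observations with constraints that corresponding image and ground lines coincide, which is what allows fewer conventional ground control points to be used.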
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data.
NASA Astrophysics Data System (ADS)
de Beurs, K.; Henebry, G. M.; Owsley, B.; Sokolik, I. N.
2016-12-01
Land surface phenology metrics allow for the summarization of long image time series into a set of annual observations that describe the vegetated growing season. These metrics have been shown to respond to both large scale climatic and anthropogenic impacts. In this study we assemble a time series (2001-2014) of Moderate Resolution Imaging Spectroradiometer (MODIS) Nadir BRDF-Adjusted Reflectance data and land surface temperature data at 0.05° spatial resolution. We then derive land surface phenology metrics focusing on the peak of the growing season by fitting quadratic regression models using NDVI and Accumulated Growing Degree-Days (AGDD) derived from land surface temperature. We link the annual information on the peak timing, the thermal time to peak and the maximum of the growing season with five of the most important large scale climate oscillations: NAO, AO, PDO, PNA and ENSO. We demonstrate several significant correlations between the climate oscillations and the land surface phenology peak metrics for a range of different bioclimatic regions in both dryland Central Asia and the northern Polar Regions. We will then link the correlation results with trends derived by the seasonal Mann-Kendall trend detection method applied to several satellite derived vegetation and albedo datasets.
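The peak-of-season metrics described above follow from a downward-opening quadratic fit of NDVI against accumulated growing degree-days; a minimal sketch with synthetic composites is given below (coefficients and noise level are illustrative, not taken from the study).

    import numpy as np

    # one season of synthetic composites: accumulated growing degree-days (AGDD)
    # and NDVI, standing in for the MODIS-derived series in the study
    rng = np.random.default_rng(0)
    agdd = np.linspace(0.0, 2000.0, 23)
    ndvi = 0.15 + 6e-4 * agdd - 2e-7 * agdd ** 2 + rng.normal(0.0, 0.01, agdd.size)

    # fit NDVI = c0 + c1*AGDD + c2*AGDD^2 (downward-opening parabola, c2 < 0)
    c2, c1, c0 = np.polyfit(agdd, ndvi, deg=2)   # np.polyfit returns highest degree first
    thermal_time_to_peak = -c1 / (2.0 * c2)      # AGDD at which the fitted NDVI peaks
    peak_ndvi = c0 + c1 * thermal_time_to_peak + c2 * thermal_time_to_peak ** 2
    print(f"thermal time to peak: {thermal_time_to_peak:.0f} GDD, peak NDVI: {peak_ndvi:.2f}")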
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan
2005-04-01
The reasons for biases in regional climate simulations were investigated in an attempt to discern whether they arise from deficiencies in the model parameterizations or are due to dynamical problems. Using the Regional Atmospheric Modeling System (RAMS) forced by the National Centers for Environmental Prediction-National Center for Atmospheric Research reanalysis, the detailed climate over North America at 50-km resolution for June 2000 was simulated. First, the RAMS equations were modified to make them applicable to a large region, and its turbulence parameterization was corrected. The initial simulations showed large biases in the location of precipitation patterns and surface air temperatures. By implementing higher-resolution soil data, soil moisture and soil temperature initialization, and corrections to the Kain-Fritsch convective scheme, the temperature biases and precipitation amount errors could be removed, but the precipitation location errors remained. The precipitation location biases could only be improved by implementing spectral nudging of the large-scale (wavelength of 2500 km) dynamics in RAMS. This corrected for circulation errors produced by interactions and reflection of the internal domain dynamics with the lateral boundaries where the model was forced by the reanalysis.
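A schematic of the spectral nudging step used to correct those circulation errors is given below: only Fourier components with wavelengths longer than a cutoff are relaxed toward the driving analysis, leaving smaller scales free to evolve. The grid spacing, cutoff, time step, and relaxation time are illustrative assumptions, not the RAMS configuration.

    import numpy as np

    def spectral_nudge(model_field, driving_field, dx_km, cutoff_km, dt_s, tau_s):
        """Relax only the large-scale (wavelength > cutoff_km) part of a 2D model
        field toward the driving field, leaving smaller scales untouched."""
        ny, nx = model_field.shape
        ky = np.fft.fftfreq(ny, d=dx_km)
        kx = np.fft.fftfreq(nx, d=dx_km)
        kk = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
        large_scale = kk < 1.0 / cutoff_km            # wavenumbers below the cutoff
        diff = np.fft.fft2(driving_field - model_field)
        diff[~large_scale] = 0.0                      # nudge only the retained scales
        increment = np.fft.ifft2(diff).real * (dt_s / tau_s)
        return model_field + increment

    # toy example on a 50 km grid: nudge wavelengths longer than 2500 km toward zero
    rng = np.random.default_rng(0)
    model = rng.normal(size=(96, 96))
    driving = np.zeros((96, 96))
    nudged = spectral_nudge(model, driving, dx_km=50.0, cutoff_km=2500.0,
                            dt_s=300.0, tau_s=3600.0)
    print("field variance before/after:", model.var().round(3), nudged.var().round(3))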
Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM
NASA Astrophysics Data System (ADS)
Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak
2015-04-01
Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid-boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.
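A simple form of the two-dimensional autocorrelation analysis mentioned above is sketched here: the FFT-based autocorrelation of a plume field is azimuthally averaged and its e-folding radius is taken as a characteristic plume scale. The synthetic field, grid spacing, and smoothing scale are assumptions for illustration only.

    import numpy as np
    from scipy.signal import fftconvolve

    def autocorrelation_length(field, dx):
        """e-folding length (same units as dx) of the azimuthally averaged
        2D autocorrelation of `field`, e.g. a black-carbon column burden map."""
        anom = field - field.mean()
        f = np.fft.fft2(anom)
        acf = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)
        acf /= acf.max()                                   # 1 at zero lag
        ny, nx = field.shape
        yy, xx = np.indices((ny, nx))
        r = np.hypot(yy - ny // 2, xx - nx // 2) * dx
        bins = np.arange(0.0, (min(ny, nx) // 2) * dx, dx)
        radial = np.array([acf[(r >= b) & (r < b + dx)].mean() for b in bins])
        below = np.where(radial < np.exp(-1.0))[0]
        return bins[below[0]] if below.size else np.nan

    # synthetic plume field: white noise smoothed to order-100 km structures on a 20 km grid
    rng = np.random.default_rng(1)
    yk, xk = np.indices((21, 21)) - 10
    kernel = np.exp(-0.5 * (np.hypot(yk, xk) / 2.5) ** 2)
    field = fftconvolve(rng.normal(size=(128, 128)), kernel, mode="same")
    print("autocorrelation e-folding scale:", autocorrelation_length(field, dx=20.0), "km")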
Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward
NASA Astrophysics Data System (ADS)
Daley, T. M.
2012-12-01
The injection of CO2 into subsurface formations is at least 50 years old, with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2-induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g. multi-phase flow) and constraining of reservoir models. Early monitoring of small scale pilots associated with CO2-EOR (e.g., the McElroy field and the Lost Hills field) developed many of the methodologies including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller scale pilots can allow more frequent measurements as either individual time-lapse 'snapshots' or as continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, there is typically integration of characterization and monitoring, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g. pressure) over large scales, while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM) where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site where an MBM package with seismic, fluid sampling and distributed fiber sensing was deployed. For future large scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.
NASA Astrophysics Data System (ADS)
Goode, J. R.; Candelaria, T.; Kramer, N. R.; Hill, A. F.
2016-12-01
As global energy demands increase, generating hydroelectric power by constructing dams and reservoirs on large river systems is increasingly seen as a renewable alternative to fossil fuels, especially in emerging economies. Many large-scale hydropower projects are located in steep mountainous terrain, where environmental factors have the potential to conspire against the sustainability and success of such projects. As reservoir storage capacity decreases when sediment builds up behind dams, high sediment yields can limit project life expectancy and overall hydropower viability. In addition, episodically delivered sediment from landslides can make quantifying sediment loads difficult. These factors, combined with remote access, limit the critical data needed to effectively evaluate development decisions. In the summer of 2015, we conducted a basic survey to characterize the geomorphology, hydrology and ecology of 620 km of the Rio Maranon, Peru - a major tributary to the Amazon River, which flows north from the semi-arid Peruvian Andes - prior to its dissection by several large hydropower dams. Here we present one component of this larger study: a first order analysis of potential sediment inputs to the Rio Maranon, Peru. To evaluate sediment delivery and storage in this system, we used high resolution Google Earth imagery to delineate landslides, combined with high resolution imagery from a DJI Phantom 3 Drone, flown at alluvial fan inputs to the river in the field. Because hillslope-derived sediment inputs from headwater tributaries are important to overall ecosystem health in large river systems, our study has the potential to contribute to understanding the impacts of large Andean dams on sediment connectivity to the Amazon basin.
Spatial Downscaling of Alien Species Presences using Machine Learning
NASA Astrophysics Data System (ADS)
Daliakopoulos, Ioannis N.; Katsanevakis, Stelios; Moustakas, Aristides
2017-07-01
Large scale, high-resolution data on alien species distributions are essential for spatially explicit assessments of their environmental and socio-economic impacts, and management interventions for mitigation. However, these data are often unavailable. This paper presents a method that relies on Random Forest (RF) models to distribute alien species presence counts onto a finer-resolution grid, thus achieving spatial downscaling. A sufficiently large number of RF models are trained using random subsets of the dataset as predictors, in a bootstrapping approach to account for the uncertainty introduced by the subset selection. The method is tested with an approximately 8×8 km2 grid containing floral alien species presences and several climatic, habitat, and land-use covariates for the Mediterranean island of Crete, Greece. Alien species presence is aggregated to 16×16 km2 and used as a predictor of presence at the original resolution, thus simulating spatial downscaling. Potential explanatory variables included habitat types, land cover richness, endemic species richness, soil type, temperature, precipitation, and freshwater availability. Uncertainty assessment of the spatial downscaling of alien species' occurrences was also performed and true/false presences and absences were quantified. The approach is promising for downscaling alien species datasets of larger spatial scale but coarse resolution, where the underlying environmental information is available at a finer resolution than the alien species data. Furthermore, the RF architecture allows for tuning towards operationally optimal sensitivity and specificity, thus providing a decision support tool for designing a resource efficient alien species census.
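A minimal sketch of the bootstrapped Random Forest downscaling scheme is given below, with coarse-cell presence counts entering as one predictor alongside fine-scale environmental covariates. All data here are synthetic stand-ins (the real covariates, grid sizes, and counts come from the Crete dataset), and scikit-learn is assumed purely for convenience.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    n_cells = 400   # fine-resolution (e.g. 8x8 km) grid cells, synthetic stand-ins

    # environmental covariates at fine resolution (temperature, precipitation, ...)
    X_env = rng.normal(size=(n_cells, 5))
    # presence count of the coarse (e.g. 16x16 km) cell each fine cell belongs to
    coarse_count = rng.poisson(3.0, size=n_cells)
    X = np.column_stack([X_env, coarse_count])
    # "observed" fine-scale presence counts used only to train this toy example
    y = rng.poisson(np.maximum(0.5 + 0.4 * coarse_count + X_env[:, 0], 0.1))

    # bootstrap ensemble of RF models to carry the subset-selection uncertainty
    predictions = []
    for b in range(50):
        idx = rng.choice(n_cells, size=int(0.8 * n_cells), replace=True)
        rf = RandomForestRegressor(n_estimators=100, random_state=b)
        rf.fit(X[idx], y[idx])
        predictions.append(rf.predict(X))
    pred = np.array(predictions)
    print("downscaled count (mean ± sd) for the first cell:",
          round(pred[:, 0].mean(), 2), "±", round(pred[:, 0].std(), 2))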
Impact of oceanic-scale interactions on the seasonal modulation of ocean dynamics by the atmosphere.
Sasaki, Hideharu; Klein, Patrice; Qiu, Bo; Sasai, Yoshikazu
2014-12-15
Ocean eddies (with a size of 100-300 km), ubiquitous in satellite observations, are known to represent about 80% of the total ocean kinetic energy. Recent studies have pointed out the unexpected role of smaller oceanic structures (with 1-50 km scales) in generating and sustaining these eddies. The interpretation proposed so far invokes the internal instability resulting from the large-scale interaction between upper and interior oceanic layers. Here we show, using a new high-resolution simulation of the realistic North Pacific Ocean, that ocean eddies are instead sustained by a different process that involves small-scale mixed-layer instabilities set up by large-scale atmospheric forcing in winter. This leads to a seasonal evolution of the eddy kinetic energy in a very large part of this ocean, with an amplitude varying by a factor almost equal to 2. Perspectives in terms of the impacts on climate dynamics and future satellite observational systems are briefly discussed.
D.P. Turner; W.D. Ritts; B.E. Law; W.B. Cohen; Z. Yan; T. Hudiburg; J.L. Campbell; M. Duane
2007-01-01
Bottom-up scaling of net ecosystem production (NEP) and net biome production (NBP) was used to generate a carbon budget for a large heterogeneous region (the state of Oregon, 2.5×10⁵ km²) in the Western United States. Landsat resolution (30 m) remote sensing provided the basis for mapping land cover and disturbance history...
Large-scale horizontal flows from SOUP observations of solar granulation
NASA Astrophysics Data System (ADS)
November, L. J.; Simon, G. W.; Tarbell, T. D.; Title, A. M.; Ferguson, S. H.
1987-09-01
Using high-resolution time-sequence photographs of solar granulation from the SOUP experiment on Spacelab 2, the authors observed large-scale horizontal flows on the solar surface. The measurement method is based upon a local spatial cross-correlation analysis. The horizontal motions have amplitudes in the range 300 to 1000 m/s. Radial outflow of granulation from a sunspot penumbra into the surrounding photosphere is a striking new discovery. Both the supergranulation pattern and cellular structures having the scale of mesogranulation are seen. The vertical flows that are inferred by continuity of mass from these observed horizontal flows have larger upflow amplitudes in cell centers than downflow amplitudes at cell boundaries.
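The local spatial cross-correlation measurement can be sketched as follows: the displacement of the correlation peak between two co-aligned patches, divided by the time separation, gives the horizontal flow. The patch size, pixel scale, and cadence below are illustrative assumptions rather than the SOUP values.

    import numpy as np
    from scipy.signal import correlate2d

    def patch_velocity(frame1, frame2, dt_s, km_per_pixel):
        """Horizontal flow estimate (m/s) from the cross-correlation peak of two
        co-aligned granulation patches taken dt_s seconds apart."""
        a = frame1 - frame1.mean()
        b = frame2 - frame2.mean()
        cc = correlate2d(b, a, mode="same")
        dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
        dy -= a.shape[0] // 2
        dx -= a.shape[1] // 2
        return dx * km_per_pixel / dt_s * 1e3, dy * km_per_pixel / dt_s * 1e3

    # synthetic granulation patch shifted by one pixel between frames
    rng = np.random.default_rng(3)
    frame1 = rng.normal(size=(32, 32))
    frame2 = np.roll(frame1, shift=1, axis=1)   # circular shift; fine for a demo
    vx, vy = patch_velocity(frame1, frame2, dt_s=60.0, km_per_pixel=30.0)
    print(f"vx ~ {vx:.0f} m/s, vy ~ {vy:.0f} m/s")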
Universal nonlinear small-scale dynamo.
Beresnyak, A
2012-01-20
We consider astrophysically relevant nonlinear MHD dynamo at large Reynolds numbers (Re). We argue that it is universal in a sense that magnetic energy grows at a rate which is a constant fraction C(E) of the total turbulent dissipation rate. On the basis of locality bounds we claim that this "efficiency of the small-scale dynamo", C(E), is a true constant for large Re and is determined only by strongly nonlinear dynamics at the equipartition scale. We measured C(E) in numerical simulations and observed a value around 0.05 in the highest resolution simulations. We address the issue of C(E) being small, unlike the Kolmogorov constant which is of order unity. © 2012 American Physical Society
Street Level Hydrology: An Urban Application of the WRF-Hydro Framework in Denver, Colorado
NASA Astrophysics Data System (ADS)
Read, L.; Hogue, T. S.; Salas, F. R.; Gochis, D.
2015-12-01
Urban flood modeling at the watershed scale carries unique challenges in routing complexity, data resolution, social and political issues, and land surface - infrastructure interactions. The ability to accurately trace and predict the flow of water through the urban landscape enables better emergency response management, floodplain mapping, and data for future urban infrastructure planning and development. These services are of growing importance as urban population is expected to continue increasing by 1.84% per year for the next 25 years, increasing the vulnerability of urban regions to damages and loss of life from floods. Although a range of watershed-scale models have been applied in specific urban areas to examine these issues, there is a trend towards national scale hydrologic modeling enabled by supercomputing resources to understand larger system-wide hydrologic impacts and feedbacks. As such it is important to address how urban landscapes can be represented in large scale modeling processes. The current project investigates how coupling terrain and infrastructure routing can improve flow prediction and flooding events over the urban landscape. We utilize the WRF-Hydro modeling framework and a high-resolution terrain routing grid with the goal of compiling standard data needs necessary for fine scale urban modeling and dynamic flood forecasting in the urban setting. The city of Denver is selected as a case study, as it has experienced several large flooding events in the last five years and has an urban annual population growth rate of 1.5%, one of the highest in the U.S. Our work highlights the hydro-informatic challenges associated with linking channel networks and drainage infrastructure in an urban area using the WRF-Hydro modeling framework and high resolution urban models for short-term flood prediction.
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
Overcoming complexities for consistent, continental-scale flood mapping
NASA Astrophysics Data System (ADS)
Smith, Helen; Zaidman, Maxine; Davison, Charlotte
2013-04-01
The EU Floods Directive requires all member states to produce flood hazard maps by 2013. Although flood mapping practices are well developed in Europe, there are huge variations in the scale and resolution of the maps between individual countries. Since extreme flood events are rarely confined to a single country, this is problematic, particularly for the re/insurance industry whose exposures often extend beyond country boundaries. Here, we discuss the challenges of large-scale hydrological and hydraulic modelling, using our experience of developing a 12-country model and set of maps, to illustrate how consistent, high-resolution river flood maps across Europe can be produced. The main challenges addressed include: data acquisition; manipulating the vast quantities of high-resolution data; and computational resources. Our starting point was to develop robust flood-frequency models that are suitable for estimating peak flows for a range of design flood return periods. We used the index flood approach, based on a statistical analysis of historic river flow data pooled on the basis of catchment characteristics. Historical flow data were therefore sourced for each country and collated into a large pan-European database. After a lengthy validation, these data were grouped into 21 separate analysis zones or regions, pooling smaller river basins according to their physical and climatic characteristics. The very large continental scale basins were each modelled separately on account of their size (e.g. Danube, Elbe, Drava and Rhine). Our methodology allows the design flood hydrograph to be predicted at any point on the river network for a range of return periods. Using JFlow+, JBA's proprietary 2D hydrodynamic model, the calculated out-of-bank flows for all watercourses with an upstream drainage area exceeding 50 km2 were routed across two different Digital Terrain Models in order to map the extent and depth of floodplain inundation. This generated modelling for a total river length of approximately 250,000 km. Such a large-scale, high-resolution modelling exercise is extremely demanding on computational resources and would have been unfeasible without the use of Graphics Processing Units on a network of standard specification gaming computers. Our GPU grid is the world's largest flood-dedicated computer grid. The European river basins were split out into approximately 100 separate hydraulic models and managed individually, although care was taken to ensure flow continuity was maintained between models. The flood hazard maps from the modelling were pieced together using GIS techniques, to provide flood depth and extent information across Europe to a consistent scale and standard. After discussing the methodological challenges, we shall present our flood hazard maps and, from extensive validation work, compare these against historical flow records and observed flood extents.
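The index flood approach mentioned above can be summarized in a few lines: a dimensionless growth curve is fitted to pooled, standardized annual maxima, and the design flow at a site is the site's index flood multiplied by the growth factor for the chosen return period. The sketch below uses a Gumbel growth curve fitted by moments and synthetic pooled data; the actual distribution choice and pooling scheme used in the study are not reproduced.

    import numpy as np

    # pooled, standardized annual maxima from hydrologically similar gauges
    # (synthetic values; the study pooled real records into regional analysis zones)
    rng = np.random.default_rng(7)
    pooled = rng.gumbel(loc=1.0, scale=0.35, size=600)   # Q / median(Q) at each gauge

    # fit a Gumbel growth curve by the method of moments
    scale = np.sqrt(6.0) * pooled.std() / np.pi
    loc = pooled.mean() - 0.5772 * scale

    def growth_factor(return_period_years):
        """Gumbel quantile of the standardized (dimensionless) growth curve."""
        p = 1.0 - 1.0 / return_period_years
        return loc - scale * np.log(-np.log(p))

    index_flood = 120.0   # m^3/s, e.g. the median annual maximum at the target site
    for T in (10, 100, 1000):
        print(f"{T:>5}-year design flow: {index_flood * growth_factor(T):7.1f} m^3/s")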
The Price of Precision: Large-Scale Mapping of Forest Structure and Biomass Using Airborne Lidar
NASA Astrophysics Data System (ADS)
Dubayah, R.
2015-12-01
Lidar remote sensing provides one of the best means for acquiring detailed information on forest structure. However, its application over large areas has been limited largely because of its expense. Nonetheless, extant data exist over many states in the U.S., funded largely by state and federal consortia and mainly for infrastructure, emergency response, flood plain and coastal mapping. These lidar data are almost always acquired in leaf-off seasons, and until recently, usually with low point count densities. Even with these limitations, they provide unprecedented wall-to-wall mappings that enable development of appropriate methodologies for large-scale deployment of lidar. In this talk we summarize our research and lessons learned in deriving forest structure over regional areas as part of NASA's Carbon Monitoring System (CMS). We focus on two areas: the entire state of Maryland and Sonoma County, California. The Maryland effort used low density, leaf-off data acquired by each county in varying epochs, while the on-going Sonoma work employs state-of-the-art, high density, wall-to-wall, leaf-on lidar data. In each area we combine these lidar coverages with high-resolution multispectral imagery from the National Agricultural Imagery Program (NAIP) and in situ plot data to produce maps of canopy height, tree cover and biomass, and compare our results against FIA plot data and national biomass maps. Our work demonstrates that large-scale mapping of forest structure at high spatial resolution is achievable but products may be complex to produce and validate over large areas. Furthermore, fundamental issues involving statistical approaches, plot types and sizes, geolocation, modeling scales, allometry, and even the definitions of "forest" and "non-forest" must be approached carefully. Ultimately, determining the "price of precision", that is, does the value of wall-to-wall forest structure data justify their expense, should consider not only carbon market applications, but the other ways the underlying lidar data may be used.
Intermediate-scale plasma irregularities in the polar ionosphere inferred from GPS radio occultation
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O.; Butala, M. D.; Mannucci, A. J.
2015-02-01
We report intermediate-scale plasma irregularities in the polar ionosphere inferred from high-resolution radio occultation (RO) measurements using GPS (Global Positioning System) to CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) satellite radio links. The high inclination of CASSIOPE and the high rate of signal reception by the GPS Attitude, Positioning, and Profiling RO receiver on CASSIOPE enable a high-resolution investigation of the dynamics of the polar ionosphere with unprecedented detail. Intermediate-scale, scintillation-producing irregularities, which correspond to 1 to 40 km scales, were inferred by applying multiscale spectral analysis on the RO phase measurements. Using our multiscale spectral analysis approach and satellite data (Polar Operational Environmental Satellites and Defense Meteorological Satellite Program), we discovered that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap. We found that large length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap implying that the irregularity scales and phase scintillation characteristics are a function of the solar wind and magnetospheric forcings.
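The spectral side of the analysis can be illustrated by converting a power spectral density of the RO phase into spatial scales with an assumed effective scan velocity, and integrating over the 1-40 km band. The sampling rate, velocity, and synthetic phase series below are placeholders, and the multiscale machinery of the actual method is not reproduced.

    import numpy as np
    from scipy.signal import welch, detrend

    # synthetic high-rate RO phase series (radians); values are illustrative only
    fs = 50.0                      # samples per second
    v_eff = 3.0                    # km/s effective scan velocity through the irregularities
    t = np.arange(0, 60.0, 1.0 / fs)
    phase = (0.02 * np.sin(2 * np.pi * 0.3 * t)      # ~10 km-scale fluctuation
             + 0.005 * np.random.default_rng(2).normal(size=t.size))

    freqs, psd = welch(detrend(phase), fs=fs, nperseg=1024)
    scales_km = np.where(freqs > 0, v_eff / freqs, np.inf)   # convert Hz -> km
    band = (scales_km >= 1.0) & (scales_km <= 40.0)
    sigma_phi = np.sqrt(np.sum(psd[band]) * (freqs[1] - freqs[0]))
    print(f"RMS phase in the 1-40 km band: {sigma_phi:.4f} rad")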
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
The nature of the dense obscuring material in the nucleus of NGC 1068
NASA Technical Reports Server (NTRS)
Tacconi, L. J.; Genzel, R.; Blietz, M.; Cameron, M.; Harris, A. I.; Madden, S.
1994-01-01
High spatial and spectral resolution observations of the distribution, physical parameters, and kinematics of the molecular interstellar medium toward the nucleus of the Seyfert 2 galaxy NGC 1068 are reported. The data consist of 2.4 by 3.4 arcseconds resolution interferometry of the 88.6 GHz HCN J = 1→0 line at 17 km/s spectral resolution, single dish observations of several mm/submm isotopic lines of CO and HCN, and 0.85 arcseconds imaging spectroscopy of the 2.12 micron H2 S(1) line at a velocity resolution of 110 km/s. The central few hundred parsecs of NGC 1068 contain a system of dense (n(H2) ≈ 10⁵ cm⁻³), warm (T ≥ 70 K) molecular cloud cores. The low density molecular envelopes have probably been stripped by the nuclear wind and radiation. The molecular gas layer is located in the plane of NGC 1068's large scale disk (inclination ≈ 35°) and orbits in elliptical streamlines in response to the central stellar bar. The spatial distribution of the 2 micron H2 emission suggests that gas is shocked at the leading edge of the bar, probably resulting in gas influx into the central 100 pc at a rate of a few solar masses per year. In addition to large scale streaming (with a solid body rotation curve), the HCN velocity field requires the presence of random motions of order 100 km/s. We interpret these large random motions as implying the nuclear gas disk to be very thick (scale height/radius ≈ 1), probably as the result of the impact of nuclear radiation and wind on orbiting molecular clouds. Geometry and column density of the molecular cloud layer between approximately 30 and 300 pc from the nucleus can plausibly account for the nuclear obscuration and anisotropy of the radiation field in the visible and UV.
Molecular Imaging of Kerogen and Minerals in Shale Rocks across Micro- and Nano- Scales
NASA Astrophysics Data System (ADS)
Hao, Z.; Bechtel, H.; Sannibale, F.; Kneafsey, T. J.; Gilbert, B.; Nico, P. S.
2016-12-01
Fourier transform infrared (FTIR) spectroscopy is a reliable and non-destructive quantitative method to evaluate mineralogy and kerogen content / maturity of shale rocks, although it is traditionally difficult to assess the organic and mineralogical heterogeneity at micrometer and nanometer scales due to the diffraction limit of the infrared light. However, it is truly at these scales that the kerogen and mineral content and their formation in shale rocks determine the quality of a shale gas reserve, the gas flow mechanisms and the gas production. Therefore, it is necessary to develop new approaches which can image across both micro- and nanoscales. In this presentation, we will describe two new molecular imaging approaches to obtain kerogen and mineral information in shale rocks at unprecedented spatial resolution, and a cross-scale quantitative multivariate analysis method to provide rapid geochemical characterization of large samples. The two imaging approaches are near-field enhanced by a Ge hemisphere (GE) and by a metallic scanning probe (SINS), respectively. The GE method is a modified microscopic attenuated total reflectance (ATR) method which rapidly captures a chemical image of the shale rock surface at 1 to 5 micrometer resolution with a large field of view of 600 × 600 micrometers, while the SINS probes the surface at 20 nm resolution, which provides a chemically "deconvoluted" map at the nano-pore level. The detailed geochemical distribution at the nanoscale is then used to build a machine learning model to generate a self-calibrated chemical distribution map at the micrometer scale with the input of the GE images. A number of geochemical contents across these two important scales are observed and analyzed, including the minerals (oxides, carbonates, sulphides), the organics (carbohydrates, aromatics), and the absorbed gases. These approaches are self-calibrated, optics friendly and non-destructive, so they hold the potential to monitor shale gas flow in real time inside the micro- or nano-pore network, which is of great interest for optimizing shale gas extraction.
Tropical Waves and the Quasi-Biennial Oscillation in a 7-km Global Climate Simulation
NASA Technical Reports Server (NTRS)
Holt, Laura A.; Alexander, M. Joan; Coy, Lawrence; Molod, Andrea; Putman, William; Pawson, Steven
2016-01-01
This study investigates tropical waves and their role in driving a quasi-biennial oscillation (QBO)-like signal in stratospheric winds in a global 7-km-horizontal-resolution atmospheric general circulation model. The Nature Run (NR) is a 2-year global mesoscale simulation of the Goddard Earth Observing System Model, version 5 (GEOS-5). In the tropics, there is evidence that the NR supports a broad range of convectively generated waves. The NR precipitation spectrum resembles the observed spectrum in many aspects, including the preference for westward-propagating waves. However, even with very high horizontal resolution and a healthy population of resolved waves, the zonal force provided by the resolved waves is still too low in the QBO region, and parameterized gravity wave drag is the main driver of the NR QBO-like oscillation (NR-QBO). The authors suggest that causes include coarse vertical resolution and excessive dissipation. Nevertheless, the very-high-resolution NR provides an opportunity to analyze the resolved wave forcing of the NR-QBO. In agreement with previous studies, large-scale Kelvin and small-scale waves contribute to the NR-QBO driving in eastward shear zones, and small-scale waves dominate the NR-QBO driving in westward shear zones. Waves with zonal wavelengths <1000 km account for up to half of the small-scale (<3300 km) resolved wave forcing in eastward shear zones and up to 70% of the small-scale resolved wave forcing in westward shear zones of the NR-QBO.
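The wavelength partitioning in the last sentence can be pictured as a zonal FFT band-pass followed by a momentum-flux average. The sketch below is a hedged, generic illustration on synthetic one-dimensional data; the grid size, spacing, and the use of u'w' as the flux proxy are assumptions, not the study's actual diagnostics.

```python
import numpy as np

def bandpass_zonal(field, x_km, min_wl_km, max_wl_km):
    """Keep only zonal wavelengths between min_wl_km and max_wl_km (inclusive)."""
    n = field.shape[-1]
    dx = x_km[1] - x_km[0]                   # assumes uniform spacing
    coeffs = np.fft.rfft(field, axis=-1)
    freqs = np.fft.rfftfreq(n, d=dx)         # cycles per km
    wavelength = np.full(freqs.shape, np.inf)
    nonzero = freqs > 0
    wavelength[nonzero] = 1.0 / freqs[nonzero]
    mask = (wavelength >= min_wl_km) & (wavelength <= max_wl_km)
    coeffs[..., ~mask] = 0.0
    return np.fft.irfft(coeffs, n=n, axis=-1)

# Synthetic example: zonal-mean momentum flux carried by waves shorter than 1000 km
circumference_km = 40000.0
nlon = 2048
x = np.linspace(0.0, circumference_km, nlon, endpoint=False)
u_prime = np.random.randn(nlon)              # placeholder wind perturbations
w_prime = np.random.randn(nlon)

u_small = bandpass_zonal(u_prime, x, 0.0, 1000.0)
w_small = bandpass_zonal(w_prime, x, 0.0, 1000.0)
flux_small = np.mean(u_small * w_small)       # band-limited u'w'
print(flux_small)
```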
Shear-driven dynamo waves at high magnetic Reynolds number.
Tobias, S M; Cattaneo, F
2013-05-23
Astrophysical magnetic fields often display remarkable organization, despite being generated by dynamo action driven by turbulent flows at high conductivity. An example is the eleven-year solar cycle, which shows spatial coherence over the entire solar surface. The difficulty in understanding the emergence of this large-scale organization is that whereas at low conductivity (measured by the magnetic Reynolds number, Rm) dynamo fields are well organized, at high Rm their structure is dominated by rapidly varying small-scale fluctuations. This arises because the smallest scales have the highest rate of strain, and can amplify magnetic field most efficiently. Therefore most of the effort to find flows whose large-scale dynamo properties persist at high Rm has been frustrated. Here we report high-resolution simulations of a dynamo that can generate organized fields at high Rm; indeed, the generation mechanism, which involves the interaction between helical flows and shear, only becomes effective at large Rm. The shear does not enhance generation at large scales, as is commonly thought; instead it reduces generation at small scales. The solution consists of propagating dynamo waves, whose existence was postulated more than 60 years ago and which have since been used to model the solar cycle.
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking place during a flood event and provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model (NSE=0.87) performs similarly to the 2D integrated model (NSE=0.88) while the computational time is reduced by half. The results suggest that significant computational efficiency can be gained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
NASA Astrophysics Data System (ADS)
Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.
2009-10-01
One of the main concerns in regional climate modeling is to what extent limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models, which is stronger in summer than in winter. This spread is due to two reasons: (1) we are dealing with different models and (2) each RCM produces its own internal variability. As far as the day-to-day weather regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large scales of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for the large-scale circulation.
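The abstract does not state how the weather regimes themselves are defined; a common choice in the literature is k-means clustering of daily circulation anomalies. The sketch below is therefore only an illustrative stand-in, using synthetic Z500 anomalies and an assumed cluster count, for how the diagnostics named above (composites, frequency of occurrence, persistence) might be computed.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic daily Z500 anomaly maps over a European domain (illustrative stand-in)
rng = np.random.default_rng(0)
ndays, ny, nx = 3600, 20, 30
z500_anom = rng.standard_normal((ndays, ny, nx))

# Cluster daily maps into four regimes (the classic North Atlantic choice)
X = z500_anom.reshape(ndays, -1)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Regime diagnostics: composite patterns, frequency of occurrence, persistence
composites = np.array([X[km.labels_ == k].mean(axis=0).reshape(ny, nx) for k in range(4)])
frequency = np.bincount(km.labels_, minlength=4) / ndays
runs = np.diff(np.flatnonzero(np.diff(km.labels_) != 0))   # approximate regime run lengths
print("frequency of occurrence:", frequency)
print("mean persistence (days):", runs.mean() if runs.size else ndays)
```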
Preliminary simulations of the large-scale environment during the FIRE cirrus IFO
NASA Technical Reports Server (NTRS)
Westphal, Douglas L.; Toon, Owen B.
1990-01-01
Large-scale forcing (scales greater than 500 km) is the dominant factor in the generation, maintenance, and dissipation of cirrus cloud systems. However, the analyses of data acquired during the first Cirrus IFO have highlighted the importance of mesoscale processes (scales of 20 to 500 km) to the development of cirrus cloud systems. Unfortunately, Starr and Wylie found that the temporal and spatial resolution of the standard and supplemental rawinsonde data were insufficient to allow an explanation of all of the mesoscale cloud features that were present on 27-28 October 1986. It is described how dynamic initialization, or four-dimensional data assimilation (FDDA), can provide a method to address this problem. The first steps towards application of FDDA to FIRE are also described.
Automated AFM for small-scale and large-scale surface profiling in CMP applications
NASA Astrophysics Data System (ADS)
Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il
2018-03-01
As feature sizes shrink in the foundries, the need for inline, high-resolution surface profiling with versatile capabilities is increasing. One important area of this need is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using a decoupled-scanner design. The system is capable of providing small-scale profiling using the XY scanner and large-scale profiling using the sliding stage. The decoupled-scanner design enables enhanced vision, which helps minimize the positioning error for locations of interest in the case of highly polished dies. Non-contact mode imaging is another feature of interest in this system and is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of the measurements performed using the atomic force profiler are demonstrated.
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over Bolund Hill show reduced mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
NASA Astrophysics Data System (ADS)
Hamada, Y.; O'Connor, B. L.
2012-12-01
Development in arid environments often results in the loss and degradation of the ephemeral streams that provide habitat and critical ecosystem functions such as water delivery, sediment transport, and groundwater recharge. Quantification of these ecosystem functions is challenging because of the episodic nature of runoff events in desert landscapes and the large spatial scale of watersheds that potentially can be impacted by large-scale development. Low-impact development guidelines and regulatory protection of ephemeral streams are often lacking due to the difficulty of accurately mapping and quantifying the critical functions of ephemeral streams at scales larger than individual reaches. Renewable energy development in arid regions has the potential to disturb ephemeral streams at the watershed scale, and it is necessary to develop environmental monitoring applications for ephemeral streams to help inform land management and regulatory actions aimed at protecting and mitigating for impacts related to large-scale land disturbances. This study focuses on developing remote sensing methodologies to identify and monitor impacts on ephemeral streams resulting from the land disturbance associated with utility-scale solar energy development in the desert southwest of the United States. Airborne very high resolution (VHR) multispectral imagery is used to produce stereoscopic, three-dimensional landscape models that can be used to (1) identify and map ephemeral stream channel networks, and (2) support analyses and models of hydrologic and sediment transport processes that pertain to the critical functionality of ephemeral streams. Spectral and statistical analyses are being developed to extract information about ephemeral channel location and extent, micro-topography, riparian vegetation, and soil moisture characteristics. This presentation will demonstrate initial results and provide a framework for future work associated with this project, for developing the field measurements necessary to verify remote sensing landscape models, and for generating hydrologic models and analyses.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for some common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal-external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
High Resolution Imaging of the Sun with CORONAS-1
NASA Technical Reports Server (NTRS)
Karovska, Margarita
1998-01-01
We applied several image restoration and enhancement techniques to CORONAS-I images. We carried out the characterization of the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the real PSF at a given location and time of observation when limited a priori information is available on its characteristics. We also applied an image enhancement technique to extract the small-scale structure embedded in bright large-scale structures on the disk and on the limb. The results demonstrate the capability of image post-processing to substantially increase the yield from space observations by improving the resolution and reducing noise in the images.
NASA Astrophysics Data System (ADS)
Guenther, A. B.; Duhl, T.
2011-12-01
Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than is typically used in these models. Satellite observations have played a major role in providing high-resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale is dependent on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high-resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.
A space-time multiscale modelling of Earth's gravity field variations
NASA Astrophysics Data System (ADS)
Wang, Shuo; Panet, Isabelle; Ramillien, Guillaume; Guilloux, Frédéric
2017-04-01
The mass distribution within the Earth varies over a wide range of spatial and temporal scales, generating variations in the Earth's gravity field in space and time. These variations are monitored by satellites such as the GRACE mission, with a 400 km spatial resolution and a 10-day to 1-month temporal resolution. They are expressed in the form of gravity field models, often with a fixed spatial or temporal resolution. The analysis of these models allows us to study the mass transfers within the Earth system. Here, we have developed space-time multi-scale models of the gravity field, in order to optimize the estimation of gravity signals resulting from local processes at different spatial and temporal scales, and to adapt the time resolution of the model to its spatial resolution according to the satellites' sampling. For that, we first build a 4D wavelet family combining spatial Poisson wavelets with temporal Haar wavelets. Then, we set up a regularized inversion of inter-satellite gravity potential differences in a Bayesian framework to estimate the model parameters. To build the prior, we develop a spectral analysis, localized in time and space, of geophysical models of mass transport and associated gravity variations. Finally, we test our approach on the reconstruction of space-time variations of the gravity field due to hydrology. We first consider a global distribution of observations along the orbit, from a simplified synthetic hydrology signal comprising only annual variations at large spatial scales. Then, we consider a regional distribution of observations in Africa, and a larger number of spatial and temporal scales. We test the influence of an imperfect prior and discuss our results.
Large eddy simulations of compressible magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Grete, Philipp
2017-02-01
Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the subsonic (sonic Mach number Ms ≈ 0.2) to the highly supersonic (Ms ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial Ms ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
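The a priori validation described above compares exact SGS terms, computed by filtering high-resolution data, against closure predictions. The sketch below illustrates that workflow on synthetic data, using a Gaussian filter as a stand-in for the LES filter and a simple Smagorinsky-type eddy-viscosity closure as the reference model; the thesis's nonlinear MHD closures, the actual filter kernel, and the MHD-specific terms are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filt(f, sigma=2.0):
    """Low-pass filter standing in for the LES filter (periodic box assumed)."""
    return gaussian_filter(f, sigma=sigma, mode="wrap")

# Synthetic 3D "high-resolution" velocity components (placeholder for reference data)
rng = np.random.default_rng(0)
n = 64
u, v = rng.standard_normal((2, n, n, n))

# Exact SGS stress component tau_xy = bar(u v) - bar(u) bar(v)
tau_exact = filt(u * v) - filt(u) * filt(v)

# A simple eddy-viscosity (Smagorinsky-type) prediction used only as a reference closure
dx = 1.0
dudy = np.gradient(filt(u), dx, axis=1)
dvdx = np.gradient(filt(v), dx, axis=0)
S_xy = 0.5 * (dudy + dvdx)
strain_mag = np.sqrt(2.0) * np.abs(S_xy)      # crude strain-magnitude proxy
C_s = 0.17
tau_model = -2.0 * (C_s * dx) ** 2 * strain_mag * S_xy

# A priori metric: pointwise correlation between exact and modeled stress
corr = np.corrcoef(tau_exact.ravel(), tau_model.ravel())[0, 1]
print(f"a priori correlation: {corr:.3f}")
```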
DOE Office of Scientific and Technical Information (OSTI.GOV)
Held, Isaac; V. Balaji; Fueglistaler, Stephan
We have constructed and analyzed a series of idealized models of tropical convection interacting with large-scale circulations, with 25-50 km resolution and with 1-2 km cloud-resolving resolution, to set the stage for rigorous tests of convection closure schemes in high-resolution global climate models. Much of the focus has been on the climatology of tropical cyclogenesis in rotating systems and the related problem of the spontaneous aggregation of convection in non-rotating systems. The PI (Held) will be delivering the honorary Bjerknes lecture at the Fall 2016 AGU meeting in December on this work. We have also provided new analyses of long-standing issues related to the interaction between convection and the large-scale circulation: Kelvin waves in the upper troposphere and lower stratosphere, water vapor transport into the stratosphere, and upper tropospheric temperature trends. The results of these analyses help to improve our understanding of processes, and provide tests for future high-resolution global modeling. Our final goal of testing new convection schemes in next-generation global atmospheric models at GFDL has been left for future work, due to the complexity of the idealized model results (meant as tests for these models) uncovered in this work and to computational resource limitations. Eleven papers have been published with support from this grant, two are in review, and another major summary paper is in preparation.
NASA Astrophysics Data System (ADS)
Broxton, P. D.; Harpold, A. A.; van Leeuwen, W.; Biederman, J. A.
2016-12-01
Quantifying the amount of snow in forested mountainous environments, as well as how it may change due to warming and forest disturbance, is critical given its importance for water supply and ecosystem health. Forest canopies affect snow accumulation and ablation in ways that are difficult to observe and model. Furthermore, fine-scale forest structure can accentuate or diminish the effects of forest-snow interactions. Despite decades of research demonstrating the importance of fine-scale forest structure (e.g. canopy edges and gaps) on snow, we still lack a comprehensive understanding of where and when forest structure has the largest impact on snowpack mass and energy budgets. Here, we use a hyper-resolution (1 meter spatial resolution) mass and energy balance snow model called the Snow Physics and Laser Mapping (SnowPALM) model along with LIDAR-derived forest structure to determine where spatial variability of fine-scale forest structure has the largest influence on large scale mass and energy budgets. SnowPALM was set up and calibrated at sites representing diverse climates in New Mexico, Arizona, and California. Then, we compared simulations at different model resolutions (i.e. 1, 10, and 100 m) to elucidate the effects of including versus not including information about fine scale canopy structure. These experiments were repeated for different prescribed topographies (i.e. flat, 30% slope north, and south-facing) at each site. Higher resolution simulations had more snow at lower canopy cover, with the opposite being true at high canopy cover. Furthermore, there is considerable scatter, indicating that different canopy arrangements can lead to different amounts of snow, even when the overall canopy coverage is the same. This modeling is contributing to the development of a high resolution machine learning algorithm called the Snow Water Artificial Network (SWANN) model to generate predictions of snow distributions over much larger domains, which has implications for improving land surface models that do not currently resolve or parameterize fine-scale canopy structure. In addition, these findings have implications for understanding the potential of different forest management strategies (i.e. thinning) based on local topography and climate to maximize the amount and retention of snow.
NASA Astrophysics Data System (ADS)
Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O. P.; Butala, M.; Mannucci, A. J.
2014-12-01
In this research, we report intermediate-scale plasma density irregularities in the high-latitude ionosphere inferred from high-resolution radio occultation (RO) measurements in the CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) - GPS (Global Positioning System) satellite radio link. The high inclination of the CASSIOPE satellite and the high rate of signal reception by the occultation antenna of the GPS Attitude, Positioning and Profiling (GAP) instrument on the Enhanced Polar Outflow Probe platform on CASSIOPE enable a high temporal and spatial resolution investigation of the dynamics of the polar ionosphere, magnetosphere-ionosphere coupling, solar wind effects, etc., with unprecedented detail compared to what was possible in the past. We have carried out a high spatial resolution analysis, in altitude and geomagnetic latitude, of scintillation-producing plasma density irregularities in the polar ionosphere. Intermediate-scale, scintillation-producing plasma density irregularities, which correspond to spatial scales of 2 to 40 km, were inferred by applying multi-scale spectral analysis to the RO phase delay measurements. Using our multi-scale spectral analysis approach and Polar Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP) observations, we infer that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap regions. In specific terms, we found that larger length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap region. Hence, the irregularity scales and phase scintillation characteristics are a function of the solar wind and the magnetospheric forcing. Multi-scale analysis may become a powerful diagnostic tool for characterizing how the ionosphere is dynamically driven by these factors.
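The multi-scale spectral analysis of RO phase delays can be pictured as estimating a power spectrum of the phase series and mapping temporal frequency to along-track spatial scale. The sketch below does this with Welch's method on a synthetic series; the sampling rate and tangent-point velocity are illustrative assumptions, not the GAP instrument's actual values.

```python
import numpy as np
from scipy.signal import welch

# Assumed sampling and geometry (illustrative only)
fs_hz = 50.0            # phase-delay sampling rate
v_kms = 3.0             # effective along-track velocity of the occultation tangent point

# Synthetic detrended phase-delay series standing in for an RO measurement
rng = np.random.default_rng(3)
t = np.arange(0.0, 60.0, 1.0 / fs_hz)
phase = np.cumsum(rng.standard_normal(t.size)) * 1e-3   # red-noise-like fluctuations

freqs, power = welch(phase, fs=fs_hz, nperseg=512)

# Convert temporal frequency to along-track spatial scale: L = v / f
nonzero = freqs > 0
scale_km = v_kms / freqs[nonzero]
band = (scale_km >= 2.0) & (scale_km <= 40.0)            # intermediate-scale irregularities
band_power = np.trapz(power[nonzero][band], freqs[nonzero][band])
print(f"power in 2-40 km scales: {band_power:.3e}")
```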
Identification of differentially methylated sites with weak methylation effect
USDA-ARS?s Scientific Manuscript database
DNA methylation is an epigenetic alteration crucial for regulating stress responses. Identifying large-scale DNA methylation at single nucleotide resolution is made possible by whole genome bisulfite sequencing. An essential task following the generation of bisulfite sequencing data is to detect dif...
High Resolution Laser Mass Spectrometry Bioimaging
Murray, Kermit K.; Seneviratne, Chinthaka A.; Ghorai, Suman
2016-01-01
Mass spectrometry imaging (MSI) was introduced more than five decades ago with secondary ion mass spectrometry (SIMS) and a decade later with laser desorption/ionization (LDI) mass spectrometry (MS). Large biomolecule imaging by matrix-assisted laser desorption/ionization (MALDI) was developed in the 1990s and ambient laser MS a decade ago. Although SIMS has been capable of imaging with a moderate mass range at sub-micrometer lateral resolution from its inception, laser MS requires additional effort to achieve a lateral resolution of 10 μm or below, which is required to image at the size scale of single mammalian cells. This review covers untargeted large biomolecule MSI using lasers for desorption/ionization or laser desorption and post-ionization. These methods include laser microprobe (LDI) MSI, MALDI MSI, laser ambient and atmospheric pressure MSI, and near-field laser ablation MS. Novel approaches to improving lateral resolution are discussed, including oversampling, beam shaping, transmission geometry, reflective and through-hole objectives, microscope mode, and near-field optics. PMID:26972785
High resolution laser mass spectrometry bioimaging.
Murray, Kermit K; Seneviratne, Chinthaka A; Ghorai, Suman
2016-07-15
Mass spectrometry imaging (MSI) was introduced more than five decades ago with secondary ion mass spectrometry (SIMS) and a decade later with laser desorption/ionization (LDI) mass spectrometry (MS). Large biomolecule imaging by matrix-assisted laser desorption/ionization (MALDI) was developed in the 1990s and ambient laser MS a decade ago. Although SIMS has been capable of imaging with a moderate mass range at sub-micrometer lateral resolution from its inception, laser MS requires additional effort to achieve a lateral resolution of 10 μm or below, which is required to image at the size scale of single mammalian cells. This review covers untargeted large biomolecule MSI using lasers for desorption/ionization or laser desorption and post-ionization. These methods include laser microprobe (LDI) MSI, MALDI MSI, laser ambient and atmospheric pressure MSI, and near-field laser ablation MS. Novel approaches to improving lateral resolution are discussed, including oversampling, beam shaping, transmission geometry, reflective and through-hole objectives, microscope mode, and near-field optics.
Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.
NASA Astrophysics Data System (ADS)
Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.
2014-12-01
High-resolution ocean models, which are eddy-permitting and eddy-resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on the mean climate state and variability in an AOGCM.
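A minimal sketch of the additive perturbation T(t) = T(t-1) + ΔT(t) + ξ(t) follows, with both white and red (AR(1)) noise; the deterministic tendency, the noise amplitude, and the autocorrelation are illustrative placeholders rather than values diagnosed from coarse-grained HiGEM data.

```python
import numpy as np

def integrate(T0, dT, noise):
    """Apply T(t) = T(t-1) + dT(t) + xi(t) over all time steps."""
    T = np.empty(len(dT) + 1)
    T[0] = T0
    for t in range(len(dT)):
        T[t + 1] = T[t] + dT[t] + noise[t]
    return T

rng = np.random.default_rng(42)
nsteps = 1000
dT = 0.01 * np.sin(np.linspace(0, 20 * np.pi, nsteps))   # stand-in deterministic tendency

sigma = 0.02                      # noise amplitude (illustrative)
white = sigma * rng.standard_normal(nsteps)

# Red noise: AR(1) process with lag-1 autocorrelation alpha
alpha = 0.9
red = np.empty(nsteps)
red[0] = sigma * rng.standard_normal()
for t in range(1, nsteps):
    red[t] = alpha * red[t - 1] + sigma * np.sqrt(1 - alpha**2) * rng.standard_normal()

T_white = integrate(15.0, dT, white)
T_red = integrate(15.0, dT, red)
print(T_white[-1], T_red[-1])
```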
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith
2017-04-01
A high resolution regional model (1/36 degree) of the Gulf of Maine, Scotian Shelf and adjacent deep ocean (GoMSS) is developed to downscale ocean conditions from an existing global operational system. First, predictions from the regional GoMSS model in a one-way nesting setup are evaluated using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that on the shelf, the regional model predicts more realistic fields than the global system because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is because of unrealistic internally generated variability (associated with the one-way nesting setup) that leads to decoupling of the regional model from the global system in the deep water. To overcome this problem, the large scales (length scales > 90 km) of the regional model are spectrally nudged towards the global system fields. This leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cut-off wavelength of the spectral nudging.
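Spectral nudging of the large scales (> 90 km) toward the driving fields can be pictured as low-pass filtering the driver-minus-regional difference and relaxing toward it. The example below is a hedged sketch on a periodic toy domain using a 2D FFT; the cutoff treatment, relaxation timescale, grid, and fields are assumptions for illustration, not the GoMSS implementation.

```python
import numpy as np

def large_scales(field, dx_km, cutoff_km=90.0):
    """Retain only wavelengths longer than cutoff_km (2D FFT low-pass)."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx_km)
    ky = np.fft.fftfreq(ny, d=dx_km)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)               # cycles per km
    keep = (k == 0) | (1.0 / np.maximum(k, 1e-12) > cutoff_km)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * keep))

def nudge(regional, driving, dt_s, tau_s, dx_km):
    """One spectral-nudging step: relax the regional large scales toward the driver."""
    increment = large_scales(driving - regional, dx_km)
    return regional + (dt_s / tau_s) * increment

# Illustrative fields (placeholders for the regional and global systems)
rng = np.random.default_rng(1)
nx = 128
dx = 3.0                                      # km, roughly 1/36 degree
driving = rng.standard_normal((nx, nx))
regional = driving + 0.5 * rng.standard_normal((nx, nx))

regional = nudge(regional, driving, dt_s=600.0, tau_s=86400.0, dx_km=dx)
```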
Global high-resolution simulations of tropospheric nitrogen dioxide using CHASER V4.0
NASA Astrophysics Data System (ADS)
Sekiya, Takashi; Miyazaki, Kazuyuki; Ogochi, Koji; Sudo, Kengo; Takigawa, Masayuki
2018-03-01
We evaluate global tropospheric nitrogen dioxide (NO2) simulations using the CHASER V4.0 global chemical transport model (CTM) at horizontal resolutions of 0.56, 1.1, and 2.8°. Model evaluation was conducted using satellite tropospheric NO2 retrievals from the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment-2 (GOME-2) and aircraft observations from the 2014 Front Range Air Pollution and Photochemistry Experiment (FRAPPÉ). Agreement with satellite retrievals improved greatly at 1.1 and 0.56° resolutions (compared to 2.8° resolution) over polluted and biomass burning regions. The 1.1° simulation generally captured the regional distribution of the tropospheric NO2 column well, whereas 0.56° resolution was necessary to improve the model performance over areas with strong local sources, with mean bias reductions of 67 % over Beijing and 73 % over San Francisco in summer. Validation using aircraft observations indicated that high-resolution simulations reduced negative NO2 biases below 700 hPa over the Denver metropolitan area. These improvements in high-resolution simulations were attributable to (1) closer spatial representativeness between simulations and observations and (2) better representation of large-scale concentration fields (i.e., at 2.8°) through the consideration of small-scale processes. Model evaluations conducted on 0.5 and 2.8° bin grids indicated that the contributions of both these processes were comparable over most polluted regions, whereas the latter effect (2) made a larger contribution over eastern China and biomass burning areas. The evaluations presented in this paper demonstrate the potential of using a high-resolution global CTM for studying megacity-scale air pollutants across the entire globe, potentially also contributing to global satellite retrievals and chemical data assimilation.
Remote sensing in support of high-resolution terrestrial carbon monitoring and modeling
NASA Astrophysics Data System (ADS)
Hurtt, G. C.; Zhao, M.; Dubayah, R.; Huang, C.; Swatantran, A.; ONeil-Dunne, J.; Johnson, K. D.; Birdsey, R.; Fisk, J.; Flanagan, S.; Sahajpal, R.; Huang, W.; Tang, H.; Armstrong, A. H.
2014-12-01
As part of its Phase 1 Carbon Monitoring System (CMS) activities, NASA initiated a Local-Scale Biomass Pilot study. The goals of the pilot study were to develop protocols for fusing high-resolution remotely sensed observations with field data, provide accurate validation test areas for the continental-scale biomass product, and demonstrate efficacy for prognostic terrestrial ecosystem modeling. In Phase 2, this effort was expanded to the state scale. Here, we present results of this activity focusing on the use of remote sensing in high-resolution ecosystem modeling. The Ecosystem Demography (ED) model was implemented at 90 m spatial resolution for the entire state of Maryland. We rasterized soil depth and soil texture data from SSURGO. For hourly meteorological data, we spatially interpolated 32-km 3-hourly NARR into 1-km hourly and further corrected them at monthly level using PRISM data. NLCD data were used to mask sand, seashore, and wetland. High-resolution 1 m forest/non-forest mapping was used to define forest fraction of 90 m cells. Three alternative strategies were evaluated for initialization of forest structure using high-resolution lidar, and the model was used to calculate statewide estimates of forest biomass, carbon sequestration potential, time to reach sequestration potential, and sensitivity to future forest growth and disturbance rates, all at 90 m resolution. To our knowledge, no dynamic ecosystem model has been run at such high spatial resolution over such large areas utilizing remote sensing and validated as extensively. There are over 3 million 90 m land cells in Maryland, greater than 43 times the ~73,000 half-degree cells in a state-of-the-art global land model.
Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.; Sit, M. A.
2016-12-01
Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. The collection of environmental data has increased demand for applications capable of managing and processing large-scale and high-resolution data sets. Given the amount and resolution of the data sets provided, one of the challenging tasks in organizing and customizing hydrological data sets is delineation of watersheds on demand. Watershed delineation is a process for creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterizing and analyzing portions of a study area. Although many GIS tools and software packages for watershed analysis are available on desktop systems, there is a need for web-based and client-side techniques for creating a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL and on the server side using Python and C++. We also developed a client-side GPGPU (General Purpose Graphical Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization on the GPU. The web-based real-time analysis of watershed segmentation can be helpful for decision-makers and interested stakeholders while eliminating the need to install complex software packages and deal with large-scale data sets. Utilization of client-side hardware resources also eliminates the need for servers due to its crowdsourcing nature. Our goal for future work is to improve other hydrologic analysis methods, such as rain flow tracking, by adapting the presented approaches.
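The core of an on-demand watershed delineation can be sketched with the classic D8 scheme: assign each DEM cell a steepest-descent neighbor, then collect every cell that drains to the chosen outlet. The Python sketch below is a serial, generic illustration on a toy DEM, not the authors' JavaScript/WebGL or GPGPU implementation.

```python
import numpy as np
from collections import deque

# 8 neighbor offsets and their distances for the D8 scheme
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
DIST = [np.sqrt(2), 1, np.sqrt(2), 1, 1, np.sqrt(2), 1, np.sqrt(2)]

def d8_downstream(dem):
    """For each cell, store the steepest-descent neighbor (or (-1, -1) at a pit)."""
    rows, cols = dem.shape
    down = np.full((rows, cols, 2), -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            best, best_slope = None, 0.0
            for (dr, dc), dist in zip(OFFSETS, DIST):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    slope = (dem[r, c] - dem[rr, cc]) / dist
                    if slope > best_slope:
                        best, best_slope = (rr, cc), slope
            if best is not None:
                down[r, c] = best
    return down

def delineate(dem, outlet):
    """Return a boolean mask of all cells that drain to the outlet cell."""
    down = d8_downstream(dem)
    rows, cols = dem.shape
    upstream = {}                              # invert the drainage graph
    for r in range(rows):
        for c in range(cols):
            rr, cc = down[r, c]
            if rr >= 0:
                upstream.setdefault((rr, cc), []).append((r, c))
    mask = np.zeros_like(dem, dtype=bool)
    queue = deque([outlet])
    while queue:
        cell = queue.popleft()
        mask[cell] = True
        queue.extend(upstream.get(cell, []))
    return mask

dem = np.array([[5.0, 4.0, 5.0],
                [4.0, 2.0, 4.0],
                [5.0, 1.0, 5.0]])
print(delineate(dem, outlet=(2, 1)))
```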
Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling
NASA Astrophysics Data System (ADS)
Her, Y. G.
2017-12-01
Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans that are consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to become common in the near future as global-scale remotely sensed data are emerging and computing resources advance rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM) to describe two-dimensional overland processes, which require excessive computing resources to manipulate large matrices (implicit scheme) or small simulation time intervals to maintain the stability of the solution (explicit scheme). Others make unrealistic assumptions, such as constant overland flow velocity, to reduce the computational load of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds of different landscapes and sizes, from 3.5 km2 to 2,800 km2, at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges are discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved, respectively, by employing advanced computing techniques and hydrological understanding, by using remotely sensed hydrological observations such as soil moisture and radar rainfall depth, and by sharing the model and its code in the public domain.
Small-Scale Spectral and Color Analysis of Ritchey Crater Impact Materials
NASA Astrophysics Data System (ADS)
Bray, Veronica; Chojnacki, Matthew; McEwen, Alfred; Heyd, Rodney
2014-11-01
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) analysis of Ritchey crater on Mars has allowed identification of the minerals uplifted from depth within its central peak as well as the dominant spectral signature of the crater-fill materials which surround it. However, the 18 m/px resolution of CRISM prevents full analysis of the nature of small-scale dykes, megabreccia blocks and finer-scale crater-fill units. We extend our existing CRISM-based compositional mapping of the Ritchey crater interior to sub-CRISM pixel scales with the use of High Resolution Imaging Science Experiment (HiRISE) Color Ratio Products (CRPs). These CRPs are then compared to CRISM images; the correlation between color ratio and CRISM spectral signature for a large bedrock unit is defined and used to suggest a similar composition for a smaller unit with the same color ratio. Megabreccia deposits, angular fragments of rock in excess of 1 meter in diameter within a finer-grained matrix, are common at Ritchey. The dominant spectral signature from each megabreccia unit varies with location around Ritchey and appears to reflect the matrix composition (based on texture and albedo similarities to surrounding rocks) rather than clast composition. In cases where the breccia block size is large enough for CRISM analysis, many different mineral compositions are noted (low-calcium pyroxene (LCP), olivine (OL), alteration products) depending on the location. All block compositions (as inferred from CRPs) are observed down to the limit of HiRISE resolution. We have found a variety of dyke compositions within our mapping area. Correlation between CRP color and CRISM spectra in this area suggests that large (10 m wide) dykes within LCP-bearing bedrock close to the crater center tend to have a similar composition to the host rock. Smaller dykes running non-parallel to the larger dykes are inferred to be OL-rich, suggesting multiple phases of dyke formation within the Ritchey crater and its bedrock.
Meteorological impact assessment of possible large scale irrigation in Southwest Saudi Arabia
NASA Astrophysics Data System (ADS)
Ter Maat, H. W.; Hutjes, R. W. A.; Ohba, R.; Ueda, H.; Bisselink, B.; Bauer, T.
2006-11-01
On continental to regional scales, feedbacks between land-use and land-cover change and climate have been widely documented over the past 10-15 years. In the present study we explore the possibility that vegetation changes over much smaller areas may also affect local precipitation regimes. Large-scale (~10^5 ha) irrigated plantations in semi-arid environments may, under particular conditions, affect local circulations and induce additional rainfall. Capturing this rainfall 'surplus' could then reduce the need for external irrigation sources and eventually lead to self-sustained water cycling. This concept is studied in the coastal plains of South West Saudi Arabia, where the mountains of the Asir region exhibit the highest rainfall of the peninsula due to orographic lifting and condensation of moisture imported with the Indian Ocean monsoon and with disturbances from the Mediterranean Sea. We use a regional atmospheric modeling system (RAMS) forced by ECMWF analysis data to resolve the effect of complex surface conditions at high resolution (Δx = 4 km). After validation, these simulations are analysed with a focus on the role of local processes (sea breezes, orographic lifting and the formation of fog in the coastal mountains) in generating rainfall, and on how these will be affected by large-scale irrigated plantations in the coastal desert. The validation showed that the model simulates the regional and local weather reasonably well. The simulations exhibit a slightly larger diurnal temperature range than that captured by the observations, but seem to capture daily sea-breeze phenomena well. Monthly rainfall is well reproduced at coarse resolutions, but appears more localized at high resolutions. The hypothetical irrigated plantation (3.25 × 10^5 ha) has significant effects on atmospheric moisture, but due to weakened sea breezes this leads to only limited increases in rainfall. In terms of recycling of the applied irrigation water, the rainfall enhancement in this particular setting is rather insignificant.
Magnetically-coupled microcalorimeter arrays for x-ray astrophysics
NASA Astrophysics Data System (ADS)
Bandler, Simon
The "X-ray Surveyor" has been listed by NASA as one of the four major large mission concepts to be studied in the next Astrophysics Decadal Review in its preliminary list of large concepts. One of the key instruments on such a mission would be a very large format X-ray microcalorimeter array, with an array size of greater than 100 thousand pixels. Magnetically-coupled microcalorimeters (MCC) are one of the technologies with the greatest potential to meet the requirements of this mission, and this proposal is one to carry out research specifically to reach the goals of this vision. The "X-ray Surveyor" is a concept for a future mission that will make X-ray observations that are instrumental to understanding the quickly emerging population of galaxies and supermassive black holes at z ~10. The observations will trace the formation of galaxies and their assembly into large-scale structures starting from the earliest possible epochs. This mission would be observing baryons and large-scale physical processes outside of the very densest regions in the local Universe. This can be achieved with an X-ray observatory with similar angular resolution as Chandra but with significantly improved optic area and detector sensitivity. Chandra-scale angular resolution (1" or better) is essential in building more powerful, higher throughput observatories to avoid source confusion and remain photon-limited rather than background-limited. A prime consideration for the microcalorimeter camera on this type of mission is maintaining ~ 1 arcsec spatial resolution over the largest possible field of view, even if this means a slight trade-off against the spectral resolution. A uniform array of 1" pixels covering at least 5'x5' field of view is desired. To reduce the number of sensors read out, in geometries where extremely fine pitch (~50 microns) is desired, the most promising technologies are those in which a thermal sensor such an MCC can read out a sub-array of 20-25 individual 1'• pixels. Projections based on the current state of this technology indicate that less than 5 eV energy resolution can be achieved with this sort of geometry. Theoretically, magnetically-coupled microcalorimeters are well-equipped to achieve the very highest energy resolutions, especially when several absorbers are attached to each sensor, increasing the heat capacity. This program will build upon the work carried out by our group on metallic magnetic calorimeters (MMC) and Magnetic penetration thermometers (MPT) in the antecedent program. In this program we will carry out development in three main areas. First, we will develop sensor geometries that are optimized for reading out sub-arrays of pixels with a single sensor of the type that is likely desired by the "X-ray Surveyor". Second, we will further develop large-format arraying prototypes with the engineering of wiring-pixel approaches that are scalable to the large-format arrays that are needed. Third, we will develop the read-out technology that will be necessary, which utilizes the next generation of X-ray microcalorimeter read-out approach, a microwave multiplexing readout.
LAI inversion algorithm based on directional reflectance kernels.
Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D
2007-11-01
Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurements based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product, using a kernel-driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate an LAI distribution in China in mid-July 2002. Ground data acquired from two field experiments in Changbai Mountain and Qilian Mountain were used to validate the algorithm. To resolve the scale discrepancy between high-resolution ground observations and low-resolution remote sensing data, two TM images with a resolution approaching the size of the ground plots were used to relate the coarse-resolution LAI map to ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high-resolution LAI map was generated using that relationship. The LAI value of a low-resolution pixel was then calculated as the area-weighted sum of the high-resolution LAIs composing the low-resolution pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
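The LAI-gap-fraction relationship referred to above is commonly written as a Beer-Lambert law, P(θ) = exp(-G(θ)·Ω·LAI/cos θ). A minimal inversion at a single zenith angle is sketched below; the use of the 57.5° "hinge" angle with G ≈ 0.5 and the example gap fractions are assumptions for illustration, and the paper's kernel-driven directional retrieval is more elaborate than this.

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, zenith_deg=57.5, G=0.5, clumping=1.0):
    """
    Invert the Beer-Lambert gap-fraction relation
        P(theta) = exp(-G(theta) * Omega * LAI / cos(theta))
    for LAI at a single view zenith angle.
    """
    theta = np.radians(zenith_deg)
    gap_fraction = np.asarray(gap_fraction, dtype=float)
    return -np.cos(theta) * np.log(gap_fraction) / (G * clumping)

# Example: directional gap fractions retrieved from a BRDF product (illustrative values)
print(lai_from_gap_fraction([0.30, 0.10, 0.05]))
```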
NASA Astrophysics Data System (ADS)
Hull, Charles L. H.; Girart, Josep M.; Tychoniec, Łukasz; Rao, Ramprasad; Cortés, Paulo C.; Pokhrel, Riwaj; Zhang, Qizhou; Houde, Martin; Dunham, Michael M.; Kristensen, Lars E.; Lai, Shih-Ping; Li, Zhi-Yun; Plambeck, Richard L.
2017-10-01
We present high angular resolution dust polarization and molecular line observations carried out with the Atacama Large Millimeter/submillimeter Array (ALMA) toward the Class 0 protostar Serpens SMM1. By complementing these observations with new polarization observations from the Submillimeter Array (SMA) and archival data from the Combined Array for Research in Millimeter-wave Astronomy (CARMA) and the James Clerk Maxwell Telescope (JCMT), we can compare the magnetic field orientations at different spatial scales. We find major changes in the magnetic field orientation between large (~0.1 pc) scales, where the magnetic field is oriented E-W, perpendicular to the major axis of the dusty filament where SMM1 is embedded, and the intermediate and small scales probed by CARMA (~1000 au resolution), the SMA (~350 au resolution), and ALMA (~140 au resolution). The ALMA maps reveal that the redshifted lobe of the bipolar outflow is shaping the magnetic field in SMM1 on the southeast side of the source; however, on the northwestern side and elsewhere in the source, low-velocity shocks may be causing the observed chaotic magnetic field pattern. High-spatial-resolution continuum and spectral-line observations also reveal a tight (~130 au) protobinary system in SMM1-b, the eastern component of which is launching an extremely high-velocity, one-sided jet visible in both CO (J = 2→1) and SiO (J = 5→4); however, that jet does not appear to be shaping the magnetic field. These observations show that with the sensitivity and resolution of ALMA, we can now begin to understand the role that feedback (e.g., from protostellar outflows) plays in shaping the magnetic field in very young, star-forming sources like SMM1.
Wang, Ran; Gamon, John A; Cavender-Bares, Jeannine; Townsend, Philip A; Zygielbaum, Arthur I
2018-03-01
Remote sensing has been used to detect plant biodiversity in a range of ecosystems based on the varying spectral properties of different species or functional groups. However, the most appropriate spatial resolution necessary to detect diversity remains unclear. At coarse resolution, differences among spectral patterns may be too weak to detect. In contrast, at fine resolution, redundant information may be introduced. To explore the effect of spatial resolution, we studied the scale dependence of spectral diversity in a prairie ecosystem experiment at Cedar Creek Ecosystem Science Reserve, Minnesota, USA. Our study involved a scaling exercise comparing synthetic pixels resampled from high-resolution images within manipulated diversity treatments. Hyperspectral data were collected using several instruments on both ground and airborne platforms. We used the coefficient of variation (CV) of spectral reflectance in space as the indicator of spectral diversity and then compared CV at different scales ranging from 1 mm² to 1 m² to conventional biodiversity metrics, including species richness, Shannon's index, Simpson's index, phylogenetic species variation, and phylogenetic species evenness. In this study, higher species richness plots generally had higher CV. CV showed higher correlations with Shannon's index and Simpson's index than did species richness alone, indicating evenness contributed to the spectral diversity. Correlations with species richness and Simpson's index were generally higher than with phylogenetic species variation and evenness measured at comparable spatial scales, indicating weaker relationships between spectral diversity and phylogenetic diversity metrics than with species diversity metrics. High-resolution imaging spectrometer data (1 mm² pixels) showed the highest sensitivity to diversity level. With decreasing spatial resolution, the difference in CV between diversity levels decreased, greatly reducing the optical detectability of biodiversity. The optimal pixel size for distinguishing α diversity in these prairie plots appeared to be around 1 mm to 10 cm, a spatial scale similar to the size of an individual herbaceous plant. These results indicate a strong scale dependence of the spectral diversity-biodiversity relationships, with spectral diversity best able to detect a combination of species richness and evenness, and more weakly detecting phylogenetic diversity. These findings can be used to guide airborne studies of biodiversity and develop more effective large-scale biodiversity sampling methods.
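As an illustration of the scaling exercise described above, the following sketch computes a CV-based spectral diversity index on a synthetic reflectance cube and block-averages it to coarser pixel sizes. The array shapes, band count, and random reflectance values are assumptions for demonstration only; they do not reproduce the study's instruments or plots.

```python
import numpy as np

def spectral_cv(cube):
    """Coefficient of variation of reflectance across pixels, averaged over bands.

    cube: array of shape (ny, nx, nbands) holding reflectance spectra.
    """
    pixels = cube.reshape(-1, cube.shape[-1])            # (npixels, nbands)
    cv_per_band = pixels.std(axis=0) / pixels.mean(axis=0)
    return cv_per_band.mean()

def coarsen(cube, factor):
    """Aggregate pixels by block-averaging to mimic a coarser sensor."""
    ny, nx, nb = cube.shape
    ny2, nx2 = ny // factor, nx // factor
    trimmed = cube[:ny2 * factor, :nx2 * factor, :]
    return trimmed.reshape(ny2, factor, nx2, factor, nb).mean(axis=(1, 3))

# Synthetic 1 m x 1 m plot sampled at 1 cm pixels with 50 spectral bands.
rng = np.random.default_rng(0)
fine = 0.2 + 0.1 * rng.random((100, 100, 50))

for factor in (1, 5, 10, 50):                            # 1 cm, 5 cm, 10 cm, 50 cm pixels
    print(f"pixel size {factor} cm: CV = {spectral_cv(coarsen(fine, factor)):.4f}")
```

Block-averaging reduces between-pixel variation, so the CV drops as pixels coarsen, which mirrors the loss of optical detectability with decreasing resolution reported above.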
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Christensen, Hannah M.; Juricke, Stephan; Subramanian, Aneesh; Watson, Peter A. G.; Weisheimer, Antje; Palmer, Tim N.
2017-03-01
The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth system model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 km down to 16 km). The project includes more than 120 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), together with coupled transient runs (1850-2100). A total of 20.4 million core hours have been used, made available through a single-year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PB of output data have been produced on the SuperMUC IBM Petascale System at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. About 140 TB of post-processed data are stored on the CINECA supercomputing centre archives and are freely accessible to the community thanks to an EUDAT data pilot project. This paper presents the technical and scientific set-up of the experiments, including the details of the forcing used for the simulations performed, defining the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given. An improvement in the simulation of Euro-Atlantic atmospheric blocking following the resolution increase is observed. It is also shown that including stochastic parameterisation in the low-resolution runs helps to improve some aspects of the tropical climate - specifically the Madden-Julian Oscillation and the tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).
NASA Astrophysics Data System (ADS)
Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.
2018-06-01
High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.
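The signal-versus-noise decomposition used in this kind of initial-condition ensemble analysis can be illustrated with a minimal sketch: the ensemble mean approximates the forced response and the member-to-member spread approximates internal variability. All values below, apart from the 16-member size, are synthetic placeholders rather than RCM output.

```python
import numpy as np

# Synthetic stand-in for a 16-member initial-condition ensemble of projected
# precipitation change (mm/day) on a 100 x 100 grid; values are illustrative only.
rng = np.random.default_rng(1)
n_members, ny, nx = 16, 100, 100
forced_pattern = 0.3 * np.sin(np.linspace(0, np.pi, nx))   # imposed "true" signal
ensemble = forced_pattern + 0.5 * rng.standard_normal((n_members, ny, nx))

signal = ensemble.mean(axis=0)            # estimate of the forced response
noise = ensemble.std(axis=0, ddof=1)      # estimate of internal variability
snr = np.abs(signal) / noise              # signal-to-noise ratio per grid cell

print("median signal-to-noise across the domain:", round(float(np.median(snr)), 2))
```

A single member corresponds to one slice of `ensemble` and carries the full noise term, which is why it conveys only limited information about the forced response.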
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
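A minimal sketch of the kind of tail analysis described above: compute logarithmic returns at several horizons and estimate a power-law tail index with a Hill estimator. The price series here is a synthetic geometric random walk, so the numbers are meaningless; with real second-by-second data the same code would show how the tail exponent varies with time scale.

```python
import numpy as np

def log_returns(prices, horizon):
    """Logarithmic returns over a fixed horizon (in samples)."""
    prices = np.asarray(prices, dtype=float)
    return np.log(prices[horizon:]) - np.log(prices[:-horizon])

def hill_tail_exponent(returns, tail_fraction=0.05):
    """Hill estimator of the power-law tail index of |returns|."""
    x = np.sort(np.abs(returns))[::-1]
    k = max(int(tail_fraction * x.size), 2)
    tail = x[:k]
    return 1.0 / np.mean(np.log(tail[:-1] / tail[-1]))

# Synthetic second-by-second price path (geometric random walk), a stand-in for real data.
rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(1e-4 * rng.standard_normal(500_000)))

for horizon in (300, 900, 3600):                           # horizons in seconds
    r = log_returns(prices, horizon)
    print(f"{horizon:5d} s: tail exponent = {hill_tail_exponent(r):.2f}")
```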
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgrid-scale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(-5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
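To make the role of representation error concrete, here is a minimal perturbed-observation ensemble Kalman filter update in which the observation error covariance R is the sum of an instrument term and a representation term. It is a generic textbook-style sketch, not the stochastic superparameterization framework of the paper, and all dimensions and values are invented for illustration.

```python
import numpy as np

def enkf_update(ensemble, obs, H, r_instrument, r_representation, rng):
    """Perturbed-observation EnKF update with representation error folded into R.

    ensemble : (n_members, n_state) forecast ensemble
    obs      : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    """
    n_members = ensemble.shape[0]
    R = np.diag(r_instrument + r_representation)         # total observation error covariance
    Xm = ensemble.mean(axis=0)
    Xp = ensemble - Xm                                    # state perturbations
    Yp = Xp @ H.T                                         # perturbations in observation space
    P_hy = Xp.T @ Yp / (n_members - 1)                    # cross covariance
    P_yy = Yp.T @ Yp / (n_members - 1) + R
    K = P_hy @ np.linalg.inv(P_yy)                        # Kalman gain
    perturbed_obs = obs + rng.multivariate_normal(np.zeros(len(obs)), R, n_members)
    return ensemble + (perturbed_obs - ensemble @ H.T) @ K.T

# Toy example: a 3-variable state observed at its first two components.
rng = np.random.default_rng(3)
ens = rng.standard_normal((20, 3))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
analysis = enkf_update(ens, obs=np.array([0.5, -0.2]), H=H,
                       r_instrument=np.array([0.01, 0.01]),
                       r_representation=np.array([0.25, 0.25]), rng=rng)
print(analysis.mean(axis=0))
```

Setting `r_representation` to zero mimics a filter that ignores representation error: the gain then overweights observations contaminated by the unresolved scales.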
NASA Astrophysics Data System (ADS)
Harders, Rieka; Ranero, Cesar R.; Weinrebe, Wilhelm; von Huene, Roland
2014-05-01
Subduction of km-tall and tens-of-km-wide seamounts causes important landsliding events at subduction zones around the world. Along the Middle America Trench, previous work based on regional swath bathymetry maps (with 100 m grids) and multichannel seismic images has shown that seamount subduction produces large-scale slumping and sliding. Some of the mass wasting events may have been catastrophic, and numerical modeling has indicated that they may have produced important local tsunamis. We have re-evaluated the structure of several active submarine landslide complexes caused by large seamount subduction using side scan sonar data. The comparison of the side scan sonar data to local high-resolution bathymetry grids indicates that the backscatter data has a resolution that is somewhat similar to that produced by a 10 m bathymetry grid. Although this is an arbitrary comparison, the side scan sonar data provides comparatively much higher resolution information than the previously used regional multibeam bathymetry. We have mapped the geometry and relief of the head and side walls of the complexes, the distribution of scars, and the different sediment deposits to produce a new interpretation of the modes of landsliding during subduction of large seamounts. The new higher resolution information shows that landsliding processes are considerably more complex than formerly assumed. Landslides are of notably smaller dimensions than the lower-resolution data had previously appeared to indicate. However, significantly large events may have occurred far more often than earlier interpretations had inferred, representing a more common threat than previously assumed.
Multiple Scale Remote Sensing for Monitoring Rangelands
USDA-ARS?s Scientific Manuscript database
Based on a land-cover classification from NASA’s MODerate resolution Imaging Spectroradiometer (MODIS), rangelands cover 48% of the Earth’s land surface, not including Antarctica. Nearly all analyses imply the most economical means of monitoring large areas of rangelands worldwide is with remote se...
Dust Polarization toward Embedded Protostars in Ophiuchus with ALMA. I. VLA 1623
NASA Astrophysics Data System (ADS)
Sadavoy, Sarah I.; Myers, Philip C.; Stephens, Ian W.; Tobin, John; Commerçon, Benoît; Henning, Thomas; Looney, Leslie; Kwon, Woojin; Segura-Cox, Dominique; Harris, Robert
2018-06-01
We present high-resolution (∼30 au) ALMA Band 6 dust polarization observations of VLA 1623. The VLA 1623 data resolve compact ∼40 au inner disks around the two protobinary sources, VLA 1623-A and VLA 1623-B, and also an extended ∼180 au ring of dust around VLA 1623-A. This dust ring was previously identified as a large disk in lower-resolution observations. We detect highly structured dust polarization toward the inner disks and the extended ring with typical polarization fractions ≈1.7% and ≈2.4%, respectively. The two components also show distinct polarization morphologies. The inner disks have uniform polarization angles aligned with their minor axes. This morphology is consistent with expectations from dust scattering. By contrast, the extended dust ring has an azimuthal polarization morphology not previously seen in lower-resolution observations. We find that our observations are well-fit by a static, oblate spheroid model with a flux-frozen, poloidal magnetic field. We propose that the polarization traces magnetic grain alignment likely from flux freezing on large scales and magnetic diffusion on small scales. Alternatively, the azimuthal polarization may be attributed to grain alignment by the anisotropic radiation field. If the grains are radiatively aligned, then our observations indicate that large (∼100 μm) dust grains grow quickly at large angular extents. Finally, we identify significant proper motion of VLA 1623 using our observations and those in the literature. This result indicates that the proper motion of nearby systems must be corrected for when combining ALMA data from different epochs.
What if we took a global look?
NASA Astrophysics Data System (ADS)
Ouellet Dallaire, C.; Lehner, B.
2014-12-01
Freshwater resources are facing unprecedented pressures. In hopes of coping with this, Environmental Hydrology, Freshwater Biology, and Fluvial Geomorphology have defined conceptual approaches such as "environmental flow requirements", "instream flow requirements" or "normative flow regime" to define the appropriate flow regime to maintain a given ecological status. These advances in the fields of freshwater resources management are asking scientists to create bridges across disciplines. Holistic and multi-scale approaches are becoming more and more common in water sciences research. The intrinsic nature of river systems demands that these approaches account for the upstream-downstream link of watersheds. Before recent technological developments, large-scale analyses were cumbersome and, often, the necessary data were unavailable. However, new technologies, both for information collection and computing capacity, enable a high-resolution look at the global scale. For rivers around the world, this new outlook is facilitated by the hydrologically relevant geo-spatial database HydroSHEDS. This database now offers more than 24 million kilometers of rivers, some never mapped before, at the click of a fingertip. Large and even global scale assessments can now be used to compare rivers around the world. A river classification framework called GloRiC (Global River Classification) was developed using HydroSHEDS. This framework advocates a holistic approach to river systems by using sub-classifications drawn from six disciplines related to river sciences: Hydrology, Physiography and climate, Geomorphology, Chemistry, Biology and Human impact. Each of these disciplines brings complementary information on the rivers that is relevant at different scales. A first version of a global river reach classification was produced at 500 m resolution. Variables used in the classification influence processes at different scales (e.g., topographic index vs. pH). However, all variables are computed at the same high spatial resolution. This way, we can have a global look at local phenomena.
SMOS L1C and L2 Validation in Australia
NASA Technical Reports Server (NTRS)
Rudiger, Christoph; Walker, Jeffrey P.; Kerr, Yann H.; Mialon, Arnaud; Merlin, Olivier; Kim, Edward J.
2012-01-01
Extensive airborne field campaigns (Australian Airborne Cal/val Experiments for SMOS - AACES) were undertaken during the 2010 summer and winter seasons of the southern hemisphere. The purpose of those campaigns was the validation of the Level 1c (brightness temperature) and Level 2 (soil moisture) products of the ESA-led Soil Moisture and Ocean Salinity (SMOS) mission. As SMOS is the first satellite to globally map L-band (1.4 GHz) emissions from the Earth's surface, and the first 2-dimensional interferometric microwave radiometer used for Earth observation, large-scale and long-term validation campaigns have been conducted world-wide, of which AACES is the most extensive. AACES combined large-scale medium-resolution airborne L-band and spectral observations, along with high-resolution in-situ measurements of soil moisture, across a 50,000 km² area of the Murrumbidgee River catchment, located in south-eastern Australia. This paper presents a qualitative assessment of the SMOS brightness temperature and soil moisture products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renzi, Fabiana; Panetta, Gianna; Vallone, Beatrice
Recombinant His-tagged XendoU, a eukaryotic endoribonuclease, appeared to aggregate in the presence of divalent cations. Monodisperse protein which yielded crystals diffracting to 2.2 Å was obtained by addition of EDTA. XendoU is the first endoribonuclease described in higher eukaryotes as being involved in the endonucleolytic processing of intron-encoded small nucleolar RNAs. It is conserved among eukaryotes and its viral homologue is essential in SARS replication and transcription. The large-scale purification and crystallization of recombinant XendoU are reported. The tendency of the recombinant enzyme to aggregate could be reversed upon the addition of chelating agents (EDTA, imidazole): aggregation is a potential drawback when purifying and crystallizing His-tagged proteins, which are widely used, especially in high-throughput structural studies. Purified monodisperse XendoU crystallized in two different space groups: trigonal P3₁21, diffracting to low resolution, and monoclinic C2, diffracting to higher resolution.
NASA Astrophysics Data System (ADS)
Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.
2015-05-01
A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have come under severe degradation and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier Stokes and large eddy simulation and partially averaged Navier Stokes in simulating such flow phenomena.
NASA Astrophysics Data System (ADS)
Flores, A. N.; Lakshmi, V.; Al-Barakat, R.; Maksimowicz, M.
2017-12-01
Land grabbing, the acquisition of large areas of land by external entities, results from interactions of complex global economic, social, and political processes. These transactions are controversial because they can result in large-scale disruptions to historical land uses, including increased intensity of agricultural practices and significant conversions in land cover. These large-scale disruptions have the potential to impact surface water and energy balance because vegetation controls the partitioning of incoming energy into latent and sensible heat fluxes and of precipitation into runoff and infiltration. Because large-scale land acquisitions can impact local ecosystem services, it is important to document changes in terrestrial vegetation associated with these acquisitions to support the assessment of associated impacts on regional surface water and energy balance, the spatiotemporal scales of those changes, and interactions and feedbacks with other processes, particularly in the atmosphere. We use remote sensing data from multiple satellite platforms to diagnose and characterize changes in terrestrial vegetation and ecohydrology in Mozambique during periods that bracket significant land acquisitions. The Advanced Very High Resolution Radiometer (AVHRR) sensor provides long-term continuous data that can document historical seasonal cycles of vegetation greenness. These data are augmented with analyses of Landsat multispectral data, which provide significantly higher spatial resolution. Here we quantify the spatiotemporal changes in vegetation associated with periods of significant land acquisitions in Mozambique. This analysis complements a suite of land-atmosphere modeling experiments designed to deduce potential changes in land surface water and energy budgets associated with these acquisitions. This work advances understanding of the telecouplings between global economic and political forcings and regional hydrology and climate.
Mechanisms and Model Diversity of Trade-Wind Shallow Cumulus Cloud Feedbacks: A Review.
Vial, Jessica; Bony, Sandrine; Stevens, Bjorn; Vogel, Raphaela
2017-01-01
Shallow cumulus clouds in the trade-wind regions are at the heart of the long standing uncertainty in climate sensitivity estimates. In current climate models, cloud feedbacks are strongly influenced by cloud-base cloud amount in the trades. Therefore, understanding the key factors controlling cloudiness near cloud-base in shallow convective regimes has emerged as an important topic of investigation. We review physical understanding of these key controlling factors and discuss the value of the different approaches that have been developed so far, based on global and high-resolution model experimentations and process-oriented analyses across a range of models and for observations. The trade-wind cloud feedbacks appear to depend on two important aspects: (1) how cloudiness near cloud-base is controlled by the local interplay between turbulent, convective and radiative processes; (2) how these processes interact with their surrounding environment and are influenced by mesoscale organization. Our synthesis of studies that have explored these aspects suggests that the large diversity of model responses is related to fundamental differences in how the processes controlling trade cumulus operate in models, notably, whether they are parameterized or resolved. In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes. Uncertainties are difficult to narrow using current observations, as the trade cumulus variability and its relation to large-scale environmental factors strongly depend on the time and/or spatial scales at which the mechanisms are evaluated. New opportunities for testing physical understanding of the factors controlling shallow cumulus cloud responses using observations and high-resolution modeling on large domains are discussed.
A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.
Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu
2017-10-01
The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
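The "unit cell" idea of grouping Lagrangian particles by the Eulerian cell that contains them can be sketched in a few lines. The grid, particle data, and helper names below are assumptions for illustration and do not represent the paper's out-of-core file layout or its multi-resolution machinery.

```python
import numpy as np
from collections import defaultdict

def build_unit_cells(positions, values, grid_origin, cell_size, grid_shape):
    """Group Lagrangian particles by the Eulerian cell that contains them.

    Returns a dict mapping a flat cell index to (particle indices, particle values),
    so per-cell queries only touch the particles stored with that cell.
    """
    idx3 = np.floor((positions - grid_origin) / cell_size).astype(int)
    idx3 = np.clip(idx3, 0, np.array(grid_shape) - 1)
    flat = np.ravel_multi_index((idx3[:, 0], idx3[:, 1], idx3[:, 2]), grid_shape)
    cells = defaultdict(list)
    for particle, cell in enumerate(flat):
        cells[cell].append(particle)
    return {cell: (np.array(p), values[np.array(p)]) for cell, p in cells.items()}

# Toy data: 10,000 particles carrying a scalar, binned onto a 16^3 Eulerian grid.
rng = np.random.default_rng(4)
pos = rng.random((10_000, 3))
val = rng.standard_normal(10_000)
cells = build_unit_cells(pos, val, grid_origin=0.0, cell_size=1.0 / 16, grid_shape=(16, 16, 16))
print("particles in cell 0:", len(cells[0][0]) if 0 in cells else 0)
```

Keeping particle indices per cell is what lets a query against one region of the Eulerian grid read only the local Lagrangian data, the basic property a combined representation relies on.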
Analysis of terrestrial conditions and dynamics
NASA Technical Reports Server (NTRS)
Goward, S. N. (Principal Investigator)
1984-01-01
Land spectral reflectance properties for selected locations, including the Goddard Space Flight Center, the Wallops Flight Facility, an MLA test site in Cambridge, Maryland, and an acid test site in Burlington, Vermont, were measured. Methods to simulate the bidirectional reflectance properties of vegetated landscapes and a database for spatial resolution were developed. North American vegetation patterns observed with the Advanced Very High Resolution Radiometer were assessed. Data and methods needed to model large-scale vegetation activity with remotely sensed observations and climate data were compiled.
NASA Astrophysics Data System (ADS)
Franci, Luca; Landi, Simone; Matteini, Lorenzo; Verdini, Andrea; Hellinger, Petr
2016-04-01
We investigate the properties of the ion-scale spectral break of solar wind turbulence by means of two-dimensional, large-scale, high-resolution hybrid particle-in-cell simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we add a spectrum of in-plane large-scale magnetic and kinetic fluctuations, with energy equipartition and vanishing correlation. We perform a set of ten simulations with different values of the ion plasma beta, β_i. In all cases, we observe the power spectrum of the total magnetic fluctuations following a power law with a spectral index of -5/3 in the inertial range, with a smooth break around ion scales and a steeper power law in the sub-ion range. This spectral break always occurs at spatial scales of the order of the proton gyroradius, ρ_i, and the proton inertial length, d_i = ρ_i/√β_i. When the plasma beta is of the order of 1, the two scales are very close to each other, and determining which one is directly related to the steepening of the spectra is not straightforward. In order to overcome this limitation, we extended the range of values of β_i over three orders of magnitude, from 0.01 to 10, so that the two ion scales were well separated. This let us observe that the break always seems to occur at the larger of the two scales, i.e., at d_i for β_i < 1 and at ρ_i for β_i > 1. The effect of β_i on the spectra of the parallel and perpendicular magnetic components separately and of the density fluctuations is also investigated. We compare all our numerical results with solar wind observations and suggest possible explanations for our findings.
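The relation between the two candidate break scales is simple enough to tabulate directly: with d_i = ρ_i/√β_i, the larger of the two switches from d_i to ρ_i as β_i crosses 1. The short sketch below evaluates that relation over the β_i range quoted in the abstract; it is arithmetic on the stated formula, not a reproduction of the simulations.

```python
import numpy as np

# Compare the proton gyroradius rho_i and inertial length d_i = rho_i / sqrt(beta_i),
# reporting the larger scale (where the abstract finds the spectral break).
rho_i = 1.0                                   # normalize lengths to the gyroradius
for beta_i in (0.01, 0.1, 1.0, 10.0):
    d_i = rho_i / np.sqrt(beta_i)
    which = "d_i" if d_i > rho_i else "rho_i"
    print(f"beta_i = {beta_i:5.2f}: d_i = {d_i:5.2f} rho_i, larger scale is {which}")
```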
Environment and host as large-scale controls of ectomycorrhizal fungi.
van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I
2018-06-06
Explaining the large-scale diversity of soil organisms that drive biogeochemical processes-and their responses to environmental change-is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is-to our knowledge-unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment and that the importance of belowground specificity and plasticity has previously been underappreciated.
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing performance in terms of spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
Multilevel Cloud Structures above Svalbard
NASA Astrophysics Data System (ADS)
Dörnbrack, Andreas; Pitts, Micheal; Poole, Lamont; Gisinger, Sonja; Maturlli, Marion
2017-04-01
The presentation focuses on the results recently published by the authors under the heading "picture of the month" in Monthly Weather Review. The presented picture of the month is a superposition of space-borne lidar observations and high-resolution temperature fields of the ECMWF integrated forecast system (IFS). It displays complex tropospheric and stratospheric clouds in the Arctic winter 2015/16. Near the end of December 2015, the unusual northeastward propagation of warm and humid subtropical air masses as far north as 80°N lifted the tropopause by more than 3 km in 24 h and cooled the stratosphere on a large scale. A widespread formation of thick cirrus clouds near the tropopause and of synoptic-scale polar stratospheric clouds (PSCs) occurred as the temperature dropped below the thresholds for the existence of cloud particles. Additionally, mountain waves were excited by the strong flow at the western edge of the ridge across Svalbard, leading to the formation of mesoscale ice PSCs. The most recent IFS cycle using a horizontal resolution of 8 km globally reproduces the large-scale and mesoscale flow features and leads to a remarkable agreement with the wave structure revealed by the space-borne observations.
Nanoposition sensors with superior linear response to position and unlimited travel ranges
NASA Astrophysics Data System (ADS)
Lee, Sheng-Chiang; Peters, Randall D.
2009-04-01
With the advancement of nanotechnology, the ability to position and measure at the subnanometer scale has become one of the most critical issues for the nanofabrication industry and for researchers using scanning probe microscopy. Commercial nanopositioners have achieved direct measurements at the scale of 0.01 nm with capacitive sensing metrology. However, the commercial sensors have small dynamic ranges (up to only a few hundred micrometers) and are relatively large in size (centimeters in the directions transverse to the motion), which is necessary for healthy signal detection but makes them difficult to use on smaller devices. This limits applications in which large materials (on the scale of centimeters or greater) are handled but subnanometer resolution is needed. What has been done in the past is to combine fine and coarse translation stages with different dynamic ranges to simultaneously achieve long travel range and high spatial resolution. In this paper, we present a novel capacitive position sensing metrology with an ultrawide dynamic range, from subnanometer to literally any practically desired length for a translation stage. This sensor will greatly simplify the task and enhance the performance of direct metrology in a hybrid translation stage covering translation tasks from subnanometers to centimeters.
Supporting observation campaigns with high resolution modeling
NASA Astrophysics Data System (ADS)
Klocke, Daniel; Brueck, Matthias; Voigt, Aiko
2017-04-01
High resolution simulation in support of measurement campaigns offers a promising and emerging way to create large-scale context for small-scale observations of cloud and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the research communities from modeling and observations and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large-eddy-resolving simulations with the same model for selected days in the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit for the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook will be given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.
NASA Astrophysics Data System (ADS)
Aliseda, Alberto; Bourgoin, Mickael; Eswirp Collaboration
2014-11-01
We present preliminary results from a recent grid turbulence experiment conducted at the ONERA wind tunnel in Modane, France. The ESWIRP Collaboration was conceived to probe the smallest scales of a canonical turbulent flow with very high Reynolds numbers. To achieve this, the largest scales of the turbulence need to be extremely big so that, even with the large separation of scales, the smallest scales would be well above the spatial and temporal resolution of the instruments. The ONERA wind tunnel in Modane (8 m diameter test section) was chosen as a limit of the biggest large scales achievable in a laboratory setting. A giant inflatable grid (M = 0.8 m) was conceived to induce slowly decaying homogeneous isotropic turbulence in a large region of the test section, with minimal structural risk. An international team of researchers collected hot wire anemometry, ultrasound anemometry, resonant cantilever anemometry, fast pitot tube anemometry, cold wire thermometry and high-speed particle tracking data of this canonical turbulent flow. While analysis of this large database, which will become publicly available over the next 2 years, has only started, the Taylor-scale Reynolds number is estimated to be between 400 and 800, with Kolmogorov scales as large as a few mm. The ESWIRP Collaboration is formed by an international team of scientists to investigate experimentally the smallest scales of turbulence. It was funded by the European Union to take advantage of the largest wind tunnel in Europe for fundamental research.
High-resolution modeling of local air-sea interaction within the Marine Continent using COAMPS
NASA Astrophysics Data System (ADS)
Jensen, T. G.; Chen, S.; Flatau, M. K.; Smith, T.; Rydbeck, A.
2016-12-01
The Maritime Continent (MC) is a region of intense deep atmospheric convection that serves as an important source of forcing for the Hadley and Walker circulations. The convective activity in the MC region spans multiple scales from local mesoscales to regional scales, and impacts equatorial wave propagation, coupled air-sea interaction and intraseasonal oscillations. The complex distribution of islands, shallow seas with fairly small heat storage and deep seas with large heat capacity is challenging to model. Diurnal convection over land and sea is part of a small-scale land-sea breeze system, and is highly influenced by large variations in orography over land and marginal seas. Daytime solar insolation, run-off from the archipelago and nighttime rainfall tend to stabilize the water column, while mixing by tidal currents and locally forced winds promotes vertical mixing. The runoff from land and rivers and high net precipitation result in fresh water lenses that enhance vertical stability in the water column and help maintain high SST. We use the fully coupled atmosphere-ocean-wave version of the Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS) developed at NRL, with a resolution of a few kilometers, to investigate the air-sea interaction associated with the land-sea breeze system in the MC under active and inactive phases of the Madden-Julian Oscillation. The high resolution enables simulation of strong SST gradients associated with local upwelling in deeper waters and strong salinity gradients near rivers and from heavy precipitation.
Study of Structure and Small-Scale Fragmentation in TMC-1
NASA Technical Reports Server (NTRS)
Langer, W. D.; Velusamy, T.; Kuiper, T. B.; Levin, S.; Olsen, E.; Migenes, V.
1995-01-01
Large-scale C¹⁸O maps show that the Taurus molecular cloud 1 (TMC-1) has numerous cores located along a ridge which extends about 12 minutes by at least 35 minutes. The cores traced by C¹⁸O are about a few arcminutes (0.1-0.2 pc) in extent, typically contain about 0.5-3 solar mass, and are probably gravitationally bound. We present a detailed study of the small-scale fragmentary structure of one of these cores, called core D, within TMC-1 using very high spectral and spatial resolution maps of CCS and CS. The CCS lines are excellent tracers for investigating the density, temperature, and velocity structure in dense cores. The high spectral resolution, 0.008 km/s, data consist mainly of single-dish, Nyquist-sampled maps of CCS at 22 GHz with 45 sec spatial resolution taken with NASA's 70 m DSN antenna at Goldstone. The high spatial resolution spectral line maps were made with the Very Large Array (9 sec resolution) at 22 GHz and with the OVRO millimeter array in CCS and CS at 93 GHz and 98 GHz, respectively, with 6 sec resolution. These maps are supplemented with single-dish observations of CCS and CC³⁴S spectra at 33 GHz using a NASA 34 m DSN antenna, CCS 93 GHz, C³⁴S (2-1), and C¹⁸O (1-0) single-dish observations made with the AT&T Bell Laboratories 7 m antenna. Our high spectral and spatial CCS and CS maps show that core D is highly fragmented. The single-dish CCS observations map out several clumps which range in size from approx. 45 sec to 90 sec (0.03-0.06 pc). These clumps have very narrow intrinsic line widths, 0.11-0.25 km/s, slightly larger than the thermal line width for CCS at 10 K, and masses about 0.03-0.2 solar mass. Interferometer observations of some of these clumps show that they have considerable additional internal structure, consisting of several condensations ranging in size from approx. 10 sec to 30 sec (0.007-0.021 pc), also with narrow line widths. The mass of these smallest fragments is of order 0.01 solar mass. These small-scale structures traced by CCS appear to be gravitationally unbound by a large factor. Most of these objects have masses that fall below those of the putative proto-brown dwarfs (approx. less than 0.1 solar mass). The presence of many small gravitationally unbound clumps suggests that fragmentation mechanisms other than a purely Jeans gravitational instability may be important for the dynamics of these cold dense cores.
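The comparison against "the thermal line width for CCS at 10 K" quoted above follows from one formula, Δv_FWHM = sqrt(8 ln 2 k_B T / m). The sketch below evaluates it; the constants are standard, and treating CCS as a 56 amu molecule is the only assumption.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant [J/K]
AMU = 1.66053907e-27    # atomic mass unit [kg]

def thermal_fwhm_kms(temperature_k, molecular_mass_amu):
    """FWHM of a purely thermal line profile, sqrt(8 ln2 k_B T / m), in km/s."""
    return np.sqrt(8 * np.log(2) * K_B * temperature_k / (molecular_mass_amu * AMU)) / 1e3

# CCS (two carbons and one sulfur, ~56 amu) at the 10 K quoted for these cores.
print(f"thermal FWHM for CCS at 10 K: {thermal_fwhm_kms(10.0, 56.0):.3f} km/s")
```

The formula gives roughly 0.09 km/s, just below the observed 0.11-0.25 km/s intrinsic widths, consistent with the abstract's description of the lines as slightly broader than thermal.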
Pau, G. S. H.; Bisht, G.; Riley, W. J.
2014-09-17
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO₂, CH₄) exchanges with the atmosphere range from the molecular scale (pore-scale O₂ consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (>10³) with very small relative approximation error (<0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (<1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
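A stripped-down illustration of the POD idea behind such coarse-to-fine mapping: build a POD basis from fine-resolution training snapshots, then learn a least-squares map from coarse snapshots to the POD coefficients. This is a generic sketch with random stand-in data and invented dimensions, not the paper's four methods or its soil-moisture fields.

```python
import numpy as np

rng = np.random.default_rng(5)
n_train, n_fine, n_coarse, n_modes = 200, 4096, 64, 10

# Synthetic training pairs standing in for coupled fine/coarse snapshots.
fine_train = rng.standard_normal((n_fine, n_train))
coarse_train = rng.standard_normal((n_coarse, n_train))

# POD basis of the fine-resolution snapshots via the SVD of the centered snapshot matrix.
mean_fine = fine_train.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(fine_train - mean_fine, full_matrices=False)
basis = U[:, :n_modes]                                   # leading POD modes
coeffs = basis.T @ (fine_train - mean_fine)              # (n_modes, n_train)

# Linear map from coarse snapshots to POD coefficients, fit by least squares.
A, *_ = np.linalg.lstsq(coarse_train.T, coeffs.T, rcond=None)

def reconstruct(coarse_snapshot):
    """Estimate the fine-resolution field from one coarse-resolution snapshot."""
    return mean_fine.ravel() + basis @ (A.T @ coarse_snapshot)

print(reconstruct(coarse_train[:, 0]).shape)             # (4096,)
```

With meaningful training data, the speedup comes from the fact that only the coarse model and the small linear map are evaluated at prediction time.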
Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.
Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt
2009-05-30
The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
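The storage pattern described (lossless compression of sample differences plus a per-block checksum) can be sketched as follows. Here zlib stands in for the range coder named in the abstract, the encryption step is omitted, and the block size and sample stream are invented, so this illustrates the idea rather than the platform's actual file format.

```python
import zlib
import numpy as np

def pack_block(samples: np.ndarray) -> bytes:
    """Delta-encode a block of integer samples, compress it losslessly, append a CRC-32."""
    deltas = np.diff(samples, prepend=0).astype(np.int32)
    payload = zlib.compress(deltas.tobytes(), level=9)
    crc = zlib.crc32(payload).to_bytes(4, "little")
    return payload + crc

def unpack_block(block: bytes) -> np.ndarray:
    """Verify the CRC-32, decompress, and undo the delta encoding."""
    payload, crc = block[:-4], block[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != crc:
        raise ValueError("block failed integrity check")
    deltas = np.frombuffer(zlib.decompress(payload), dtype=np.int32)
    return np.cumsum(deltas)

# A block of synthetic slowly varying samples (stand-in for one channel's A/D output).
rng = np.random.default_rng(6)
samples = np.cumsum(rng.integers(-3, 4, size=32_000)).astype(np.int32)
block = pack_block(samples)
assert np.array_equal(unpack_block(block), samples)
print(f"compressed {samples.nbytes} bytes to {len(block)} bytes")
```

Writing fixed-size blocks of this form, each self-checking and independently decompressible, is what makes append-in-real-time and random access to segments compatible with compression.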
Mantini, D.; Marzetti, L.; Corbetta, M.; Romani, G.L.; Del Gratta, C.
2017-01-01
Two major non-invasive brain mapping techniques, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have complementary advantages with regard to their spatial and temporal resolution. We propose an approach based on the integration of EEG and fMRI, enabling the EEG temporal dynamics of information processing to be characterized within spatially well-defined fMRI large-scale networks. First, the fMRI data are decomposed into networks by means of spatial independent component analysis (sICA), and those associated with intrinsic activity and/or responding to task performance are selected using information from the related time-courses. Next, the EEG data over all sensors are averaged with respect to event timing, thus calculating event-related potentials (ERPs). The ERPs are subjected to temporal ICA (tICA), and the resulting components are localized with the weighted minimum norm (WMNLS) algorithm using the task-related fMRI networks as priors. Finally, the temporal contribution of each ERP component in the areas belonging to the fMRI large-scale networks is estimated. The proposed approach has been evaluated on visual target detection data. Our results confirm that two different components, commonly observed in EEG when presenting novel and salient stimuli respectively, are related to the neuronal activation in large-scale networks, operating at different latencies and associated with different functional processes. PMID:20052528
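As a toy version of the tICA step in the pipeline above, the sketch below unmixes synthetic event-related potentials into independent temporal components with scikit-learn's FastICA. The channel count, latencies, and mixing are fabricated, and the sICA, source-localization (WMNLS), and fMRI-prior stages are not represented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_channels, n_times = 64, 600
t = np.linspace(0, 0.6, n_times)

# Two latent temporal components (an early and a late deflection) mixed into channels.
sources = np.vstack([np.exp(-((t - 0.15) / 0.03) ** 2),
                     np.exp(-((t - 0.40) / 0.08) ** 2)])
mixing = rng.standard_normal((n_channels, 2))
erp = mixing @ sources + 0.05 * rng.standard_normal((n_channels, n_times))

# Temporal ICA: time points are treated as samples, channels as mixed observations.
ica = FastICA(n_components=2, random_state=0)
temporal_components = ica.fit_transform(erp.T)            # (n_times, 2) time courses
channel_weights = ica.mixing_                              # (n_channels, 2) topographies
print(temporal_components.shape, channel_weights.shape)
```

In the full approach, the recovered time courses would then be localized with the fMRI networks as spatial priors, attaching a latency structure to each network.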
Large-scale Electrophysiology: Acquisition, Compression, Encryption, and Storage of Big Data
Brinkmann, Benjamin H.; Bower, Mark R.; Stengel, Keith A.; Worrell, Gregory A.; Stead, Matt
2009-01-01
The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single neuron action potentials, high frequency oscillations, and high amplitude ultraslow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information. PMID:19427545
Simulation Based Exploration of Critical Zone Dynamics in Intensively Managed Landscapes
NASA Astrophysics Data System (ADS)
Kumar, P.
2017-12-01
The advent of high-resolution measurements of topographic and (vertical) vegetation features using aerial LiDAR is enabling us to resolve micro-scale (~1 m) landscape structural characteristics over large areas. Availability of hyperspectral measurements is further augmenting these LiDAR data by enabling the biogeochemical characterization of vegetation and soils at unprecedented spatial resolutions (~1-10 m). Such data have opened up novel opportunities for modeling Critical Zone processes and exploring questions that were not possible before. We show how an integrated 3-D model at 1 m grid resolution can enable us to resolve micro-topographic and ecological dynamics and their control on hydrologic and biogeochemical processes over large areas. We address the computational challenge of such detailed modeling by exploiting hybrid CPU and GPU computing technologies. We show results of moisture, biogeochemical, and vegetation dynamics from studies in the Critical Zone Observatory for Intensively Managed Landscapes (IMLCZO) in the Midwestern United States.
Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos
NASA Technical Reports Server (NTRS)
Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.
1994-01-01
Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe with the spectrum normalized to the anisotropies detected by Cosmic Background Explorer (COBE) are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion sigma_v, used as a principal argument against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting sigma_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy question whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Markakis, K.; Valari, M.; Colette, A.; Sanchez, O.; Perrussel, O.; Honore, C.; Vautard, R.; Klimont, Z.; Rao, S.
2014-01-01
Ozone and PM2.5 concentrations over the city of Paris are modeled with the CHIMERE air-quality model at 4 km × 4 km horizontal resolution for two future emission scenarios. A high-resolution (1 km × 1 km) emission projection until 2020 for the greater Paris region is developed by local experts (AIRPARIF) and is further extended to year 2050 based on regional scale emission projections developed by the Global Energy Assessment. Model evaluation is performed based on a 10-year control simulation. Ozone is in very good agreement with measurements while PM2.5 is underestimated by 20% over the urban area, mainly due to a large wet bias in wintertime precipitation. A significant increase of maximum ozone relative to present-time levels over Paris is modeled under the "business as usual" scenario (+7 ppb), while a more optimistic mitigation scenario leads to a moderate ozone decrease (-3.5 ppb) in year 2050. These results are substantially different from previous regional scale projections, where 2050 ozone is found to decrease under both future scenarios. A sensitivity analysis showed that this difference is due to the fact that ozone formation over Paris in the current urban-scale study is driven by VOC-limited chemistry, whereas at the regional scale ozone formation occurs under NOx-sensitive conditions. This explains why the sharp NOx reductions implemented in the future scenarios have a different effect on ozone projections at different scales. In rural areas, projections at both scales yield similar results, showing that the longer time-scale processes of emission transport and ozone formation are less sensitive to model resolution. PM2.5 concentrations decrease by 78% and 89% under the "business as usual" and "mitigation" scenarios respectively compared to the present time period. The reduction is much more prominent over the urban part of the domain due to the effective reductions of road transport and residential emissions, resulting in the smoothing of the large urban increment modelled in the control simulation.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
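Dispersion error can be quantified through the modified wavenumber of a difference operator; the sketch below does this for the plain second-order central difference (for orientation only, not for the optimised SBP operators of the paper).

import numpy as np

h = 1.0
k = np.linspace(0.0, np.pi, 200)          # wavenumbers up to the grid Nyquist limit
k_modified = np.sin(k * h) / h            # effective wavenumber of (u[i+1] - u[i-1]) / (2h)
dispersion_error = np.abs(k_modified - k)
print(dispersion_error.max())             # error is largest for the poorly resolved scales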
NASA Astrophysics Data System (ADS)
Plebe, Alice; Grasso, Giorgio
2016-12-01
This paper describes a system developed for simulating flames inside open-source 3D computer graphics software, Blender, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.
The observation of possible reconnection events in the boundary changes of solar coronal holes
NASA Technical Reports Server (NTRS)
Kahler, S. W.; Moses, J. Daniel
1989-01-01
Coronal holes are large scale regions of magnetically open fields which are easily observed in solar soft X-ray images. The boundaries of coronal holes are separatrices between large scale regions of open and closed magnetic fields where one might expect to observe evidence of solar magnetic reconnection. Previous studies by Nolte and colleagues using Skylab X-ray images established that large scale (greater than or equal to 9 x 10^4 km) changes in coronal hole boundaries were due to coronal processes, i.e., magnetic reconnection, rather than to photospheric motions. Those studies were limited to time scales of about one day, and no conclusion could be drawn about the size and time scales of the reconnection process at hole boundaries. Sequences of appropriate Skylab X-ray images were used with a time resolution of about 90 min during times of the central meridian passages of the coronal hole labelled Coronal Hole 1 to search for hole boundary changes which can yield the spatial and temporal scales of coronal magnetic reconnection. It was found that 29 of 32 observed boundary changes could be associated with bright points. The appearance of the bright point may be the signature of reconnection between small scale and large scale magnetic fields. The observed boundary changes contributed to the quasi-rigid rotation of Coronal Hole 1.
NASA Astrophysics Data System (ADS)
Shin, S.; Pokhrel, Y. N.
2016-12-01
Land surface models have been used to assess water resources sustainability under a changing Earth environment and increasing human water needs. Overwhelming observational records indicate that human activities have ubiquitous and pertinent effects on the hydrologic cycle; however, they have been crudely represented in large scale land surface models. In this study, we enhance an integrated continental-scale land hydrology model named Leaf-Hydro-Flood to better represent land-water management. The model is implemented at high resolution (5 km grids) over the continental US. Surface water and groundwater are withdrawn based on actual practices. Newly added irrigation, water diversion, and dam operation schemes allow better simulations of stream flows, evapotranspiration, and infiltration. Results of various hydrologic fluxes and stores from two sets of simulations (one with and the other without human activities) are compared over a range of river basin and aquifer scales. The improved simulations of land hydrology have the potential to build a consistent modeling framework for human-water-climate interactions.
NASA Astrophysics Data System (ADS)
Yue, X.; Wang, W.; Schreiner, W. S.; Kuo, Y. H.; Lei, J.; Liu, J.; Burns, A. G.; Zhang, Y.; Zhang, S.
2015-12-01
Based on slant total electron content (TEC) observations made by ~10 satellites and ~450 ground IGS GNSS stations, we constructed a 4-D ionospheric electron density reanalysis during the March 17, 2013 geomagnetic storm. Four main large-scale ionospheric disturbances are identified from the reanalysis: (1) the positive storm during the initial phase; (2) the SED (storm enhanced density) structure in both the northern and southern hemispheres; (3) the large positive storm in the main phase; (4) the significant negative storm at middle and low latitudes during the recovery phase. We then run the NCAR-TIEGCM model with the Heelis empirical electric potential model as the polar input. The TIEGCM can reproduce 3 of the 4 large-scale structures (except the SED) very well. We then further analyzed the altitudinal variations of these large-scale disturbances and found several interesting features, such as the altitude variation of the SED and the rotation of the positive/negative storm phase with local time. Those structures could not be identified clearly by traditionally used data sources, which have either no global coverage or no vertical resolution. The drivers, such as neutral wind/density and the electric field from the TIEGCM simulations, are also analyzed to self-consistently explain the identified disturbance features.
NASA Technical Reports Server (NTRS)
Putnam, William M.
2011-01-01
Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions.
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Schaff, D. P.
2012-12-01
Archives of digital seismic data recorded by seismometer networks around the world have grown tremendously over the last several decades helped by the deployment of seismic stations and their continued operation within the framework of monitoring earthquake activity and verification of the Nuclear Test-Ban Treaty. We show results from our continuing effort in developing efficient waveform cross-correlation and double-difference analysis methods for the large-scale processing of regional and global seismic archives to improve existing earthquake parameter estimates, detect seismic events with magnitudes below current detection thresholds, and improve real-time monitoring procedures. We demonstrate the performance of these algorithms as applied to the 28-year long seismic archive of the Northern California Seismic Network. The tools enable the computation of periodic updates of a high-resolution earthquake catalog of currently over 500,000 earthquakes using simultaneous double-difference inversions, achieving up to three orders of magnitude resolution improvement over existing hypocenter locations. This catalog, together with associated metadata, form the underlying relational database for a real-time double-difference scheme, DDRT, which rapidly computes high-precision correlation times and hypocenter locations of new events with respect to the background archive (http://ddrt.ldeo.columbia.edu). The DDRT system facilitates near-real-time seismicity analysis, including the ability to search at an unprecedented resolution for spatio-temporal changes in seismogenic properties. In areas with continuously recording stations, we show that a detector built around a scaled cross-correlation function can lower the detection threshold by one magnitude unit compared to the STA/LTA based detector employed at the network. This leads to increased event density, which in turn pushes the resolution capability of our location algorithms. On a global scale, we are currently building the computational framework for double-difference processing the combined parametric and waveform archives of the ISC, NEIC, and IRIS with over three million recorded earthquakes worldwide. Since our methods are scalable and run on inexpensive Beowulf clusters, periodic re-analysis of such archives may thus become a routine procedure to continuously improve resolution in existing global earthquake catalogs. Results from subduction zones and aftershock sequences of recent great earthquakes demonstrate the considerable social and economic impact that high-resolution images of active faults, when available in real-time, will have in the prompt evaluation and mitigation of seismic hazards. These results also highlight the need for consistent long-term seismic monitoring and archiving of records.
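To illustrate why a correlation detector can reach lower thresholds than an energy-based STA/LTA trigger, the toy sketch below slides a known waveform template over noisy synthetic data and recovers a weak buried event; it is only a schematic stand-in for the cross-correlation detectors described above.

import numpy as np

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
data = rng.normal(0.0, 1.0, 5000)
data[3000:3100] += 0.5 * template         # weak event hidden in the noise

def normalized_xcorr(data, template):
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(cc.size):
        w = data[i:i + n]
        cc[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
    return cc

cc = normalized_xcorr(data, template)
print(int(np.argmax(cc)), float(cc.max()))  # detection peak near sample 3000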
Measuring Cosmic Expansion and Large Scale Structure with Destiny
NASA Technical Reports Server (NTRS)
Benford, Dominic J.; Lauer, Tod R.
2007-01-01
Destiny is a simple, direct, low cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize 23000 SN Ia events over the redshift interval 0.4
Daily time series evapotranspiration maps for Oklahoma and Texas panhandle
USDA-ARS?s Scientific Manuscript database
Evapotranspiration (ET) is an important process in ecosystems’ water budgets and is closely linked to their productivity. Therefore, regional scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. ...
A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors of different scales, and interactions among them, exist in numerical weather prediction; however, the influence of these errors and their interactions cannot be represented in background error covariance statistics when they are estimated by the leading methods. It is therefore necessary to construct background error covariances that are influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. The information of errors whose scales are larger and smaller than the given ones is then introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics influenced by information of errors at different scales reveals that the background error variances are enhanced, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they show a slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when the information of larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Comparison of both analyses and forecasts from this system shows that the trends in the variation of analysis increments with information of different scale errors introduced are consistent with the trends in the variation of variances and correlations of the background errors. In particular, introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are enhanced significantly at large scales at lower levels, moistening the southern analyses. This humidification contributes to correcting the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
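For orientation, the NMC-style estimate that underlies such statistics can be written compactly: differences between forecasts of different lead times valid at the same time serve as proxy samples of background error, and their sample covariance is taken as the background error covariance. The arrays below are random placeholders.

import numpy as np

rng = np.random.default_rng(1)
n_cases, n_state = 60, 1000
f24 = rng.standard_normal((n_cases, n_state))   # 24-h forecasts valid at common times
f12 = rng.standard_normal((n_cases, n_state))   # 12-h forecasts valid at the same times

d = f24 - f12                                   # forecast differences as error proxies
d -= d.mean(axis=0)                             # remove the sample mean
B = d.T @ d / (n_cases - 1)                     # NMC-style background error covariance
print(B.shape)                                  # (n_state, n_state)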
Shafer, Sarah; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.
2015-01-01
Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas.
Shafer, Sarah L.; Bartlein, Patrick J.; Gray, Elizabeth M.; Pelltier, Richard T.
2015-01-01
Future climate change may significantly alter the distributions of many plant taxa. The effects of climate change may be particularly large in mountainous regions where climate can vary significantly with elevation. Understanding potential future vegetation changes in these regions requires methods that can resolve vegetation responses to climate change at fine spatial resolutions. We used LPJ, a dynamic global vegetation model, to assess potential future vegetation changes for a large topographically complex area of the northwest United States and southwest Canada (38.0–58.0°N latitude by 136.6–103.0°W longitude). LPJ is a process-based vegetation model that mechanistically simulates the effect of changing climate and atmospheric CO2 concentrations on vegetation. It was developed and has been mostly applied at spatial resolutions of 10-minutes or coarser. In this study, we used LPJ at a 30-second (~1-km) spatial resolution to simulate potential vegetation changes for 2070–2099. LPJ was run using downscaled future climate simulations from five coupled atmosphere-ocean general circulation models (CCSM3, CGCM3.1(T47), GISS-ER, MIROC3.2(medres), UKMO-HadCM3) produced using the A2 greenhouse gases emissions scenario. Under projected future climate and atmospheric CO2 concentrations, the simulated vegetation changes result in the contraction of alpine, shrub-steppe, and xeric shrub vegetation across the study area and the expansion of woodland and forest vegetation. Large areas of maritime cool forest and cold forest are simulated to persist under projected future conditions. The fine spatial-scale vegetation simulations resolve patterns of vegetation change that are not visible at coarser resolutions and these fine-scale patterns are particularly important for understanding potential future vegetation changes in topographically complex areas. PMID:26488750
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Solar Confocal Interferometers for Sub-Picometer-Resolution Spectral Filters
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Pietraszewski, Chris; West, Edward A.; Dines, Terence C.
2006-01-01
The confocal Fabry-Perot interferometer allows sub-picometer spectral resolution of Fraunhofer line profiles. Such high spectral resolution is needed to keep pace with the higher spatial resolution of the new set of large-aperture solar telescopes. The line-of-sight spatial resolution derived for line profile inversions would then track the improvements of the transverse spatial scale provided by the larger apertures. The confocal interferometer's unique properties allow a simultaneous increase in both etendue and spectral power. Methods: We have constructed and tested two confocal interferometers. Conclusions: In this paper we compare the confocal interferometer with other spectral imaging filters, provide initial design parameters, show construction details for two designs, and report on the laboratory test results for these interferometers, and propose a multiple etalon system for future testing of these units and to obtain sub-picometer spectral resolution information on the photosphere in both the visible and near-infrared.
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is inappropriate for resolving them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit mass is used, defined as E-tilde = 0.5 <u'_i^2> (summation over i implied), where u'_i represents the three Cartesian components of a mesoscale circulation (the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value). A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
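A small sketch of the diagnostic implied by that definition: on a mesoscale-resolving grid, the large-scale (grid-box) mean is removed and the squared perturbations are averaged, giving one MKE value per large-scale grid box. Field sizes and the box width are assumptions.

import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal((128, 128))        # hypothetical fine-grid horizontal wind components
v = rng.standard_normal((128, 128))
box = 32                                   # fine-grid points per large-scale grid box

def mean_mesoscale_ke(u, v, box):
    e = np.zeros((u.shape[0] // box, u.shape[1] // box))
    for i in range(e.shape[0]):
        for j in range(e.shape[1]):
            us = u[i*box:(i+1)*box, j*box:(j+1)*box]
            vs = v[i*box:(i+1)*box, j*box:(j+1)*box]
            up, vp = us - us.mean(), vs - vs.mean()   # mesoscale perturbations
            e[i, j] = 0.5 * np.mean(up**2 + vp**2)    # E-tilde per unit mass
    return e

print(mean_mesoscale_ke(u, v, box).shape)  # one value per large-scale grid box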
Single-trabecula building block for large-scale finite element models of cancellous bone.
Dagan, D; Be'ery, M; Gefen, A
2004-07-01
Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
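A toy, one-dimensional sketch of weighted recursive bisection in the spirit of the ORB decomposition (weights of 10 for tissue and 1 for non-tissue elements, as in case (b); this is an illustration, not the production code).

def recursive_bisect(weights, n_parts):
    """Split a list of element weights into n_parts contiguous chunks of near-equal total weight."""
    if n_parts == 1:
        return [list(range(len(weights)))]
    half = n_parts // 2
    target = sum(weights) * half / n_parts
    acc, cut = 0.0, 0
    for i, w in enumerate(weights):
        if acc + w > target:
            break
        acc += w
        cut = i + 1
    left = recursive_bisect(weights[:cut], half)
    right = [[cut + k for k in part] for part in recursive_bisect(weights[cut:], n_parts - half)]
    return left + right

weights = [10 if i % 3 else 1 for i in range(32)]    # mixed tissue / non-tissue load weights
parts = recursive_bisect(weights, 4)
print([sum(weights[i] for i in p) for p in parts])   # roughly balanced loads per node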
Influence of Kuroshio Oceanic Eddies on North Pacific Weather Patterns
NASA Astrophysics Data System (ADS)
Ma, X.; Chang, P.; Saravanan, R.; Montuoro, R.; Hsieh, J. S.; Wu, D.; Lin, X.; Wu, L.; Jing, Z.
2016-02-01
High-resolution satellite observations reveal energetic meso-scale ocean eddy activity and a positive correlation between meso-scale sea surface temperature (SST) and surface wind along oceanic frontal zones, such as the Kuroshio and Gulf Stream, suggesting a potential role of meso-scale oceanic eddies in forcing the atmosphere. Using a 27 km horizontal resolution Weather Research and Forecasting (WRF) model forced with observed daily SST at 0.09° spatial resolution during the boreal winter season, two ensembles of 10 WRF simulations, in one of which meso-scale SST variability induced by ocean eddies was suppressed, were conducted in the North Pacific to study the local and remote influence of meso-scale oceanic eddies in the Kuroshio Extension Region (KER) on the atmosphere. Suppression of meso-scale oceanic eddies results in a deep tropospheric response along and downstream of the KER, including a significant decrease (increase) in winter season mean rainfall along the KER (west coast of the US), a reduction of storm genesis in the KER, and a southward shift of the jet stream and North Pacific storm track in the eastern North Pacific. The simulated local and remote rainfall response to meso-scale oceanic eddies in the KER is also supported by observational analysis. A mechanism invoking moist baroclinic instability is proposed as a plausible explanation for the linkage between meso-scale oceanic eddies in the KER and the large-scale atmospheric response in the North Pacific. It is argued that meso-scale oceanic eddies can have a rectified effect on planetary boundary layer moisture, the stability of the lower atmosphere and latent heat release, which in turn affect cyclogenesis. The accumulated effect of the altered storm development downstream further contributes to the equivalent barotropic mean flow change in the eastern North Pacific basin.
NASA Astrophysics Data System (ADS)
Scholz, L. T.; Bierer, B.; Ortiz Perez, A.; Woellenstein, J.; Sachs, T.; Palzer, S.
2016-12-01
The determination of carbon dioxide (CO2) fluxes between ecosystems and the atmosphere is crucial for understanding ecological processes on regional and global scales. High quality data sets with full uncertainty estimates are needed to evaluate model simulations. However, current flux monitoring techniques are unsuitable for providing reliable data over a large area at both a detailed level and an appropriate resolution, at best in combination with a high sampling rate. Currently used sensing technologies, such as non-dispersive infrared (NDIR) gas analyzers, cannot be deployed in large numbers to provide high spatial resolution due to their costs and complex maintenance requirements. Here, we propose a novel CO2 measurement system whose gas sensing unit is made up of low-cost, low-power consuming components only, such as an IR-LED and a photoacoustic detector. The sensor offers a resolution of < 50 ppm in the concentration range of interest, up to 5000 ppm, and an almost linear, fast sensor response of just a few seconds. Since the sensor can be applied in-situ without special precautions, it allows for environmental monitoring in a non-invasive way. Its low energy consumption enables long-term measurements. The low overall costs favor manufacturing in large quantities. This allows the operation of multiple sensors at a reasonable price and thus provides concentration measurements at any desired spatial coverage and at high temporal resolution. With appropriate 3D configuration of the units, vertical and horizontal fluxes can be determined. By applying a closely meshed wireless sensor network, inhomogeneities as well as CO2 sources and sinks in the lower atmosphere can be monitored. In combination with sensors for temperature, pressure and humidity, our sensor paves the way towards the reliable and extensive monitoring of ecosystem-atmosphere exchange rates. The technique can also be easily adapted to other relevant greenhouse gases.
Evaluation of a multi-scale WRF-CAM5 simulation during the 2010 East Asian Summer Monsoon
Campbell, Patrick; Zhang, Yang; Wang, Kai; ...
2017-09-08
The Weather Research and Forecasting model with Chemistry (WRF-Chem) with the physics package of the Community Atmosphere Model Version 5 (CAM5) has been applied at multiple scales over Eastern China (EC) and the Yangtze River Delta (YRD) to evaluate how increased horizontal resolution with physics designed for a coarser resolution climate model impacts aerosols and clouds, and the resulting precipitation characteristics and performance during the 2010 East Asian Summer Monsoon (EASM). Despite large underpredictions in surface aerosol concentrations and aerosol optical depth, there is good spatial agreement with surface observations of chemical predictions, and increasing spatial resolution tends to improve performance. Model bias and normalized root mean square values for precipitation predictions are relatively small, but there are significant differences when comparing modeled and observed probability density functions for precipitation in EC and YRD. Increasing model horizontal resolution tends to reduce model bias and error for precipitation predictions. The surface and column aerosol loading is maximized between about 32°N and 42°N in early to mid-May during the 2010 EASM, and then shifts north while decreasing in magnitude during July and August. Changing model resolution moderately changes the spatiotemporal relationships between aerosols, cloud properties, and precipitation during the EASM, thus demonstrating the importance of model grid resolution in simulating EASM circulation and rainfall patterns over EC and the YRD. In conclusion, results from this work demonstrate the capability and limitations in the aerosol, cloud, and precipitation representation of WRF-CAM5 for regional-scale applications down to relatively fine horizontal resolutions. Further WRF-CAM5 model development and application in this area is needed.
Evaluation of a multi-scale WRF-CAM5 simulation during the 2010 East Asian Summer Monsoon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Patrick; Zhang, Yang; Wang, Kai
The Weather Research and Forecasting model with Chemistry (WRF-Chem) with the physics package of the Community Atmosphere Model Version 5 (CAM5) has been applied at multiple scales over Eastern China (EC) and the Yangtze River Delta (YRD) to evaluate how increased horizontal resolution with physics designed for a coarser resolution climate model impacts aerosols and clouds, and the resulting precipitation characteristics and performance during the 2010 East Asian Summer Monsoon (EASM). Despite large underpredictions in surface aerosol concentrations and aerosol optical depth, there is good spatial agreement with surface observations of chemical predictions, and increasing spatial resolution tends to improve performance. Model bias and normalized root mean square values for precipitation predictions are relatively small, but there are significant differences when comparing modeled and observed probability density functions for precipitation in EC and YRD. Increasing model horizontal resolution tends to reduce model bias and error for precipitation predictions. The surface and column aerosol loading is maximized between about 32°N and 42°N in early to mid-May during the 2010 EASM, and then shifts north while decreasing in magnitude during July and August. Changing model resolution moderately changes the spatiotemporal relationships between aerosols, cloud properties, and precipitation during the EASM, thus demonstrating the importance of model grid resolution in simulating EASM circulation and rainfall patterns over EC and the YRD. Results from this work demonstrate the capability and limitations in the aerosol, cloud, and precipitation representation of WRF-CAM5 for regional-scale applications down to relatively fine horizontal resolutions. Further WRF-CAM5 model development and application in this area is needed.
NASA Astrophysics Data System (ADS)
Hugue, F.; Lapointe, M.; Eaton, B. C.; Lepoutre, A.
2016-01-01
We illustrate an approach to quantify patterns in hydraulic habitat composition and local heterogeneity applicable at low cost over very large river extents, with selectable reach window scales. Ongoing developments in remote sensing and geographical information science massively improve efficiencies in analyzing earth surface features. With the development of new satellite sensors and drone platforms and with the lowered cost of high resolution multispectral imagery, fluvial geomorphology is experiencing a revolution in mapping streams at high resolution. Exploiting the power of aerial or satellite imagery is particularly useful in a riverscape research framework (Fausch et al., 2002), where high resolution sampling of fluvial features and very large coverage extents are needed. This study presents a satellite remote sensing method that requires very limited field calibration data to estimate, over various scales ranging from 1 m to many tens of river kilometers, (i) spatial composition metrics for key hydraulic mesohabitat types and (ii) reach-scale wetted habitat heterogeneity indices such as the hydromorphological index of diversity (HMID). When the purpose is hydraulic habitat characterization applied over long river networks, the proposed method (although less accurate) is much less computationally expensive and less data demanding than two-dimensional computational fluid dynamics (CFD). Here, we illustrate the tools based on a Worldview 2 satellite image of the Kiamika River, near Mont Laurier, Quebec, Canada, specifically over a 17-km river reach below the Kiamika dam. In the first step, a high resolution water depth (D) map is produced from a spectral band ratio (calculated from the multispectral image), calibrated with limited field measurements. Next, based only on known river discharge and estimated cross section depths at the time of image capture, empirically based pseudo-2D hydraulic rules are used to rapidly generate a two-dimensional map of flow velocity (V) over the 17-km Kiamika reach. The joint distribution of the D and V variables over wetted zones is then used to reveal structural patterns in hydraulic habitat availability at patch, reach, and segment scales. Here we analyze 156 bivariate (D, V) density function plots estimated over moving reach windows along the satellite scene extent to extract 14 physical habitat metrics (such as river width, mean and modal depths and velocity, variances and covariance in D and V over 1-m pixels, HMID, entropy). A principal component analysis on the set of metrics is then used to cluster river reaches with regard to similarity in their hydraulic habitat composition and heterogeneity. Applications of this approach can include (i) specific fish habitat detection at riverscape scales (e.g., large areas of riffle spawning beds, deeper pools) for regional management, (ii) studying how river habitat heterogeneity is correlated to fish distribution and (iii) guidance for site location for restoration of key habitats or for post-regulation monitoring of representative reaches of various types.
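A hedged sketch of the band-ratio depth-mapping step: the log-ratio of two bands is regressed against a handful of field depths and the fit is applied image-wide. Band names, coefficients, and sample sizes here are illustrative stand-ins, not the calibration used for the Kiamika scene.

import numpy as np

rng = np.random.default_rng(3)
blue = rng.uniform(0.05, 0.3, 40)          # calibration-point radiances in two bands
green = rng.uniform(0.05, 0.3, 40)
depth_obs = rng.uniform(0.2, 3.0, 40)      # surveyed depths (m) at the same points

ratio = np.log(blue / green)
a, b = np.polyfit(ratio, depth_obs, 1)     # depth ~= a * ln(B_blue / B_green) + b

img_blue = rng.uniform(0.05, 0.3, (500, 500))    # placeholders for the image bands
img_green = rng.uniform(0.05, 0.3, (500, 500))
depth_map = a * np.log(img_blue / img_green) + b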
NASA Astrophysics Data System (ADS)
Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis
2018-02-01
We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver through a one-way coupling approach, feeding waves into the latter via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.
Everaers, Ralf; Rosa, Angelo
2012-01-07
The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.
Scales of snow depth variability in high elevation rangeland sagebrush
NASA Astrophysics Data System (ADS)
Tedesche, Molly E.; Fassnacht, Steven R.; Meiman, Paul J.
2017-09-01
In high elevation semi-arid rangelands, sagebrush and other shrubs can affect transport and deposition of wind-blown snow, enabling the formation of snowdrifts. Datasets from three field experiments were used to investigate the scales of spatial variability of snow depth around big mountain sagebrush ( Artemisia tridentata Nutt.) at a high elevation plateau rangeland in North Park, Colorado, during the winters of 2002, 2003, and 2008. Data were collected at multiple resolutions (0.05 to 25 m) and extents (2 to 1000 m). Finer scale data were collected specifically for this study to examine the correlation between snow depth, sagebrush microtopography, the ground surface, and the snow surface, as well as the temporal consistency of snow depth patterns. Variograms were used to identify the spatial structure and the Moran's I statistic was used to determine the spatial correlation. Results show some temporal consistency in snow depth at several scales. Plot scale snow depth variability is partly a function of the nature of individual shrubs, as there is some correlation between the spatial structure of snow depth and sagebrush, as well as between the ground and snow depth. The optimal sampling resolution appears to be 25-cm, but over a large area, this would require a multitude of samples, and thus a random stratified approach is recommended with a fine measurement resolution of 5-cm.
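A compact sketch of the empirical variogram used to identify spatial structure in such data, applied to a synthetic one-dimensional snow-depth transect; the sampling spacing, bin width, and maximum lag are assumptions.

import numpy as np

rng = np.random.default_rng(4)
x = np.arange(0, 100, 0.25)                       # positions along a transect (m)
depth = np.cumsum(rng.normal(0, 0.02, x.size))    # synthetic, spatially correlated depths (m)

def empirical_variogram(x, z, bin_width=1.0, max_lag=25.0):
    dx = np.abs(x[:, None] - x[None, :])
    dz2 = (z[:, None] - z[None, :]) ** 2
    lags, gammas = [], []
    for lo in np.arange(0.0, max_lag, bin_width):
        mask = (dx >= lo) & (dx < lo + bin_width)
        if mask.any():
            lags.append(lo + bin_width / 2)
            gammas.append(0.5 * dz2[mask].mean())  # semivariance in this lag bin
    return np.array(lags), np.array(gammas)

lag, gamma = empirical_variogram(x, depth)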
NASA Astrophysics Data System (ADS)
DiBianca, Frank A.; Melnyk, Roman; Sambari, Aniket; Jordan, Lawrence M.; Laughter, Joseph S.; Zou, Ping
2000-04-01
A technique called Variable-Resolution X-ray (VRX) detection that greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) is presented. The technique is based on a principle called 'projective compression' that allows the resolution element of a CT detector to scale with the subject or field size. For very large (40 - 50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. Preliminary results from a 576-channel solid-state detector are presented. The detector has a dual-arm geometry and is comprised of CdWO4 scintillator crystals arranged in 24 modules of 24 channels/module. The scintillators are 0.85 mm wide and placed on 1 mm centers. Measurements of signal level, MTF and SNR, all versus detector angle, are presented.
Comparison of VRX CT scanners geometries
NASA Astrophysics Data System (ADS)
DiBianca, Frank A.; Melnyk, Roman; Duckworth, Christopher N.; Russ, Stephan; Jordan, Lawrence M.; Laughter, Joseph S.
2001-06-01
A technique called Variable-Resolution X-ray (VRX) detection greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) as the field size decreases. The technique is based on a principle called `projective compression' that allows both the resolution element and the sampling distance of a CT detector to scale with the subject or field size. For very large (40 - 50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. This paper compares the benefits obtainable with two different VRX detector geometries: the single-arm geometry and the dual-arm geometry. The analysis is based on Monte Carlo simulations and direct calculations. The results of this study indicate that the dual-arm system appears to have more advantages than the single-arm technique.
2017-01-01
The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine mitigating resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
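For orientation, the two headline metrics (modularity and NMI against a ground truth) are straightforward to reproduce with standard tools; the sketch below uses NetworkX's greedy modularity communities as a stand-in for the HAM algorithm on the well-studied karate club network.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()                            # a small, well-studied social network
communities = greedy_modularity_communities(G)        # stand-in for the HAM algorithm
print("modularity:", modularity(G, communities))

truth = [G.nodes[n]["club"] for n in G]               # recorded club split as ground truth
pred = [next(i for i, c in enumerate(communities) if n in c) for n in G]
print("NMI:", normalized_mutual_info_score(truth, pred))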
An efficient photogrammetric stereo matching method for high-resolution images
NASA Astrophysics Data System (ADS)
Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao
2016-12-01
Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload that involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems in different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement. Furthermore, precise airborne laser scanner data of one data set is used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method offers remarkable time and memory savings while maintaining the density and precision of the derived 3D point clouds.
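OpenCV ships a semi-global block matching variant that illustrates the basic SGM workflow on a rectified pair; the synthetic images and parameter values below are placeholders, not the hierarchical SIMD scheme proposed in the paper.

import numpy as np
import cv2

rng = np.random.default_rng(0)
left = rng.integers(0, 256, (480, 640), dtype=np.uint8)   # stand-in for a rectified image pair
right = np.roll(left, -8, axis=1)                          # crude 8-pixel horizontal disparity

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,             # smoothness penalties for small / large disparity jumps
    P2=32 * 5 * 5,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0   # fixed-point to pixels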
Compactified cosmological simulations of the infinite universe
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-06-01
We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range: it is balanced in following a comparable number of high and low k modes, and its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically projected cosmological simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
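The naive O(N^2) pairwise force sum referred to above is simple to express and maps well onto GPUs; a minimal softened-gravity sketch in NumPy (units, particle numbers, and the softening length are arbitrary).

import numpy as np

def direct_accelerations(pos, mass, G=1.0, eps=1e-2):
    """Pairwise gravitational accelerations, O(N^2), with Plummer softening."""
    diff = pos[None, :, :] - pos[:, None, :]               # (N, N, 3) separation vectors r_j - r_i
    inv_d3 = (np.sum(diff**2, axis=-1) + eps**2) ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                          # exclude self-interaction
    return G * np.sum(mass[None, :, None] * diff * inv_d3[:, :, None], axis=1)

rng = np.random.default_rng(5)
pos = rng.standard_normal((512, 3))
mass = np.full(512, 1.0 / 512)
acc = direct_accelerations(pos, mass)
print(acc.shape)                                           # (512, 3)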
On the amplification of magnetic fields in cosmic filaments and galaxy clusters
NASA Astrophysics Data System (ADS)
Vazza, F.; Brüggen, M.; Gheller, C.; Wang, P.
2014-12-01
The amplification of primordial magnetic fields via a small-scale turbulent dynamo during structure formation might be able to explain the observed magnetic fields in galaxy clusters. The magnetization of more tenuous large-scale structures such as cosmic filaments is more uncertain, as it is challenging for numerical simulations to achieve the required dynamical range. In this work, we present magnetohydrodynamical cosmological simulations on large uniform grids to study the amplification of primordial seed fields in the intracluster medium (ICM) and in the warm-hot intergalactic medium (WHIM). In the ICM, we confirm that turbulence caused by structure formation can produce a significant dynamo amplification, even if the amplification is smaller than what is reported in other papers. In the WHIM inside filaments, we do not observe significant dynamo amplification, even though we achieve Reynolds numbers of Re ~ 200-300. The maximal amplification for large filaments is of the order of ~100 for the magnetic energy, corresponding to a typical field of a few nG starting from a primordial weak field of 10^-10 G (comoving). In order to start a small-scale dynamo, we found that a minimum of ~10^2 resolution elements across the virial radius of galaxy clusters was necessary. In filaments we could not find a minimum resolution to set off a dynamo. This stems from the inefficiency of supersonic motions in the WHIM in triggering solenoidal modes and small-scale twisting of magnetic field structures. Magnetic fields this small will make it hard to detect filaments in radio observations.
Estimating planktonic diversity through spatial dominance patterns in a model ocean.
Soccodato, Alice; d'Ovidio, Francesco; Lévy, Marina; Jahn, Oliver; Follows, Michael J; De Monte, Silvia
2016-10-01
In the open ocean, the observation and quantification of biodiversity patterns is challenging. Marine ecosystems are indeed largely composed of microbial planktonic communities whose niches are affected by highly dynamical physico-chemical conditions, and whose observation requires advanced methods for morphological and molecular classification. Optical remote sensing offers an appealing complement to these in-situ techniques. Global-scale coverage at high spatiotemporal resolution is, however, achieved at the cost of limited information on the local assemblage. Here, we use a coupled physical and ecological model ocean simulation to explore one possible metric for comparing measures performed on such different scales. We show that a large part of the local diversity of the virtual plankton ecosystem - corresponding to what is accessible by genomic methods - can be inferred from crude, but spatially extended, information - as conveyed by remote sensing. The Shannon diversity of the local community is indeed highly correlated with a 'seascape' index, which quantifies the surrounding spatial heterogeneity of the most abundant functional group. The error implied in drastically reducing the resolution of the plankton community is shown to be smaller in frontal regions as well as in regions of intermediate turbulent energy. On spatial scales of hundreds of km, patterns of virtual plankton diversity are thus largely sustained by the mixing of communities that occupy adjacent niches. We provide a proof of principle that, in the open ocean, information on the spatial variability of communities can compensate for limited local knowledge, suggesting the possibility of integrating in-situ and satellite observations to monitor biodiversity distribution at the global scale. Copyright © 2016 Elsevier B.V. All rights reserved.
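The local diversity referred to here is the Shannon index of the simulated community; a minimal sketch of its computation (variable names illustrative) is:

```python
import numpy as np

def shannon_diversity(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) of a local community.

    abundances: array of (relative or absolute) abundances per type;
    zeros are ignored. This is the local-diversity metric the abstract
    relates to the spatial 'seascape' heterogeneity index.
    """
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

print(shannon_diversity([10, 10, 10, 10]))  # maximal for 4 types: ln(4) ~ 1.386
print(shannon_diversity([97, 1, 1, 1]))     # dominated community: much lower
```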
Magnetic Doppler imaging of Ap stars
NASA Astrophysics Data System (ADS)
Silvester, J.; Wade, G. A.; Kochukhov, O.; Landstreet, J. D.; Bagnulo, S.
2008-04-01
Historically, the magnetic field geometries of the chemically peculiar Ap stars were modelled in the context of a simple dipole field. However, with the acquisition of increasingly sophisticated diagnostic data, it has become clear that the large-scale field topologies exhibit important departures from this simple model. Recently, new high-resolution circular and linear polarisation spectroscopy has even hinted at the presence of strong, small-scale field structures, which were completely unexpected based on earlier modelling. This project investigates the detailed structure of these strong fossil magnetic fields, in particular the large-scale field geometry as well as small-scale magnetic structures, by mapping the magnetic and chemical surface structure of a selected sample of Ap stars. These maps will be used to investigate the relationship between the local field vector and the local surface chemistry, looking for the influence the field may have on the various chemical transport mechanisms (i.e., diffusion, convection and mass loss). This will lead to better constraints on the origin and evolution of these fields, as well as a refined magnetic field model for Ap stars. Mapping will be performed using high-resolution, high signal-to-noise-ratio time series of spectra in both circular and linear polarisation obtained using the new-generation ESPaDOnS (CFHT, Mauna Kea, Hawaii) and NARVAL (Pic du Midi Observatory) spectropolarimeters. With these data we will perform tomographic inversion of Doppler-broadened Stokes IQUV Zeeman profiles of a large variety of spectral lines using the INVERS10 magnetic Doppler imaging code, simultaneously recovering detailed surface maps of the vector magnetic field and chemical abundances.
NASA Astrophysics Data System (ADS)
Lyon, Vincent; Wosnik, Martin
2013-11-01
Marine hydrokinetic (MHK) energy conversion devices are subject to a wide range of turbulent scales, either due to upstream bathymetry, obstacles and waves, or from wakes of upstream devices in array configurations. The commonly used, robust Acoustic Doppler Current Profilers (ADCP) are well suited for long term flow measurements in the marine environment, but are limited to low sampling rates due to their operational principle. The resulting temporal and spatial resolution is insufficient to measure all turbulence scales of interest to the device, e.g., ``blade-scale turbulence.'' The present study systematically characterizes the spatial and temporal resolution of ADCP, Acoustic Doppler Velocimetry (ADV), and Particle Image Velocimetry (PIV). Measurements were conducted in a large cross section tow tank (3.7m × 2.4m) for several benchmark cases, including low and high turbulence intensity uniform flow as well as in the wake of a cylinder, to quantitatively investigate the flow scales which each of the instruments can resolve. The purpose of the study is to supply data for mathematical modeling to improve predictions from ADCP measurements, which can help lead to higher-fidelity energy resource assessment and more accurate device evaluation, including wake measurements. Supported by NSF-CBET grant 1150797.
NASA Astrophysics Data System (ADS)
Kourafalou, V.; Kang, H.; Perlin, N.; Le Henaff, M.; Lamkin, J. T.
2016-02-01
Connectivity around the South Florida coastal regions and between South Florida and Cuba is largely influenced by (a) local coastal processes and (b) circulation in the Florida Straits, which is controlled by the larger-scale Florida Current variability. Prediction of this physical connectivity is a necessary component for several activities that require ocean forecasts, such as oil spill response, fisheries research, and search and rescue. This requires a predictive system that can accommodate the intense coastal-to-offshore interactions and the linkages to the complex regional circulation. The Florida Straits, South Florida and Florida Keys Hybrid Coordinate Ocean Model is such a regional ocean predictive system, covering a large area over the Florida Straits and the adjacent land areas and representing both coastal and oceanic processes. The real-time ocean forecast system is high resolution (~900 m), embedded in larger-scale predictive models. It includes detailed coastal bathymetry and high-resolution/high-frequency atmospheric forcing, and provides 7-day forecasts, updated daily (see: http://coastalmodeling.rsmas.miami.edu/). The unprecedented high resolution and coastal detail of this system add value to global forecasts through downscaling and allow a variety of applications. Examples will be presented, focusing on the period of a 2015 fisheries cruise around the coastal areas of Cuba, where model predictions helped guide measurements of biophysical connectivity under intense variability of the mesoscale eddy field and subsequent Florida Current meandering.
Dynamic Moss Observed with Hi-C
NASA Technical Reports Server (NTRS)
Alexander, Caroline; Winebarger, Amy; Morton, Richard; Savage, Sabrina
2014-01-01
The High-resolution Coronal Imager (Hi-C), flown on 11 July 2012, has revealed an unprecedented level of detail and substructure within the solar corona. Hi-C imaged a large active region (AR11520) with 0.2-0.3'' spatial resolution and 5.5 s cadence over a 5 minute period. An additional dataset with a smaller FOV, the same resolution, but a higher temporal cadence (1 s) was also taken during the rocket flight. This dataset was centered on a large patch of 'moss' emission that initially seemed to show very little variability. Image processing revealed this region to be much more dynamic than first thought, with numerous bright and dark features observed to appear, move and disappear over the 5 minute observation. Moss is thought to be emission from the upper transition region component of hot loops, so studying its dynamics and the relation between the bright/dark features and underlying magnetic features is important for tying the interaction of the different atmospheric layers together. Hi-C allows us to study the coronal emission of the moss at the smallest scales, while data from SDO/AIA and HMI are used to give information on these structures at different heights/temperatures. Using the high temporal and spatial resolution of Hi-C, the observed moss features were tracked and the distributions of displacements, speeds, and sizes were measured. This allows us to comment on both the physical processes occurring within the dynamic moss and the scales at which these changes are occurring.
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes requires less computation, but this negatively affects the accuracy of model results and restricts the physical realism of the problem. It is therefore imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy is all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. The types and time scales of hydrologic processes that dominate in different parts of the basin also differ; part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch Front. Here we present this modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework enables multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain the solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication, model data structures, and parallel numerical algorithms to obtain high performance.
NASA Astrophysics Data System (ADS)
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
The effect of the collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Re_τ ≈ 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
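For context, the SRT (BGK) collision step compared against MRT in this study has the generic form sketched below in a standard D2Q9 formulation. This is not the study's own code, and the way a subgrid eddy viscosity enters the effective relaxation time is an assumption for illustration only.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order D2Q9 equilibrium distributions.
    rho: (ny, nx) density, u: (ny, nx, 2) velocity."""
    cu = np.einsum('qd,yxd->qyx', C, u)            # c_q . u
    usq = np.einsum('yxd,yxd->yx', u, u)
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def srt_collide(f, rho, u, tau):
    """Single-relaxation-time (BGK) collision: relax f toward equilibrium.
    In an LES, a subgrid eddy viscosity nu_t can be folded into an
    effective relaxation time (tau_eff = tau + 3*nu_t in lattice units),
    which is one common coupling; it is an assumption here, not a
    statement about this particular study's implementation."""
    return f - (f - equilibrium(rho, u)) / tau
```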
Extreme weather: Subtropical floods and tropical cyclones
NASA Astrophysics Data System (ADS)
Shaevitz, Daniel A.
Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan floods and by upper-level potential vorticity disturbances during the Texas/Oklahoma flood. Furthermore, a climate attribution analysis was conducted for the Texas/Oklahoma flood; it is found that anthropogenic climate change was responsible for a small amount of rainfall during the event, but that the intensity of such an event may be greatly increased if it occurs in a future climate. In the second part of this thesis, I examine the ability of high-resolution global atmospheric models to simulate TCs. Specifically, I present an intercomparison of several models' ability to simulate the global characteristics of TCs in the current climate. This is a necessary first step before using these models to project future changes in TCs. Overall, the models were able to reproduce the geographic distribution of TCs reasonably well, with some of the models performing remarkably well. The intensity of TCs varied widely between the models, with some of this difference being due to model resolution.
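The quasi-geostrophic omega equation inverted in the first part of the thesis can be written, in a standard textbook form (the thesis may use an equivalent but differently arranged version), as:

```latex
\left( \nabla^2 + \frac{f_0^2}{\sigma} \frac{\partial^2}{\partial p^2} \right) \omega
 = \frac{f_0}{\sigma} \frac{\partial}{\partial p}
   \left[ \mathbf{V}_g \cdot \nabla \left( \zeta_g + f \right) \right]
 + \frac{1}{\sigma} \nabla^2
   \left[ \mathbf{V}_g \cdot \nabla \left( -\frac{\partial \Phi}{\partial p} \right) \right]
 - \frac{\kappa}{\sigma p} \nabla^2 J
```

Here ω is the pressure vertical velocity, σ the static stability parameter, V_g the geostrophic wind, ζ_g the geostrophic relative vorticity, Φ the geopotential, and J the diabatic heating rate; the last term is the diabatic contribution whose role in triggering convection is examined in the thesis.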
NASA Astrophysics Data System (ADS)
Mizyuk, Artem; Senderov, Maxim; Korotaev, Gennady
2016-04-01
A large number of numerical ocean models have been implemented for the Black Sea basin during the last two decades. They reproduce rather similar structures of the synoptic variability of the circulation. Since the 2000s, numerical studies of the mesoscale structure have been carried out using high performance computing (HPC). With the growing capacity of computing resources it is now possible to reconstruct the Black Sea currents with a spatial resolution of several hundred meters. However, how realistic can these results be? In the proposed study an attempt is made to understand which spatial scales are reproduced by an ocean model of the Black Sea. Simulations are made using the parallel version of NEMO (Nucleus for European Modelling of the Ocean). Two regional configurations with spatial resolutions of 5 km and 2.5 km are described. Comparison of the SST from the simulations at the two spatial resolutions shows clear qualitative differences in spatial structure. Results of the high-resolution simulation are also compared with satellite observations and observation-based products from Copernicus using spatial correlation and spectral analysis. The spatial scales of the correlation functions for simulated and observed SST are rather close and differ considerably from the satellite SST reanalysis. The evolution of the spectral density for modelled SST and the reanalysis showed agreement in the periods of small-scale intensification. Applying spectral analysis to the satellite measurements is complicated by data gaps. The research leading to these results has received funding from the Russian Science Foundation (project № 15-17-20020).
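A minimal sketch of the kind of spatial spectral analysis applied to the simulated and observed SST fields might look as follows; windowing, detrending, and the handling of cloud gaps that complicate the satellite data are deliberately omitted.

```python
import numpy as np

def isotropic_spectrum(field, dx):
    """Azimuthally averaged spatial power spectrum of a 2D field (e.g. SST).

    field : 2D array on a regular grid with spacing dx (km);
    returns wavenumber bin centers (cycles/km) and binned spectral power.
    A sketch only; gap filling and tapering are left out.
    """
    ny, nx = field.shape
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f))**2 / (nx * ny)
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    bins = np.linspace(0, kmag.max(), min(nx, ny) // 2)
    which = np.digitize(kmag.ravel(), bins)
    spec = np.array([power.ravel()[which == i].mean() if np.any(which == i)
                     else 0.0 for i in range(1, len(bins))])
    return 0.5 * (bins[1:] + bins[:-1]), spec
```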
Global climate models (GCMs) are currently used to obtain information about future changes in the large-scale climate. However, such simulations are typically done at coarse spatial resolutions, with model grid boxes on the order of 100 km on a horizontal side. Therefore, techniq...
Automated geographic registration and radiometric correction for UAV-based mosaics
USDA-ARS?s Scientific Manuscript database
Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to s...
Large-scale imaging of cortical network activity with calcium indicators.
Ikegaya, Yuji; Le Bon-Jego, Morgane; Yuste, Rafael
2005-06-01
Bulk loading of calcium indicators has provided a unique opportunity to reconstruct the activity of cortical networks with single-cell resolution. Here we describe the detailed methods of bulk loading of AM dyes we developed and have been improving for imaging with a spinning disk confocal microscope.
NASA Technical Reports Server (NTRS)
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang;
2015-01-01
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, the European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, tight gradients, and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
NASA Astrophysics Data System (ADS)
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat
2015-06-01
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, the European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, tight gradients, and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
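The lognormal fits to the aircraft aerosol size distributions mentioned in both records have the standard single-mode form sketched below; the parameter names are illustrative rather than those used in the released case files.

```python
import numpy as np

def lognormal_mode(D, N_tot, D_g, sigma_g):
    """Number size distribution dN/dlnD of one lognormal aerosol mode.

    D       : particle diameters (um)
    N_tot   : total number concentration of the mode (cm^-3)
    D_g     : geometric median diameter (um)
    sigma_g : geometric standard deviation (> 1)
    Observed distributions are typically fit as sums of a few such modes.
    """
    ln_sig = np.log(sigma_g)
    return (N_tot / (np.sqrt(2 * np.pi) * ln_sig)
            * np.exp(-0.5 * (np.log(D / D_g) / ln_sig) ** 2))

# Example: a single accumulation-mode-like fit evaluated on a diameter grid
D = np.logspace(-2, 1, 200)            # 0.01-10 um
dNdlnD = lognormal_mode(D, N_tot=800.0, D_g=0.12, sigma_g=1.6)
```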
Large-scale anisotropy in stably stratified rotating flows
Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...
2014-08-28
We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.
Initial conditions and modeling for simulations of shock driven turbulent material mixing
Grinstein, Fernando F.
2016-11-17
Here, we focus on the simulation of shock-driven material mixing arising from flow instabilities and initial conditions (IC). Beyond the complex multi-scale resolution issues of shocks and variable-density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier-Stokes (URANS) approach based on developed-equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N^3) operations and O(N^2) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
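For reference, the RPA correlation energy that the RI and imaginary-frequency machinery evaluates can be written in its usual adiabatic-connection fluctuation-dissipation form; the working equations of the paper are a reformulation of this expression in a Gaussian RI basis.

```latex
E_c^{\mathrm{RPA}} = \frac{1}{2\pi} \int_0^{\infty} d\omega\,
  \mathrm{Tr}\!\left[ \ln\!\left( 1 - \chi^0(i\omega)\, v \right) + \chi^0(i\omega)\, v \right]
```

Here χ⁰(iω) is the non-interacting density response function evaluated at imaginary frequency and v the Coulomb interaction.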
The Advanced Pair Telescope (APT) Mission Concept
NASA Technical Reports Server (NTRS)
Hunter, Stanley; Buckley, James H.
2008-01-01
We present a mission concept for the Advanced Pair Telescope (APT), a high-energy gamma-ray instrument with an order of magnitude improvement in sensitivity, a 6 sr field of view, and an angular resolution a factor of 3-10 better than that of GLAST. With its very wide instantaneous field of view and large effective area, this instrument would be capable of detecting GRBs at very large redshifts, would enable very high resolution studies of SNRs and PWN, and could provide hour-scale temporal resolution of transients from many AGN and galactic sources. The APT instrument will consist of a Xe time-projection-chamber tracker that bridges the energy regime between Compton scattering and pair production and will provide an unprecedented improvement in angular resolution; a thick scintillating-fiber tracker/calorimeter that will provide sensitivity and energy resolution to higher energies and will possess a factor of 10 improvement in geometric factor over GLAST; and an anticoincidence detector using scintillator tiles to reject charged particles. After the anticipated 10 years of GLAST operation, the APT instrument would provide continued coverage of the critical high-energy gamma-ray band (30 MeV to 100 GeV), providing an essential component of broad-band multiwavelength studies of the high-energy universe.
NASA Astrophysics Data System (ADS)
Alonso, C.; Benito, R. M.; Tarquis, A. M.
2012-04-01
Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation are largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and based on them the Normalized Difference Vegetation Index, NDVI = (NIR - RED) / (NIR + RED), has been widely used. Given that the characteristics of the spectral bands in RED and NIR vary distinctly from sensor to sensor, NDVI values based on data from different instruments are not directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolution on vegetation indices like the NDVI. The aim of this study is to compare the NDVI values from two different sensors at different spatial resolutions. Scaling behaviour is increasingly understood to be the result of nonlinear dynamic mechanisms repeating scale after scale from large to small scales, leading to non-classical resolution dependencies. In the remote sensing framework, the main characteristic of sensor images is the high local variability in their values. This variability is a consequence of the increase in spatial and radiometric resolution, which implies an increase in complexity that must be characterized. Fractal and multifractal techniques have proven useful for extracting such complexity from remote sensing images and are applied in this study to examine the scaling behaviour of each sensor in terms of generalized fractal dimensions. The study area is located in the provinces of Caceres and Salamanca (western Iberian Peninsula) and covers 32 x 32 km². The altitude in the area varies from 1,560 to 320 m, comprising natural vegetation in the mountains (forest and bushes) and agricultural crops in the valleys. Scaling analyses were applied to the NDVI derived from Landsat-5 and MODIS TERRA over the same region, acquired one day apart (13 and 12 July 2003, respectively). From these images the area of interest was selected, giving 1024 x 1024 pixels for the Landsat image and 128 x 128 pixels for the MODIS image; this implies a resolution of 250 x 250 m for MODIS and 30 x 30 m for Landsat. From the reflectance data obtained from the NIR and RED bands, NDVI was calculated for each image, focusing this study on the 0.2 to 0.5 range of values. Once both NDVI fields were obtained, several fractal dimensions were estimated for each one by segmenting the values into 0.20-0.25, 0.25-0.30, and so on up to 0.45-0.50. In all scaling analyses the scale length was expressed in meters, not in pixels, to make the comparison between both sensors possible. Results are discussed. Acknowledgements: This work has been supported by the Spanish MEC under Projects No. AGL2010-21501/AGR, MTM2009-14621 and i-MATH No. CSD2006-00032.
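A minimal sketch of the two ingredients described above, the NDVI computation and a simple box-counting dimension for one NDVI class, is given below. The study uses the whole family of generalized fractal dimensions; only the simplest (q = 0, capacity) member is sketched here, and all names are illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - RED) / (NIR + RED)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)

def box_counting_dimension(mask, box_sizes):
    """Box-counting (capacity) dimension of the pixels where mask is True.

    mask      : 2D boolean array (e.g. NDVI in the 0.20-0.25 range)
    box_sizes : box edge lengths in pixels, e.g. [2, 4, 8, 16, 32]
    Returns the slope of log N(s) versus log(1/s); assumes every box
    size captures at least one occupied box.
    """
    counts = []
    for s in box_sizes:
        ny, nx = mask.shape
        trimmed = mask[:ny - ny % s, :nx - nx % s]
        blocks = trimmed.reshape(ny // s, s, nx // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(np.asarray(counts)), 1)
    return slope
```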
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validation. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites as the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in the cloud populations but also the subtle variations observed by the different radars. The comparisons also reveal common deficiencies in the CRM simulations: they underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validation with multiple radars and models also enables quantitative comparisons in CRM sensitivity studies using different large-scale forcings, microphysical schemes and parameters, resolutions, and domain sizes. In terms of the temporal variation of radar echo-top heights, many model sensitivity tests have better correlations than the radar/model comparisons, indicating robustness in model performance on this aspect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.
NASA Astrophysics Data System (ADS)
Trinks, Immo; Neubauer, Wolfgang; Hinterleitner, Alois; Kucera, Matthias; Löcker, Klaus; Nau, Erich; Wallner, Mario; Gabler, Manuel; Zitz, Thomas
2014-05-01
Over the past three years the Ludwig Boltzmann Institute for Archaeological Prospection and Virtual Archaeology (http://archpro.lbg.ac.at), founded in Vienna in 2010, has, in collaboration with its ten European partner organizations, made considerable progress in the development and application of near-surface geophysical survey technology and methodology, mapping square kilometres rather than hectares at unprecedented spatial resolution. The use of multiple novel motorized multichannel GPR and magnetometer systems (both Förster/Fluxgate and Cesium type) in combination with advanced, centimetre-precise positioning systems (robotic total stations and real-time kinematic GPS) permitting efficient navigation in open fields has resulted in comprehensive blanket-coverage archaeological prospection surveys of important cultural heritage sites, such as the landscape surrounding Stonehenge in the framework of the Stonehenge Hidden Landscapes Project, the mapping of the World Cultural Heritage site Birka-Hovgården in Sweden, and the detailed investigation of the Roman urban landscape of Carnuntum near Vienna. Efficient state-of-the-art archaeological prospection survey solutions require adequate fieldwork methodologies and appropriate data processing tools for timely quality control of the data in the field and large-scale data visualisations after arrival back in the office. The processed and optimized visualisations of the geophysical measurement data provide the basis for subsequent archaeological interpretation. Integration of the high-resolution geophysical prospection data with remote sensing data acquired through aerial photography, airborne laser and hyperspectral scanning, terrestrial laser scanning, or detailed digital terrain models derived through photogrammetric methods permits improved understanding and spatial analysis as well as the preparation of comprehensible presentations for the stakeholders (scientific community, cultural heritage managers, the public). Of paramount importance for large-scale high-resolution data acquisition with motorized survey systems are exact data positioning and the removal of any measurement effects caused by the survey vehicle. The large amount of generated data requires efficient semi-automatic and automated tools for the extraction and rendering of important information. Semi-automatic data segmentation and classification precede the detailed 3D archaeological interpretation, which still requires considerable manual input. We present the latest technological and methodological developments in motorized near-surface GPR and magnetometer prospection as well as application examples from different iconic European archaeological sites.
Forced Imbibition in Porous Media: A Fourfold Scenario
NASA Astrophysics Data System (ADS)
Odier, Céleste; Levaché, Bertrand; Santanach-Carreras, Enric; Bartolo, Denis
2017-11-01
We establish a comprehensive description of the patterns formed when a wetting liquid displaces a viscous fluid confined in a porous medium. Building on model microfluidic experiments, we evidence four imbibition scenarios all yielding different large-scale morphologies. Combining high-resolution imaging and confocal microscopy, we show that they originate from two liquid-entrainment transitions and a Rayleigh-Plateau instability at the pore scale. Finally, we demonstrate and explain the long-time coarsening of the resulting patterns.
NASA Astrophysics Data System (ADS)
Markakis, K.; Valari, M.; Colette, A.; Sanchez, O.; Perrussel, O.; Honore, C.; Vautard, R.; Klimont, Z.; Rao, S.
2014-07-01
Ozone and PM2.5 concentrations over the city of Paris are modeled with the CHIMERE air-quality model at 4 km × 4 km horizontal resolution for two future emission scenarios. A high-resolution (1 km × 1 km) emission projection to 2020 for the greater Paris region, developed by local experts (AIRPARIF), is further extended to the year 2050 based on regional-scale emission projections developed by the Global Energy Assessment. Model evaluation is performed based on a 10-year control simulation. Ozone is in very good agreement with measurements, while PM2.5 is underestimated by 20% over the urban area, mainly due to a large wet bias in wintertime precipitation. A significant increase of maximum ozone relative to present-day levels over Paris is modeled under the "business-as-usual" scenario (+7 ppb), while a more optimistic "mitigation" scenario leads to a moderate ozone decrease (-3.5 ppb) in 2050. These results differ substantially from previous regional-scale projections, in which 2050 ozone was found to decrease under both future scenarios. A sensitivity analysis showed that this difference arises because ozone formation over Paris in the current urban-scale study is driven by volatile organic compound (VOC)-limited chemistry, whereas at the regional scale ozone formation occurs under NOx-sensitive conditions. This explains why the sharp NOx reductions implemented in the future scenarios have a different effect on ozone projections at different scales. In rural areas, projections at both scales yield similar results, showing that the longer-timescale processes of emission transport and ozone formation are less sensitive to model resolution. PM2.5 concentrations decrease by 78% and 89% under the business-as-usual and mitigation scenarios, respectively, compared to the present-day period. The reduction is much more prominent over the urban part of the domain due to the effective reductions of road transport and residential emissions, resulting in the smoothing of the large urban increment modeled in the control simulation.
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models due to their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs) and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment-scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulating ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, this approach has not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.
Anisotropy of the Cosmic Microwave Background Radiation on Large and Medium Angular Scales
NASA Technical Reports Server (NTRS)
Houghton, Anthony; Timbie, Peter
1998-01-01
This grant has supported work at Brown University on measurements of the 2.7 K Cosmic Microwave Background Radiation (CMB). The goal has been to characterize the spatial variations in the temperature of the CMB in order to understand the formation of large-scale structure in the universe. We have concurrently pursued two measurements using millimeter-wave telescopes carried aloft by scientific balloons. Both systems operate over a range of wavelengths, chosen to allow spectral removal of foreground sources such as the atmosphere, Galaxy, etc. The angular resolution of approx. 25 arcminutes is near the angular scale at which the most structure is predicted by current models to be visible in the CMB angular power spectrum. The main goal is to determine the angular scale of this structure; in turn we can infer the density parameter, Omega, for the universe as well as other cosmological parameters, such as the Hubble constant.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin ( Reln )-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
Recent Developments in Transition-Edge Strip Detectors for Solar X-Rays
NASA Technical Reports Server (NTRS)
Rausch, Adam J.; Deiker, Steven W.; Hilton, Gene; Irwin, Kent D.; Martinez-Galarce, Dennis S.; Shing, Lawrence; Stern, Robert A.; Ullom, Joel N.; Vale, Leila R.
2008-01-01
LMSAL and NIST are developing position-sensitive x-ray strip detectors based on Transition Edge Sensor (TES) microcalorimeters optimized for solar physics. By combining high spectral (E/ΔE ≈ 1600) and temporal (single-photon Δt ≈ 10 μs) resolution with imaging capabilities, these devices will be able to study high-temperature (>10 MK) x-ray lines as never before. Diagnostics from these lines should provide significant new insight into the physics of both microflares and the early stages of flares. Previously, the large size of traditional TESs, along with the heat loads associated with wiring large arrays, presented obstacles to using these cryogenic detectors for solar missions. Implementing strip detector technology at small scales, however, addresses both issues: here, a line of substantially smaller effective pixels requires only two TESs, decreasing both the total array size and the wiring requirements for the same spatial resolution. Early results show energy resolutions of ΔE_FWHM ≈ 30 eV and spatial resolutions of approximately 10-15 μm, suggesting the strip-detector concept is viable.
Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Marks, D. G.; Gurney, R. J.
2009-12-01
The spatial organization and scaling relationships of snow distribution in mountain environs is ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations to their scaling. A base model simulation characterized these processes with 10m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - were independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.
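Degrading a single forcing field from the 10 m base resolution to 1 km can be done by simple block averaging, as sketched below; the actual study may use process-appropriate aggregation operators rather than a plain mean.

```python
import numpy as np

def aggregate(field, factor):
    """Degrade a fine-resolution forcing field by block averaging.

    field  : 2D array at the base resolution (here 10 m)
    factor : integer coarsening factor (100 for 10 m -> 1 km)
    This mimics the degradation experiment in the simplest possible way;
    edges that do not divide evenly by the factor are trimmed.
    """
    ny, nx = field.shape
    f = field[:ny - ny % factor, :nx - nx % factor]
    return f.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))
```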
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and the low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They decrease the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale, or upscaled, modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times, which are matched with the MRMT model. In predicting the spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches the corresponding fine-scale reference solution reasonably well. For higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases; this is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through the flow alignment. Thus, the block-scale transverse dispersivities remain of a magnitude similar to the local ones and do not have to represent macroscopic uncertainty. The flow-aligned blocks also minimize numerical dispersion when solving the large-scale transport problem.
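A minimal sketch of the temporal-moment step, computing low-order moments of block-wise particle arrival times that would then be matched against an MRMT block model (the matching itself is not shown), is:

```python
import numpy as np

def temporal_moments(arrival_times, max_order=2):
    """Raw and central temporal moments of block-wise particle arrival times.

    The upscaling approach matches such moments (mean arrival time,
    spread of arrivals, ...) with the parameters of a multi-rate mass
    transfer (MRMT) block model.
    """
    t = np.asarray(arrival_times, dtype=float)
    raw = [np.mean(t ** k) for k in range(max_order + 1)]        # k-th raw moments
    mean = raw[1]
    central = [np.mean((t - mean) ** k) for k in range(max_order + 1)]
    return {'raw': raw, 'central': central}

# Example: arrival times with early bulk arrival plus a long tail
t = np.concatenate([np.random.default_rng(1).gamma(2.0, 1.0, 900),
                    np.random.default_rng(2).gamma(2.0, 10.0, 100)])
print(temporal_moments(t)['raw'][1])   # mean arrival time
```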
Observations of Seafloor Roughness in a Tidally Modulated Inlet
NASA Astrophysics Data System (ADS)
Lippmann, T. C.; Hunt, J.
2014-12-01
The vertical structure of shallow water flows is influenced by the presence of a bottom boundary layer, which spans the water column for long-period waves or mean flows. The nature of the boundary is determined in part by the roughness elements that make up the seafloor, including sometimes complex undulations associated with regular and irregular bedforms whose scales range over several orders of magnitude, from orbital wave ripples (10^-1 m) to mega-ripples (10^0 m) and even larger features (10^1-10^3 m) such as sand waves, bars, and dunes. Modeling efforts often parameterize the effects of roughness elements on the flow field, depending on the complexity of the boundary layer formulation. The problem is exacerbated by the transient nature of bedforms and their large spatial extent and variability. This is particularly important in high-flow areas with large sediment transport, such as tidally dominated sandy inlets like New River Inlet, NC. Quantification of small-scale seafloor variability over large spatial areas requires the use of mobile platforms that can measure with fine-scale (order cm) accuracy in wide swaths. The problem is difficult in shallow water where waves and currents are large and water clarity is often limited. In this work, we present results from bathymetric surveys obtained with the Coastal Bathymetry Survey System, a personal watercraft equipped with an Imagenex multibeam acoustic echosounder and an Applanix POS-MV 320 GPS-aided inertial measurement unit. This system is able to measure shallow-water seafloor bathymetry and backscatter intensity with very fine-scale (10^-1 m) resolution over relatively large scales (10^3 m) in the presence of high waves and currents. Wavenumber spectra show that the noise floor of the resolved multibeam bathymetry is on the order of 2.5-5 cm in amplitude, depending on water depths ranging 2-6 m, and about 30 cm in wavelength. Seafloor roughness elements are estimated from wavenumber spectra across the inlet from bathymetric maps of the seafloor obtained at 10-25 cm horizontal resolution. Implications of the effects of the bottom variability on the vertical structure of the currents will be discussed. This work was supported by ONR and NOAA.
Influence of resolution in irrigated area mapping and area estimation
Velpuri, N.M.; Thenkabail, P.S.; Gumma, M.K.; Biradar, C.; Dheeravath, V.; Noojipady, P.; Yuanjie, L.
2009-01-01
The overarching goal of this paper was to determine how irrigated areas change with resolution (or scale) of imagery. Specific objectives investigated were to (a) map irrigated areas using four distinct spatial resolutions (or scales), (b) determine how irrigated areas change with resolutions, and (c) establish the causes of differences in resolution-based irrigated areas. The study was conducted in the very large Krishna River basin (India), which has a high degree of formal contiguous and informal fragmented irrigated areas. The irrigated areas were mapped using satellite sensor data at four distinct resolutions: (a) NOAA AVHRR Pathfinder 10,000 m, (b) Terra MODIS 500 m, (c) Terra MODIS 250 m, and (d) Landsat ETM+ 30 m. The proportion of irrigated areas relative to the Landsat 30 m derived irrigated areas (9.36 million hectares for the Krishna basin) was (a) 95 percent using MODIS 250 m, (b) 93 percent using MODIS 500 m, and (c) 86 percent using AVHRR 10,000 m. In this study, it was found that the precise location of the irrigated areas was better established using finer spatial resolution data. A strong relationship (R² = 0.74 to 0.95) was observed between irrigated areas determined using the various resolutions. This study proved the hypothesis that "the finer the spatial resolution of the sensor used, greater was the irrigated area derived," since at finer spatial resolutions, fragmented areas are detected better. Accuracies and errors were established consistently for three classes (surface water irrigated, ground water/conjunctive use irrigated, and nonirrigated) across the four resolutions mentioned above. The results showed that the Landsat data provided significantly higher overall accuracies (84 percent) when compared to MODIS 500 m (77 percent), MODIS 250 m (79 percent), and AVHRR 10,000 m (63 percent). © 2009 American Society for Photogrammetry and Remote Sensing.
An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-06-01
The lens of the eye is a radiosensitive tissue, with cataract formation being the major concern. Recently reduced recommended dose limits for the lens of the eye have made understanding the dose to this tissue increasingly important. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too coarse to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
Mapping nonlinear receptive field structure in primate retina at single cone resolution
Li, Peter H; Greschner, Martin; Gunning, Deborah E; Mathieson, Keith; Sher, Alexander; Litke, Alan M; Paninski, Liam
2015-01-01
The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits. DOI: http://dx.doi.org/10.7554/eLife.05241.001 PMID:26517879
Image interpolation used in three-dimensional range data compression.
Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian
2016-05-20
Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data increasingly easy. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution and thereby further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, the scaled-up image is decoded, and the 3D range data are recovered from the decoded result. Experimental results show that the proposed method can further reduce the data size while maintaining a low error rate.
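A minimal sketch of the down-sample/up-sample step is shown below, using scipy's spline interpolation as a stand-in for the paper's interpolation algorithm; the down-sampling factor, spline order and synthetic depth map are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import zoom

def compress_and_restore(range_image, factor=0.25, order=3):
    """Shrink an encoded range image and restore it by interpolation.

    factor : linear down-sampling factor (0.25 keeps 1/16 of the pixels)
    order  : spline order used for both shrinking and up-scaling
    """
    low_res = zoom(range_image, factor, order=order)           # store / transmit this
    scale_back = np.array(range_image.shape) / np.array(low_res.shape)
    restored = zoom(low_res, scale_back, order=order)
    return low_res, restored

# Synthetic smooth depth map standing in for the fringe-encoded 3D range data
y, x = np.mgrid[0:256, 0:256]
depth = np.sin(x / 30.0) + np.cos(y / 40.0)

low, back = compress_and_restore(depth)
rms_error = np.sqrt(np.mean((back - depth) ** 2))
print(low.shape, back.shape, f"RMS error = {rms_error:.4f}")
```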
What is the effect of LiDAR-derived DEM resolution on large-scale watershed model results?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ping Yang; Daniel B. Ames; Andre Fonseca
This paper examines the effect of raster cell size on hydrographic feature extraction and hydrological modeling using LiDAR-derived DEMs. LiDAR datasets for three experimental watersheds were converted to DEMs at various cell sizes. Watershed boundaries and stream networks were delineated from each DEM and were compared to reference data. Hydrological simulations were conducted and the outputs were compared. Smaller cell size DEMs consistently resulted in smaller differences between DEM-delineated features and reference data. However, only minor differences were found between streamflow simulations from a lumped watershed model run at a daily time step and aggregated to annual averages. These findings indicate that while higher-resolution DEM grids may result in a more accurate representation of terrain characteristics, such variations do not necessarily improve watershed-scale simulation modeling. Hence the additional expense of generating high-resolution DEMs for the purpose of watershed modeling at daily or longer time steps may not be warranted.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
1994-01-01
Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.
Exploring connectivity with large-scale Granger causality on resting-state functional MRI.
DSouza, Adora M; Abidin, Anas Z; Leistritz, Lutz; Wismüller, Axel
2017-08-01
Large-scale Granger causality (lsGC) is a recently developed, resting-state functional MRI (fMRI) connectivity analysis approach that estimates multivariate voxel-resolution connectivity. Unlike most commonly used multivariate approaches, which establish coarse-resolution connectivity by aggregating voxel time series to avoid an underdetermined problem, lsGC estimates voxel-resolution, fine-grained connectivity by incorporating an embedded dimension reduction. We investigate the application of lsGC to realistic fMRI simulations, modeling the smoothing of neuronal activity by the hemodynamic response function and repetition time (TR), and to empirical resting-state fMRI data. Subsequently, functional subnetworks are extracted from lsGC connectivity measures for both datasets and validated quantitatively. We also provide guidelines for selecting the lsGC free parameters. Results indicate that lsGC reliably recovers the underlying network structure with an area under the receiver operating characteristic curve (AUC) of 0.93 at TR=1.5 s for a 10-min session of fMRI simulations. Furthermore, subnetworks of closely interacting modules are recovered from the aforementioned lsGC networks. Results on empirical resting-state fMRI data demonstrate recovery of the visual and motor cortex in close agreement with spatial maps obtained from (i) a visuo-motor fMRI stimulation task-sequence (Accuracy=0.76) and (ii) independent component analysis (ICA) of resting-state fMRI (Accuracy=0.86). Compared with the conventional Granger causality approach (AUC=0.75), lsGC produces better network recovery on fMRI simulations. Furthermore, the conventional approach cannot recover functional subnetworks from empirical fMRI data, since quantifying voxel-resolution connectivity is not possible as a consequence of the underdetermined problem. Functional network recovery from fMRI data suggests that lsGC gives useful insight into connectivity patterns from resting-state fMRI at a multivariate voxel resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
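The core idea, conditioning Granger-style prediction-error comparisons on a dimension-reduced version of the full voxel ensemble, can be caricatured as follows. This is only a schematic of "dimension reduction plus prediction-error ratio", not the published lsGC algorithm; the use of scikit-learn PCA, the single lag, the component count and all helper names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def residual_variance(y, past):
    """Residual variance of a least-squares one-step prediction of y from lagged predictors."""
    beta, *_ = np.linalg.lstsq(past, y, rcond=None)
    return np.var(y - past @ beta)

def influence_score(data, source, target, n_components=5):
    """How much does removing `source` from the compressed ensemble degrade
    the one-step prediction of `target`? (log variance ratio, lag 1)"""
    z_full = PCA(n_components).fit_transform(data)
    z_reduced = PCA(n_components).fit_transform(np.delete(data, source, axis=1))
    y = data[1:, target]
    v_full = residual_variance(y, z_full[:-1])
    v_reduced = residual_variance(y, z_reduced[:-1])
    return np.log(v_reduced / v_full)

# Toy data: voxel 0 drives voxel 3 with a one-sample delay
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 20))
data[1:, 3] += 0.8 * data[:-1, 0]
print("0 -> 3:", influence_score(data, source=0, target=3))
print("5 -> 3:", influence_score(data, source=5, target=3))
```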
Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.; ...
2017-07-06
Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s⁻¹ in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size. Based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.
Xenia Mission: Spacecraft Design Concept
NASA Technical Reports Server (NTRS)
Hopkins, R. C.; Johnson, C. L.; Kouveliotou, C.; Jones, D.; Baysinger, M.; Bedsole, T.; Maples, C. C.; Benfield, P. J.; Turner, M.; Capizzo, P.;
2009-01-01
The proposed Xenia mission will, for the first time, chart the chemical and dynamical state of the majority of baryonic matter in the universe. Using high-resolution spectroscopy, Xenia will collect essential information from major tracers of the formation and evolution of structures from the early universe to the present time. The mission is based on innovative instrumental and observational approaches: observing gamma-ray bursts (GRBs) with fast reaction and high spectral resolution, which enables the study of their (star-forming) environments from the dark ages to the local universe and the use of GRBs as backlights of large-scale cosmological structures; and observing and surveying extended sources with high sensitivity using two wide field-of-view X-ray telescopes, one with high angular resolution and the other with high spectral resolution.
Recent assimilation developments of FOAM the Met Office ocean forecast system
NASA Astrophysics Data System (ADS)
Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert
2015-04-01
FOAM is the Met Office's operational ocean forecasting system. This system comprises a range of models, from a 1/4 degree resolution global model to 1/12 degree resolution regional models and shelf-seas models at 7 km resolution. The system is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE, and the NEMOVAR assimilation run in 3D-VAR FGAT mode. Work is ongoing to transition to a higher resolution global ocean model at 1/12 degrees and to run FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which, in the extra-tropics, uses geostrophic balance to produce velocity increments that balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius. This means that observations in the extra-tropics influence the model only on short length scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients. A related scheme which varies the correlation length scale in the shelf seas is also described.
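The idea of a multiple correlation length-scale scheme can be illustrated with a toy correlation function built as a weighted sum of a short-scale and a long-scale component, so that an observation also projects information onto larger scales. The Gaussian shapes, the 30% weight and the length-scale values below are illustrative assumptions, not the NEMOVAR settings.

```python
import numpy as np

def two_scale_correlation(distance_km, short_scale_km, long_scale_km, long_weight=0.3):
    """Weighted sum of two Gaussian correlation functions: a short scale
    (e.g. tied to the local Rossby radius) plus a longer scale that spreads
    observation information further to correct large-scale biases."""
    short = np.exp(-0.5 * (distance_km / short_scale_km) ** 2)
    long_ = np.exp(-0.5 * (distance_km / long_scale_km) ** 2)
    return (1.0 - long_weight) * short + long_weight * long_

d = np.array([0.0, 50.0, 150.0, 400.0])
print(two_scale_correlation(d, short_scale_km=50.0, long_scale_km=400.0))
```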
USGS Releases New Digital Aerial Products
,
2005-01-01
The U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) has initiated distribution of digital aerial photographic products produced by scanning or digitizing film from its historical aerial photography film archive. This archive, located in Sioux Falls, South Dakota, contains thousands of rolls of film that contain more than 8 million frames of historic aerial photographs. The largest portion of this archive consists of original film acquired by Federal agencies from the 1930s through the 1970s to produce 1:24,000-scale USGS topographic quadrangle maps. Most of this photography is reasonably large scale (USGS photography ranges from 1:8,000 to 1:80,000) to support the production of the maps. Two digital products are currently available for ordering: high-resolution scanned products and medium-resolution digitized products.
Development of dynamic kinetic resolution on large scale for (±)-1-phenylethylamine.
Thalén, Lisa K; Bäckvall, Jan-E
2010-09-13
Candida antarctica lipase B (CALB) and racemization catalyst 4 were combined in the dynamic kinetic resolution (DKR) of (±)-1-phenylethylamine (1). Several reaction parameters have been investigated to modify the method for application on multigram scale. A comparison of isopropyl acetate and alkyl methoxyacetates as acyl donors was carried out. It was found that lower catalyst loadings could be used to obtain (R)-2-methoxy-N-(1-phenylethyl)acetamide (3) in good yield and high ee when alkyl methoxyacetates were used as acyl donors compared to when isopropyl acetate was used as the acyl donor. The catalyst loading could be decreased to 1.25 mol % Ru-catalyst 4 and 10 mg CALB per mmol 1 when alkyl methoxyacetates were used as the acyl donor.
NASA Astrophysics Data System (ADS)
Comas, X.; Wright, W. J.; Hynek, S. A.; Ntarlagiannis, D.; Terry, N.; Job, M. J.; Fletcher, R. C.; Brantley, S.
2017-12-01
Previous studies in the Rio Icacos watershed in the Luquillo Mountains (Puerto Rico) have shown that regolith materials develop rapidly from the alteration of quartz diorite bedrock, creating a blanket on top of the bedrock whose thickness decreases with proximity to the knickpoint. The watershed is also characterized by a system of heterogeneous fractures that likely drive bedrock weathering and the formation of corestones and associated spheroidal fracturing and rindlets. Previous efforts to characterize the spatial distribution of fractures were based on aerial images that did not account for the architecture of the critical zone below the surface. In this study we use an array of near-surface geophysical methods at multiple scales to better understand how the spatial distribution and density of fractures vary with topography and proximity to the knickpoint. Large km-scale surveys using ground penetrating radar (GPR), terrain conductivity, and capacitively coupled resistivity were combined with smaller-scale surveys (10-100 m) using electrical resistivity imaging (ERI) and shallow seismics, and were directly constrained with boreholes from previous studies. Geophysical results were compared to theoretical models of compressive stress due to gravity and regional compression, and were consistent in describing increased dilation of fractures with proximity to the knickpoint. This study shows the potential of multidisciplinary approaches to model critical zone processes at multiple scales of measurement and high spatial resolution. The approach can be particularly efficient at large km scales when applying geophysical methods that allow for rapid data acquisition (i.e., walking pace) at high spatial resolution (i.e., cm scales).
Investigating the scale-adaptivity of a shallow cumulus parameterization scheme with LES
NASA Astrophysics Data System (ADS)
Brast, Maren; Schemann, Vera; Neggers, Roel
2017-04-01
In this study we investigate the scale-adaptivity of a new parameterization scheme for shallow cumulus clouds in the gray zone. The Eddy-Diffusivity Multiple Mass-Flux (or ED(MF)n ) scheme is a bin-macrophysics scheme, in which subgrid transport is formulated in terms of discretized size densities. While scale-adaptivity in the ED-component is achieved using a pragmatic blending approach, the MF-component is filtered such that only the transport by plumes smaller than the grid size is maintained. For testing, ED(MF)n is implemented in a large-eddy simulation (LES) model, replacing the original subgrid-scheme for turbulent transport. LES thus plays the role of a non-hydrostatic testing ground, which can be run at different resolutions to study the behavior of the parameterization scheme in the boundary-layer gray zone. In this range convective cumulus clouds are partially resolved. We find that at high resolutions the clouds and the turbulent transport are predominantly resolved by the LES, and the transport represented by ED(MF)n is small. This partitioning changes towards coarser resolutions, with the representation of shallow cumulus clouds becoming exclusively carried by the ED(MF)n. The way the partitioning changes with grid-spacing matches the results of previous LES studies, suggesting some scale-adaptivity is captured. Sensitivity studies show that a scale-inadaptive ED component stays too active at high resolutions, and that the results are fairly insensitive to the number of transporting updrafts in the ED(MF)n scheme. Other assumptions in the scheme, such as the distribution of updrafts across sizes and the value of the area fraction covered by updrafts, are found to affect the location of the gray zone.
Algebraic dynamic multilevel method for compositional flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Cusini, Matteo; Fryer, Barnaby; van Kruijsdijk, Cor; Hajibeygi, Hadi
2018-02-01
This paper presents the algebraic dynamic multilevel method (ADM) for compositional flow in three-dimensional heterogeneous porous media in the presence of capillary and gravitational effects. As a significant advancement over the ADM for immiscible flows (Cusini et al., 2016) [33], here mass conservation equations are solved along with k-value based thermodynamic equilibrium equations using a fully-implicit (FIM) coupling strategy. Two different fine-scale compositional formulations are considered: (1) the natural-variables and (2) the overall-compositions formulation. At each Newton iteration the fine-scale FIM Jacobian system is mapped to a dynamically defined (in space and time) multilevel nested grid. The appropriate grid resolution is chosen based on the contrast of user-defined fluid properties and on the presence of specific features (e.g., well source terms). Consistent mapping between different resolutions is performed by means of sequences of restriction and prolongation operators. While finite-volume restriction operators are employed to ensure mass conservation at all resolutions, various prolongation operators are considered. In particular, different interpolation strategies can be used for the different primary variables, and multiscale basis functions are chosen as pressure interpolators so that fine-scale heterogeneities are accurately accounted for across different resolutions. Several numerical experiments are conducted to analyse the accuracy, efficiency and robustness of the method for both 2D and 3D domains. Results show that ADM provides accurate solutions while employing only a fraction of the number of grid cells used in fine-scale simulations. As such, it presents a promising approach for large-scale simulations of multiphase flow in heterogeneous reservoirs with complex non-linear fluid physics.
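The restriction/prolongation idea can be sketched in one dimension: a finite-volume restriction sums the conserved quantity from fine cells into their parent coarse cell, and a (here piecewise-constant) prolongation maps a coarse solution back onto the fine grid. This is a generic illustration with assumed names and a coarsening ratio of 4, not the ADM operators, which use multiscale basis functions as pressure interpolators.

```python
import numpy as np

def restrict(fine_cell_masses, ratio):
    """Finite-volume restriction: sum `ratio` fine cells into each coarse cell,
    so the total mass is conserved exactly."""
    return fine_cell_masses.reshape(-1, ratio).sum(axis=1)

def prolong(coarse_cell_masses, ratio):
    """Piecewise-constant prolongation: split each coarse-cell mass equally
    over its `ratio` children (the simplest mass-conserving interpolator)."""
    return np.repeat(coarse_cell_masses / ratio, ratio)

fine = np.random.rand(32)                     # mass per fine cell
coarse = restrict(fine, ratio=4)
back = prolong(coarse, ratio=4)

assert np.isclose(fine.sum(), coarse.sum())   # conservation under restriction
assert np.isclose(coarse.sum(), back.sum())   # conservation under prolongation
print(coarse.shape, back.shape)
```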
Altitude profiles of temperature from 4 to 80 km over the tropics from MST radar and lidar
NASA Astrophysics Data System (ADS)
Parameswaran, K.; Sasi, M. N.; Ramkumar, G.; Nair, P. R.; Deepa, V.; Murthy, B. V. K.; Nayar, S. R. P.; Revathy, K.; Mrudula, G.; Satheesan, K.; Bhavanikumar, Y.; Sivakumar, V.; Raghunath, K.; Rajendraprasad, T.; Krishnaiah, M.
2000-10-01
Using the ground-based techniques of MST radar and lidar, temperature profiles over the entire height range of 4 to 75 km are obtained for the first time at a tropical location. The temporal resolution of the profiles is ~1 h at the lower altitudes and 12.5 min at the higher altitudes, and the altitude resolution is ~300 m. The errors involved in the derived values are presented. Preliminary analysis of temperature variations over one night revealed fluctuations with characteristics resembling those of large-scale gravity waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pau, G. S. H.; Bisht, G.; Riley, W. J.
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
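A stripped-down version of the "map coarse solutions to fine solutions" idea is sketched below: a POD basis is built from fine-resolution training snapshots, a linear map from coarse snapshots to POD coefficients is fitted, and a fine-resolution field is then reconstructed from a new coarse solution. The snapshot shapes, the plain least-squares regression and all names are assumptions made for illustration, not the method's actual construction.

```python
import numpy as np

def train_pod_mapping(fine_snapshots, coarse_snapshots, n_modes=10):
    """fine_snapshots:   (n_fine, n_train) training solutions on the fine grid
       coarse_snapshots: (n_coarse, n_train) matching coarse-resolution solutions"""
    mean = fine_snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(fine_snapshots - mean, full_matrices=False)
    basis = U[:, :n_modes]                                # POD modes
    coeffs = basis.T @ (fine_snapshots - mean)            # (n_modes, n_train)
    M = coeffs @ np.linalg.pinv(coarse_snapshots)         # coefficients ~ M @ coarse
    return mean, basis, M

def reconstruct(coarse_solution, mean, basis, M):
    """Predict a fine-resolution field from a single coarse-resolution field."""
    return (mean + basis @ (M @ coarse_solution[:, None])).ravel()

# Tiny synthetic demo: 30 smooth training snapshots on a 400-point fine grid
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 400)[:, None]
amps = rng.random((2, 30))
fine = amps[0] * np.sin(2 * np.pi * x) + amps[1] * x
coarse = fine.reshape(20, 20, 30).mean(axis=1)            # 20-cell coarse aggregate

mean, basis, M = train_pod_mapping(fine, coarse)
print(reconstruct(coarse[:, 0], mean, basis, M).shape)
```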
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular for more than a decade in Earth Sciences, because these methods can generate random fields that reproduce the highly complex spatial features given in a conceptual model, the training image, while classical geostatistical techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists of borrowing patterns from the training image to populate a simulation grid. The latter is filled sequentially by visiting its nodes in random order; the patterns, whose number of nodes is fixed, become narrower during the simulation as the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image, and then lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image; iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and then to simulate each level, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at any scale of the training image and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited for MPS simulation techniques.
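The multi-resolution decomposition of the training image described here is essentially a Gaussian pyramid. A minimal sketch, using scipy for the kernel convolution and with assumed parameter values (kernel width, number of levels, subsampling by 2), is given below; the MPS simulation itself is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(training_image, levels=3, sigma=1.0):
    """Return [finest, ..., coarsest] versions of a 2-D training image.
    Each level is smoothed with a Gaussian kernel and subsampled by 2."""
    pyramid = [training_image.astype(float)]
    for _ in range(levels):
        smoothed = gaussian_filter(pyramid[-1], sigma=sigma)
        pyramid.append(smoothed[::2, ::2])     # keep every second pixel
    return pyramid

ti = np.random.rand(256, 256)                  # stand-in for a training image
for level, image in enumerate(gaussian_pyramid(ti)):
    print(f"level {level}: {image.shape}")
```

Simulation would then proceed coarsest-first, with each finer level conditioned on the previous one, as described in the abstract.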
NASA Astrophysics Data System (ADS)
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of the indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
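The large-scale nudging used in such experiments can be illustrated on a single one-dimensional field: only wavenumbers below a cut-off are relaxed toward the driving field with a relaxation time tau. The sketch below is a generic spectral-nudging tendency with assumed parameter values, not the quasi-geostrophic model of the study.

```python
import numpy as np

def spectral_nudging_tendency(field, reference, dx, cutoff_wavelength, tau):
    """Tendency -(field - reference)_largescale / tau, where 'largescale'
    keeps only wavelengths longer than cutoff_wavelength."""
    k = np.fft.rfftfreq(field.size, d=dx)              # cycles per unit length
    diff_hat = np.fft.rfft(field - reference)
    diff_hat[k > 1.0 / cutoff_wavelength] = 0.0        # drop the small scales
    return -np.fft.irfft(diff_hat, n=field.size) / tau

# Toy example: nudge only the large-scale part of a perturbed field back
x = np.linspace(0.0, 1000.0, 512, endpoint=False)
reference = np.sin(2 * np.pi * x / 500.0)
field = reference + 0.5 * np.sin(2 * np.pi * x / 450.0) + 0.1 * np.sin(2 * np.pi * x / 20.0)

tend = spectral_nudging_tendency(field, reference, dx=x[1] - x[0],
                                 cutoff_wavelength=100.0, tau=3600.0)
print(tend[:4])
```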
Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G
2016-05-25
Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale-resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae (1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator using scanning transmission EM (STEM). Large-scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals, as well as tissue culture (1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM) (8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide, quantifiable manner.
NASA Astrophysics Data System (ADS)
Gowda, P. H.
2016-12-01
Evapotranspiration (ET) is an important process in an ecosystem's water budget and is closely linked to its productivity. Therefore, regional-scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. There are efforts to develop such datasets on regional to global scales, but these often face the limitations of spatial-temporal resolution tradeoffs in satellite remote sensing technology. In this study, we developed frameworks for generating high- and medium-resolution daily ET maps from Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data, respectively. For developing high-resolution (30-m) daily time series ET maps with Landsat TM data, the series version of the Two Source Energy Balance (TSEB) model was used to compute sensible and latent heat fluxes of soil and canopy separately. Landsat 5 (2000-2011) and Landsat 8 (2013-2014) imagery for rows 28/35 and 27/36 covering central Oklahoma was used. MODIS data (2001-2014) covering Oklahoma and the Texas Panhandle were used to develop medium-resolution (250-m) time series daily ET maps with the SEBS (Surface Energy Balance System) model. An extensive network of weather stations managed by the Texas High Plains ET Network and the Oklahoma Mesonet was used to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. A linear interpolation sub-model was used to estimate the daily ET between image acquisition days. Accuracy assessment of the daily ET maps was done against eddy covariance data from two grassland sites at El Reno, OK. Statistical results indicated good performance by the modeling frameworks developed for deriving time series ET maps. The results indicate that the proposed ET mapping framework is suitable for deriving daily time series ET maps at regional scale with Landsat and MODIS data.
Low-Cost Ultra-High Spatial and Temporal Resolution Mapping of Intertidal Rock Platforms
NASA Astrophysics Data System (ADS)
Bryson, M.; Johnson-Roberson, M.; Murphy, R.
2012-07-01
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time, which could complement field-based sampling methods by extrapolating them into a wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at relatively coarse, sub-meter resolutions or with limited temporal resolutions and relatively high costs for small-scale environmental science and ecology studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric pipeline that was developed for constructing high-resolution, 3D, photo-realistic terrain models of intertidal rocky shores. The processing pipeline uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points, and allows the use of cheap, consumer-grade digital cameras. The resulting maps combine colour and topographic information at sub-centimeter resolutions over an area of approximately 100 m, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rock platform at Cape Banks, Sydney, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.
Artificial fluid properties for large-eddy simulation of compressible turbulent mixing
NASA Astrophysics Data System (ADS)
Cook, Andrew W.
2007-05-01
An alternative methodology is described for large-eddy simulation (LES) of flows involving shocks, turbulence, and mixing. In lieu of filtering the governing equations, it is postulated that the large-scale behavior of a LES fluid, i.e., a fluid with artificial properties, will be similar to that of a real fluid, provided the artificial properties obey certain constraints. The artificial properties consist of modifications to the shear viscosity, bulk viscosity, thermal conductivity, and species diffusivity of a fluid. The modified transport coefficients are designed to damp out high wavenumber modes, close to the resolution limit, without corrupting lower modes. Requisite behavior of the artificial properties is discussed and results are shown for a variety of test problems, each designed to exercise different aspects of the models. When combined with a tenth-order compact scheme, the overall method exhibits excellent resolution characteristics for turbulent mixing, while capturing shocks and material interfaces in a crisp fashion.
CALDER: Cryogenic light detectors for background-free searches
NASA Astrophysics Data System (ADS)
Di Domizio, S.; Bellini, F.; Cardani, L.; Casali, N.; Castellano, M. G.; Colantoni, I.; Cosmelli, C.; Cruciani, A.; D'Addabbo, A.; Martinez, M.; Minutolo, L.; Tomei, C.; Vignati, M.
2018-01-01
CALDER is an R&D project for the development of cryogenic light detectors with an active surface of 5×5 cm² and an energy resolution of 20 eV RMS for visible and UV photons. These devices can enhance the sensitivity of next-generation large-mass bolometric detectors for rare event searches, providing an active background rejection method based on particle discrimination. A CALDER detector is composed of a large-area Si absorber substrate with superconducting kinetic inductance detectors (KIDs) deposited on it. The substrate converts the incoming light into athermal phonons, which are then sensed by the KIDs. KID technology combines fabrication simplicity with a natural aptitude for frequency-domain multiplexing, making it an ideal candidate for large-scale bolometric experiments. We give an overview of the CALDER project and show the performance obtained with prototype detectors in terms of both energy resolution and efficiency.
Large-scale impacts of herbivores on the structural diversity of African savannas
Asner, Gregory P.; Levick, Shaun R.; Kennedy-Bowdoin, Ty; Knapp, David E.; Emerson, Ruth; Jacobson, James; Colgan, Matthew S.; Martin, Roberta E.
2009-01-01
African savannas are undergoing management intensification, and decision makers are increasingly challenged to balance the needs of large herbivore populations with the maintenance of vegetation and ecosystem diversity. Ensuring the sustainability of Africa's natural protected areas requires information on the efficacy of management decisions at large spatial scales, but often neither experimental treatments nor large-scale responses are available for analysis. Using a new airborne remote sensing system, we mapped the three-dimensional (3-D) structure of vegetation at a spatial resolution of 56 cm throughout 1640 ha of savanna after 6-, 22-, 35-, and 41-year exclusions of herbivores, as well as in unprotected areas, across Kruger National Park in South Africa. Areas in which herbivores were excluded over the short term (6 years) contained 38%–80% less bare ground compared with those that were exposed to mammalian herbivory. In the longer-term (> 22 years), the 3-D structure of woody vegetation differed significantly between protected and accessible landscapes, with up to 11-fold greater woody canopy cover in the areas without herbivores. Our maps revealed 2 scales of ecosystem response to herbivore consumption, one broadly mediated by geologic substrate and the other mediated by hillslope-scale variation in soil nutrient availability and moisture conditions. Our results are the first to quantitatively illustrate the extent to which herbivores can affect the 3-D structural diversity of vegetation across large savanna landscapes. PMID:19258457
Detection of submicron scale cracks and other surface anomalies using positron emission tomography
Cowan, Thomas E.; Howell, Richard H.; Colmenares, Carlos A.
2004-02-17
Detection of submicron-scale cracks and other mechanical and chemical surface anomalies using PET. This surface technique has sufficient sensitivity to detect single voids or pits of sub-millimeter size, and single cracks or fissures of millimeter-scale length, micrometer-scale depth, and nanometer-scale width. This technique can also be applied to detect surface regions of differing chemical reactivity. It may be utilized in a scanning or survey mode to simultaneously detect such mechanical or chemical features over large interior or exterior surface areas of parts as large as about 50 cm in diameter. The technique involves exposing a surface to a short-lived radioactive gas for a time period, removing the excess gas to leave a partial monolayer, determining the location and shape of the cracks, voids, porous regions, etc., and calculating their width, depth, and length. Detection of 0.01 mm deep cracks using a 3 mm detector resolution has been accomplished using this technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Duane L; Pouquet, Dr. Annick; Mininni, Dr. Pablo D.
2015-01-01
We report results on rotating stratified turbulence in the absence of forcing, with large-scale isotropic initial conditions, using direct numerical simulations computed on grids of up to 4096³ points. The Reynolds and Froude numbers are respectively equal to Re = 5.4×10⁴ and Fr = 0.0242. The ratio of the Brunt-Väisälä frequency to the inertial wave frequency, N/f, is taken to be equal to 5, a choice appropriate to model the dynamics of the southern abyssal ocean at mid-latitudes. This gives a global buoyancy Reynolds number R_B = Re Fr² = 32, a value sufficient for some isotropy to be recovered in the small scales beyond the Ozmidov scale, but still moderate enough that the intermediate scales where waves are prevalent are well resolved. We concentrate on the large-scale dynamics and confirm that the Froude number based on a typical vertical length scale is of order unity, with strong gradients in the vertical. Two characteristic scales emerge from this computation, and are identified from sharp variations in the spectral distribution of either total energy or helicity. A spectral break is also observed at a scale at which the partition of energy between the kinetic and potential modes changes abruptly, and beyond which a Kolmogorov-like spectrum recovers. Large slanted layers are ubiquitous in the flow in the velocity and temperature fields, and a large-scale enhancement of energy is also observed, directly attributable to the effect of rotation.
Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR
NASA Astrophysics Data System (ADS)
Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.
2017-12-01
Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture key patterns associated with extreme precipitation over Portland and in better interpreting projections of future climate at impact-relevant scales.
Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics
Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.
2014-01-01
Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used, in experiments involving mice, genetically encoded voltage indicator imaging, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314
Towards a Full-sky, High-resolution Dust Extinction Map with WISE and Planck
NASA Astrophysics Data System (ADS)
Meisner, Aaron M.; Finkbeiner, D. P.
2014-01-01
We have recently completed a custom processing of the entire WISE 12 micron All-sky imaging data set. The result is a full-sky map of diffuse, mid-infrared Galactic dust emission with angular resolution of 15 arcseconds, and with contaminating artifacts such as compact sources removed. At the same time, the 2013 Planck HFI maps represent a complementary data set in the far-infrared, with zero-point relatively immune to zodiacal contamination and angular resolution superior to previous full-sky data sets at similar frequencies. Taken together, these WISE and Planck data products present an opportunity to improve upon the SFD (1998) dust extinction map, by virtue of enhanced angular resolution and potentially better-controlled systematics on large scales. We describe our continuing efforts to construct and test high-resolution dust extinction and temperature maps based on our custom WISE processing and Planck HFI data.
A Universal Model for Solar Eruptions
NASA Astrophysics Data System (ADS)
Wyper, Peter; Antiochos, Spiro K.; DeVore, C. Richard
2017-08-01
We present a universal model for solar eruptions that encompasses coronal mass ejections (CMEs) at one end of the scale and coronal jets at the other. The model is a natural extension of the Magnetic Breakout model for large-scale fast CMEs. Using high-resolution adaptive-mesh MHD simulations conducted with the ARMS code, we show that so-called blowout or mini-filament coronal jets can be explained as one realisation of the breakout process. We also demonstrate the robustness of this "breakout-jet" model by studying three realisations in simulations with different ambient field inclinations. We conclude that magnetic breakout supports both large-scale fast CMEs and small-scale coronal jets, and by inference eruptions at scales in between. Thus, magnetic breakout provides a unified model for solar eruptions. P.F.W. was supported in this work by the award of an RAS Fellowship and an appointment to the NASA Postdoctoral Program. C.R.D. and S.K.A. were supported by NASA's LWS TR&T and H-SR programs.
Magnetic pattern at supergranulation scale: the void size distribution
NASA Astrophysics Data System (ADS)
Berrilli, F.; Scardigli, S.; Del Moro, D.
2014-08-01
The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.
Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen
2018-01-19
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single-input multiple-output (SIMO) technology, which can sharply reduce the coding and sampling times. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and are then synthesized together to obtain the complete high-resolution image. For each imaging cell, the multi-resolution imaging method helps reduce the computational burden associated with the large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets with much less time and has great potential in applications such as security screening, nondestructive detection, and medical diagnosis.
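The reconstruction step of coded-aperture imaging can be caricatured as solving a linear system y = A x, where each row of A is one spatiotemporally independent (random-phase) aperture coding of the scattering cells. The least-squares solve below is a generic, assumed-size illustration of that inversion, not the paper's multi-resolution SIMO algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

n_cells = 64            # scattering cells in one 3D imaging cell
n_codes = 128           # number of coded-aperture measurements

# Random-phase measurement matrix: each row is one aperture coding
A = np.exp(1j * 2 * np.pi * rng.random((n_codes, n_cells)))

# Sparse reflectivity of the imaging cell (a few point scatterers)
x_true = np.zeros(n_cells, dtype=complex)
x_true[[5, 20, 41]] = [1.0, 0.6, 0.3]

# Simulated measurements with a little receiver noise
y = A @ x_true + 0.01 * (rng.standard_normal(n_codes) + 1j * rng.standard_normal(n_codes))

# Least-squares reconstruction (overdetermined since n_codes > n_cells)
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("largest recovered scatterers:", np.argsort(np.abs(x_hat))[-3:])
```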
Regional sea level variability in a high-resolution global coupled climate model
NASA Astrophysics Data System (ADS)
Palko, D.; Kirtman, B. P.
2016-12-01
The prediction of trends at regional scales is essential in order to adapt to and prepare for the effects of climate change. However, GCMs are unable to make reliable predictions at regional scales. The prediction of local sea level trends is particularly critical. The main goal of this research is to utilize high-resolution (HR) (0.1° resolution in the ocean) coupled model runs of CCSM4 to analyze regional sea surface height (SSH) trends. Unlike typical, lower resolution (1.0°) GCM runs these HR runs resolve features in the ocean, like the Gulf Stream, which may have a large effect on regional sea level. We characterize the variability of regional SSH along the Atlantic coast of the US using tide gauge observations along with fixed radiative forcing runs of CCSM4 and HR interactive ensemble runs. The interactive ensemble couples an ensemble mean atmosphere with a single ocean realization. This coupling results in a 30% decrease in the strength of the Atlantic meridional overturning circulation; therefore, the HR interactive ensemble is analogous to a HR hosing experiment. By characterizing the variability in these high-resolution GCM runs and observations we seek to understand what processes influence coastal SSH along the Eastern Coast of the United States and better predict future SLR.
NASA Astrophysics Data System (ADS)
Barthlott, C.; Hoose, C.
2015-11-01
This paper assesses the resolution dependence of clouds and precipitation over Germany using numerical simulations with the COnsortium for Small-scale MOdeling (COSMO) model. Six intensive observation periods of the HOPE (HD(CP)2 Observational Prototype Experiment) measurement campaign conducted in spring 2013, together with one summer day of the same year, are simulated. By means of a series of grid-refinement tests (horizontal grid spacings of 2.8 km, 1 km, 500 m, and 250 m), the applicability of the COSMO model to real weather events in the gray zone, i.e., the range of scales between the mesoscale limit (no turbulence resolved) and the large-eddy simulation limit (energy-containing turbulence resolved), is tested. To the authors' knowledge, this paper presents the first non-idealized COSMO simulations in the peer-reviewed literature at the 250-500 m scale. The kinetic energy spectra derived from model output show the expected -5/3 slope as well as a dependence on model resolution, and the effective resolution lies between 6 and 7 times the nominal resolution. Although the representation of a number of processes improves with resolution (e.g., boundary-layer thermals, low-level convergence zones, gravity waves), their influence on the temporal evolution of precipitation is rather weak. Rain intensities, however, vary with resolution, leading to differences in the total rain amount of up to +48 %. Furthermore, the location of rain is similar for the springtime cases with moderate and strong synoptic forcing, whereas significant differences are obtained for the summertime case with air-mass convection. Domain-averaged liquid water paths and cloud condensate profiles are used to analyze the temporal and spatial variability of the simulated clouds. Finally, probability density functions of convection-related parameters are analyzed to investigate their dependence on model resolution and their impact on cloud formation and subsequent precipitation.
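A minimal sketch, under assumed synthetic data, of the kind of kinetic energy spectrum and slope estimate mentioned above (the wind field and grid spacing are placeholders, not COSMO output):

    # Hypothetical 1-D kinetic energy spectrum and log-log slope estimate.
    import numpy as np

    rng = np.random.default_rng(3)
    n, dx_km = 512, 1.0                        # number of grid points and grid spacing
    u = np.cumsum(rng.standard_normal(n))      # toy wind component along one grid row
    u -= u.mean()

    spectrum = np.abs(np.fft.rfft(u)) ** 2     # power spectrum
    k = np.fft.rfftfreq(n, d=dx_km)            # wavenumbers [1/km]
    mask = k > 0

    # Fit a power law E(k) ~ k^p; in a well-resolved inertial range a slope near -5/3 is expected.
    p = np.polyfit(np.log(k[mask]), np.log(spectrum[mask]), deg=1)[0]
    print(f"spectral slope ~ {p:.2f}")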
Global tropospheric ozone modeling: Quantifying errors due to grid resolution
NASA Astrophysics Data System (ADS)
Wild, Oliver; Prather, Michael J.
2006-06-01
Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that large errors remain at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
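A toy check (values partly assumed, not taken in full from the study) of how one might test whether errors shrink geometrically as resolution is refined:

    # Hypothetical convergence check across successively finer resolutions.
    # Only the 27%, 13%, and 5% values appear in the abstract; the T63 entry is a placeholder.
    resolutions_km = [560, 280, 190, 120]
    overestimate_pct = [27.0, 13.0, 8.0, 5.0]

    pairs = zip(zip(resolutions_km, overestimate_pct),
                zip(resolutions_km[1:], overestimate_pct[1:]))
    for (r1, e1), (r2, e2) in pairs:
        print(f"{r1} km -> {r2} km: error ratio {e1 / e2:.2f}")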
Exploring image data assimilation in the prospect of high-resolution satellite data
NASA Astrophysics Data System (ADS)
Verron, J. A.; Duran, M.; Gaultier, L.; Brankart, J. M.; Brasseur, P.
2016-02-01
Many recent works show the key importance of studying the ocean at fine scales, including the mesoscale and submesoscale. Satellite observations such as ocean color data provide information on a wide range of scales but do not directly provide information on ocean dynamics. Satellite altimetry provides information on the ocean dynamic topography (SSH), but so far with limited resolution in space and, even more so, in time. In the near future, however, high-resolution SSH data (e.g., SWOT) will give a view of the dynamic topography at very fine spatial resolution. This raises challenging issues for data assimilation in physical oceanography: developing reliable methodologies to assimilate high-resolution data, making integrated use of various data sets including biogeochemical data, and, more simply, handling large amounts of data and huge state vectors. In this work, we propose to consider structured information rather than pointwise data. First, we take an image data assimilation approach and study the feasibility of inverting tracer observations from sea surface temperature and/or ocean color datasets to improve the description of mesoscale dynamics provided by altimetric observations. Finite-size Lyapunov exponents are used as an image proxy. The inverse problem is formulated in a Bayesian framework and expressed in terms of a cost function measuring the misfit between the two images. Second, we explore the inversion of SWOT-like high-resolution SSH data and, more specifically, the various possible proxies of the actual SSH that could be used to control the ocean circulation at various scales. One focus is on controlling the subsurface ocean from surface data only. A key point lies in the errors and uncertainties associated with SWOT data.
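A minimal, hypothetical sketch of a Bayesian image-misfit cost function of the kind described (the image operator, error weights, and data are placeholder assumptions):

    # Hypothetical quadratic cost: image misfit plus background term.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100
    x_background = rng.standard_normal(n)     # prior (background) state
    image_obs = rng.standard_normal(n)        # observed image proxy (e.g. an FSLE field)

    def image_operator(x):
        # placeholder for the mapping from model state to the image proxy
        return np.tanh(x)

    def cost(x, sigma_obs=0.5, sigma_bkg=1.0):
        misfit = image_obs - image_operator(x)
        j_obs = 0.5 * np.sum((misfit / sigma_obs) ** 2)
        j_bkg = 0.5 * np.sum(((x - x_background) / sigma_bkg) ** 2)
        return j_obs + j_bkg

    print(f"J(background) = {cost(x_background):.1f}")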
Bottom-up production of meta-atoms for optical magnetism in visible and NIR light
NASA Astrophysics Data System (ADS)
Barois, Philippe; Ponsinet, Virginie; Baron, Alexandre; Richetti, Philippe
2018-02-01
Many unusual optical properties of metamaterials arise from the magnetic response of engineered structures of sub-wavelength size (meta-atoms) exposed to light. The top-down approach, whereby engineered nanostructures of well-defined morphology are engraved on a surface, has proved successful for the generation of strong optical magnetism. It faces, however, the limitations of high cost and small active area in visible light, where nanometre resolution is needed. The bottom-up approach, whereby metamaterials of large volume or large area are fabricated by combining nanochemistry and self-assembly techniques, may constitute a cost-effective alternative. This approach nevertheless requires the large-scale production of functional building blocks (meta-atoms) bearing a strong magnetic optical response. We propose in this paper a few routes that lead to the large-scale synthesis of magnetic metamaterials operating in visible or near-IR light.
Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive
NASA Astrophysics Data System (ADS)
Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.
2012-03-01
High-explosive (HE) material consists of large grains with micron-sized embedded impurities and pores. Under various mechanical and thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot-spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module Cheetah. This computational study includes reactive dynamics. High-resolution, large-scale, two-dimensional meso-scale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters, and multiple-pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.
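A small, purely illustrative sketch of how such a parameter matrix might be enumerated (the values are assumptions, not those used in the study):

    # Hypothetical parameter sweep over shock strengths, pore diameters, and pore configurations.
    from itertools import product

    shock_strengths_gpa = [10, 20, 30]
    pore_diameters_um = [0.5, 1.0, 2.0]
    pore_configurations = ["single", "pair", "array"]

    cases = list(product(shock_strengths_gpa, pore_diameters_um, pore_configurations))
    for i, (p, d, cfg) in enumerate(cases):
        print(f"case {i:02d}: shock {p} GPa, pore {d} um, configuration {cfg}")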
Molecular dynamics simulations of large macromolecular complexes.
Perilla, Juan R; Goh, Boon Chong; Cassidy, C Keith; Liu, Bo; Bernardi, Rafael C; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus
2015-04-01
Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone.
Enhanced mixing and spatial instability in concentrated bacterial suspensions
NASA Astrophysics Data System (ADS)
Sokolov, Andrey; Goldstein, Raymond E.; Feldchtein, Felix I.; Aranson, Igor S.
2009-09-01
High-resolution optical coherence tomography is used to study the onset of a large-scale convective motion in free-standing thin films of adjustable thickness containing suspensions of swimming aerobic bacteria. Clear evidence is found that beyond a threshold film thickness there exists a transition from quasi-two-dimensional collective swimming to three-dimensional turbulent behavior. The latter state, qualitatively different from bioconvection in dilute bacterial suspensions, is characterized by enhanced diffusivities of oxygen and bacteria. These results emphasize the impact of self-organized bacterial locomotion on the onset of three-dimensional dynamics, and suggest key ingredients necessary to extend standard models of bioconvection to incorporate effects of large-scale collective motion.
Comparison and validation of gridded precipitation datasets for Spain
NASA Astrophysics Data System (ADS)
Quintana-Seguí, Pere; Turco, Marco; Míguez-Macho, Gonzalo
2016-04-01
In this study, two gridded precipitation datasets are compared and validated over Spain: the recently developed SAFRAN dataset and the Spain02 dataset. These are validated using rain gauges and are also compared to the low-resolution ERA-Interim reanalysis. The SAFRAN precipitation dataset has been produced recently using the SAFRAN meteorological analysis, which is extensively used in France (Durand et al. 1993, 1999; Quintana-Seguí et al. 2008; Vidal et al. 2010) and which has recently been applied to Spain (Quintana-Seguí et al. 2015). SAFRAN uses an optimal interpolation (OI) algorithm and all available rain gauges from the Spanish State Meteorological Agency (Agencia Estatal de Meteorología, AEMET). The product has a spatial resolution of 5 km and spans September 1979 to August 2014. This dataset has been produced mainly for large-scale hydrological applications. Spain02 (Herrera et al. 2012, 2015) is another high-quality precipitation dataset for Spain, based on a dense network of quality-controlled stations, with versions at different resolutions; in this study we used the version with a resolution of 0.11°. The product spans 1971 to 2010. Spain02 is well tested and widely used, mainly, but not exclusively, for RCM validation and statistical downscaling. ERA-Interim is a well-known global reanalysis with a spatial resolution of about 79 km. It has been included in the comparison because it is widely used for continental- and global-scale studies and also for smaller-scale studies in data-poor countries. Its comparison with higher-resolution products of a data-rich country such as Spain therefore allows us to quantify the errors made when such datasets are used for national-scale studies, in line with some of the objectives of the EU-FP7 eartH2Observe project. The comparison shows that SAFRAN and Spain02 perform similarly, even though their underlying principles are different. Both products perform markedly better than ERA-Interim, which represents the relief much more coarsely, a factor that is crucial for precipitation. These results are a contribution to the Spanish case study of the eartH2Observe project, which focuses on the simulation of drought processes in Spain using land-surface models (LSMs). This study will also be helpful for the Spanish MARCO project, which aims at improving the ability of RCMs to simulate hydrometeorological extremes.
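A minimal sketch of the standard point-scale validation metrics implied above (bias, RMSE, correlation); the collocated series below are synthetic assumptions, not SAFRAN, Spain02, or gauge data:

    # Hypothetical validation of a gridded precipitation product against rain gauges.
    import numpy as np

    rng = np.random.default_rng(5)
    gauge = np.maximum(rng.gamma(shape=0.5, scale=4.0, size=365) - 1.0, 0.0)  # daily gauge totals [mm]
    gridded = np.maximum(gauge + rng.normal(0.0, 1.5, size=365), 0.0)         # collocated grid-cell values

    bias = np.mean(gridded - gauge)
    rmse = np.sqrt(np.mean((gridded - gauge) ** 2))
    corr = np.corrcoef(gridded, gauge)[0, 1]
    print(f"bias {bias:.2f} mm/day, RMSE {rmse:.2f} mm/day, r {corr:.2f}")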
ERIC Educational Resources Information Center
Patel, Vimla L.; Branch, Timothy; Gutnik, Lily; Arocha, Jose F.
2006-01-01
High-risk behavior in youths related to HIV transmission continues to occur despite large-scale efforts to disseminate information about safe sexual practices through education. Our study examined the relationships among knowledge, decision-making strategies, and risk assessment about HIV by youths during peer group focused discussions. Two focus…
Behavior under the Microscope: Increasing the Resolution of Our Experimental Procedures
ERIC Educational Resources Information Center
Palmer, David C.
2010-01-01
Behavior analysis has exploited conceptual tools whose experimental validity has been amply demonstrated, but their relevance to large-scale and fine-grained behavioral phenomena remains uncertain, because the experimental analysis of these domains faces formidable obstacles of measurement and control. In this essay I suggest that, at least at the…