Sample records for large scale grid

  1. Methods and apparatus of analyzing electrical power grid data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.

    Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
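
    The two-stage pipeline described in this abstract (derive a filter from the data, remove erroneous records, then detect events) can be sketched in a few lines. This is a toy illustration, not the patented method; the median/MAD filter, the jump detector, and the sample frequency readings are all assumptions.

```python
# Toy sketch of the two-stage pipeline: identify a filter from the data,
# remove erroneous records with it, then run an event detector on the
# cleaned series. All thresholds and data are illustrative.

def robust_filter(samples, k=3.5, floor=1.0):
    """Median/MAD-based predicate flagging grossly erroneous values.

    The floor keeps the threshold sensible when the MAD is near zero."""
    s = sorted(samples)
    med = s[len(s) // 2]
    mad = sorted(abs(x - med) for x in samples)[len(samples) // 2]
    thresh = max(k * 1.4826 * mad, floor)
    return lambda x: abs(x - med) <= thresh

def detect_events(samples, jump=0.5):
    """Flag indices where consecutive readings jump sharply (a toy 'event')."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > jump]

readings = [60.0, 60.01, 59.99, 250.0, 60.02, 59.2, 60.0]  # 250.0 is erroneous
keep = robust_filter(readings)
clean = [x for x in readings if keep(x)]   # drops the 250.0 reading
events = detect_events(clean)              # flags the dip to 59.2 and back
```

    Separating the filter-identification step from the event-detection step mirrors the claim structure: events are only sought after erroneous data have been removed.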

  2. CFD Script for Rapid TPS Damage Assessment

    NASA Technical Reports Server (NTRS)

    McCloud, Peter

    2013-01-01

    This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual process is cumbersome, error-prone, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow-field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. This speeds up the generation of CFD grids and solutions for modeling TPS damage and its aeroheating assessment. The process was successfully utilized during STS-134.

  3. Using Computing and Data Grids for Large-Scale Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2001-01-01

    We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.

  4. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also

  5. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  6. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    NASA Astrophysics Data System (ADS)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate the adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of the project. The results show the effectiveness of the battery storage system and of the proposed output control methods for a large-scale PV system in ensuring stable operation of power grids. NEDO (the New Energy and Industrial Technology Development Organization of Japan) conducted this project, and HEPCO (Hokkaido Electric Power Co., Inc.) managed the overall project.

  7. Wind-Tunnel Experiments for Gas Dispersion in an Atmospheric Boundary Layer with Large-Scale Turbulent Motion

    NASA Astrophysics Data System (ADS)

    Michioka, Takenobu; Sato, Ayumu; Sada, Koichi

    2011-10-01

    Large-scale turbulent motions enhancing horizontal gas spread in an atmospheric boundary layer are simulated in a wind-tunnel experiment. The large-scale turbulent motions can be generated using an active grid installed at the front of the test section in the wind tunnel, when appropriate parameters for the angular deflection and the rotation speed are chosen. The power spectra of vertical velocity fluctuations are unchanged with and without the active grid because they are strongly affected by the surface. The power spectra of both streamwise and lateral velocity fluctuations with the active grid increase in the low frequency region, and are closer to the empirical relations inferred from field observations. The large-scale turbulent motions do not affect the Reynolds shear stress, but change the balance of the processes involved. The relative contributions of ejections to sweeps are suppressed by large-scale turbulent motions, indicating that the motions behave as sweep events. The lateral gas spread is enhanced by the lateral large-scale turbulent motions generated by the active grid. The large-scale motions, however, do not affect the vertical velocity fluctuations near the surface, resulting in their having a minimal effect on the vertical gas spread. The peak concentration normalized using the root-mean-squared value of concentration fluctuation is remarkably constant over most regions of the plume irrespective of the operation of the active grid.

  8. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. 
Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.

  9. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly alongside network technology, and it can solve large-scale, complex computing problems by sharing large-scale computing resources. In a grid environment, we can realize a distributed, load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes how grid computing characteristics apply to a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security model that can reduce processing delay and maintain detection rates.

  10. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Finally, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
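
    The key effect described here, that a single column driven by domain-mean forcing behaves differently from a column run per subcolumn and then averaged, follows from the nonlinearity of convective responses. A minimal sketch with a toy threshold response (not SCAM5; all numbers are illustrative):

```python
# Toy illustration of mean-then-model vs. model-then-mean: precipitation
# responds nonlinearly to forcing, so the two orders of operation disagree
# when the forcing varies across the domain.

def precip(forcing, threshold=1.0):
    """Toy convective response: precipitation only above a trigger threshold."""
    return max(forcing - threshold, 0.0)

subcolumn_forcing = [0.0, 0.0, 0.0, 4.0]   # a front occupies 1/4 of the domain

mean_first = precip(sum(subcolumn_forcing) / 4)              # domain-mean forcing
model_first = sum(precip(f) for f in subcolumn_forcing) / 4  # gridded forcing
```

    With the front confined to one quarter of the domain, the domain-mean forcing never reaches the trigger threshold and produces no rain, while the per-subcolumn average does, mirroring the frontal-passage cases discussed above.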

  11. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  12. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against one another and against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate-spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the product comparison, various downscaling methods are surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
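
    Verification at multiple temporal scales, as in the comparison above, is typically done by aggregating both the product and the gauge series before scoring. A minimal sketch with made-up daily values (the RMSE metric and data are illustrative, not from the study):

```python
# Score a gridded product against gauges at two temporal scales: daily and
# aggregated (pentad) totals. Data are invented for illustration.

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def aggregate(series, window):
    """Sum precipitation over non-overlapping windows (e.g. daily -> pentad)."""
    return [sum(series[i:i + window]) for i in range(0, len(series), window)]

gauge   = [0, 5, 0, 0, 2, 1, 0, 8, 0, 0]   # ten days of gauge totals (mm)
product = [1, 3, 1, 0, 1, 2, 0, 6, 1, 0]   # same days from a gridded product

daily_err  = rmse(product, gauge)
pentad_err = rmse(aggregate(product, 5), aggregate(gauge, 5))
```

    Aggregation tends to cancel day-to-day timing errors, which is one reason scores usually improve at coarser temporal scales.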

  13. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large-scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large-scale off-body hierarchical meshes is presented here. This work enables large-scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
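
    The "top down" half of an oct-tree strategy can be illustrated compactly: cells that intersect the body are recursively subdivided until they reach a target edge length. This serial toy is only a sketch; the paper's parallel hybrid method is far more involved, and the body test here is deliberately crude.

```python
# Minimal top-down octree refinement. A cell is (x, y, z, size), with (x, y, z)
# its minimum corner. Cells flagged as containing the body are subdivided
# into eight children until the target edge length is reached.

def refine(cell, contains_body, target_size):
    """Return the leaf cells of the adaptive octree rooted at `cell`."""
    x, y, z, size = cell
    if size <= target_size or not contains_body(cell):
        return [cell]
    half = size / 2.0
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                leaves += refine((x + dx, y + dy, z + dz, half),
                                 contains_body, target_size)
    return leaves

# Body: a small box near the origin of a unit root cell (crude corner test).
body = lambda c: c[0] < 0.25 and c[1] < 0.25 and c[2] < 0.25
leaves = refine((0.0, 0.0, 0.0, 1.0), body, 0.25)
```

    Only the corner of the domain near the body is refined to the target size; the other seven octants stay coarse, which is the memory saving that makes off-body hierarchical meshing attractive.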

  14. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. In developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  15. Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids

    NASA Astrophysics Data System (ADS)

    Abbà, Antonella; Campaniello, Dario; Nini, Michele

    2017-06-01

    The definition of the characteristic filter size to be used for subgrid-scale models in large eddy simulation on irregular grids remains an open problem. We investigate several different approaches to the definition of the filter length for anisotropic subgrid-scale models, and we propose a tensorial formulation based on the inertial ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account with the tensorial filter size.

  16. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  17. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.

  18. Transmission Technologies and Operational Characteristic Analysis of Hybrid UHV AC/DC Power Grids in China

    NASA Astrophysics Data System (ADS)

    Tian, Zhang; Yanfeng, Gong

    2017-05-01

    In order to resolve the mismatch between the demand for and the distribution of primary energy resources, Ultra High Voltage (UHV) power grids should be developed rapidly to serve energy bases and accommodate large-scale renewable energy. This paper reviews the latest research on AC/DC transmission technologies and summarizes the characteristics of AC/DC power grids, concluding that China’s power grids have entered a new period of large-scale hybrid UHV AC/DC operation in which the characteristics of “strong DC and weak AC” become increasingly prominent. Possible problems in the operation of AC/DC power grids are discussed, and the interactions between AC and DC grids are studied intensively. To address these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, promoting protection measures for clean-energy grid access, and taking actions to solve voltage and frequency stability problems. This work is valuable for adapting hybrid UHV AC/DC power grids to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.

  20. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also faces this issue: the computational cost limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (yielding SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale, high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  1. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
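
    The binned-pdf integration can be sketched directly: rather than applying a nonlinear stress function to the grid-mean wetness, apply it per bin and sum. The stress function and pdf below are illustrative assumptions, not the paper's:

```python
# Compare "stress of the mean" against "mean of the stress" for a binned
# soil-wetness pdf. The toy stress function saturates at a wetness of 0.5.

def stress(w):
    """Toy nonlinear wetness-stress function (0 = fully dry, 1 = unstressed)."""
    return min(w / 0.5, 1.0)

# Binned pdf of soil wetness within the grid area: (bin centre, area fraction)
pdf = [(0.05, 0.2), (0.25, 0.3), (0.55, 0.3), (0.85, 0.2)]

mean_w = sum(w * p for w, p in pdf)              # grid-area-mean wetness
naive = stress(mean_w)                           # stress of the mean
integrated = sum(stress(w) * p for w, p in pdf)  # mean of the stress
```

    Because this stress function is concave, stressing the mean (0.84) overestimates the mean of the stress (0.67); the binned integration avoids exactly that bias, at the cost of one function evaluation per bin.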

  2. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
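
    For reference, the theoretical prediction mentioned here (Scotti et al., 1993) replaces the usual cube-root filter width with an equivalent width that grows with cell aspect ratio. The sketch below encodes that correction as commonly quoted in the literature; treat the exact formula as an assumption to verify against the paper:

```python
import math

# Equivalent anisotropic filter width for a hexahedral cell: the usual
# (dx*dy*dz)^(1/3) times an aspect-ratio-dependent correction factor.

def equivalent_width(dx, dy, dz):
    """Cube-root width times the Scotti et al. (1993) anisotropy correction."""
    d = sorted((dx, dy, dz), reverse=True)   # d[0] is the largest spacing
    a1, a2 = d[1] / d[0], d[2] / d[0]        # aspect ratios in (0, 1]
    f = math.cosh(math.sqrt((4.0 / 27.0) *
        (math.log(a1) ** 2 - math.log(a1) * math.log(a2) + math.log(a2) ** 2)))
    return (dx * dy * dz) ** (1.0 / 3.0) * f

iso = equivalent_width(1.0, 1.0, 1.0)      # isotropic cell: correction is 1
pencil = equivalent_width(1.0, 0.1, 0.1)   # pencil-like cell: width increases
```

    For the pencil-like cell the correction exceeds 1.4, consistent with the abstract's finding that the model coefficient should grow with aspect ratio.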

  3. Optimal configurations of spatial scale for grid cell firing under noise and uncertainty

    PubMed Central

    Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil

    2014-01-01

    We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
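
    The ambiguity issue can be seen in a stripped-down model: suppose each module reports position modulo its grid scale, and decoding searches for the position most consistent with all modules. This toy omits the paper's Poisson spiking and modules-of-cells structure; the scales and ranges are illustrative.

```python
# Decode a 1-D position from periodic codes at several spatial scales.
# Each "module" reports position modulo its grid scale; decoding scans
# candidate positions for the best joint fit.

scales = [30, 42, 55]  # grid scales in cm (illustrative)

def encode(x):
    """Phase of position x within each module's periodic firing pattern."""
    return [x % s for s in scales]

def decode(phases, search_range):
    """Return the candidate position minimizing total circular phase error."""
    best, best_err = None, float("inf")
    for x in range(search_range):
        err = sum(min(abs(x % s - p), s - abs(x % s - p))
                  for s, p in zip(scales, phases))
        if err < best_err:
            best, best_err = x, err
    return best

x_hat = decode(encode(412), 1000)   # decode far beyond the largest scale
```

    Because these scales share no common period below their least common multiple (2310 cm), the combined code is unambiguous far beyond the largest single scale, which is the robustness the paper quantifies under noise.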

  4. Grid-based mapping: A method for rapidly determining the spatial distributions of small features over very large areas

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas

    2017-06-01

    The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre- and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than “mapping” with points and polygons, grid-based mapping uses a “tick box” approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map.
We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, the simplicity of the approach, and the divisibility of the tasks open up the possibility of citizen science, in which the mapping of large grid-based areas could be crowdsourced.
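The tick-box workflow described above can be sketched in a few lines. The 20 km grid size and the three landform classes follow the example in the abstract; the dictionary-based data structure is an illustrative assumption, not the authors' implementation.

```python
# Grid-based mapping sketch: record presence/absence of landforms per grid square.
# Landform names and the 20 km cell size are taken from the abstract's example;
# the data structures themselves are illustrative assumptions.

LANDFORMS = ["viscous_flow_features", "latitude_dependent_mantle", "polygonised_ground"]

def make_grid(x_min_km, x_max_km, y_min_km, y_max_km, cell_km=20):
    """Create an empty tick-box grid: one dict of False flags per square."""
    grid = {}
    for i in range((x_max_km - x_min_km) // cell_km):
        for j in range((y_max_km - y_min_km) // cell_km):
            grid[(i, j)] = {name: False for name in LANDFORMS}
    return grid

def tick(grid, square, landform):
    """Record that a landform was observed while examining a grid square."""
    grid[square][landform] = True

def distribution(grid, landform):
    """Squares where a landform is present -- effectively a coarse raster."""
    return {sq for sq, flags in grid.items() if flags[landform]}
```

Because each square holds only boolean flags, grids produced by different mappers (or citizen scientists) can be merged with a simple logical OR.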

  5. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

This paper extends research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on a large-scale scientific platform easier to develop, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of a platform that needs to process scientific applications based on big data. PMID:24574931
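The block-based Gauss-Jordan algorithm used as the benchmark above can be sketched as a numerical kernel. This is a generic single-machine sketch in NumPy, assuming invertible pivot blocks and no block pivoting; it is not the paper's grid-distributed middleware version.

```python
import numpy as np

def block_gauss_jordan_inverse(A, b):
    """Invert a square matrix A by Gauss-Jordan elimination on b-by-b blocks.

    Assumes every diagonal pivot block encountered is invertible (no block
    pivoting), which holds e.g. for diagonally dominant matrices. Illustrative
    sketch of the algorithm family named in the abstract.
    """
    n = A.shape[0]
    assert n % b == 0, "matrix size must be a multiple of the block size"
    q = n // b
    # Augmented system [A | I]; the right half becomes A^-1 on completion.
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(q):
        rows = slice(k * b, (k + 1) * b)
        # Scale block row k by the inverse of its pivot block.
        piv_inv = np.linalg.inv(M[rows, rows])
        M[rows, :] = piv_inv @ M[rows, :]
        # Eliminate block column k from every other block row.
        for i in range(q):
            if i == k:
                continue
            irows = slice(i * b, (i + 1) * b)
            M[irows, :] -= M[irows, rows] @ M[rows, :]
    return M[:, n:]
```

In the distributed setting each block-row update is an independent task, which is what makes the algorithm a natural workload for grid and desktop grid platforms.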

  6. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints, that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.

  7. Research on the impacts of large-scale electric vehicles integration into power grid

    NASA Astrophysics Data System (ADS)

    Su, Chuankun; Zhang, Jian

    2018-06-01

Because of their special energy supply mode, electric vehicles can improve the efficiency of energy utilization and reduce environmental pollution, and they are therefore receiving more and more attention. But the charging behavior of electric vehicles is random and intermittent. If electric vehicles charge in an uncoordinated manner on a large scale, they put great pressure on the structure and operation of the power grid and affect its safe and economic operation. With the development of V2G technology for electric vehicles, the study of the charging and discharging characteristics of electric vehicles is of great significance for improving the safe operation of the power grid and the efficiency of energy utilization.

  8. INFN, IT the GENIUS grid portal and the robot certificates to perform phylogenetic analysis on large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large scale phylogenetic analyses. The distributed environment set up in this work greatly simplifies grid access for occasional users and represents a valuable step towards widening the community of users.

  9. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

We present a new traveltime tomography algorithm that images the subsurface with grids that vary automatically with geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach for near-surface imaging. However, model regularization on a regular, even grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, whereas details along geological boundaries are difficult to resolve. Therefore, we solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within those structures for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried-basalt model with one horizontal layer. Using the variable grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, while using fewer grid cells. The field data were collected in an oil field in China. The survey was performed in an area where the subsurface structures were predominantly layered. The data set includes 476 shots with a 10 meter spacing and 1735 receivers with a 10 meter spacing. The first-arrival traveltimes of the seismograms were picked for tomography. The reciprocal errors of most shots are between 2 ms and 6 ms. The normal tomography results in fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids in small structures.
The variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.

  10. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    NASA Astrophysics Data System (ADS)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

Many aspects need to be taken into consideration when making scheduling plans for a regional grid. In this paper, a systematic multi-time scale solution for regional power grid operation, considering large scale renewable energy integration and Ultra High Voltage (UHV) power transmission, is proposed. In the time scale aspect, we discuss the problem from month, week, day-ahead, and within-day to day-behind scheduling, and the system also covers multiple generator types, including thermal units, hydro plants, wind turbines, and pumped storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results further verify the effectiveness of the system.

  11. Voltage collapse in complex power grids

    PubMed Central

    Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco

    2016-01-01

    A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are dominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284

  12. Experimental Study of Homogeneous Isotropic Slowly-Decaying Turbulence in Giant Grid-Wind Tunnel Set Up

    NASA Astrophysics Data System (ADS)

    Aliseda, Alberto; Bourgoin, Mickael; Eswirp Collaboration

    2014-11-01

We present preliminary results from a recent grid turbulence experiment conducted at the ONERA wind tunnel in Modane, France. The ESWIRP Collaboration was conceived to probe the smallest scales of a canonical turbulent flow at very high Reynolds numbers. To achieve this, the largest scales of the turbulence need to be extremely big so that, even with the large separation of scales, the smallest scales remain well above the spatial and temporal resolution of the instruments. The ONERA wind tunnel in Modane (8 m diameter test section) was chosen as offering the biggest large scales achievable in a laboratory setting. A giant inflatable grid (M = 0.8 m) was conceived to induce slowly-decaying homogeneous isotropic turbulence in a large region of the test section, with minimal structural risk. An international team of researchers collected hot wire anemometry, ultrasound anemometry, resonant cantilever anemometry, fast pitot tube anemometry, cold wire thermometry and high-speed particle tracking data of this canonical turbulent flow. While analysis of this large database, which will become publicly available over the next 2 years, has only started, the Taylor-scale Reynolds number is estimated to be between 400 and 800, with Kolmogorov scales as large as a few mm. The ESWIRP Collaboration is formed by an international team of scientists to investigate experimentally the smallest scales of turbulence. It was funded by the European Union to take advantage of the largest wind tunnel in Europe for fundamental research.

  13. Some effects of horizontal discretization on linear baroclinic and symmetric instabilities

    NASA Astrophysics Data System (ADS)

    Barham, William; Bachman, Scott; Grooms, Ian

    2018-05-01

The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to cf/√Ri, where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)⁴f/(32√Ri) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
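The quoted growth-rate bound and the recommended dissipation coefficient are simple closed forms. The sketch below evaluates them with illustrative mid-latitude numbers; the constant c = 1/2 is the large-Ri upper bound from the abstract, so in general these are order-of-magnitude guides rather than exact values.

```python
import math

def spurious_growth_rate(f, Ri, c=0.5):
    """Peak growth rate c*f/sqrt(Ri) of the spurious C-grid baroclinic
    instability. c = 0.5 is the upper bound quoted for large Richardson
    number; c is smaller at moderate Ri."""
    return c * f / math.sqrt(Ri)

def min_biharmonic_coeff(dx, f, Ri):
    """Recommended minimum biharmonic viscosity/diffusivity coefficient,
    (dx^4 * f) / (32 * sqrt(Ri)), from the abstract."""
    return dx ** 4 * f / (32.0 * math.sqrt(Ri))

# Illustrative example: a 1 km grid at mid-latitudes (f ~ 1e-4 s^-1)
# with balanced Richardson number Ri = 4.
nu4 = min_biharmonic_coeff(1.0e3, 1.0e-4, 4.0)   # m^4/s
sigma = spurious_growth_rate(1.0e-4, 4.0)        # s^-1
```

Note how the required coefficient scales with the fourth power of the grid spacing: halving Δx reduces the needed biharmonic dissipation by a factor of 16.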

  14. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. 
In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  15. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

The major concern in electric grid operation is to operate in the most economical and reliable fashion, to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, expansion of transmission line capacity must be evaluated with methods that ensure optimal electric grid operation: the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are where to add capacity, how much capacity to add, and at which voltage level. Because of electric grid deregulation, transmission line expansion is more complicated, as building new transmission lines is now open to investors whose main interest is to generate revenue. 
Adding new transmission capacity helps relieve congestion in the transmission system, creates profit for investors who rent out their transmission capacity, and provides cheaper electricity for end users. We propose a hybrid method, combining heuristic and deterministic methods, to determine new transmission line additions and increase transmission capacity. Renewable energy resources (RES) have zero operating cost, which makes them very attractive for generation companies and market participants. In addition, RES have zero carbon emissions, which helps relieve concerns about the environmental impact of electric generation resources. RES include wind, solar, hydro, biomass, and geothermal. By 2030, the expectation is that more than 30% of electricity in the U.S. will come from RES. One major contributor to RES generation will be wind energy resources (WES), and WES will be an important component of the future generation portfolio. However, WES by nature experience high intermittency and volatility. Because of the expected high WES penetration and the nature of such resources, researchers have focused on studying the effects of these resources on electric grid operation and adequacy from different aspects. Additionally, current market operations of electric grids add another complication to integrating RES, specifically WES. Mandates by market rules and long-term analysis of renewable penetration in large-scale electric grids have also been the focus of researchers in recent years. We advocate a method for studying high wind-resource penetration in large-scale electric grid operations. A PMU is a global positioning system (GPS) based device that provides immediate and precise measurements of voltage angle in a high-voltage transmission system. PMUs can update the status of a transmission line and related measurements (e.g., voltage magnitude and voltage phase angle) more frequently. 
A PMU can provide 30 samples of measurements every second, compared to traditional systems (e.g., the supervisory control and data acquisition [SCADA] system), which provide one sample every 2 to 5 seconds. Because PMUs provide more measurement data samples, they can improve electric grid reliability and observability. (Abstract shortened by UMI.)
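The sampling figures quoted above translate into a large difference in data volume, which is easy to check (the 2-5 s SCADA interval is the abstract's own number):

```python
def samples_per_minute(interval_s):
    """Number of measurement samples collected in one minute at a
    fixed sampling interval (seconds per sample)."""
    return 60 / interval_s

pmu = samples_per_minute(1 / 30)    # PMU: 30 samples per second -> 1800/min
scada_fast = samples_per_minute(2)  # SCADA, 2 s interval -> 30/min
scada_slow = samples_per_minute(5)  # SCADA, 5 s interval -> 12/min
# A PMU therefore delivers 60x to 150x more samples than SCADA.
```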

  16. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  17. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  18. Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities

    NASA Astrophysics Data System (ADS)

    Doster, F.; Celia, M. A.; Nordbotten, J. M.

    2012-12-01

Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities depends on the vertical length scale. By studying the transition between the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents. 
Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
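The hydrostatic reconstruction step described above can be sketched directly: with hydrostatic phase pressures, capillary pressure varies linearly with height inside the cell, so the saturation at each level follows from inverting the capillary pressure curve. The Brooks-Corey curve and all parameter values below are illustrative assumptions; the abstract does not specify a particular Pc(S) model, and this non-hysteretic sketch ignores the path dependence the study is actually about.

```python
# Illustrative sub-grid saturation reconstruction under the hydrostatic
# assumption. Brooks-Corey parameters (pd, lam, s_wr) are made-up values
# for demonstration, not from the study.

def brooks_corey_saturation(pc, pd=2000.0, lam=2.0, s_wr=0.1):
    """Invert a (non-hysteretic) Brooks-Corey capillary pressure curve:
    Pc = pd * Se^(-1/lam), with wetting saturation S = s_wr + (1-s_wr)*Se.
    Below the entry pressure pd the cell is fully wetting-saturated."""
    if pc <= pd:
        return 1.0
    se = (pd / pc) ** lam
    return s_wr + (1.0 - s_wr) * se

def reconstruct_profile(z_levels_m, pc_bottom, delta_rho=300.0, g=9.81):
    """Saturation at heights z above the cell bottom, given the capillary
    pressure at the bottom: hydrostatic phase pressures imply
    Pc(z) = pc_bottom + delta_rho * g * z."""
    return [brooks_corey_saturation(pc_bottom + delta_rho * g * z)
            for z in z_levels_m]
```

Averaging such a profile over the cell height gives the coarse-scale saturation, which is where the scale dependence discussed in the abstract enters.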

  19. A high throughput geocomputing system for remote sensing quantitative retrieval and a case study

    NASA Astrophysics Data System (ADS)

    Xue, Yong; Chen, Ziqiang; Xu, Hui; Ai, Jianwen; Jiang, Shuzheng; Li, Yingjie; Wang, Ying; Guang, Jie; Mei, Linlu; Jiao, Xijuan; He, Xingwei; Hou, Tingting

    2011-12-01

The quality and accuracy of remote sensing instruments have improved significantly; however, rapid processing of large-scale remote sensing data has become the bottleneck for remote sensing quantitative retrieval applications. Remote sensing quantitative retrieval is a data-intensive computation application and one of the research issues of high-throughput computation. The remote sensing quantitative retrieval Grid workflow is a high-level core component of the remote sensing Grid, used to support the modeling, reconstruction, and implementation of large-scale complex applications of remote sensing science. In this paper, we study a middleware component of the remote sensing Grid: the dynamic Grid workflow based on the remote sensing quantitative retrieval application on a Grid platform. We designed a novel architecture for the remote sensing Grid workflow. According to this architecture, we constructed the Remote Sensing Information Service Grid Node (RSSN) with Condor. We developed graphical user interface (GUI) tools to compose remote sensing processing Grid workflows, taking aerosol optical depth (AOD) retrieval as an example. The case study showed that significant improvement in system performance could be achieved with this implementation. The results also give a perspective on the potential of applying Grid workflow practices to remote sensing quantitative retrieval problems using commodity-class PCs.

  20. Grid cells form a global representation of connected environments.

    PubMed

    Carpenter, Francis; Manson, Daniel; Jeffery, Kate; Burgess, Neil; Barry, Caswell

    2015-05-04

The firing patterns of grid cells in medial entorhinal cortex (mEC) and associated brain areas form triangular arrays that tessellate the environment [1, 2] and maintain constant spatial offsets to each other between environments [3, 4]. These cells are thought to provide an efficient metric for navigation in large-scale space [5-8]. However, an accurate and universal metric requires grid cell firing patterns to uniformly cover the space to be navigated, in contrast to recent demonstrations that environmental features such as boundaries can distort [9-11] and fragment [12] grid patterns. To establish whether grid firing is determined by local environmental cues, or provides a coherent global representation, we recorded mEC grid cells in rats foraging in an environment containing two perceptually identical compartments connected via a corridor. During initial exposures to the multicompartment environment, grid firing patterns were dominated by local environmental cues, replicating between the two compartments. However, with prolonged experience, grid cell firing patterns formed a single, continuous representation that spanned both compartments. Thus, we provide the first evidence that in a complex environment, grid cell firing can form the coherent global pattern necessary for them to act as a metric capable of supporting large-scale spatial navigation. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Grid Cells Form a Global Representation of Connected Environments

    PubMed Central

    Carpenter, Francis; Manson, Daniel; Jeffery, Kate; Burgess, Neil; Barry, Caswell

    2015-01-01

The firing patterns of grid cells in medial entorhinal cortex (mEC) and associated brain areas form triangular arrays that tessellate the environment [1, 2] and maintain constant spatial offsets to each other between environments [3, 4]. These cells are thought to provide an efficient metric for navigation in large-scale space [5–8]. However, an accurate and universal metric requires grid cell firing patterns to uniformly cover the space to be navigated, in contrast to recent demonstrations that environmental features such as boundaries can distort [9–11] and fragment [12] grid patterns. To establish whether grid firing is determined by local environmental cues, or provides a coherent global representation, we recorded mEC grid cells in rats foraging in an environment containing two perceptually identical compartments connected via a corridor. During initial exposures to the multicompartment environment, grid firing patterns were dominated by local environmental cues, replicating between the two compartments. However, with prolonged experience, grid cell firing patterns formed a single, continuous representation that spanned both compartments. Thus, we provide the first evidence that in a complex environment, grid cell firing can form the coherent global pattern necessary for them to act as a metric capable of supporting large-scale spatial navigation. PMID:25913404

  2. The AgMIP GRIDded Crop Modeling Initiative (AgGRID) and the Global Gridded Crop Model Intercomparison (GGCMI)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Muller, Christoff

    2015-01-01

Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). 
Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).

  3. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE PAGES

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...

    2017-08-24

    Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
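
    The storage-dispatch problem at the heart of such an EMS can be illustrated with a minimal linear program: a battery arbitrages hourly prices subject to power and state-of-charge limits. This is a hedged sketch, not the paper's formulation; all prices and ratings below are hypothetical, and round-trip efficiency is ignored for brevity.

```python
import numpy as np
from scipy.optimize import linprog

# Hourly energy arbitrage for a battery (hypothetical numbers throughout;
# round-trip efficiency losses are ignored for brevity).
price = np.array([20.0, 10.0, 30.0, 40.0])   # $/MWh
T = len(price)
p_max, cap, soc0 = 1.0, 2.0, 1.0             # MW limit, MWh capacity, initial MWh

L = np.tril(np.ones((T, T)))                 # cumulative-sum operator
# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
c = np.concatenate([price, -price])          # minimize charging cost minus revenue
A_ub = np.block([[L, -L], [-L, L]])          # keeps 0 <= SOC <= cap at every hour
b_ub = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]     # buys in the cheap hour, sells in the two expensive ones
```

    A production EMS would layer degradation costs, reserve commitments, and multiple stacked services on top of this skeleton, which is why the paper surveys dedicated optimization methods.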

  5. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
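
    A common baseline for this kind of partitioning is the longest-processing-time (LPT) greedy heuristic: sort blocks by cell count, then repeatedly hand the largest remaining block to the least-loaded processor. The sketch below is illustrative only (it is not one of the paper's three strategies, and the block sizes are invented).

```python
import heapq

def balance_grids(cell_counts, n_procs):
    """Assign overset grid blocks to processors with the LPT heuristic:
    largest block first, always to the currently least-loaded processor."""
    loads = [(0, p) for p in range(n_procs)]      # (total cells, proc id)
    heapq.heapify(loads)
    assignment = {}
    for block, cells in sorted(enumerate(cell_counts), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(loads)
        assignment[block] = proc
        heapq.heappush(loads, (load + cells, proc))
    return assignment

blocks = [90, 70, 40, 40, 30, 20, 10]             # cells per grid block (made up)
assign = balance_grids(blocks, 3)
per_proc = [sum(c for b, c in enumerate(blocks) if assign[b] == p) for p in range(3)]
```

    Real overset-grid balancers must additionally weigh inter-grid interpolation traffic, which is what distinguishes the strategies compared in the paper.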

  6. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  8. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
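
    The aspect-ratio bookkeeping described above is easy to make concrete: build a stretched vertical grid and check the near-wall cell aspect ratio once the horizontal spacing is refined. The geometric-stretching recipe and every number below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def stretched_levels(n, z_top, r):
    """Vertical cell heights that grow geometrically by ratio r from the
    ground, normalized to sum to the domain height z_top (one common
    stretching recipe; details vary between codes)."""
    dz = r ** np.arange(n)
    return dz * z_top / dz.sum()

dz = stretched_levels(n=64, z_top=1000.0, r=1.05)   # metres per level
dx_coarse, refine = 40.0, 2                          # horizontal spacing, refinement factor
# Grid aspect ratio dx/dz at the first level; combining horizontal refinement
# with vertical stretching keeps this from growing unphysically large.
aspect_near_wall = dx_coarse / refine / dz[0]
```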

  9. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting the simple 0th-order estimate of each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
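
    The core correction can be sketched as a single Kalman update in which one coarse, footprint-average observation adjusts a vector of fine-grid estimates. This is a minimal sketch under strong simplifications (a diagonal prior covariance and a plain averaging operator), with invented soil-moisture values; it is not the authors' exact scheme.

```python
import numpy as np

def disaggregate(x_prior, var_prior, y_coarse, var_obs):
    """One Kalman-style update: adjust fine-grid soil-moisture estimates so
    they become consistent with a single coarse (footprint-average)
    measurement.  H is the averaging operator; the prior covariance is
    diagonal for simplicity."""
    n = x_prior.size
    H = np.full((1, n), 1.0 / n)         # coarse obs = mean of fine pixels
    P = np.eye(n) * var_prior
    S = H @ P @ H.T + var_obs            # innovation variance
    K = P @ H.T / S                      # Kalman gain, shape (n, 1)
    innov = y_coarse - H @ x_prior
    return x_prior + (K * innov).ravel()

x = np.array([0.10, 0.20, 0.30, 0.40])   # fine-scale prior soil moisture (toy)
x_post = disaggregate(x, var_prior=0.01, y_coarse=0.30, var_obs=1e-6)
```

    With a near-perfect observation, the posterior mean is pulled to the coarse measurement while the small-scale pattern of the prior is preserved.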

  10. Challenges and Opportunities in Modeling of the Global Atmosphere

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko

    2016-04-01

    Modeling paradigms on global scales may need to be reconsidered in order to better utilize the power of massively parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain, and exchange only a few rows of halo data with the neighbouring cores. Note that the described scenario strongly favors horizontally local discretizations. This is relatively easy to achieve in regional models. However, the spherical geometry complicates the problem. The latitude-longitude grid with local in space and explicit in time differencing has been an early choice and remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of a reasonable size. However, the polar filtering requires transpositions involving extra communications as well as more computations. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for application of spectral representation. With some variations, such techniques currently dominate in global models. Unfortunately, the horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with polar filtering is a step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago. However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop.
Due to their large scales, which are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Relaxing the hydrostatic approximation requires careful reformulation of the model dynamics and more computations and communications. The unified Non-hydrostatic Multi-scale Model (NMMB) will be briefly discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable without modifying their amplitudes. The model has been successfully tested on various scales. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models, and its computational efficiency on parallel computers is good.
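
    The pole problem described above is pure spherical geometry: on a latitude-longitude grid the physical east-west spacing is dx = R cos(lat) Δλ, so an explicit scheme's stable time step collapses with the grid near the poles. A few lines make the penalty concrete (a back-of-the-envelope sketch with illustrative values).

```python
import math

R_EARTH = 6.371e6  # mean Earth radius in metres

def zonal_spacing(lat_deg, dlon_deg):
    """Physical east-west grid spacing on a latitude-longitude grid:
    dx = R * cos(lat) * dlon.  It tends to zero toward the poles, which
    is what forces polar filtering or a tiny explicit time step."""
    return R_EARTH * math.cos(math.radians(lat_deg)) * math.radians(dlon_deg)

dx_equator = zonal_spacing(0.0, 1.0)    # roughly 111 km for a 1-degree grid
dx_polar = zonal_spacing(89.0, 1.0)     # under 2 km at the last row before the pole
cfl_penalty = dx_equator / dx_polar     # factor by which the stable step shrinks
```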

  11. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the customer level. The large number of users and high frequency of job requests in the consumer market make this challenging. Current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications due to their poor scalability and poor fault-tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture to realize a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  12. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance, implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.
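
    For readers unfamiliar with the operator: constant-coefficient hyperviscosity adds a scale-selective damping term to each prognostic equation, and the resolution dependence of its coefficient is commonly written as a power law in the grid spacing. The form below is a generic sketch; the exponent p is a tuned parameter, not a value taken from this paper.

```latex
\frac{\partial q}{\partial t} \;=\; \mathcal{R}(q) \;-\; \nu\,\nabla^{4} q,
\qquad
\nu \;=\; \nu_{0}\left(\frac{\Delta x}{\Delta x_{0}}\right)^{p}
```

    The tensor variant replaces the scalar coefficient ν with coefficients built from the eigenvalues of the local element metric tensor, so the damping adapts to anisotropic, distorted elements while reducing to the constant-coefficient form where the resolution is uniform.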

  14. Uncertainty in gridded CO2 emissions estimates

    DOE PAGES

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...

    2016-05-19

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
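
    The proxy step that dominates the uncertainty budget is simple to state: a national total is spread across grid cells in proportion to a proxy such as population. A toy sketch (all numbers invented, and far simpler than the actual inventory workflow):

```python
import numpy as np

def downscale(national_total, pop_density, cell_area):
    """Spread a national emissions total across grid cells in proportion to
    a proxy (here population density x cell area).  How well this proxy
    tracks real emissions is the main uncertainty in most cells."""
    weights = pop_density * cell_area
    return national_total * weights / weights.sum()

pop = np.array([[0.0, 5.0], [20.0, 75.0]])   # people per km^2 (toy values)
area = np.ones_like(pop)                      # km^2 per cell
emis = downscale(100.0, pop, area)            # e.g. Mt CO2 per cell
```

    The design choice to flag is that the proxy conserves the national total exactly; errors only redistribute emissions among cells, which is why uncertainty grows as the cells shrink.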

  15. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  16. Dynamics of flows, fluctuations, and global instability under electrode biasing in a linear plasma device

    NASA Astrophysics Data System (ADS)

    Desjardins, T. R.; Gilmore, M.

    2016-05-01

    Grid biasing is utilized in a large-scale helicon plasma to modify an existing instability, which is shown both experimentally and with a linear stability analysis to be a hybrid drift-Kelvin-Helmholtz mode. At low magnetic field strengths, coherent fluctuations are present, while at high magnetic field strengths, the plasma is broad-band turbulent. Grid biasing is used to drive the once-coherent fluctuations to a broad-band turbulent state, as well as to suppress them. There is a corresponding change in the flow shear. When a high positive bias (10 Te) is applied to the grid electrode, a large-scale mode (ñ/n ≈ 50%) is excited. This mode has been identified as the potential relaxation instability.

  17. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates the real physical collision event inside a particle detector. Grid is the new IT infrastructure for 21st-century science: a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  18. Power grid operation risk management: V2G deployment for sustainable development

    NASA Astrophysics Data System (ADS)

    Haddadian, Ghazale J.

    The production, transmission, and delivery of cost-efficient energy to supply ever-increasing peak loads, along with a quest for developing a low-carbon economy, require significant evolutions in power grid operations. Lower prices of vast natural gas resources in the United States, the Fukushima nuclear disaster, higher and more intense energy consumption in China and India, issues related to energy security, and recent Middle East conflicts have urged decision makers throughout the world to look into other means of generating electricity locally. As the world looks to combat climate change, a shift from carbon-based fuels to non-carbon-based fuels is inevitable. However, the variability of distributed generation assets in the electricity grid has introduced major reliability challenges for power grid operators. While spearheading sustainable and reliable power grid operations, this dissertation develops a multi-stakeholder approach to power grid operation design, aiming to address the economic, security, and environmental challenges of constrained electricity generation. It investigates the role of Electric Vehicle (EV) fleet integration, as distributed and mobile storage assets, to support high penetrations of renewable energy sources in the power grid. The vehicle-to-grid (V2G) concept is considered to demonstrate the bidirectional role of EV fleets, both as a provider and a consumer of energy, in securing sustainable power grid operation. The proposed optimization modeling is the application of Mixed-Integer Linear Programming (MILP) to large-scale systems to solve the hourly security-constrained unit commitment (SCUC), an optimal scheduling concept in the economic operation of electric power systems. The Monte Carlo scenario-based approach is utilized to evaluate different scenarios concerning the uncertainties in the operation of the power grid system.
Further, in order to expedite the real-time solution of the proposed approach for large-scale power systems, it considers a two-stage model using Benders Decomposition (BD). The numerical simulations demonstrate that the utilization of smart EV fleets in power grid systems would ensure sustainable grid operation with lower carbon footprints, smoother integration of renewable sources, higher security, and lower power grid operation costs. The results additionally illustrate the effectiveness of the proposed MILP approach and its potential as an optimization tool for the sustainable operation of large-scale electric power systems.

  19. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
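
    As background, the classical Schwarz-Christoffel formula maps the upper half-plane conformally onto the interior of a polygon with interior angles α_k π, where the z_k are the prevertices on the real axis. It is shown here in its textbook half-plane form; the paper's mappings to disks, slits, and irregular ocean boundaries are computed numerically from generalizations of this integral.

```latex
f(z) \;=\; A + C \int^{z} \prod_{k=1}^{n} \left(\zeta - z_k\right)^{\alpha_k - 1} \, d\zeta
```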

  20. geoknife: Reproducible web-processing of large gridded datasets

    USGS Publications Warehouse

    Read, Jordan S.; Walker, Jordan I.; Appling, Alison P.; Blodgett, David L.; Read, Emily K.; Winslow, Luke A.

    2016-01-01

    Geoprocessing of large gridded data according to overlap with irregular landscape features is common to many large-scale ecological analyses. The geoknife R package was created to facilitate reproducible analyses of gridded datasets found on the U.S. Geological Survey Geo Data Portal web application or elsewhere, using a web-enabled workflow that eliminates the need to download and store large datasets that are reliably hosted on the Internet. The package provides access to several data subset and summarization algorithms that are available on remote web processing servers. Outputs from geoknife include spatial and temporal data subsets, spatially-averaged time series values filtered by user-specified areas of interest, and categorical coverage fractions for various land-use types.

  1. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, which is a numerical dynamo model using spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We then perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparing these simulations, and the role of the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
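
    The scale-similarity idea itself is compact: filter the resolved field a second time, and estimate the SGS flux from what the test filter removes, tau ~ filter(u*u) - filter(u)*filter(u). The sketch below is a periodic 1-D analogue with a spectral Gaussian filter (the spherical-harmonic filtering in the paper follows Jekeli, 1981); the field and filter width are invented for illustration.

```python
import numpy as np

def gaussian_filter_1d(u, width):
    """Gaussian test filter applied in spectral space: transfer function
    G(k) = exp(-(width*k)^2 / 24) on a periodic 1-D grid."""
    k = np.fft.fftfreq(u.size, d=1.0 / u.size)      # integer wavenumbers
    return np.fft.ifft(np.fft.fft(u) * np.exp(-(width * k) ** 2 / 24.0)).real

def similarity_sgs_flux(u, width=2.0):
    """Scale-similarity estimate of the SGS momentum flux:
    tau ~ filter(u*u) - filter(u)*filter(u)."""
    ub = gaussian_filter_1d(u, width)
    return gaussian_filter_1d(u * u, width) - ub * ub

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(17.0 * x)   # large-scale plus small-scale motion
tau = similarity_sgs_flux(u)             # nonzero where scale interaction occurs
```

    The dynamic version of the model additionally computes a coefficient for each of the four SGS terms on the fly from the resolved fields, rather than fixing it a priori.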

  2. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  3. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
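
    The basic operation a one-way vertical nest needs at its boundaries is interpolating a parent-domain column onto the refined vertical levels. The sketch below uses plain linear interpolation with invented potential-temperature values; WRF's actual coupling operators are more involved than this.

```python
import numpy as np

def nest_column(z_coarse, f_coarse, z_fine):
    """Interpolate one parent-domain column onto a refined vertical grid
    (linear interpolation as a stand-in for the model's coupling operator)."""
    return np.interp(z_fine, z_coarse, f_coarse)

z_parent = np.array([0.0, 200.0, 400.0, 800.0])          # parent levels, m
theta_parent = np.array([300.0, 301.0, 302.0, 305.0])    # potential temperature, K
z_nest = np.linspace(0.0, 800.0, 17)                     # refined nest levels
theta_nest = nest_column(z_parent, theta_parent, z_nest)
```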

  4. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  5. The cosmic web in CosmoGrid void regions

    NASA Astrophysics Data System (ADS)

    Rieder, Steven; van de Weygaert, Rien; Cautun, Marius; Beygu, Burcu; Portegies Zwart, Simon

    2016-10-01

    We study the formation and evolution of the cosmic web, using the high-resolution CosmoGrid ΛCDM simulation. In particular, we investigate the evolution of the large-scale structure around void halo groups, and compare this to observations of the VGS-31 galaxy group, which consists of three interacting galaxies inside a large void. The environment around such haloes shows a great deal of tenuous structure, with most such systems being embedded in intra-void filaments and walls. We use the NEXUS+ algorithm to detect walls and filaments in CosmoGrid, and find them to be present and detectable at every scale. The void regions embed tenuous walls, which in turn embed tenuous filaments. We hypothesize that the void galaxy group of VGS-31 formed in such an environment.

  6. Wind turbine wake interactions at field scale: An LES study of the SWiFT facility

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolei; Boomsma, Aaron; Barone, Matthew; Sotiropoulos, Fotis

    2014-06-01

    The University of Minnesota Virtual Wind Simulator (VWiS) code is employed to simulate turbine/atmosphere interactions at the Scaled Wind Farm Technology (SWiFT) facility developed by Sandia National Laboratories in Lubbock, TX, USA. The facility presently consists of three turbines, and the simulations consider the case of wind blowing from the south, such that two turbines are in the free stream and the third is in the direct wake of an upstream turbine at a separation of 5 rotor diameters. Large-eddy simulation (LES) on two successively finer grids is carried out to examine the sensitivity of the computed solutions to grid refinement. It is found that the details of the break-up of the tip vortices into small-scale turbulence structures can only be resolved on the finer grid. It is also shown that the power coefficient CP of the downwind turbine predicted on the coarse grid is somewhat higher than that obtained on the fine mesh. On the other hand, the rms (root-mean-square) of the CP fluctuations is nearly the same on both grids, although more small-scale turbulence structures are resolved upwind of the downwind turbine on the finer grid.

  7. Sensitivity simulations of superparameterised convection in a general circulation model

    NASA Astrophysics Data System (ADS)

    Rybka, Harald; Tost, Holger

    2015-04-01

    Cloud Resolving Models (CRMs) covering a horizontal grid spacing from a few hundred meters up to a few kilometers have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically represent cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of small-scale physical processes interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach to connect these two types of models, which act on different scales, has been developed to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainty regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the superparameterised EMAC (SP-EMAC), modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and type is examined by changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle. 
Furthermore, the subgrid variability (individual CRM cell output) is analysed in order to illustrate the importance of a highly varying atmospheric structure inside a single GCM grid box. Finally, the convective transport of radon is examined by comparing different transport procedures and their influence on the vertical tracer distribution.

  8. Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João.

    2017-04-01

    Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid point extreme precipitation scales more strongly than the percentage increase (~7% K-1) predicted by the Clausius-Clapeyron (CC) relationship, extremes for zonally and temporally averaged precipitation follow a slightly sub-CC scaling, in agreement with results from Coupled Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.
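
    The ~7% K-1 CC rate quoted above can be recovered from the Clausius-Clapeyron relation d(ln e_s)/dT = L/(R_v T^2); a minimal sketch using standard textbook constants (not values from the paper):

```python
# Fractional increase of saturation vapor pressure per kelvin,
# d(ln e_s)/dT = L / (R_v * T^2), with standard constants.
L_V = 2.5e6    # latent heat of vaporization, J/kg
R_V = 461.5    # gas constant for water vapor, J/(kg K)

def cc_rate(temp_k):
    """CC fractional rate (per K) at absolute temperature temp_k."""
    return L_V / (R_V * temp_k**2)

for temp_k in (273.15, 288.15, 303.15):
    print(f"T = {temp_k:.2f} K -> {100 * cc_rate(temp_k):.1f} % per K")
```

    At typical tropical-mean temperatures this gives roughly 6-7% per K, the benchmark against which the grid point extremes above scale "super-CC".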

  9. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    NASA Astrophysics Data System (ADS)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission is gradually becoming the main means of delivering wind power and of improving wind power availability and grid stability. However, the integration of wind farms changes the SSO (sub-synchronous oscillation) damping characteristics of the synchronous generator system. Regarding the SSO problems caused by the integration of large-scale wind farms, this paper focuses on doubly fed induction generator (DFIG) based wind farms and summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, the paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.

  10. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid spacing and time step for seismic modeling in porous media. FD operators with small grid spacing and time step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid spacing and time step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane-wave theory, and the FD coefficients were then obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
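
    As a rough illustration of the coefficient-fitting step, the sketch below derives the standard (non-optimized) staggered-grid FD coefficients from Taylor expansion; the paper's optimal coefficients additionally reduce dispersion error, which is not reproduced here:

```python
import numpy as np

def staggered_fd_coeffs(N):
    """Taylor-series coefficients c_1..c_N of a 2N-th order staggered-grid
    first-derivative stencil,
        f'(x) ~ (1/h) * sum_m c_m * [f(x + (2m-1)h/2) - f(x - (2m-1)h/2)],
    obtained by requiring sum_m c_m * (2m-1)^(2k+1) = delta_{k0}, k < N."""
    a = 2.0 * np.arange(1, N + 1) - 1.0                 # odd offsets 1, 3, 5, ...
    A = np.array([a ** (2 * k + 1) for k in range(N)])  # moment conditions
    rhs = np.zeros(N)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

# 4th-order staggered stencil: c1 = 9/8, c2 = -1/24
print(staggered_fd_coeffs(2))
```

    A variable-grid scheme applies low-order/short stencils of this kind on fine blocks and higher-order ones on coarse blocks, with interpolation at block boundaries.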

  11. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is needed more than ever. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and led to the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially virtual high-throughput screening (vHTS), is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies in result analysis, storing the results on the fly in MySQL databases, and applying molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro tests are underway for all the targets against which screening was performed. 
The current paper describes this rational drug discovery activity at large scale, especially molecular docking using the FlexX software on computational grids, to find hits against three different targets (PfGST, PfDHFR, and PvDHFR, wild type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.

  12. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    NASA Astrophysics Data System (ADS)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    One of the key issues for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and a modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers face difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter-perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We call the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and their uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. a perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed a good correspondence with the reported yield: Pearson's correlation coefficient is over 0.6 for all countries. 
On a grid scale, the correspondence remains high in most grids regardless of country. However, the model showed comparatively low reproducibility in sloping areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. As local climate conditions range widely in such complex terrain, the GCM grid-scale weather inputs are likely one of the major sources of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty of the crop model's parameter values associated with local crop production aspects; (2) the method can explicitly account for parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating the simulated sub-grid-scale yields to a grid-scale yield, which may be a reason for the model's high performance in capturing inter-annual variation of yield.
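
    The perturbed-parameter ensemble idea can be sketched with a toy stand-in for PRYSBI (the model, parameter names, and Gaussian posterior below are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy yield model with a heat-stress term; `base` and `heat_sens` stand in
# for the 17 crop-model parameters whose posterior PDF the study estimates.
def toy_yield(base, heat_sens, temps):
    return base - heat_sens * np.maximum(temps - 30.0, 0.0)

# Sample 1500 parameter sets from an assumed Gaussian posterior and run the
# model for each; the ensemble median is the reported yield estimate.
base = rng.normal(8.0, 0.5, 1500)        # t/ha, hypothetical
heat_sens = rng.normal(0.2, 0.05, 1500)  # t/ha per degC above 30 degC

temps = np.array([28.0, 31.0, 33.0])     # growing-season temperatures, degC
ensemble = np.array([toy_yield(b, h, temps) for b, h in zip(base, heat_sens)])
median_yield = np.median(ensemble, axis=0)
```

    The spread of `ensemble` around its median is the Monte Carlo approximation of parameter-driven yield uncertainty that point (2) above refers to.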

  13. Spectral nudging to eliminate the effects of domain position and geometry in regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan

    2004-07-01

    It is well known that regional climate simulations are sensitive to the size and position of the domain chosen for calculations. Here we study the physical mechanisms of this sensitivity. We conducted simulations with the Regional Atmospheric Modeling System (RAMS) for June 2000 over North America at 50 km horizontal resolution using a 7500 km × 5400 km grid and NCEP/NCAR reanalysis as boundary conditions. The position of the domain was displaced in several directions, always maintaining the U.S. in the interior, out of the buffer zone along the lateral boundaries. Circulation biases developed a large scale structure, organized by the Rocky Mountains, resulting from a systematic shifting of the synoptic wave trains that crossed the domain. The distortion of the large-scale circulation was produced by interaction of the modeled flow with the lateral boundaries of the nested domain and varied when the position of the grid was altered. This changed the large-scale environment among the different simulations and translated into diverse conditions for the development of the mesoscale processes that produce most of precipitation for the Great Plains in the summer season. As a consequence, precipitation results varied, sometimes greatly, among the experiments with the different grid positions. To eliminate the dependence of results on the position of the domain, we used spectral nudging of waves longer than 2500 km above the boundary layer. Moisture was not nudged at any level. This constrained the synoptic scales to follow reanalysis while allowing the model to develop the small-scale dynamics responsible for the rainfall. Nudging of the large scales successfully eliminated the variation of precipitation results when the grid was moved. We suggest that this technique is necessary for all downscaling studies with regional models with domain sizes of a few thousand kilometers and larger embedded in global models.
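
    A minimal 1-D sketch of spectral nudging, assuming a simple FFT low-pass of the model-reanalysis difference and a hypothetical relaxation coefficient (RAMS's actual implementation differs in detail):

```python
import numpy as np

def spectral_nudge(model, target, dx_km, cutoff_km=2500.0, alpha=0.1):
    """One relaxation step nudging only wavelengths longer than cutoff_km
    toward `target`; `alpha` is a hypothetical nudging strength."""
    k = np.fft.rfftfreq(model.size, d=dx_km)   # spatial frequency, cycles/km
    diff_hat = np.fft.rfft(target - model)
    diff_hat[k > 1.0 / cutoff_km] = 0.0        # drop the small scales
    return model + alpha * np.fft.irfft(diff_hat, n=model.size)

x = np.linspace(0.0, 10000.0, 512, endpoint=False)  # 10,000 km periodic domain
long_wave = np.sin(2 * np.pi * x / 5000.0)          # constrained synoptic scale
short_wave = 0.3 * np.sin(2 * np.pi * x / 500.0)    # free mesoscale

# The long wave relaxes toward the (here zero) target; the short wave is untouched.
nudged = spectral_nudge(long_wave + short_wave, np.zeros_like(x), x[1] - x[0])
```

    This is the key property exploited above: the synoptic scales follow the reanalysis while the model remains free to develop the small-scale dynamics that produce the rainfall.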

  14. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  15. The numerics of hydrostatic structured-grid coastal ocean models: State of the art and future perspectives

    NASA Astrophysics Data System (ADS)

    Klingbeil, Knut; Lemarié, Florian; Debreu, Laurent; Burchard, Hans

    2018-05-01

    The state of the art of the numerics of hydrostatic structured-grid coastal ocean models is reviewed here. First, some fundamental differences in the hydrodynamics of the coastal ocean, such as the large surface elevation variation compared to the mean water depth, are contrasted against large-scale ocean dynamics. Then the hydrodynamic equations as they are used in coastal ocean models as well as in large-scale ocean models are presented, including parameterisations for turbulent transports. As steps towards discretisation, coordinate transformations and spatial discretisations based on a finite-volume approach are discussed, with focus on the specific requirements for coastal ocean models. As in large-scale ocean models, splitting of internal and external modes is essential for coastal ocean models as well, but specific care is needed when drying & flooding of intertidal flats is included. As one obvious characteristic of coastal ocean models, open boundaries occur and need to be treated in such a way that the correct model forcing from outside is transmitted into the model domain without reflecting waves from the inside. New developments in two-way nesting are also presented. Single processes such as internal inertia-gravity waves, advection and turbulence closure models are discussed with focus on the coastal scales. An overview of existing hydrostatic structured-grid coastal ocean models is given, including their extensions towards non-hydrostatic models. Finally, an outlook on future perspectives is made.

  16. Autonomous Energy Grids | Grid Modernization | NREL

    Science.gov Websites

    Autonomous energy grids control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. Research on frameworks to monitor, control, and optimize large-scale energy systems draws on optimization theory, control theory, big data analytics, and complex system theory and modeling.

  17. Power Grid Data Analysis with R and Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin

    This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.

  18. Simulation of Boundary-Layer Cumulus and Stratocumulus Clouds using a Cloud-Resolving Model With Low- and Third-Order Turbulence Closures

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Cheng, Anning

    2007-01-01

    The effects of subgrid-scale condensation and transport become more important as the grid spacings increase from those typically used in large-eddy simulation (LES) to those typically used in cloud-resolving models (CRMs). Incorporation of these effects can be achieved by a joint probability density function approach that utilizes higher-order moments of thermodynamic and dynamic variables. This study examines how well shallow cumulus and stratocumulus clouds are simulated by two versions of a CRM that is implemented with low-order and third-order turbulence closures (LOC and TOC) when a typical CRM horizontal resolution is used and what roles the subgrid-scale and resolved-scale processes play as the horizontal grid spacing of the CRM becomes finer. Cumulus clouds were mostly produced through subgrid-scale transport processes while stratocumulus clouds were produced through both subgrid-scale and resolved-scale processes in the TOC version of the CRM when a typical CRM grid spacing is used. The LOC version of the CRM relied upon resolved-scale circulations to produce both cumulus and stratocumulus clouds, due to small subgrid-scale transports. The mean profiles of thermodynamic variables, cloud fraction and liquid water content exhibit significant differences between the two versions of the CRM, with the TOC results agreeing better with the LES than the LOC results. The characteristics, temporal evolution and mean profiles of shallow cumulus and stratocumulus clouds are weakly dependent upon the horizontal grid spacing used in the TOC CRM. However, the ratio of the subgrid-scale to resolved-scale fluxes becomes smaller as the horizontal grid spacing decreases. The subcloud-layer fluxes are mostly due to the resolved scales when a grid spacing less than or equal to 1 km is used. The overall results of the TOC simulations suggest that a 1-km grid spacing is a good choice for CRM simulation of shallow cumulus and stratocumulus.

  19. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    EPA Science Inventory

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscal...

  20. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    NASA Astrophysics Data System (ADS)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

    This paper studies the relationship between the photovoltaic power generation of a large-scale “fishing and PV complementary” grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same period of a whole year. The results indicate that PV power generation has the most significant correlation with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature, and daily average temperature. Across the months, the maximum monthly average power generation appears in August, which is related to the greater global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October, because the drop in temperature improves the efficiency of the PV panels. Contrasting the monthly average performance ratio (PR) with the monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer, with its higher temperatures. The results indicate that temperature has a great influence on the performance ratio of a large-scale grid-tied PV power system, and that it is important to adopt effective measures to properly decrease the temperature of the PV plant.
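
    For reference, the performance ratio contrasted above is conventionally defined (e.g. in IEC 61724) as final yield over reference yield; a sketch with illustrative numbers for a 100 MWp plant (not the paper's station data):

```python
def performance_ratio(energy_kwh, rated_kwp, irradiation_kwh_m2):
    """PR = final yield / reference yield (IEC 61724 convention):
    final yield     Y_f = E_out / P_rated          [kWh/kWp]
    reference yield Y_r = H_poa / G_stc, G_stc = 1 kW/m2."""
    return (energy_kwh / rated_kwp) / irradiation_kwh_m2

# Hypothetical month: 12 GWh delivered, 150 kWh/m2 in-plane irradiation.
pr = performance_ratio(energy_kwh=12.0e6,
                       rated_kwp=100_000.0,        # 100 MWp
                       irradiation_kwh_m2=150.0)
print(round(pr, 2))   # 0.8
```

    A summer PR dip at fixed irradiation, as reported above, shows up directly as lower delivered energy per unit of in-plane irradiation.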

  1. Challenges in Modeling of the Global Atmosphere

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom

    2015-04-01

    Massively parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize the power of parallel processing more productively. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain, and exchange only a few rows of halo data with the neighbouring cores. However, the described scenario implies that the discretization used in the model is horizontally local. The spherical geometry further complicates the problem. Various grid topologies will be discussed and examples will be shown. The latitude-longitude grid with local-in-space and explicit-in-time differencing was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of reasonable size. However, the polar filtering requires transpositions involving extra communications. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for a wide application of the spectral representation. With some variations, these techniques are used in most major centers. However, horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect, the lat-lon grid with a fast Fourier transform represents a significant step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago. 
However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Because their scales are comparable to those of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Bearing in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB), which is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS), will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.

  2. Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling

    NASA Astrophysics Data System (ADS)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for height grids with different cell spacings at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large-scale Topographic Base map (BGT, map scale 1:1.000), and the National height model of the Netherlands AHN2 (10 points per square meter on average). 
Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows that water barriers are better preserved with the new method. This research confirms the idea that topographical information, mainly the boundary locations and object classes, can enrich the height grid for this hydrological application.
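The class-aware coarsening described in this record can be sketched as follows (a minimal illustration, not the authors' code; the class codes and the max-over-barriers / mean-otherwise rule are assumptions about how barrier heights might be preserved):

```python
import numpy as np

def generalise(height, labels, factor, barrier_classes={1, 2}):
    """Coarsen a fine height grid while preserving water barriers.

    height, labels: 2D arrays of equal shape holding fine-resolution height
    values and object-class codes. factor: integer coarsening factor.
    barrier_classes: hypothetical codes for barrier objects (buildings, walls).
    """
    ny, nx = height.shape
    out = np.empty((ny // factor, nx // factor))
    for j in range(out.shape[0]):
        for i in range(out.shape[1]):
            h = height[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            l = labels[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            mask = np.isin(l, list(barrier_classes))
            # A coarse cell containing any barrier keeps the barrier height
            # (its maximum), so the obstruction survives generalisation;
            # otherwise the cell gets the plain average height.
            out[j, i] = h[mask].max() if mask.any() else h.mean()
    return out
```

Plain averaging would smear a thin wall below the surrounding terrain; carrying the class label through the coarsening is what keeps the barrier effective in the run-off model.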

  3. Energy Storage for the Power Grid

    ScienceCinema

    Imhoff, Carl; Vaishnav, Dave; Wang, Wei

    2018-05-30

    The iron vanadium redox flow battery was developed by researchers at Pacific Northwest National Laboratory as a solution to large-scale energy storage for the power grid. This technology provides the energy industry and the nation with a reliable, stable, safe, and low-cost storage alternative for a cleaner, efficient energy future.

  4. Research of the application of the Low Power Wide Area Network in power grid

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Sui, Hong; Li, Jia; Yao, Jian

    2018-03-01

Low Power Wide Area Network (LPWAN) technologies have developed rapidly in recent years, but they have not yet seen large-scale application across the different application scenarios of the power grid. LoRa is a mainstream LPWAN technology. This paper presents a comparison test of the signal coverage of LoRa and other traditional wireless communication technologies in typical signal environments of the power grid. Based on the test results, this paper gives suggestions for the application of LoRa in power grid services, which can guide the planning and construction of LPWANs in the power grid.

  5. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    NASA Astrophysics Data System (ADS)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation, and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at isolated two-dimensional and three-dimensional topographies, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected, relative to traditional solvers.

  6. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction, and implementation of large-scale complicated applications of remote sensing science. The validation of workflows is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize the potential waste of time and resources. To study the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data-source consistency error validation and parameter matching error validation.

  7. DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, Marty, A

The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. There were broadly two problems addressed by this project. First, there was no Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing, and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in OGSA. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, allowing a Grid user to centralize (where appropriate) and manage his/her policies. The corresponding organizational service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.

  8. Merging Station Observations with Large-Scale Gridded Data to Improve Hydrological Predictions over Chile

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Verbist, K. M. J.

    2016-12-01

Hydrological predictions at regional-to-global scales are often hampered by the lack of meteorological forcing data. The use of large-scale gridded meteorological data can overcome this limitation, but these data are subject to regional biases and unrealistic values at the local scale. This is especially challenging in regions such as Chile, where the climate exhibits high spatial heterogeneity as a result of the country's long latitudinal span and dramatic elevation changes. However, regional station-based observational datasets are not fully exploited and have the potential to constrain biases and spatial patterns. This study aims at adjusting precipitation and temperature estimates from the Princeton University global meteorological forcing (PGF) gridded dataset to improve hydrological simulations over Chile, by assimilating 982 gauges from the Dirección General de Aguas (DGA). To merge station data with the gridded dataset, we use a state-space estimation method to produce optimal gridded estimates, considering both the error of the station measurements and that of the gridded PGF product. The PGF daily precipitation and maximum and minimum temperature at 0.25° spatial resolution are adjusted for the period 1979-2010. Precipitation and temperature gauges with long and continuous records (>70% temporal coverage) are selected, while the remaining stations are used for validation. Leave-one-out cross validation verifies the robustness of this data assimilation approach. The merged dataset is then used to force the Variable Infiltration Capacity (VIC) hydrological model over Chile at a daily time step, and the simulations are compared to streamflow observations. Our initial results show that the station-merged PGF precipitation effectively captures drizzle and the spatial pattern of storms. Overall, the merged dataset shows significant improvements over the original PGF, with reduced biases and stronger inter-annual variability.
The invariant spatial pattern of errors between the station data and the gridded product opens up the possibility of merging real-time satellite and intermittent gauge observations to produce more accurate real-time hydrological predictions.
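The merging idea can be illustrated with a much simpler stand-in for the study's state-space estimator: an inverse-variance weighted combination of the gridded background value with the gauges falling in the same cell (error variances here are assumed known; the actual method also accounts for spatial structure):

```python
import numpy as np

def merge_cell(grid_val, grid_var, gauge_vals, gauge_vars):
    """Inverse-variance weighted merge of one gridded background value with
    collocated station observations. Returns the merged estimate and its
    variance; a toy version of optimal (minimum-variance) estimation."""
    vals = np.concatenate(([grid_val], gauge_vals))
    w = 1.0 / np.concatenate(([grid_var], gauge_vars))  # precision weights
    est = np.sum(w * vals) / np.sum(w)
    var = 1.0 / np.sum(w)  # merged variance is always below each input's
    return est, var
```

With equal variances the result is the plain average, and the merged variance is halved, which is why adding even a few reliable gauges tightens the gridded product locally.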

  9. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small- to moderate-sized problems. However, this ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which yields a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
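The spirit of such a decomposition can be sketched with a toy version: partition the grid into blocks and solve each block independently with a greedy cover under a Chebyshev coverage radius (the paper's actual ILP formulation and heuristic are not reproduced; block size, distance metric, and the greedy rule are all assumptions):

```python
def solve_block(demands, candidates, radius):
    """Greedy set cover inside one block: repeatedly open the candidate cell
    covering the most uncovered demand cells (Chebyshev distance)."""
    uncovered = set(demands)
    opened = []
    if not candidates:
        return opened, uncovered
    while uncovered:
        best = max(candidates,
                   key=lambda c: sum(max(abs(c[0]-d[0]), abs(c[1]-d[1])) <= radius
                                     for d in uncovered))
        covered = {d for d in uncovered
                   if max(abs(best[0]-d[0]), abs(best[1]-d[1])) <= radius}
        if not covered:
            break  # remaining demand cells cannot be covered in this block
        opened.append(best)
        uncovered -= covered
    return opened, uncovered

def decompose(demands, candidates, radius, block_size):
    """Partition the grid into square blocks and solve each independently,
    mimicking the divide-and-conquer structure of a decomposition heuristic."""
    blocks = {}
    for d in demands:
        blocks.setdefault((d[0] // block_size, d[1] // block_size), []).append(d)
    opened = []
    for key, ds in blocks.items():
        cs = [c for c in candidates
              if (c[0] // block_size, c[1] // block_size) == key]
        o, _ = solve_block(ds, cs, radius)
        opened.extend(o)
    return opened
```

Solving blocks independently sacrifices coverage across block boundaries, which is exactly the kind of optimality loss the paper reports as minimal in practice.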

  10. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require the aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. These research teams are the only ones with the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  11. Unstructured grid modelling of offshore wind farm impacts on seasonally stratified shelf seas

    NASA Astrophysics Data System (ADS)

    Cazenave, Pierre William; Torres, Ricardo; Allen, J. Icarus

    2016-06-01

    Shelf seas comprise approximately 7% of the world's oceans and host enormous economic activity. Development of energy installations (e.g. Offshore Wind Farms (OWFs), tidal turbines) in response to increased demand for renewable energy requires a careful analysis of potential impacts. Recent remote sensing observations have identified kilometre-scale impacts from OWFs. Existing modelling evaluating monopile impacts has fallen into two camps: small-scale models with individually resolved turbines looking at local effects; and large-scale analyses but with sub-grid scale turbine parameterisations. This work straddles both scales through a 3D unstructured grid model (FVCOM): wind turbine monopiles in the eastern Irish Sea are explicitly described in the grid whilst the overall grid domain covers the south-western UK shelf. Localised regions of decreased velocity extend up to 250 times the monopile diameter away from the monopile. Shelf-wide, the amplitude of the M2 tidal constituent increases by up to 7%. The turbines enhance localised vertical mixing which decreases seasonal stratification. The spatial extent of this extends well beyond the turbines into the surrounding seas. With significant expansion of OWFs on continental shelves, this work highlights the importance of how OWFs may impact coastal (e.g. increased flooding risk) and offshore (e.g. stratification and nutrient cycling) areas.

  12. Satellite radar altimetry over ice. Volume 4: Users' guide for Antarctica elevation data from Seasat

    NASA Technical Reports Server (NTRS)

    Zwally, H. Jay; Major, Judith A.; Brenner, Anita C.; Bindschadler, Robert A.; Martin, Thomas V.

    1990-01-01

A gridded surface-elevation data set and a geo-referenced data base for the Seasat radar altimeter data over Greenland are described. This is a user guide to accompany the data provided to data centers and other users. The grid points are on a polar stereographic projection with a nominal spacing of 20 km. The gridded elevations are derived from the elevation data in the geo-referenced data base by a weighted fitting of a surface in the neighborhood of each grid point. The gridded elevations are useful for creating large-scale contour maps, and the geo-referenced data base is useful for regridding, creating smaller-scale contour maps, and examining individual elevation measurements in specific geographic areas. Tape formats are described, and a FORTRAN program for reading the data tape is listed and provided on the tape.
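The weighted neighbourhood fit can be illustrated schematically with a Gaussian-weighted average of nearby measurements (our simplification, not the data set's actual surface-fitting procedure; the search radius and weighting kernel are assumptions):

```python
import math

def grid_elevation(grid_x, grid_y, points, search_radius=20.0, sigma=10.0):
    """Gaussian-weighted average of elevation measurements near one grid
    node (coordinates in km on the polar stereographic projection).
    points: iterable of (x, y, elevation). Returns None if no measurement
    falls within the search radius of the node."""
    wsum = hsum = 0.0
    for x, y, h in points:
        r2 = (x - grid_x) ** 2 + (y - grid_y) ** 2
        if r2 <= search_radius ** 2:
            w = math.exp(-r2 / (2.0 * sigma ** 2))  # closer points weigh more
            wsum += w
            hsum += w * h
    return hsum / wsum if wsum > 0.0 else None
```

A true surface fit (e.g. a weighted plane or biquadratic) would also capture local slope; the weighted mean above shows only the neighbourhood-and-weights structure of the computation.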

  13. Local Fitting of the Kohn-Sham Density in a Gaussian and Plane Waves Scheme for Large-Scale Density Functional Theory Simulations.

    PubMed

    Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg

    2017-05-09

A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost of the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems, depending on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single-point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, and intramolecular and intermolecular structure parameters are well reproduced. LRIGPW also yields high-quality results for extended condensed-phase systems such as liquid water, ice XV, and molecular crystals.
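The pair-density expansion at the heart of LRI can be written schematically as follows (a generic statement of the resolution-of-the-identity ansatz; the notation is ours, not taken from the paper):

```latex
\rho_{AB}(\mathbf{r})
  = \sum_{\mu \in A,\, \nu \in B} P_{\mu\nu}\, \chi_\mu(\mathbf{r})\, \chi_\nu(\mathbf{r})
  \;\approx\; \sum_{i \in A} a_i\, \phi_i(\mathbf{r})
            + \sum_{j \in B} b_j\, \phi_j(\mathbf{r})
```

where the fit coefficients $a_i$, $b_j$ are determined per atom pair by minimizing the fitting error. The total density then becomes a sum of one-center contributions, which is what makes the collocation on real-space grids cheap.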

  14. Compounded effects of heat waves and droughts over the Western Electricity Grid: spatio-temporal scales of impacts and predictability toward mitigation and adaptation.

    NASA Astrophysics Data System (ADS)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.; Xie, Y.; Wu, D.; Nguyen, T. B.; Fu, T.; Zhou, T.

    2016-12-01

Heat waves and droughts are projected to become more frequent and intense. We have seen in the past the effects of each of these extreme climate events on electricity demand and constrained electricity generation, challenging power system operations. Our aim here is to understand their compounding effects under historical conditions. We present a benchmark of Western US grid performance under 55 years of historical climate, including droughts, using 2010 levels of water demand and water management infrastructure and 2010 levels of electricity grid infrastructure and operations. We leverage CMIP5 historical hydrology simulations and force a large-scale river-routing and reservoir model with 2010-level sectoral water demands. The regulated flow at each water-dependent generating plant is processed to adjust the water-dependent electricity generation parameterization in a production cost model that represents 2010-level power system operations with hourly energy demand of 2010. The resulting benchmark includes a risk distribution of several grid performance metrics (unserved energy, production cost, carbon emissions) as a function of inter-annual variability in regional water availability and predictability using large-scale climate oscillations. In the second part of the presentation, we describe an approach to map historical heat waves onto this benchmark grid performance using a building energy demand model. The impact of the heat waves, combined with the impact of droughts, is explored at multiple scales to understand the compounding effects. Vulnerabilities of the power generation and transmission systems are highlighted to guide future adaptation.

  15. A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert

    1996-01-01

The objective of this work was to enhance the predictive capability of widely used computational fluid dynamics (CFD) codes through the use of solution-adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive-grid feature detection algorithm be able to recognize flow structures of different types as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack, exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers, and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach 2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
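A generic gradient-and-curvature weight function of the kind used for feature detection might look like the following (our illustrative form with per-block normalisation; the paper's actual weight function is not reproduced, and the coefficients are assumptions):

```python
import numpy as np

def weight_function(q, alpha=1.0, beta=1.0):
    """Adaption weight on a 1D line of a flow quantity q: large where the
    first derivative (shocks, shear layers) or second derivative (expansion
    regions) is large. Normalising by each term's maximum is one simple way
    to make the weight comparable across blocks of differing intensity."""
    g = np.abs(np.gradient(q))                # first-derivative sensor
    c = np.abs(np.gradient(np.gradient(q)))   # curvature sensor
    w = 1.0 + alpha * g / (g.max() + 1e-30) + beta * c / (c.max() + 1e-30)
    return w
```

A mesh redistribution step would then equidistribute the integral of `w` along grid lines, clustering points where the weight is large.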

  16. Iron-Air Rechargeable Battery: A Robust and Inexpensive Iron-Air Rechargeable Battery for Grid-Scale Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-10-01

GRIDS Project: USC is developing an iron-air rechargeable battery for large-scale energy storage that could help integrate renewable energy sources into the electric grid. Iron-air batteries have the potential to store large amounts of energy at low cost: iron is inexpensive and abundant, while oxygen is freely obtained from the air we breathe. However, current iron-air battery technologies have suffered from low efficiency and short life spans. USC is working to dramatically increase the efficiency of the battery by placing chemical additives on the battery's iron-based electrode and restructuring the catalysts at the molecular level on the battery's air-based electrode. This can help the battery resist degradation and increase its life span. The goal of the project is to develop a prototype iron-air battery at a cost significantly lower than today's best commercial batteries.

  17. Demonstration of Essential Reliability Services by a 300-MW Solar Photovoltaic Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loutan, Clyde; Klauer, Peter; Chowdhury, Sirajul

The California Independent System Operator (CAISO), First Solar, and the National Renewable Energy Laboratory (NREL) conducted a demonstration project on a large utility-scale photovoltaic (PV) power plant in California to test its ability to provide essential ancillary services to the electric grid. With increasing shares of solar- and wind-generated energy on the electric grid, traditional generation resources equipped with automatic governor control (AGC) and automatic voltage regulation controls -- specifically, fossil thermal -- are being displaced. The deployment of utility-scale, grid-friendly PV power plants that incorporate advanced capabilities to support grid stability and reliability is essential for the large-scale integration of PV generation into the electric power grid, among other technical requirements. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, PV power plants can be used to mitigate the impact of variability on the grid, a role typically reserved for conventional generators. In August 2016, testing was completed on First Solar's 300-MW PV power plant, and a large amount of test data was produced and analyzed demonstrating the ability of PV power plants to use grid-friendly controls to provide essential reliability services. These data showed how the development of advanced power controls can enable PV to become a provider of a wide range of grid services, from spinning reserves, load following, voltage support, ramping, frequency response, and variability smoothing to frequency regulation and power quality. Specifically, the tests conducted included various forms of active power control such as AGC and frequency regulation; droop response; and reactive power, voltage, and power factor controls.
This project demonstrated that advanced power electronics and solar generation can be controlled to contribute to system-wide reliability. It was shown that the First Solar plant can provide essential reliability services related to different forms of active and reactive power control, including plant participation in AGC, primary frequency control, ramp rate control, and voltage regulation. For AGC participation in particular, by comparing the PV plant testing results to the typical performance of individual conventional technologies, we showed that regulation accuracy by the PV plant is 24-30 points better than that of fast gas turbine technologies. The plant's ability to provide volt-ampere reactive control during periods of extremely low power generation was demonstrated as well. The project team developed a pioneering demonstration concept and test plan to show how various types of active and reactive power controls can elevate PV generation from a simple variable energy resource to a resource that provides a wide range of ancillary services. With this project's approach of a holistic demonstration on an actual, large, utility-scale, operational PV power plant and dissemination of the obtained results, the team sought to close some gaps in perspectives among various stakeholders in California and nationwide by providing real test data.

  18. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
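The ghost-cell exchange at the heart of such a library can be illustrated in serial form (a NumPy sketch of what a parallel code does across MPI ranks; Schnek's actual C++ API is not shown, and the periodic 1D decomposition is an assumption for illustration):

```python
import numpy as np

def exchange_ghost_cells(field, ghosts=1):
    """Fill the ghost layers of a periodically decomposed 1D field.

    field: list of per-subdomain arrays, each padded with `ghosts` cells on
    both ends. Each subdomain receives its neighbour's edge interior cells
    into its own ghost cells, exactly as an MPI halo exchange would."""
    n = len(field)
    for i, f in enumerate(field):
        left = field[(i - 1) % n]
        right = field[(i + 1) % n]
        f[:ghosts] = left[-2 * ghosts:-ghosts]   # left neighbour's interior edge
        f[-ghosts:] = right[ghosts:2 * ghosts]   # right neighbour's interior edge
    return field
```

After the exchange, each subdomain can apply a finite-difference stencil up to its interior boundary without any further communication, which is the point of keeping ghost layers.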

  19. Information Power Grid (IPG) Tutorial 2003

    NASA Technical Reports Server (NTRS)

    Meyers, George

    2003-01-01

For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing resources (unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. The IPG provides tools to enable the building of frameworks for applications, and provides value-added services to the NASA user base for utilizing resources on the grid in new and more efficient ways.

  20. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-12-01

ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services, and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  1. Observing the Cosmic Microwave Background Polarization with Variable-delay Polarization Modulators for the Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; CLASS Collaboration

    2018-01-01

The search for inflationary primordial gravitational waves and the optical depth to reionization, both through their imprint on the large angular scale correlations in the polarization of the cosmic microwave background (CMB), has created the need for high-sensitivity measurements of polarization across large fractions of the sky at millimeter wavelengths. These measurements are subject to instrumental and atmospheric 1/f noise, which has motivated the development of polarization modulators to facilitate the rejection of these large systematic effects. Variable-delay polarization modulators (VPMs) are used in the Cosmology Large Angular Scale Surveyor (CLASS) telescopes as the first element in the optical chain to rapidly modulate the incoming polarization. VPMs consist of a linearly polarizing wire grid in front of a moveable flat mirror; varying the distance between the grid and the mirror produces a changing phase shift between polarization states parallel and perpendicular to the grid, which modulates Stokes U (linear polarization at 45°) and Stokes V (circular polarization). The reflective and scalable nature of the VPM enables its placement as the first optical element in a reflecting telescope. This simultaneously allows a lock-in style polarization measurement and the separation of sky polarization from any instrumental polarization farther along in the optical chain. The Q-band CLASS VPM was the first VPM to begin observing the CMB full time, in 2016. I will present its design and characterization as well as demonstrate how modulating polarization significantly rejects atmospheric and instrumental long-time-scale noise.
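The modulation principle can be sketched with an idealized transfer relation (our simplification under a normal-incidence, lossless-grid assumption; the exact CLASS VPM transfer functions include wire-grid non-idealities and sign conventions vary): a grid-mirror separation $d$ introduces a phase $\phi$ between the polarization components parallel and perpendicular to the wires,

```latex
\phi(d) \;\approx\; \frac{4\pi d \cos\theta}{\lambda}, \qquad
\begin{pmatrix} U_{\mathrm{out}} \\ V_{\mathrm{out}} \end{pmatrix}
=
\begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} U_{\mathrm{in}} \\ V_{\mathrm{in}} \end{pmatrix}
```

so sweeping $d$ rotates power between Stokes $U$ and $V$ while leaving $Q$ unaffected, which is what enables the lock-in style measurement described above.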

  2. Dynamic subfilter-scale stress model for large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Rouhi, A.; Piomelli, U.; Geurts, B. J.

    2016-08-01

We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.

  3. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influences of rock properties on characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step, which incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper developed a staggered-grid finite-difference scheme with variable grid-size and time-step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
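    The staggered-grid machinery underlying such schemes can be illustrated in a much simpler setting. The sketch below advances a 1-D acoustic velocity-stress system on a uniform staggered grid; it is our own minimal illustration, not the authors' poroelastic variable-grid algorithm, and all names (`step`, `simulate`, `rho`, `kappa`) are ours.

```python
import math

def step(v, s, dt, dx, rho, kappa):
    """One 2nd-order leapfrog update of the 1-D velocity-stress system.
    Stress s lives on integer nodes, velocity v on the interleaved half nodes."""
    for i in range(1, len(v)):            # dv/dt = (1/rho) ds/dx
        v[i] += dt / (rho * dx) * (s[i] - s[i - 1])
    for i in range(len(s) - 1):           # ds/dt = kappa dv/dx
        s[i] += dt * kappa / dx * (v[i + 1] - v[i])
    return v, s

def simulate(n=200, steps=150):
    """Propagate a Gaussian stress pulse; it splits into two traveling waves."""
    dx = 1.0
    rho, kappa = 1.0, 1.0                 # wave speed c = sqrt(kappa/rho) = 1
    dt = 0.5 * dx                         # satisfies the CFL limit dt <= dx / c
    v = [0.0] * n
    s = [math.exp(-0.05 * (i - n // 2) ** 2) for i in range(n)]
    for _ in range(steps):
        v, s = step(v, s, dt, dx, rho, kappa)
    return v, s
```

In a variable-grid extension, the loop bounds and `dx` would change across regions, with interpolation supplying the missing neighbor values at the interface, which is the transition the paper handles with variable coefficients.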

  4. Studies of Shock Wave Interactions with Homogeneous and Isotropic Turbulence

    NASA Technical Reports Server (NTRS)

    Briassulis, G.; Agui, J.; Watkins, C. B.; Andreopoulos, Y.

    1998-01-01

    A nearly homogeneous nearly isotropic compressible turbulent flow interacting with a normal shock wave has been studied experimentally in a large shock tube facility. Spatial resolution of the order of 8 Kolmogorov viscous length scales was achieved in the measurements of turbulence. A variety of turbulence generating grids provide a wide range of turbulence scales. Integral length scales were found to substantially decrease through the interaction with the shock wave in all investigated cases with flow Mach numbers ranging from 0.3 to 0.7 and shock Mach numbers from 1.2 to 1.6. The outcome of the interaction depends strongly on the state of compressibility of the incoming turbulence. The length scales in the lateral direction are amplified at small Mach numbers and attenuated at large Mach numbers. Even at large Mach numbers amplification of lateral length scales has been observed in the case of fine grids. In addition to the interaction with the shock the present work has documented substantial compressibility effects in the incoming homogeneous and isotropic turbulent flow. The decay of Mach number fluctuations was found to follow a power law similar to that describing the decay of incompressible isotropic turbulence. It was found that the decay coefficient and the decay exponent decrease with increasing Mach number while the virtual origin increases with increasing Mach number. A mechanism possibly responsible for these effects appears to be the inherently low growth rate of compressible shear layers emanating from the cylindrical rods of the grid.

  5. A scale-invariant cellular-automata model for distributed seismicity

    NASA Technical Reports Server (NTRS)

    Barriere, Benoit; Turcotte, Donald L.

    1991-01-01

    In the standard cellular-automata model for a fault, an element of stress is randomly added to a grid of boxes until a box has four elements; these are then redistributed to the adjacent boxes on the grid. The redistribution can result in one or more of these boxes having four or more elements, in which case further redistributions are required. On average, added elements are lost from the edges of the grid. The model is modified so that the boxes have a scale-invariant distribution of sizes. The objective is to model a scale-invariant distribution of fault sizes. When a redistribution from a box occurs, it is equivalent to a characteristic earthquake on the fault. A redistribution from a small box (a foreshock) can trigger an instability in a large box (the main shock). A redistribution from a large box always triggers many instabilities in the smaller boxes (aftershocks). The frequency-size statistics for both main shocks and aftershocks satisfy the Gutenberg-Richter relation with b = 0.835 for main shocks and b = 0.635 for aftershocks. Model foreshocks occur 28 percent of the time.
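    The standard uniform-box version of this automaton takes only a few lines; the sketch below implements that baseline (not the paper's scale-invariant box-size modification), with illustrative names.

```python
import random

def topple(grid, n):
    """Redistribute from every box holding >= 4 elements until the grid is stable.
    Each unstable box sends one element to each of its four neighbors; elements
    pushed past the edge are lost. Returns the number of redistribution events."""
    events = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    events += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < n and 0 <= nj < n:
                            grid[ni][nj] += 1
    return events

def run(n=10, additions=2000, seed=1):
    """Drive the automaton by random loading; collect avalanche sizes."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(additions):
        grid[random.randrange(n)][random.randrange(n)] += 1
        e = topple(grid, n)
        if e:
            sizes.append(e)
    return sizes
```

The avalanche-size list collected here is what a frequency-size (Gutenberg-Richter) fit would be run against; the paper's modification replaces the uniform boxes with a scale-invariant size distribution before doing that fit.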

  6. OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Chan, William M.

    2012-01-01

    Structured grid solvers such as NASA's OVERFLOW compressible Navier-Stokes flow solver can generate large data files that contain convergence histories for flow equation residuals, turbulence model equation residuals, component forces and moments, and component relative motion dynamics variables. Most of today's large-scale problems can extend to hundreds of grids and over 100 million grid points. However, due to the lack of efficient tools, only a small fraction of the information contained in these files is analyzed. OVERSMART (OVERFLOW Solution Monitoring And Reporting Tool) provides a comprehensive report of solution convergence of flow computations over large, complex grid systems. It produces a one-page executive summary of the behavior of flow equation residuals, turbulence model equation residuals, and component forces and moments. Under the automatic option, a matrix of commonly viewed plots such as residual histograms, composite residuals, sub-iteration bar graphs, and component forces and moments is automatically generated. Specific plots required by the user can also be prescribed via a command file or a graphical user interface. Output is directed to the user's computer screen and/or to an html file for archival purposes. The current implementation has been targeted for the OVERFLOW flow solver, which is used to obtain a flow solution on structured overset grids. The OVERSMART framework allows easy extension to other flow solvers.

  7. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. 
Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow parameters, such as the log-linear weights or the type of simulation path, to be adapted at the various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on the grid size in the original algorithm to a linear relationship, as each neighborhood search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias into the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing the variogram, the histogram, and the spatial trend of the underlying data.
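    Log-linear pooling itself is a compact computation: each data source's conditional distribution enters as a factor raised to its weight, and the product is renormalized. A generic sketch (our illustration, not the authors' implementation):

```python
import math

def log_linear_pool(dists, weights):
    """Pool discrete distributions p_k(x) as prod_k p_k(x)^{w_k}, renormalized.
    dists: list of equal-length probability lists; weights: one weight per source."""
    n = len(dists[0])
    pooled = []
    for i in range(n):
        # sum of w_k * log p_k(i); the floor avoids log(0) for zero-probability bins
        log_p = sum(w * math.log(max(p[i], 1e-300)) for p, w in zip(dists, weights))
        pooled.append(math.exp(log_p))
    z = sum(pooled)
    return [p / z for p in pooled]
```

With weights (1, 0) the pool reduces to the first source; intermediate weights trade off the influence of each dataset, which is the flexibility the multi-grid path exploits by adapting the weights at each scale.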

  8. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. The structures and characteristics of existing formulations are summarized after classifying them according to their objective functions and constraints. Finally, the issues still to be solved and possible directions of future research and development are discussed, so as to adapt to the requirements of the electricity market, energy-saving generation dispatch, and the smart grid, and to provide a reference for researchers and practitioners in this field.

  9. Research on large-scale wind farm modeling

    NASA Astrophysics Data System (ADS)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating properties of wind energy, a large-scale wind farm connected to the grid affects the power system quite differently from a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. An effective wind turbine generator (WTG) model must be established first. As the doubly-fed VSCF wind turbine is currently the mainstream design, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed model-building process. It then surveys common wind farm modeling methods and points out the problems encountered. Since WAMS is widely used in the power system, online parameter identification of the wind farm model based on the measured output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  10. Large temporal scale and capacity subsurface bulk energy storage with CO2

    NASA Astrophysics Data System (ADS)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  11. Development of a Regional Structured and Unstructured Grid Methodology for Chemically Reactive Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Stefanski, Douglas Lawrence

    A finite volume method for solving the Reynolds Averaged Navier-Stokes (RANS) equations on unstructured hybrid grids is presented. Capabilities for handling arbitrary mixtures of reactive gas species within the unstructured framework are developed. The modeling of turbulent effects is carried out via the 1998 Wilcox k-ω model. This unstructured solver is incorporated within VULCAN -- a multi-block structured grid code -- as part of a novel patching procedure in which non-matching interfaces between structured blocks are replaced by transitional unstructured grids. This approach provides a fully-conservative alternative to VULCAN's non-conservative patching methods for handling such interfaces. In addition, the further development of the standalone unstructured solver toward large-eddy simulation (LES) applications is also carried out. Dual time-stepping using a Crank-Nicolson formulation is added to recover time-accuracy, and modeling of sub-grid scale effects is incorporated to provide higher fidelity LES solutions for turbulent flows. A switch based on the work of Ducros et al. is implemented to transition from a monotonicity-preserving flux scheme near shocks to a central-difference method in vorticity-dominated regions in order to better resolve small-scale turbulent structures. The updated unstructured solver is used to carry out large-eddy simulations of a supersonic constrained mixing layer.

  12. On the wavelet optimized finite difference method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1994-01-01

    When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
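    The core idea, that a large detail coefficient marks where extra grid points are needed, can be sketched with the simplest Haar-like detail estimate. This is our own illustration of the principle, not Jameson's method; names are ours.

```python
def refine(x, f, threshold):
    """Insert a midpoint wherever the local Haar-like detail |f[k+1]-f[k]|/2
    exceeds threshold, mimicking wavelet-coefficient-driven grid refinement."""
    new_x = []
    for k in range(len(x) - 1):
        new_x.append(x[k])
        if abs(f[k + 1] - f[k]) / 2.0 > threshold:
            new_x.append(0.5 * (x[k] + x[k + 1]))
    new_x.append(x[-1])
    return new_x

# Example: a step function gets an extra point only near the jump.
xs = [i * 0.1 for i in range(11)]          # uniform grid on [0, 1]
fs = [0.0 if x < 0.5 else 1.0 for x in xs]
fine = refine(xs, fs, threshold=0.1)
```

A smooth region produces no new points, so the refined grid concentrates resolution exactly where small-scale structure lives, which is the multiresolution behavior the paper carries over to finite differences.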

  13. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales.
Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  14. Small vulnerable sets determine large network cascades in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  15. Small vulnerable sets determine large network cascades in power grids

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-11-17

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.
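    A toy load-redistribution cascade conveys the flavor of such analyses, although it is far simpler than the realistic power-grid model the authors use; every name below is illustrative.

```python
def simulate_cascade(adj, load, capacity, initial_failure):
    """Toy cascade: a failed node's load is split equally among its surviving
    neighbors; any neighbor pushed past its capacity fails in the next round."""
    load = dict(load)                     # work on a copy of the load map
    failed = {initial_failure}
    frontier = [initial_failure]
    while frontier:
        next_round = []
        for u in frontier:
            alive = [v for v in adj[u] if v not in failed]
            if alive:
                share = load[u] / len(alive)
                for v in alive:
                    load[v] += share
            load[u] = 0.0
            for v in alive:
                if load[v] > capacity[v] and v not in failed:
                    failed.add(v)
                    next_round.append(v)
        frontier = next_round
    return failed
```

On a three-node chain, whether the initial failure stays contained depends entirely on the spare capacity of the neighbors, a crude analogue of the small vulnerable set described above.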

  16. LES-based generation of high-frequency fluctuation in wind turbulence obtained by meteorological model

    NASA Astrophysics Data System (ADS)

    Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao

    2017-11-01

    The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is significant for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoon or tornado. However, a fine-grid micro-scale LES cannot be connected to a coarse-grid meso-scale WRF directly. In LES, when the grid is suddenly refined at an interface of nested grids normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be correctly predicted, yet the velocity fluctuations produced by a meteorological model vary diffusively, lacking high-frequency components because of its numerical filtering. In order to predict the peak value of wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by a meteorological model.

  17. The construction of power grid operation index system considering the risk of maintenance

    NASA Astrophysics Data System (ADS)

    Tang, Jihong; Wang, Canlin; Jiang, Xinfan; Ye, Jianhui; Pan, Feilai

    2018-02-01

    In recent years, large-scale blackouts at home and abroad have caused widespread concern about grid operation, and maintenance risk is an important indicator of grid safety. During power grid overhaul, circuit breakers are switched to isolate equipment, and different isolation operations change the power flow in significant ways, thus affecting the safe operation of the system. Most existing grid operating-status evaluation index systems do not consider maintenance risk. To this end, this paper builds a power grid operation index system that accounts for maintenance risk from four angles: security, economy, quality, and cleanliness.

  18. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in the small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.
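    The bookkeeping beneath such analyses, binning spectral energy into wavenumber shells, takes only a few lines of NumPy. A generic 2-D sketch (not the authors' transfer-function code; names are ours):

```python
import numpy as np

def shell_spectrum(u, v):
    """Bin the kinetic energy of a 2-D velocity field (u, v) into integer
    wavenumber shells |k| = 0, 1, 2, ..."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / u.size           # normalized so Parseval gives mean(u^2)
    vh = np.fft.fft2(v) / v.size
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers 0..n/2-1, -n/2..-1
    kx, ky = np.meshgrid(k, k, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)
    return np.bincount(shell.ravel(), weights=e.ravel())
```

Summing the shells recovers the total kinetic energy; shell-to-shell transfer functions are then built from triad interactions between fields restricted to such shells.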

  19. Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.

    Downscaling of water withdrawals from regional/national to local scale is a fundamental step and also a common problem when integrating large scale economic and integrated assessment models with high-resolution detailed sectoral models. Tethys, an open-access software written in Python, is developed with statistical downscaling algorithms, to spatially and temporally downscale water withdrawal data to a finer scale. The spatial resolution will be downscaled from region/basin scale to grid (0.5 geographic degree) scale and the temporal resolution will be downscaled from year to month. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).

  20. Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals

    DOE PAGES

    Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.; ...

    2018-02-09

    Downscaling of water withdrawals from regional/national to local scale is a fundamental step and also a common problem when integrating large scale economic and integrated assessment models with high-resolution detailed sectoral models. Tethys, an open-access software written in Python, is developed with statistical downscaling algorithms, to spatially and temporally downscale water withdrawal data to a finer scale. The spatial resolution will be downscaled from region/basin scale to grid (0.5 geographic degree) scale and the temporal resolution will be downscaled from year to month. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).
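    The proportional logic of such downscaling fits in a few lines. The sketch below is generic, not Tethys's actual algorithm or API; the cell weights (e.g., population) and monthly profile are placeholders:

```python
def downscale(region_total, cell_weights, monthly_profile):
    """Spatially allot an annual regional withdrawal to grid cells in proportion
    to cell_weights, then split each cell's value over the months by a profile.
    Returns one monthly series per cell."""
    w_sum = sum(cell_weights)
    m_sum = sum(monthly_profile)
    cells = [region_total * w / w_sum for w in cell_weights]
    return [[c * m / m_sum for m in monthly_profile] for c in cells]
```

By construction the cell and monthly values sum back to the regional total, the conservation property any statistical downscaling scheme must preserve.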

  1. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will become more decentralized, with, for instance, wind parks connected to the grid. Furthermore, future large-scale production units are expected to be installed near coastal regions. This creates potential grid issues, such as transmitting large amounts of power from west to east to consumers and maintaining grid stability. High-temperature superconductors (HTS) can help solve these grid problems. The advantages of integrating HTS components at Extra High Voltage (EHV) and High Voltage (HV) levels are numerous: more power with lower losses and emissions, intrinsic fault-current-limiting capability, better control of power flow, reduced footprint, etc. Today's main obstacle is the relatively high price of HTS. Nevertheless, as the price goes down, initial market penetration for several HTS components (e.g., cables and fault current limiters) is expected by the year 2015. In this paper we present a design of an intrinsically compensated EHV HTS cable for future grid integration and discuss the parameters of such a cable that provide optimal power transmission in the future network.

  2. THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS

    PubMed Central

    Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel

    2010-01-01

    Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618

  3. Supercapacitors specialities - Technology review

    NASA Astrophysics Data System (ADS)

    Münchgesang, Wolfram; Meisner, Patrick; Yushin, Gleb

    2014-06-01

    Commercial electrochemical capacitors (supercapacitors) are not limited to mobile electronics anymore, but have reached the field of large-scale applications, like smart grid, wind turbines, power for large scale ground, water and aerial transportation, energy-efficient industrial equipment and others. This review gives a short overview of the current state-of-the-art of electrochemical capacitors, their commercial applications and the impact of technological development on performance.

  4. Supercapacitors specialities - Technology review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Münchgesang, Wolfram; Meisner, Patrick; Yushin, Gleb

    2014-06-16

    Commercial electrochemical capacitors (supercapacitors) are not limited to mobile electronics anymore, but have reached the field of large-scale applications, like smart grid, wind turbines, power for large scale ground, water and aerial transportation, energy-efficient industrial equipment and others. This review gives a short overview of the current state-of-the-art of electrochemical capacitors, their commercial applications and the impact of technological development on performance.

  5. New Markets for Solar Photovoltaic Power Systems

    NASA Astrophysics Data System (ADS)

    Thomas, Chacko; Jennings, Philip; Singh, Dilawar

    2007-10-01

    Over the past five years solar photovoltaic (PV) power supply systems have matured and are now being deployed on a much larger scale. The traditional small-scale remote area power supply systems are still important and village electrification is also a large and growing market but large scale, grid-connected systems and building integrated systems are now being deployed in many countries. This growth has been aided by imaginative government policies in several countries and the overall result is a growth rate of over 40% per annum in the sales of PV systems. Optimistic forecasts are being made about the future of PV power as a major source of sustainable energy. Plans are now being formulated by the IEA for very large-scale PV installations of more than 100 MW peak output. The Australian Government has announced a subsidy for a large solar photovoltaic power station of 154 MW in Victoria, based on the concentrator technology developed in Australia. In Western Australia a proposal has been submitted to the State Government for a 2 MW photovoltaic power system to provide fringe of grid support at Perenjori. This paper outlines the technologies, designs, management and policies that underpin these exciting developments in solar PV power.

  6. Linking Satellite-Derived Fire Counts to Satellite-Derived Weather Data in Fire Prediction Models to Forecast Extreme Fires in Siberia

    NASA Astrophysics Data System (ADS)

    Westberg, David; Soja, Amber; Stackhouse, Paul, Jr.

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Boreal systems contain the largest pool of terrestrial carbon, and Russia holds 2/3 of the global boreal forests. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under climate change scenarios. Meteorological parameters influence fire danger and fire is a catalyst for ecosystem change. Therefore to predict fire weather and ecosystem change, we must understand the factors that influence fire regimes and at what scale these are viable. Our data consists of NASA Langley Research Center (LaRC)-derived fire weather indices (FWI) and National Climatic Data Center (NCDC) surface station-derived FWI on a domain from 50°N-80°N latitude and 70°E-170°W longitude and the fire season from April through October for the years of 1999, 2002, and 2004. Both of these are calculated using the Canadian Forest Service (CFS) FWI, which is based on local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-noon) rainfall. The large-scale (1°) LaRC product uses NASA Goddard Earth Observing System version 4 (GEOS-4) reanalysis and NASA Global Precipitation Climatology Project (GEOS-4/GPCP) data to calculate FWI. CFS Natural Resources Canada uses Geographic Information Systems (GIS) to interpolate NCDC station data and calculate FWI. We compare the LaRC GEOS- 4/GPCP FWI and CFS NCDC FWI based on their fraction of 1° grid boxes that contain satellite-derived fire counts and area burned to the domain total number of 1° grid boxes with a common FWI category (very low to extreme). These are separated by International Geosphere-Biosphere Programme (IGBP) 1°x1° resolution vegetation types to determine and compare fire regimes in each FWI/ecosystem class and to estimate the fraction of each of the 18 IGBP ecosystems burned, which are dependent on the FWI. 
On days with fire counts, the domain totals of 1°x1° grid boxes with and without daily fire counts and area burned are computed. The fractions of 1° grid boxes with fire counts and area burned relative to the total number of 1° grid boxes having a common FWI category and vegetation type are accumulated, and a daily mean for the burning season is calculated. The mean fire count and mean area burned plots appear to be well related. The ultimate goal of this research is to assess the viability of large-scale (1°) data for assessing fire weather danger and fire regimes, so that these data can be confidently used to predict future fire regimes from large-scale fire weather data. Specifically, we related large-scale fire weather, area burned, and the amount of fire-induced ecosystem change. Both the LaRC and CFS FWI showed a gradual linear increase in the fraction of grid boxes with fire counts and area burned with increasing FWI category, with an exponential increase in the higher FWI categories in some cases, for the majority of the vegetation types. Our analysis shows a direct correlation between increased fire activity and increased FWI, independent of time or the severity of the fire season. During both normal and extreme fire seasons, the fraction of fire counts and area burned per 1° grid box increased with increasing FWI rating. Given this analysis, we are confident that large-scale weather and climate data, in this case from the GEOS-4 reanalysis and the GPCP data sets, can be used to accurately assess future fire potential. This increases confidence in the ability of large-scale IPCC weather and climate scenarios to predict future fire regimes in boreal regions.
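The grid-box bookkeeping described above can be sketched as follows. This is a minimal, hedged illustration: the FWI category edges and the input data are made up for demonstration and are not the LaRC or CFS products.

```python
# Illustrative sketch: fraction of 1-degree grid boxes containing satellite
# fire counts, per FWI category. Category boundaries are assumptions.
import numpy as np

FWI_EDGES = [0, 5, 10, 20, 30, np.inf]   # assumed category boundaries
FWI_LABELS = ["very low", "low", "moderate", "high", "extreme"]

def fire_fraction_by_category(fwi, fire_counts):
    """Fraction of grid boxes with >= 1 fire count, per FWI category."""
    fwi = np.asarray(fwi, dtype=float)
    burning = np.asarray(fire_counts) > 0
    fractions = {}
    for lo, hi, label in zip(FWI_EDGES[:-1], FWI_EDGES[1:], FWI_LABELS):
        in_cat = (fwi >= lo) & (fwi < hi)
        fractions[label] = burning[in_cat].mean() if in_cat.any() else float("nan")
    return fractions
```

Accumulating these per-day fractions and averaging over the burning season would give the daily means the abstract refers to.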

  7. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large-scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs, with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.
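The profit-driven VO formation idea can be illustrated with a toy characteristic-function sketch. The capacity/revenue model, the unit cost, and the equal profit split are all assumptions for illustration; they are not the paper's cooperation-structure framework.

```python
# Toy coalition (VO) formation among GSPs: a coalition is feasible if its
# pooled capacity meets the application's requirement; profit is split
# equally among members. All economics here are illustrative assumptions.
from itertools import combinations

def best_coalition(capacities, requirement, revenue, unit_cost=1.0):
    """Pick the VO maximizing per-member profit; returns (members, share)."""
    best, best_share = None, float("-inf")
    for r in range(1, len(capacities) + 1):
        for coal in combinations(capacities, r):
            pooled = sum(capacities[g] for g in coal)
            if pooled < requirement:
                continue  # this VO cannot serve the application
            share = (revenue - unit_cost * pooled) / len(coal)
            if share > best_share:
                best, best_share = coal, share
    return best, best_share
```

Under this toy split, a single GSP that can serve the application alone keeps the whole profit, so smaller feasible coalitions tend to win.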

  8. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
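Step e), aggregation of gridded outputs to arbitrary spatial units, can be sketched as below. This is not pSIMS code; the function name and the integer region-id representation are assumptions.

```python
# Illustrative sketch of aggregating a gridded field to arbitrary spatial
# units (e.g., administrative regions encoded as integer ids per grid cell).
import numpy as np

def aggregate_to_regions(values, region_ids):
    """Mean of `values` within each region id; returns {region: mean}."""
    values = np.asarray(values, dtype=float).ravel()
    region_ids = np.asarray(region_ids).ravel()
    return {int(r): values[region_ids == r].mean()
            for r in np.unique(region_ids)}
```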

  9. Measurements of the Influence of Integral Length Scale on Stagnation Region Heat Transfer

    NASA Technical Reports Server (NTRS)

    Vanfossen, G. James; Ching, Chang Y.

    1994-01-01

    The purpose was twofold: first, to determine if a length scale existed that would cause the greatest augmentation in stagnation region heat transfer for a given turbulence intensity and second, to develop a prediction tool for stagnation heat transfer in the presence of free stream turbulence. Toward this end, a model with a circular leading edge was fabricated with heat transfer gages in the stagnation region. The model was qualified in a low turbulence wind tunnel by comparing measurements with Frossling's solution for stagnation region heat transfer in a laminar free stream. Five turbulence generating grids were fabricated; four were square mesh, biplane grids made from square bars. Each had an identical mesh to bar width ratio but different bar widths. The fifth grid was an array of fine parallel wires that were perpendicular to the axis of the cylindrical leading edge. Turbulence intensity and integral length scale were measured as a function of distance from the grids. Stagnation region heat transfer was measured at various distances downstream of each grid. Data were taken at cylinder Reynolds numbers ranging from 42,000 to 193,000. Turbulence intensities were in the range 1.1 to 15.9 percent, while the ratio of integral length scale to cylinder diameter ranged from 0.05 to 0.30. Stagnation region heat transfer augmentation increased with decreasing length scale. An optimum scale was not found. A correlation was developed that fit heat transfer data for the square bar grids to within ±4 percent. The data from the array of wires were not predicted by the correlation; augmentation was higher for this case, indicating that the degree of isotropy in the turbulent flow field has a large effect on stagnation heat transfer. The data of other researchers are also compared with the correlation.

  10. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.
However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in the precipitation data.

  11. Analyzing the Adaptive Mesh Refinement (AMR) Characteristics of a High-Order 2D Cubed-Sphere Shallow-Water Model

    DOE PAGES

    Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...

    2016-11-09

    Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in the atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity- or height-gradient-based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse-fine interfaces. Furthermore, the AMR grids keep any degradation of the large-scale smooth flows to a minimum.
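A vorticity-based tagging criterion of the kind mentioned above can be sketched as follows. This is an illustrative stand-in, not the Chombo-AMR implementation; the threshold value is an assumption.

```python
# Minimal AMR tagging sketch: flag cells whose relative vorticity magnitude
# |dv/dx - du/dy| exceeds a threshold, marking them for refinement.
import numpy as np

def tag_cells_for_refinement(u, v, dx, dy, threshold):
    """Return a boolean mask of cells to refine, from 2-D velocity fields."""
    dvdx = np.gradient(v, dx, axis=1)   # d(v)/dx on the grid
    dudy = np.gradient(u, dy, axis=0)   # d(u)/dy on the grid
    vorticity = dvdx - dudy
    return np.abs(vorticity) > threshold
```

A height-gradient criterion would have the same shape, with `np.gradient` applied to the fluid depth instead.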

  13. Quadtree of TIN: a new algorithm of dynamic LOD

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either regular GRID structures based on quadtrees or triangle simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a more refined means of representing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to realize fast view-dependent level-of-detail (LOD) representation with it. GRID is a simple method for realizing terrain LOD, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it controls detail according to viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution rendering of large-scale terrain in real time.
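The view-dependent selection over a quadtree can be sketched as below. The node layout, the error metric, and the refinement test are assumptions made for illustration; they are not the paper's algorithm.

```python
# Illustrative view-dependent LOD selection over a quadtree: refine a node
# when its geometric error, scaled by distance to the viewpoint, exceeds a
# screen-space tolerance; otherwise render the node as-is.
import math

class QuadNode:
    def __init__(self, center, error, children=()):
        self.center = center          # (x, y) of the node's footprint
        self.error = error            # geometric error of this approximation
        self.children = list(children)

def select_lod(node, viewpoint, tolerance, out):
    """Collect the nodes whose screen-space error is within tolerance."""
    dist = math.dist(node.center, viewpoint)
    # Screen-space error grows with geometric error, shrinks with distance.
    if node.children and node.error / max(dist, 1e-9) > tolerance:
        for child in node.children:
            select_lod(child, viewpoint, tolerance, out)
    else:
        out.append(node)
```

Near the viewpoint the recursion descends to fine nodes; far away, a single coarse node suffices.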

  14. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time-series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.
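The structure of such an expansion MIP can be shown with a deliberately tiny sketch: binary build decisions, a capacity-adequacy constraint, and a cost objective. Brute-force enumeration stands in for a MIP solver, and all numbers are illustrative, not EI data.

```python
# Toy capacity-expansion problem: choose which candidate assets to build so
# that total capacity covers demand at minimum build cost.
from itertools import product

def plan_expansion(candidates, demand):
    """candidates: list of (capacity, build_cost); returns (choice, cost)."""
    best, best_cost = None, float("inf")
    for choice in product([0, 1], repeat=len(candidates)):
        capacity = sum(c * cap for c, (cap, _) in zip(choice, candidates))
        if capacity < demand:
            continue  # infeasible: cannot serve the load
        cost = sum(c * bc for c, (_, bc) in zip(choice, candidates))
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost
```

A real formulation would add transmission-line variables, per-period dispatch against the load/wind time series, and network constraints, which is what makes an MIP solver necessary.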

  15. Using Grid Cells for Navigation

    PubMed Central

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-01-01

    Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
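The Fourier shift idea can be illustrated in one dimension: a spatial displacement appears as a phase shift in each periodic (grid-scale) component, so the displacement can be read out from per-scale phase differences. This is a toy analogue restricted to displacements smaller than the smallest scale, not the paper's algorithm or network model.

```python
# Toy 1-D illustration: recover a small displacement from the phase shift it
# induces in grid codes of several spatial scales.
import numpy as np

def displacement_from_phases(scales, phase_start, phase_goal):
    """Phases are fractions in [0, 1) of each grid period (scale)."""
    # Each scale votes: displacement = (phase difference) * period.
    votes = [((pg - ps) % 1.0) * s
             for s, ps, pg in zip(scales, phase_start, phase_goal)]
    return float(np.mean(votes))
```

Handling vectors longer than the largest scale is exactly the harder combinatorial problem the paper addresses.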

  16. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid-scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and by testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and that it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.
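One common way to turn an autocorrelation analysis into a plume length scale is the e-folding distance of the autocorrelation function; the sketch below does this for a 1-D series. This is purely illustrative and not necessarily the study's exact metric.

```python
# Sketch: estimate a length scale as the lag distance at which the spatial
# autocorrelation of a 1-D concentration series first drops below 1/e.
import numpy as np

def efolding_length(series, dx):
    """e-folding distance of the autocorrelation; inf if never reached."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    acf = acf / acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dx if below.size else np.inf
```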

  17. An Ag-grid/graphene hybrid structure for large-scale, transparent, flexible heaters.

    PubMed

    Kang, Junmo; Jang, Yonghee; Kim, Youngsoo; Cho, Seung-Hyun; Suhr, Jonghwan; Hong, Byung Hee; Choi, Jae-Boong; Byun, Doyoung

    2015-04-21

    Recently, carbon materials such as carbon nanotubes and graphene have been proposed as alternatives to indium tin oxide (ITO) for fabricating transparent conducting materials. However, obtaining low sheet resistance and high transmittance of these carbon materials has been challenging due to the intrinsic properties of the materials. In this paper, we introduce highly transparent and flexible conductive films based on a hybrid structure of graphene and an Ag-grid. Electrohydrodynamic (EHD) jet printing was used to produce a micro-scale grid consisting of Ag lines less than 10 μm wide. We were able to directly write the Ag-grid on a large-area graphene/flexible substrate due to the high conductivity of graphene. The hybrid electrode could be fabricated using hot pressing transfer and EHD jet printing in a non-vacuum, maskless, and low-temperature environment. The hybrid electrode offers an effective and simple route for achieving a sheet resistance as low as ∼4 Ω per square with ∼78% optical transmittance. Finally, we demonstrate that transparent flexible heaters based on the hybrid conductive films could be used in a vehicle or a smart window system.
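One back-of-envelope way to see why the hybrid reaches such a low sheet resistance is to treat the graphene sheet and the printed Ag-grid as two sheet resistances in parallel. The input values below are illustrative assumptions, not the paper's measurements.

```python
# Parallel-layer approximation for a hybrid transparent electrode:
# 1/Rs_hybrid = 1/Rs_graphene + 1/Rs_grid (ohms per square).
def hybrid_sheet_resistance(rs_graphene, rs_grid):
    """Parallel combination of two sheet resistances."""
    return 1.0 / (1.0 / rs_graphene + 1.0 / rs_grid)
```

With a grid far more conductive than the graphene, the combined value is dominated by, and slightly below, the grid's own sheet resistance.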

  18. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    NASA Astrophysics Data System (ADS)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing a single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks of the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar products with a function, integration, or differentiation, which can be performed easily in tensor arithmetic on large 3D grids at 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
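The storage saving behind the assembled sum can be shown in one dimension: instead of keeping L shifted copies of a generating vector (O(N·L) storage), the shifts are accumulated into a single length-N vector (O(N) storage). The generating vector below is an arbitrary stand-in for the Newton kernel.

```python
# 1-D illustration of assembled lattice summation: accumulate shifted copies
# of one generating vector into a single output vector.
import numpy as np

def assembled_lattice_sum(generator, n_total, shifts):
    """Sum shifted copies of `generator` into one length-`n_total` vector."""
    out = np.zeros(n_total)
    g = np.asarray(generator, dtype=float)
    for s in shifts:
        out[s:s + len(g)] += g   # place one lattice translate
    return out
```

In the full 3D method this accumulation happens per canonical direction, which is what keeps both storage and cost tied to the 1D grid size.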

  19. Webinar July 28: H2@Scale - A Potential Opportunity | News | NREL

    Science.gov Websites

    … the role of hydrogen at the grid scale and the efforts of a large national lab team assembled to evaluate the potential of hydrogen to play a critical role in our energy future. Presenters will share facts …

  20. The impact of mesoscale convective systems on global precipitation: A modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo

    2017-04-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and the Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and the amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) was conducted and resulted in both reduced surface rainfall and evaporation.

  1. Downscaling Aerosols and the Impact of Neglected Subgrid Processes on Direct Aerosol Radiative Forcing for a Representative Global Climate Model Grid Spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Qian, Yun; Fast, Jerome D.

    2011-07-13

    Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability in the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that 1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and 2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% error in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.
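The root of the sub-grid error is that, for a nonlinear forcing function F, F applied to the coarse-cell mean differs from the mean of F applied to the resolved field. A minimal sketch, with a purely illustrative quadratic F standing in for the real radiative transfer:

```python
# Relative bias from computing a nonlinear forcing on coarse (block-mean)
# fields instead of resolving sub-grid variability first.
import numpy as np

def subgrid_neglect_bias(fine_field, block, forcing=lambda x: x ** 2):
    """(coarse-resolution forcing - resolved forcing) / resolved forcing."""
    fine = np.asarray(fine_field, dtype=float).reshape(-1, block)
    truth = forcing(fine).mean()                # forcing resolved, then averaged
    coarse = forcing(fine.mean(axis=1)).mean()  # averaged, then forcing
    return (coarse - truth) / truth
```

The sign and size of the bias depend on the curvature of F and on how much variability each coarse cell hides, which is why the error concentrates in complex terrain.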

  2. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    NASA Technical Reports Server (NTRS)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is too coarse to resolve them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which exhibit a closure problem and therefore require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit mass is used, defined as Ẽ = 0.5⟨u′ᵢu′ᵢ⟩, where u′ᵢ represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale horizontal averaging operator of the large-scale model, and the tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes relative to turbulent processes.
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
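The MKE definition and tendency balance described above can be written schematically. The grouping symbols below are placeholders reconstructed from the abstract; the full equation's coefficients and omitted transport terms are not given there.

```latex
\tilde{E} \;=\; \tfrac{1}{2}\,\langle u_i' u_i' \rangle ,
\qquad
\frac{\partial \tilde{E}}{\partial t}
  \;\approx\;
  \underbrace{H_m}_{\substack{\text{mesoscale vertical}\\ \text{heat flux}}}
  \;+\; \underbrace{P_m}_{\substack{\text{mesoscale pressure}\\ \text{correlation}}}
  \;+\; \underbrace{I_{tm}}_{\substack{\text{turbulence--mesoscale}\\ \text{interaction}}}
  \;+\; \cdots
```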

  3. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of the collision model, subgrid-scale model, and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM, and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid embedding and wall models will be discussed.
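The SRT (BGK) collision step compared above can be sketched minimally; a D1Q3 lattice is used here for brevity, since the study's lattice and parameters are not given in the abstract. The populations relax toward a local equilibrium at rate 1/τ, conserving mass and momentum exactly.

```python
# Single Relaxation Time (BGK) collision on a D1Q3 lattice.
import numpy as np

C = np.array([0, 1, -1])           # discrete velocities
W = np.array([2/3, 1/6, 1/6])      # lattice weights

def equilibrium(rho, u):
    cu = C * u
    return W * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*u**2)

def srt_collide(f, tau):
    """One BGK collision: f <- f - (f - f_eq) / tau."""
    rho = f.sum()
    u = (f * C).sum() / rho
    return f - (f - equilibrium(rho, u)) / tau
```

MRT generalizes this by relaxing different moments of f at different rates, which is the trade-off the abstract evaluates.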

  4. Cascading failures in ac electricity grids.

    PubMed

    Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan

    2016-09-01

    Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
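The cascade mechanism can be illustrated with a deliberately simplified redistribution model: when a line fails, its flow is shared among its surviving neighbors, and any line pushed over capacity fails in turn. This local-redistribution stand-in is an assumption for illustration, not the paper's dynamic AC model.

```python
# Simplified cascading-failure sketch on a line-adjacency graph.
def simulate_cascade(flows, capacities, neighbors, initial_failure):
    """flows/capacities: dict line -> value; neighbors: dict line -> list.

    Returns the set of failed lines after the cascade settles."""
    failed = {initial_failure}
    frontier = [initial_failure]
    flows = dict(flows)
    while frontier:
        line = frontier.pop()
        alive = [n for n in neighbors[line] if n not in failed]
        for n in alive:
            flows[n] += flows[line] / len(alive)   # share the lost flow
        flows[line] = 0.0
        for n in alive:
            if flows[n] > capacities[n]:
                failed.add(n)
                frontier.append(n)
    return failed
```

Even this crude model reproduces the qualitative behavior in the abstract: a heavily loaded initial failure can topple the whole network, while a lightly loaded one stays contained.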

  5. Elementary dispersion analysis of some mimetic discretizations on triangular C-grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korn, P., E-mail: peter.korn@mpimet.mpg.de; Danilov, S.; A.M. Obukhov Institute of Atmospheric Physics, Moscow

    2017-02-01

    Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of the normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby, and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales, indicating that spurious modes can be excited unless high-frequency, short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of the mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of the full primitive equations when these are excited by nonlinear dynamics.

  6. Grid scale drives the scale and long-term stability of place maps

    PubMed Central

    Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M

    2018-01-01

    Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607
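
    A toy 1D grid-to-place transformation, not the authors' model, illustrates why expanding grid scale expands place fields: a place cell is modeled as a thresholded sum of grid-cell inputs, and scaling every grid period by a common factor stretches the resulting field by the same factor (all scales and the threshold are illustrative):

```python
import numpy as np

def place_field(x, center, grid_scales, threshold=2.5):
    """Place-cell rate: thresholded sum of grid-cell inputs whose firing
    phases align at `center` (toy model; parameters are illustrative)."""
    summed = sum(np.cos(2 * np.pi * (x - center) / lam) for lam in grid_scales)
    return np.maximum(summed - threshold, 0.0)

def central_field_width(x, rate):
    """Width of the contiguous active region around the peak rate."""
    peak = int(np.argmax(rate))
    left, right = peak, peak
    while left > 0 and rate[left - 1] > 0:
        left -= 1
    while right < rate.size - 1 and rate[right + 1] > 0:
        right += 1
    return x[right] - x[left]
```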

  7. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A key feature is that these realisations are binned conditioned on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
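
    A minimal sketch of the state-conditioned binning step (names are hypothetical; the paper's estimator additionally minimizes the conditional variance subject to the observations, which is omitted here):

```python
import numpy as np

def binned_error_stats(prev_state, errors, n_bins=10):
    """Mean and spread of model-error realisations, binned by the value of
    the previous model state (illustrative sketch, not the paper's method)."""
    edges = np.linspace(prev_state.min(), prev_state.max(), n_bins + 1)
    which = np.clip(np.digitize(prev_state, edges) - 1, 0, n_bins - 1)
    means = np.full(n_bins, np.nan)
    stds = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = errors[which == b]
        if sel.size:
            means[b] = sel.mean()
            stds[b] = sel.std()
    return edges, means, stds
```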

  8. Contribution of concentrator photovoltaic installations to grid stability and power quality

    NASA Astrophysics Data System (ADS)

    del Toro García, Xavier; Roncero-Sánchez, Pedro; Torres, Alfonso Parreño; Vázquez, Javier

    2012-10-01

    Large-scale integration of Photovoltaic (PV) generation systems, including Concentrator Photovoltaic (CPV) technologies, will require the contribution and support of these technologies to the management and stability of the grid. New regulations and grid codes for PV installations in countries such as Spain have recently included dynamic voltage control support during faults. The PV installation must stay connected to the grid during voltage dips and inject reactive power in order to enhance the stability of the system. The existing PV inverter technologies based on the Voltage-Source Converter (VSC) are in general well suited to provide advanced grid-support characteristics. Nevertheless, new advanced control schemes and monitoring techniques will be necessary to meet the most demanding requirements.

  9. Nested high-resolution large-eddy simulations in WRF to support wind power

    NASA Astrophysics Data System (ADS)

    Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.

    2009-12-01

    The WRF model’s grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is computation of realistic inflow boundary conditions for large eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied at GCM to mesoscale resolutions, the WRF model’s nesting behavior at the high-resolution (Δx < 1000 m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of turbulent inflows encountered by turbines. We investigate WRF’s grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-416482.

  10. Self-Assembly of Large-Scale Shape-Controlled DNA Nano-Structures

    DTIC Science & Technology

    2014-12-16

    ...discharged carbon-coated TEM grids for 4 min and then stained for 1 min using a 2% aqueous uranyl formate solution containing 25 mM NaOH. Imaging was...temperature for 3 h in the dark. TEM imaging. For imaging, a 2.5 µL annealed sample was adsorbed for 2 min onto glow-discharged, carbon-coated TEM grids...Imaging. For TEM imaging, a 3.5 µL sample (1–5 nM) was adsorbed onto glow-discharged carbon-coated TEM grids for 4 min and then stained for 1 min...

  11. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  12. Metal-Free Aqueous Flow Battery with Novel Ultrafiltered Lignin as Electrolyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhopadhyay, Alolika; Hamel, Jonathan; Katahira, Rui

    As the number of generation sources from intermittent renewable technologies on the electric grid increases, the need for large-scale energy storage devices is becoming essential to ensure grid stability. Flow batteries offer numerous advantages over conventional sealed batteries for grid storage. In this work, for the first time, we investigated lignin, the second most abundant wood-derived biopolymer, as an anolyte for the aqueous flow battery. Lignosulfonate, a water-soluble derivative of lignin, is environmentally benign, low cost and abundant, as it is obtained from the byproduct of paper and biofuel manufacturing. The lignosulfonate utilizes the redox chemistry of quinone to store energy and undergoes a reversible redox reaction. Here, we paired lignosulfonate with Br2/Br-, and the full cell runs efficiently with high power density. Also, the large and complex molecular structure of lignin considerably reduces the electrolytic crossover, which ensures very high capacity retention. The flow cell was able to achieve current densities of up to 20 mA/cm2 and a charge polarization resistance of 15 ohm cm2. This technology presents a unique opportunity for a low-cost, metal-free flow battery capable of large-scale sustainable energy storage.

  13. Thematic mapper-derived mineral distribution maps of Idaho, Nevada, and western Montana

    USGS Publications Warehouse

    Raines, Gary L.

    2006-01-01

    This report provides mineral distribution maps based on TM spectral information of minerals commonly associated with hydrothermal alteration in Nevada, Idaho, and western Montana. The product of the processing is provided as four ESRI GRID files with 30 m resolution by state. UTM Zone 11 projection is used for Nevada (grid clsnv) and western Idaho (grid clsid); UTM Zone 12 is used for eastern Idaho and western Montana (grid clsid_mt). A fourth grid with a special Albers projection is used for the Headwaters project covering Idaho and western Montana (grid crccls_hs). Symbolization for all four grids is stored in the ESRI layer or LYR files and color or CLR files. Objectives of the analyses were to cover a large area very quickly and to provide data that could be used at a scale of 1:100,000 or smaller. Thus, the image processing was standardized for speed while still achieving the desired 1:100,000-scale level of detail. Consequently, some subtle features of mineralogy may be missed. The hydrothermal alteration data were not field checked to separate mineral occurrences due to hydrothermal alteration from those due to other natural occurrences. The data were evaluated by overlaying the results with 1:100,000-scale topographic maps to confirm correlation with known mineralized areas. The data were also tested in the Battle Mountain area of north-central Nevada by a weights-of-evidence correlation analysis with metallic mineral sites from the USGS Mineral Resources Data System and were found to have significant spatial correlation. On the basis of these analyses, the data are considered useful for regional studies at scales of 1:100,000.

  14. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, IT requirements are immense due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number-crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  15. A Bayesian Analysis of Scale-Invariant Processes

    DTIC Science & Technology

    2012-01-01

    ...Earth Grid (EASE-Grid). The NED raster elevation data of one arc-second resolution (30 m) over the continental US are derived from multiple satellites...empirical and ME distributions, yet ensuring computational efficiency. Instead of computing empirical histograms from large amounts of data, only some...

  16. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier Stokes equations. The first scheme is a hybrid Reynolds-averaged-Navier Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged-Navier Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier Stokes and large eddy simulation and partially averaged Navier Stokes in simulating such flow phenomena.

  17. Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
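
    The grid-grouping idea can be illustrated with a simple greedy load-balancing heuristic (largest grid first, assigned to the currently least-loaded processor group); the paper's graph-partitioning approach additionally accounts for inter-grid communication, which this sketch ignores:

```python
def group_grids(cell_counts, n_groups):
    """Greedy longest-processing-time grouping: consider grids from largest
    cell count to smallest, always assigning to the least-loaded group.
    Returns (loads, assignment) where assignment[i] is the group of grid i."""
    loads = [0] * n_groups
    assignment = [None] * len(cell_counts)
    for i in sorted(range(len(cell_counts)), key=lambda k: -cell_counts[k]):
        g = loads.index(min(loads))  # least-loaded group so far
        assignment[i] = g
        loads[g] += cell_counts[i]
    return loads, assignment
```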

  18. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids

    PubMed Central

    Boschitsch, Alexander H.; Fenley, Marcia O.

    2011-01-01

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. 
    Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous assessment of the solution accuracy; (ii) a pair of low dielectric charged spheres embedded in an ionic solvent to compute electrostatic interaction free energies as a function of the distance between sphere centers; (iii) surface potentials of proteins, nucleic acids and their larger-scale assemblies such as ribosomes; and (iv) electrostatic solvation free energies and their salt sensitivities – obtained with both the linear and nonlinear Poisson-Boltzmann equation – for a large set of proteins. These latter results along with timings can serve as benchmarks for comparing the performance of different PBE solvers. PMID:21984876
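
    The adaptive refinement idea can be illustrated with a toy 2D quadtree that subdivides only the cells containing point charges, a simplification of the 3D octree-style ACG described above (the refinement criterion and depth limit are illustrative):

```python
def refine(x0, y0, size, depth, charges, max_depth, leaves):
    """Recursively subdivide any cell that contains a charge, down to
    max_depth.  Appends leaf cells as (x0, y0, size) tuples to `leaves`."""
    contains = any(x0 <= cx < x0 + size and y0 <= cy < y0 + size
                   for cx, cy in charges)
    if contains and depth < max_depth:
        half = size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                refine(x0 + dx, y0 + dy, half, depth + 1,
                       charges, max_depth, leaves)
    else:
        leaves.append((x0, y0, size))
```

Because only cells near charge sites are subdivided, the leaf count grows linearly with depth rather than exponentially, which is the source of the grid-point savings described above.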

  19. On the Subgrid-Scale Modeling of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle; Zeman, Otto

    1990-01-01

    A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduce to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
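
    In the incompressible limit the proposed model reduces to the Smagorinsky form, nu_t = (C_s Δ)^2 |S|. A minimal 2D sketch of that baseline eddy viscosity (the compressibility corrections of the proposed model are not included; the value of C_s and the filter-width choice are illustrative):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    """Smagorinsky eddy viscosity on a 2D grid: nu_t = (cs*delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) and delta tied to the grid (filter) scale."""
    dudx = np.gradient(u, dx, axis=0)
    dudy = np.gradient(u, dy, axis=1)
    dvdx = np.gradient(v, dx, axis=0)
    dvdy = np.gradient(v, dy, axis=1)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)  # symmetric part of the velocity gradient
    s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    delta = np.sqrt(dx * dy)   # filter width from the grid spacing
    return (cs * delta) ** 2 * s_mag
```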

  20. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  1. Demonstration of Active Power Controls by Utility-Scale PV Power Plant in an Island Grid: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorgian, Vahan; O'Neill, Barbara

    The National Renewable Energy Laboratory (NREL), AES, and the Puerto Rico Electric Power Authority conducted a demonstration project on a utility-scale photovoltaic (PV) plant to test the viability of providing important ancillary services from this facility. As solar generation increases globally, there is a need for innovation and increased operational flexibility. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, it may mitigate the impact of its variability on the grid and contribute to important system requirements more like traditional generators. In 2015, testing was completed on a 20-MW AES plant in Puerto Rico, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to provide various types of new grid-friendly controls. This data showed how active power controls can leverage PV's value from being simply an intermittent energy resource to providing additional ancillary services for an isolated island grid. Specifically, the tests conducted included PV plant participation in automatic generation control, provision of droop response, and fast frequency response.

  2. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE PAGES

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...

    2017-09-14

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW.
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  3. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW.
Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  4. An Analysis of Performance Enhancement Techniques for Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.

  5. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
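
    The body-force idea can be illustrated with a generic 1D "direct forcing" sketch (not the actual WRF-IBM formulation): at points flagged as beneath the immersed boundary, a forcing term is chosen so the updated velocity matches the desired boundary value:

```python
import numpy as np

def step_with_ib_forcing(u, dt, rhs, immersed, u_target=0.0):
    """One explicit update of du/dt = rhs + f.  At points flagged as beneath
    the immersed boundary, f is chosen so the new velocity equals u_target
    (generic direct forcing; names and values are illustrative)."""
    f = np.zeros_like(u)
    # direct forcing: f = (u_target - u)/dt - rhs at immersed points only
    f[immersed] = (u_target - u[immersed]) / dt - rhs[immersed]
    return u + dt * (rhs + f)
```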

  6. Assessing species distribution using Google Street View: a pilot study with the Pine Processionary Moth.

    PubMed

    Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre

    2013-01-01

    Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available from Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network with regard to sampling grid size and the spatial distribution of host trees with regard to the road network, may also be determinant.
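
    The comparison step reduces to counting sampling-grid cells where the Street-View-derived presence/absence call agrees with the field call; a minimal sketch with hypothetical data:

```python
def matching_rate(field, street_view):
    """Percentage of sampling-grid cells where two presence/absence surveys
    agree (both True or both False)."""
    if len(field) != len(street_view):
        raise ValueError("surveys must cover the same sampling grid")
    agree = sum(a == b for a, b in zip(field, street_view))
    return 100.0 * agree / len(field)
```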

  7. Assessing Species Distribution Using Google Street View: A Pilot Study with the Pine Processionary Moth

    PubMed Central

    Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre

    2013-01-01

    Mapping species spatial distributions using spatial inference and prediction requires substantial data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at a location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a 16 km mesh (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network relative to the sampling grid size and the spatial distribution of host trees relative to the road network, may also be decisive. PMID:24130675

  8. A LES-Langevin model for turbulence

    NASA Astrophysics Data System (ADS)

    Dolganov, Rostislav; Dubrulle, Bérengère; Laval, Jean-Philippe

    2006-11-01

    The rationale for Large Eddy Simulation is rooted in our inability to handle all degrees of freedom (N ~ 10^16 for Re ~ 10^7). ``Deterministic'' models based on eddy viscosity seek to reproduce the intensification of the energy transport. However, they fail to reproduce the backward energy transfer (backscatter) from small to large scales, which is an essential feature of turbulence near walls and in boundary layers. To capture this backscatter, ``stochastic'' strategies have been developed. In the present talk, we shall discuss such a strategy, based on Rapid Distortion Theory (RDT). Specifically, we first divide the small-scale contribution to the Reynolds stress tensor into two parts: a turbulent viscosity and the pseudo-Lamb vector, representing the nonlinear cross terms of resolved and sub-grid scales. We then estimate the dynamics of small-scale motion by RDT applied to the Navier-Stokes equation. We use this to model the cross-term evolution by a Langevin equation, in which the random force is provided by sub-grid pressure terms. Our LES model is thus made of a truncated Navier-Stokes equation including the turbulent force and a generalized Langevin equation for the latter, integrated on a twice-finer grid. The backscatter is automatically included in our stochastic model of the pseudo-Lamb vector. We apply this model to the cases of homogeneous isotropic turbulence and turbulent channel flow.
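The generalized Langevin equation for the turbulent force can be illustrated with the simplest member of that family, an Ornstein-Uhlenbeck process integrated by the Euler-Maruyama scheme. This is a generic sketch with made-up coefficients, not the authors' actual sub-grid model:

```python
import math
import random

# Sketch: Euler-Maruyama integration of a scalar Ornstein-Uhlenbeck process,
#   dF = -(F / tau) dt + sigma dW,
# the generic form of a Langevin equation for a stochastic forcing term.
# tau, sigma, dt are illustrative values, not taken from the talk.

def ou_path(f0, tau, sigma, dt, n_steps, rng):
    f, path = f0, [f0]
    for _ in range(n_steps):
        f += -(f / tau) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(f)
    return path

rng = random.Random(0)
path = ou_path(f0=1.0, tau=0.5, sigma=0.2, dt=1e-3, n_steps=5000, rng=rng)
# After several relaxation times tau, the memory of f0 decays and the process
# fluctuates about zero with stationary variance sigma^2 * tau / 2.
```

In an LES-Langevin model, an equation of this shape would be advanced on the finer grid alongside the truncated Navier-Stokes equations, with the noise supplied by sub-grid pressure terms rather than white Gaussian forcing.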

  9. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  10. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing, for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids offer researchers, scientists, and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  11. GridWise Standards Mapping Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosquet, Mia L.

    ''GridWise'' is a concept of how advanced communications, information, and control technology can transform the nation's energy system--across the spectrum from large-scale central generation to common consumer appliances and equipment--into a collaborative network, rich in the exchange of decision-making information and an abundance of market-based opportunities (Widergren and Bosquet 2003), carrying the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts related to GridWise--those that could ultimately contribute significantly to advancements toward the GridWise vision, or those that represent today's technological basis upon which this vision must build.

  12. LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis

    DTIC Science & Technology

    2016-05-23

    LLMapReduce works with several schedulers such as SLURM, Grid Engine, and LSF. Keywords--LLMapReduce; map-reduce; performance; scheduler; Grid Engine; SLURM; LSF. I. INTRODUCTION. Large-scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise, and big data [1] ... interconnects [6]), high-performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware ...

  13. Grid-Independent Large-Eddy Simulation in Turbulent Channel Flow using Three-Dimensional Explicit Filtering

    NASA Technical Reports Server (NTRS)

    Gullbrand, Jessica

    2003-01-01

    In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.

  14. Circulation and multiple-scale variability in the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Dong, Changming; Idica, Eileen Y.; McWilliams, James C.

    2009-09-01

    The oceanic circulation in the Southern California Bight (SCB) is influenced by the large-scale California Current offshore, tropical remote forcing through the coastal wave guide alongshore, and local atmospheric forcing. The region is characterized by local complexity in the topography and coastline. All these factors engender variability in the circulation on interannual, seasonal, and intraseasonal time scales. This study applies the Regional Oceanic Modeling System (ROMS) to the SCB circulation and its multiple-scale variability. The model is configured in three levels of nested grids with the parent grid covering the whole US West Coast. The first child grid covers a large southern domain, and the third grid zooms in on the SCB region. The three horizontal grid resolutions are 20 km, 6.7 km, and 1 km, respectively. The external forcings are momentum, heat, and freshwater flux at the surface and adaptive nudging to gyre-scale SODA reanalysis fields at the boundaries. The momentum flux is from a three-hourly reanalysis mesoscale MM5 wind with a 6 km resolution for the finest grid in the SCB. The oceanic model starts in an equilibrium state from a multiple-year cyclical climatology run, and then it is integrated from years 1996 through 2003. In this paper, the 8-year simulation at the 1 km resolution is analyzed and assessed against extensive observational data: High-Frequency (HF) radar data, current meters, Acoustic Doppler Current Profiler (ADCP) data, hydrographic measurements, tide gauges, drifters, altimeters, and radiometers. The simulation shows that the domain-scale surface circulation in the SCB is characterized by the Southern California Cyclonic Gyre, comprised of the offshore equatorward California Current System and the onshore poleward Southern California Countercurrent. The simulation also exhibits three subdomain-scale, persistent (i.e., standing) cyclonic eddies related to the local topography and wind forcing: the Santa Barbara Channel Eddy, the Central-SCB Eddy, and the Catalina-Clemente Eddy. Comparisons with observational data reveal that ROMS reproduces a realistic mean state of the SCB oceanic circulation, as well as its interannual (mainly as a local manifestation of an ENSO event), seasonal, and intraseasonal (eddy-scale) variations. We find high correlations of the wind curl with both the alongshore pressure gradient (APG) and the eddy kinetic energy level in their variations on time scales of seasons and longer. The geostrophic currents are much stronger than the wind-driven Ekman flows at the surface. The model exhibits intrinsic eddy variability with strong topographically related heterogeneity, westward-propagating Rossby waves, and poleward-propagating coastally trapped waves (albeit with smaller amplitude than observed due to missing high-frequency variations in the southern boundary conditions).

  15. Nonvacuum, maskless fabrication of a flexible metal grid transparent conductor by low-temperature selective laser sintering of nanoparticle ink.

    PubMed

    Hong, Sukjoon; Yeo, Junyeob; Kim, Gunho; Kim, Dongkyu; Lee, Habeom; Kwon, Jinhyeong; Lee, Hyungman; Lee, Phillip; Ko, Seung Hwan

    2013-06-25

    We introduce a facile approach to fabricate a metallic grid transparent conductor on a flexible substrate using selective laser sintering of metal nanoparticle ink. The metallic grid transparent conductors with high transmittance (>85%) and low sheet resistance (30 Ω/sq) are readily produced on glass and polymer substrates at large scale without any vacuum or high-temperature environment. Being a maskless direct writing method, the shape and parameters of the grid can be easily changed from CAD data. The resultant metallic grid also shows superior stability in terms of adhesion and bending. This transparent conductor is further applied to a touch screen panel, and it is confirmed that the final device operates reliably under continuous mechanical stress.

  16. Modal Analysis for Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The MANGO software provides a solution for improving the small signal stability of power systems by adjusting operator-controllable variables using PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The MANGO software aims to solve this problem by automatically generating recommended operation procedures, termed Modal Analysis for Grid Operation (MANGO), to improve the damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small signal stability problems, implementing operating point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.
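The inter-area modes such software targets are conventionally characterized by their damping ratio, computed from the mode eigenvalue sigma ± j*omega identified in small-signal analysis. A sketch using the standard formula, with illustrative numbers that are not taken from the MANGO package:

```python
import math

# Sketch: damping ratio of an oscillation mode from its eigenvalue
# lambda = sigma + j*omega (standard small-signal stability formula).
# The mode numbers below are made up for illustration.

def damping_ratio(sigma, omega):
    """zeta = -sigma / sqrt(sigma^2 + omega^2); positive zeta means a decaying mode."""
    return -sigma / math.sqrt(sigma**2 + omega**2)

# A poorly damped 0.25 Hz inter-area mode:
sigma, omega = -0.05, 2 * math.pi * 0.25
zeta = damping_ratio(sigma, omega)
print(f"damping ratio = {zeta:.1%}")  # -> damping ratio = 3.2%
```

Operating-point adjustments of the kind the record describes aim to move such eigenvalues leftward, raising this ratio above whatever threshold the operator considers secure.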

  17. Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1997-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.

  18. An interactive grid generation procedure for axial and radial flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Beach, Timothy A.

    1989-01-01

    A combined algebraic/elliptic technique is presented for the generation of three-dimensional grids about turbomachinery blade rows for both axial- and radial-flow machinery. The technique is built around the use of an advanced engineering workstation to construct several two-dimensional grids interactively on predetermined blade-to-blade surfaces. A three-dimensional grid is generated by interpolating these surface grids onto an axisymmetric grid. On each blade-to-blade surface, a grid is created using algebraic techniques near the blade, to control orthogonality within the boundary-layer region, and elliptic techniques in the mid-passage, to achieve smoothness. The interactive definition of Bézier curves as internal boundaries is the key to simple construction. This procedure lends itself well to zonal grid construction, an important example being the tip clearance region. Calculations done to date include a Space Shuttle Main Engine turbopump blade, a radial-inflow turbine blade, and the first stator of the United Technologies Research Center large-scale rotating rig. A finite Navier-Stokes solver was used in each case.
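Interactively defined Bézier boundaries of the kind the abstract describes are conventionally evaluated with the de Casteljau recursion. A sketch with illustrative control points (the record does not give the procedure's actual evaluation routine):

```python
# Sketch: de Casteljau evaluation of a Bezier curve from its control points,
# the standard algorithm for curves used as interactive grid boundaries.
# The control points below are illustrative.

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) by repeated lerping."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier from (0,0) to (2,0), pulled upward by control point (1,2):
mid = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
print(mid)  # -> (1.0, 1.0)
```

Sampling such a curve at a sequence of t values yields the internal boundary points onto which the algebraic and elliptic grid regions can be attached.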

  19. Occurrence and countermeasures of urban power grid accident

    NASA Astrophysics Data System (ADS)

    Wei, Wang; Tao, Zhang

    2018-03-01

    With advances in technology, the development of network communication, and the extensive use of power grids, people can learn of power grid accidents around the world through the network in a timely manner. Power grid accidents occur frequently: large-scale power system blackouts and casualty accidents caused by electric shock are fairly commonplace. These accidents seriously endanger the property and personal safety of the country and its people, and social and economic development is severely affected by them. Through research on several typical cases of power grid accidents at home and abroad in recent years, taking these cases as the research object, this paper analyzes the three major factors that currently cause power grid accidents. Combining the various factors and impacts of power grid accidents, the paper then puts forward corresponding solutions and suggestions to prevent accidents and reduce their impact.

  20. The impact of the topology on cascading failures in a power grid model

    NASA Astrophysics Data System (ADS)

    Koç, Yakup; Warnier, Martijn; Mieghem, Piet Van; Kooij, Robert E.; Brazier, Frances M. T.

    2014-05-01

    Cascading failures are one of the main causes of large-scale blackouts in power transmission grids. Secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of topology on grid robustness is mainly assessed by purely topological approaches that fail to capture the essence of electric power flow. This paper proposes a metric, the effective graph resistance, to relate the topology of a power grid to its robustness against cascading failures triggered by deliberate attacks, while also taking fundamental characteristics of the electric power grid into account, such as power flow allocation according to Kirchhoff's laws. Experimental verification on synthetic power systems shows that the proposed metric reflects grid robustness accurately. The proposed metric is used to optimize a grid topology for a higher level of robustness. To demonstrate its applicability, the metric is applied to the IEEE 118-bus power system to improve its robustness against cascading failures.
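The effective graph resistance is the sum of pairwise effective resistances over all node pairs, treating each line as a resistor. A sketch that computes it exactly for a toy triangle grid by solving the grounded Laplacian system; this follows the standard definition of the metric, not the authors' implementation:

```python
from fractions import Fraction
from itertools import combinations

# Sketch: effective graph resistance R_G = sum over node pairs (i, j) of the
# effective resistance between i and j, computed from the graph Laplacian.
# Exact rational arithmetic on a toy graph; edges carry unit resistance.

def laplacian(n, edges):
    L = [[Fraction(0)] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    return L

def solve(A, b):
    """Gaussian elimination over exact fractions (A must be nonsingular)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [v - M[r][col] * w for v, w in zip(M[r], M[col])]
    return [row[-1] for row in M]

def effective_resistance(n, edges, i, j):
    """Voltage at i when one ampere enters at i and leaves at grounded node j."""
    L = laplacian(n, edges)
    keep = [k for k in range(n) if k != j]           # ground node j
    A = [[L[r][c] for c in keep] for r in keep]
    b = [Fraction(1) if k == i else Fraction(0) for k in keep]
    return solve(A, b)[keep.index(i)]

def effective_graph_resistance(n, edges):
    return sum(effective_resistance(n, edges, i, j)
               for i, j in combinations(range(n), 2))

# Triangle of unit-resistance lines: each pair sees 1 ohm in parallel with
# 2 ohm, i.e. 2/3, so R_G = 3 * 2/3 = 2.
print(effective_graph_resistance(3, [(0, 1), (1, 2), (0, 2)]))  # -> 2
```

A lower R_G indicates more parallel paths between buses, which is the intuition behind using it as a robustness proxy; ranking candidate line additions by the resulting R_G is one way to apply the metric as the paper suggests.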

  1. NeuroGrid: recording action potentials from the surface of the brain.

    PubMed

    Khodagholy, Dion; Gelinas, Jennifer N; Thesen, Thomas; Doyle, Werner; Devinsky, Orrin; Malliaras, George G; Buzsáki, György

    2015-02-01

    Recording from neural networks at the resolution of action potentials is critical for understanding how information is processed in the brain. Here, we address this challenge by developing an organic material-based, ultraconformable, biocompatible and scalable neural interface array (the 'NeuroGrid') that can record both local field potentials (LFPs) and action potentials from superficial cortical neurons without penetrating the brain surface. Spikes with features of interneurons and pyramidal cells were simultaneously acquired by multiple neighboring electrodes of the NeuroGrid, allowing for the isolation of putative single neurons in rats. Spiking activity demonstrated consistent phase modulation by ongoing brain oscillations and was stable in recordings exceeding 1 week's duration. We also recorded LFP-modulated spiking activity intraoperatively in patients undergoing epilepsy surgery. The NeuroGrid constitutes an effective method for large-scale, stable recording of neuronal spikes in concert with local population synaptic activity, enhancing comprehension of neural processes across spatiotemporal scales and potentially facilitating diagnosis and therapy for brain disorders.

  2. Using Grid Cells for Navigation.

    PubMed

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-08-05

    Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation.
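The core computational problem, recovering a displacement from two periodic population codes, can be illustrated by finding the circular shift that best aligns two firing-rate vectors. This brute-force cross-correlation is a stand-in for the Fourier-shift-theorem solution the paper develops, and the synthetic rates below are illustrative, not recorded data:

```python
import math

# Sketch: recover the displacement between "start" and "goal" activity patterns
# of a periodic code as the circular shift maximizing their cross-correlation.
# One spatial dimension, synthetic cosine firing rates.

def best_shift(a, b):
    """Circular shift s such that a, shifted by s, best matches b."""
    n = len(a)
    def corr(s):
        return sum(a[(i + s) % n] * b[i] for i in range(n))
    return max(range(n), key=corr)

n = 32
rate_at_start = [math.cos(2 * math.pi * 3 * i / n) + 1 for i in range(n)]
rate_at_goal = [rate_at_start[(i + 5) % n] for i in range(n)]   # displaced by 5 bins

print(best_shift(rate_at_start, rate_at_goal))  # -> 5
```

With a single spatial frequency the shift is only recoverable modulo the grid period; combining modules of different scales, as grid cells do, is what disambiguates vectors longer than the largest grid scale.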

  3. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  4. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  5. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
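The FFT trick these abstracts highlight, computing Green's-function convolutions on a regular grid in O(N log N), can be sketched in one dimension. The radix-2 FFT and the symmetric toy kernel below are illustrative; they are not the papers' acoustic Green's function or their FMM code:

```python
import cmath

# Sketch: FFT-based circular convolution of a source distribution with a
# translation-invariant kernel on a regular grid -- the mechanism used to
# accelerate neighboring Green's-function interactions. 1-D toy version.

def fft(x, inverse=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2], inverse), fft(x[1::2], inverse)
    sign = 1 if inverse else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + w, even[k] - w
    return out

def circular_convolve(a, b):
    n = len(a)
    fa = fft([complex(v) for v in a])
    fb = fft([complex(v) for v in b])
    spec = [x * y for x, y in zip(fa, fb)]          # pointwise product in Fourier space
    return [v.real / n for v in fft(spec, inverse=True)]

src = [1.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0]                  # two point sources
kernel = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]             # toy symmetric kernel
print([round(v, 6) + 0.0 for v in circular_convolve(src, kernel)])
# -> [0.5, 0.25, 0.0, 0.5, 1.0, 0.5, 0.0, 0.25]
```

The same diagonalization-by-FFT idea extends to three dimensions with a 3-D transform, which is what keeps the finest-level interaction cost low as the number of scattering elements per box grows.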

  6. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper-ocean pycnocline, continental slopes, and large-scale eddies. Capturing the wide range of length and time scales involved in the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation over idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of these remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time-stepping process.

  7. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  8. Numerical methods for large-scale, time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1979-01-01

    A survey of numerical methods for time dependent partial differential equations is presented. The emphasis is on practical applications to large scale problems. A discussion of new developments in high order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented including generalizations to multidimensions. Shocks, aerodynamics, meteorology, plasma physics and combustion applications are also briefly described.

  9. Distributed Energy Systems: Security Implications of the Grid of the Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamber, Kevin L.; Kelic, Andjelka; Taylor, Robert A.

    2017-01-01

    Distributed Energy Resources (DER) are being added to the nation's electric grid, and as penetration of these resources increases, they have the potential to displace or offset large-scale, capital-intensive, centralized generation. Integration of DER into operation of the traditional electric grid requires automated operational control and communication of DER elements, from system measurement to control hardware and software, in conjunction with a utility's existing automated and human-directed control of other portions of the system. Implementation of DER technologies suggests a number of gaps from both a security and a policy perspective.

  10. A Scalable proxy cache for Grid Data Access

    NASA Astrophysics Data System (ADS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
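
The redirect-based horizontal scaling described above can be illustrated with a minimal sketch of how a front end might pick a cache node deterministically per file, so that repeated requests for the same data hit the same node's cache. The hashing scheme and node names here are hypothetical; the abstract does not specify the Nikhef front end's actual selection logic.

```python
import hashlib

def pick_cache_node(path: str, nodes: list) -> str:
    """Deterministically map a requested grid file path to one cache node.

    A stable mapping means repeated requests for the same file are
    redirected to the same node, so its local cache is reused.
    """
    digest = hashlib.sha256(path.encode("utf-8")).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

# The WebDAV front end would then answer with an HTTP 302 redirect whose
# Location header points at the chosen node, e.g.
#   Location: http://<chosen-node>/grid/data/file.root
```

The same idea underlies most consistent-hashing cache tiers; the key property is only that the mapping is stable across requests.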

  11. Utility-Scale Solar Power Converter: Agile Direct Grid Connect Medium Voltage 4.7-13.8 kV Power Converter for PV Applications Utilizing Wide Band Gap Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Solar ADEPT Project: Satcon is developing a compact, lightweight power conversion device that is capable of taking utility-scale solar power and outputting it directly into the electric utility grid at distribution voltage levels—eliminating the need for large transformers. Transformers “step up” the voltage of the power that is generated by a solar power system so it can be efficiently transported through transmission lines and eventually “stepped down” to usable voltages before it enters homes and businesses. Power companies step up the voltage because less electricity is lost along transmission lines when the voltage is high and current is low. Satcon’s new power conversion devices will eliminate these heavy transformers and connect a utility-scale solar power system directly to the grid. Satcon’s modular devices are designed to ensure reliability—if one device fails it can be bypassed and the system can continue to run.

  12. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick; hide

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  13. Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2015-01-01

    An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
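
Non-uniform grading of the kind described is typically driven by a user-defined mesh-sizing function that returns a target edge length at each point on the sphere. A minimal sketch follows; the tanh profile and all parameter values are hypothetical illustrations of such an input, not the cited algorithm's defaults.

```python
import math

def mesh_size_km(lat_deg, h_coarse=150.0, h_fine=25.0, lat0=35.0, width=10.0):
    """Hypothetical latitude-dependent mesh-sizing function: target edge
    length transitions smoothly from h_coarse in the tropics to h_fine
    poleward of lat0, over a band of the given width (degrees)."""
    t = 0.5 * (1.0 + math.tanh((abs(lat_deg) - lat0) / width))
    return h_coarse + (h_fine - h_coarse) * t
```

A smooth (slowly varying) sizing function like this is what lets a Frontal-Delaunay mesher produce the gradual element grading mentioned in the abstract.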

  14. Simulations of turbulent rotating flows using a subfilter scale stress model derived from the partially integrated transport modeling method

    NASA Astrophysics Data System (ADS)

    Chaouat, Bruno

    2012-04-01

    The partially integrated transport modeling (PITM) method [B. Chaouat and R. Schiestel, "A new partially integrated transport model for subgrid-scale stresses and dissipation rate for turbulent developing flows," Phys. Fluids 17, 065106 (2005), 10.1063/1.1928607; R. Schiestel and A. Dejoan, "Towards a new partially integrated transport model for coarse grid and unsteady turbulent flow simulations," Theor. Comput. Fluid Dyn. 18, 443 (2005), 10.1007/s00162-004-0155-z; B. Chaouat and R. Schiestel, "From single-scale turbulence models to multiple-scale and subgrid-scale models by Fourier transform," Theor. Comput. Fluid Dyn. 21, 201 (2007), 10.1007/s00162-007-0044-3; B. Chaouat and R. Schiestel, "Progress in subgrid-scale transport modelling for continuous hybrid non-zonal RANS/LES simulations," Int. J. Heat Fluid Flow 30, 602 (2009), 10.1016/j.ijheatfluidflow.2009.02.021], viewed as a continuous approach for hybrid RANS/LES (Reynolds-averaged Navier-Stokes equations/large eddy simulation) simulations with seamless coupling between RANS and LES regions, is used to derive a subfilter scale stress model in the framework of second-moment closure applicable in a rotating frame of reference. This subfilter scale model is based on the transport equations for the subfilter stresses and the dissipation rate and appears well suited for simulating unsteady flows on relatively coarse grids or flows with strong departure from spectral equilibrium, because the cutoff wave number can be located almost anywhere inside the energy spectrum. According to the spectral theory developed in wave number space [B. Chaouat and R. Schiestel, "From single-scale turbulence models to multiple-scale and subgrid-scale models by Fourier transform," Theor. Comput. Fluid Dyn. 21, 201 (2007), 10.1007/s00162-007-0044-3], the coefficients used in this model are no longer constants but analytical functions of a dimensionless parameter controlling the spectral distribution of turbulence. The pressure-strain correlation term encompassed in this model is inspired by the nonlinear SSG model [C. G. Speziale, S. Sarkar, and T. B. Gatski, "Modelling the pressure-strain correlation of turbulence: an invariant dynamical systems approach," J. Fluid Mech. 227, 245 (1991), 10.1017/S0022112091000101], developed initially for homogeneous rotating flows in RANS methodology. It is modeled in system rotation using the principle of objectivity, and is extended in a low-Reynolds-number version for handling non-homogeneous wall flows. The present subfilter scale stress model is then used for simulating the large scales of rotating turbulent flows on coarse and medium grids at moderate, medium, and high rotation rates. It is also applied to perform a simulation on a refined grid at the highest rotation rate. As a result, it is found that the PITM simulations reproduce fairly well the mean features of rotating channel flows, allowing a drastic reduction of the computational cost compared with that required for highly resolved LES. Overall, the mean velocities and turbulent stresses are found to be in good agreement with the data of highly resolved LES [E. Lamballais, O. Metais, and M. Lesieur, "Spectral-dynamic model for large-eddy simulations of turbulent rotating flow," Theor. Comput. Fluid Dyn. 12, 149 (1998)]. The anisotropic character of the flow resulting from rotation effects is also well reproduced, in accordance with the reference data. Moreover, the PITM2 simulations performed on the medium grid predict qualitatively well the three-dimensional flow structures as well as the longitudinal roll cells which appear in the anticyclonic wall-region of the rotating flows. As expected, the PITM3 simulation performed on the refined grid reverts to highly resolved LES. The present model, based on a rational formulation, appears to be an interesting candidate for tackling a large variety of engineering flows subjected to rotation.

  15. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays, digital sky surveys and long-duration, high-resolution numerical simulations using high-performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  16. The Sensitivity of Numerical Simulations of Cloud-Topped Boundary Layers to Cross-Grid Flow

    NASA Astrophysics Data System (ADS)

    Wyant, Matthew C.; Bretherton, Christopher S.; Blossey, Peter N.

    2018-02-01

    In mesoscale and global atmospheric simulations with large horizontal domains, strong horizontal flow across the grid is often unavoidable, but its effects on cloud-topped boundary layers have received comparatively little study. Here the effects of cross-grid flow on large-eddy simulations of stratocumulus and trade-cumulus marine boundary layers are studied across a range of grid resolutions (horizontal × vertical) between 500 m × 20 m and 35 m × 5 m. Three cases are simulated: DYCOMS nocturnal stratocumulus, BOMEX trade cumulus, and a GCSS stratocumulus-to-trade cumulus case. Simulations are performed with a stationary grid (with 4-8 m s-1 horizontal winds blowing through the cyclic domain) and a moving grid (equivalent to subtracting off a fixed vertically uniform horizontal wind) approximately matching the mean boundary-layer wind speed. Cross-grid flow produces two primary effects on stratocumulus clouds: a filtering of fine-scale resolved turbulent eddies, which reduces stratocumulus cloud-top entrainment, and a vertical broadening of the stratocumulus-top inversion, which enhances cloud-top entrainment. With a coarse (20 m) vertical grid, the former effect dominates and leads to strong increases in cloud cover and LWP, especially as horizontal resolution is coarsened. With a finer (5 m) vertical grid, the latter effect is stronger and leads to small reductions in cloud cover and LWP. For the BOMEX trade cumulus case, cross-grid flow tends to produce fewer and larger clouds with higher LWP, especially for coarser vertical grid spacing. The results presented are robust to the choice of scalar advection scheme and Courant number.

  17. A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration

    2017-11-01

    An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.

  18. Numerical simulations of the convective flame in white dwarfs

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1993-01-01

    A first step toward better understanding of the mechanism driving convective flames in exploding white dwarfs is presented. The propagation of the convective flame is examined using a two-dimensional implicit hydrodynamical code. The large scales of the instability are captured by the grid while the scales that are smaller than the grid resolution are approximated by a mixing-length approximation. It is found that large-scale perturbations (of order of the pressure scale height) do grow significantly during the expansion, leading to a very nonspherical burning front. The combustion rate is strongly enhanced (compared to the unperturbed case) during the first second, but later the expansion of the star suppresses the flame speed, leading to only partial incineration of the nuclear fuel. Our results imply that large-scale perturbations by themselves are not enough to explain the mechanism by which convective flames are driven, and a study of the whole spectrum of relevant perturbations is needed. The implications of these preliminary results for future simulations, in the context of current models for Type Ia supernovae, are discussed.

  19. Horizontal Variability of Water and Its Relationship to Cloud Fraction near the Tropical Tropopause: Using Aircraft Observations of Water Vapor to Improve the Representation of Grid-scale Cloud Formation in GEOS-5

    NASA Technical Reports Server (NTRS)

    Selkirk, Henry B.; Molod, Andrea M.

    2014-01-01

    Large-scale models such as GEOS-5 typically calculate grid-scale fractional cloudiness through a PDF parameterization of the sub-gridscale distribution of specific humidity. The GEOS-5 moisture routine uses a simple rectangular PDF varying in height that follows a tanh profile. While below 10 km this profile is informed by moisture information from the AIRS instrument, there is relatively little empirical basis for the profile above that level. ATTREX provides an opportunity to refine the profile using estimates of the horizontal variability of measurements of water vapor, total water and ice particles from the Global Hawk aircraft at or near the tropopause. These measurements will be compared with estimates of large-scale cloud fraction from CALIPSO and lidar retrievals from the CPL on the aircraft. We will use the variability measurements to perform studies of the sensitivity of the GEOS-5 cloud-fraction to various modifications to the PDF shape and to its vertical profile.
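
The rectangular-PDF cloud-fraction calculation described above can be sketched as follows: the grid-scale cloud fraction is the portion of a top-hat distribution of total water that exceeds saturation, with the PDF width varying in height along a tanh profile. The parameter values in the profile below are hypothetical illustrations, not the actual GEOS-5 settings.

```python
import math

def cloud_fraction(q_mean, q_sat, half_width):
    """Fraction of a rectangular (top-hat) sub-grid PDF of total water,
    uniform on [q_mean - w, q_mean + w], that exceeds saturation q_sat."""
    lo, hi = q_mean - half_width, q_mean + half_width
    if q_sat <= lo:
        return 1.0
    if q_sat >= hi:
        return 0.0
    return (hi - q_sat) / (hi - lo)

def relative_half_width(z_km, w_low=0.3, w_high=0.1, z0=10.0, dz=2.0):
    """Height-dependent PDF half-width (as a fraction of the mean q)
    following a tanh profile; all parameter values are illustrative."""
    return w_low + (w_high - w_low) * 0.5 * (1.0 + math.tanh((z_km - z0) / dz))
```

Tightening or widening the PDF near the tropopause, which is what the ATTREX variability measurements constrain, directly changes the humidity at which partial cloudiness first appears.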

  20. Achieving a 100% Renewable Grid: Operating Electric Power Systems with Extremely High Levels of Variable Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin; Johnson, Brian; Zhang, Yingchen

    What does it mean to achieve a 100% renewable grid? Several countries already meet or come close to achieving this goal. Iceland, for example, supplies 100% of its electricity needs with either geothermal or hydropower. Other countries that have electric grids with high fractions of renewables based on hydropower include Norway (97%), Costa Rica (93%), Brazil (76%), and Canada (62%). Hydropower plants have been used for decades to create a relatively inexpensive, renewable form of energy, but these systems are limited by natural rainfall and geographic topology. Around the world, most good sites for large hydropower resources have already been developed. So how do other areas achieve 100% renewable grids? Variable renewable energy (VRE), such as wind and solar photovoltaic (PV) systems, will be a major contributor, and with the reduction in costs for these technologies during the last five years, large-scale deployments are happening around the world.

  1. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but at a large increase in computational cost.
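
The connected-component bubble detection mentioned above can be sketched as a breadth-first labeling of high-voidage cells on a 2-D grid. The 0.8 voidage threshold and 4-connectivity below are illustrative choices, not the paper's settings.

```python
from collections import deque

def detect_bubbles(voidage, threshold=0.8):
    """Label connected regions (4-connectivity) where gas voidage exceeds
    a threshold; returns a label grid and a {label: cell_count} dict.
    The threshold value is an illustrative choice."""
    ni, nj = len(voidage), len(voidage[0])
    labels = [[0] * nj for _ in range(ni)]
    sizes = {}
    label = 0
    for i0 in range(ni):
        for j0 in range(nj):
            if voidage[i0][j0] > threshold and labels[i0][j0] == 0:
                label += 1                      # start a new bubble
                queue = deque([(i0, j0)])
                labels[i0][j0] = label
                count = 0
                while queue:                    # flood-fill the region
                    i, j = queue.popleft()
                    count += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = i + di, j + dj
                        if (0 <= a < ni and 0 <= b < nj
                                and labels[a][b] == 0
                                and voidage[a][b] > threshold):
                            labels[a][b] = label
                            queue.append((a, b))
                sizes[label] = count
    return labels, sizes
```

Bubble diameters then follow from the cell counts and the grid spacing, which is how size statistics can be compared across drag models.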

  2. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled, within the subgrid (small-scales) using a methodology that simulates the mixing and the molecular transport and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computations must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as: Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines is reported along with some characteristic results.

  3. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
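
As a point of reference for the kind of decision a superscheduler makes, a least-loaded greedy placement can be sketched as follows. This is a generic baseline for comparison, not one of the paper's three migration algorithms, and the site names are hypothetical.

```python
def place_job(job_load, site_queues):
    """Greedy placement: send a job to the site whose queued load is
    currently smallest, then account for the new load.
    site_queues maps site name -> current queued load."""
    best = min(site_queues, key=site_queues.get)
    site_queues[best] += job_load
    return best
```

A real superscheduler would additionally weigh data-transfer latency and local-scheduler autonomy, which is precisely what distinguishes the migration algorithms studied in the paper from this naive baseline.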

  4. Performance of a Heterogeneous Grid Partitioner for N-body Applications

    NASA Technical Reports Server (NTRS)

    Harvey, Daniel J.; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    An important characteristic of distributed grids is that they allow geographically separated multicomputers to be tied together in a transparent virtual environment to solve large-scale computational problems. However, many of these applications require effective runtime load balancing for the resulting solutions to be viable. Recently, we developed a latency-tolerant partitioner, called MinEX, specifically for use in distributed grid environments. This paper compares the performance of MinEX to that of METIS, a popular multilevel family of partitioners, using simulated heterogeneous grid configurations. A solver for the classical N-body problem is implemented to provide a framework for the comparisons. Experimental results show that MinEX provides superior-quality partitions while being competitive with METIS in speed of execution.

  5. Organic light-emitting diodes using novel embedded Al grid transparent electrodes

    NASA Astrophysics Data System (ADS)

    Peng, Cuiyun; Chen, Changbo; Guo, Kunping; Tian, Zhenghao; Zhu, Wenqing; Xu, Tao; Wei, Bin

    2017-03-01

    This work demonstrates a novel transparent electrode using embedded Al grids fabricated by a simple and cost-effective approach using photolithography and wet etching. The optical and electrical properties of the Al grids versus grid geometry have been systematically investigated. It was found that the Al grids exhibited a low sheet resistance of 70 Ω □-1 and a light transmission of 69% at 550 nm, with advantages in terms of processing conditions and material cost as well as potential for large-scale fabrication. Indium-tin-oxide-free green organic light-emitting diodes (OLEDs) based on the Al grid transparent electrodes were demonstrated, yielding a power efficiency >15 lm W-1 and a current efficiency >39 cd A-1 at a brightness of 2396 cd m-2. Furthermore, reduced efficiency roll-off and higher brightness were achieved compared with the ITO-based device.

  6. Turbulence decay downstream of an active grid

    NASA Astrophysics Data System (ADS)

    Bewley, Gregory; Bodenschatz, Eberhard

    2015-11-01

    A grid in a wind tunnel stirs up turbulence that has a certain large-scale structure. The moving parts in a so-called ``active grid'' can be programmed to produce different structures. We use a special active grid in which each of 129 paddles on the grid has its own position-controlled servomotor that can move independently of the others. We observe among other things that the anisotropy in the amplitude of the velocity fluctuations and in the correlation lengths can be set and varied with an algorithm that oscillates the paddles in a specified way. The variation in the anisotropies that we observe can be explained by our earlier analysis of anisotropic ``soccer ball'' turbulence (Bewley, Chang and Bodenschatz 2012, Phys. Fluids). We define the influence of this variation in structure on the downstream evolution of the turbulence.

  7. The Impact of Simulated Mesoscale Convective Systems on Global Precipitation: A Multiscale Modeling Study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, Jiun-Dar

    2017-01-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain of several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.

  8. Eddy-driven low-frequency variability: physics and observability through altimetry

    NASA Astrophysics Data System (ADS)

    Penduff, Thierry; Sérazin, Guillaume; Arbic, Brian; Mueller, Malte; Richman, James G.; Shriver, Jay F.; Morten, Andrew J.; Scott, Robert B.

    2015-04-01

    Model studies have revealed the propensity of the eddying ocean circulation to generate strong low-frequency variability (LFV) intrinsically, i.e. without low-frequency atmospheric variability. In the present study, gridded satellite altimeter products, idealized quasi-geostrophic (QG) turbulent simulations, and realistic high-resolution global ocean simulations are used to study the spontaneous tendency of mesoscale (relatively high frequency and high wavenumber) kinetic energy to non-linearly cascade towards larger time and space scales. The QG model reveals that large-scale variability, arising from the well-known spatial inverse cascade, is associated with low frequencies. Low-frequency, low-wavenumber energy is maintained primarily by nonlinearities in the QG model, with forcing (by large-scale shear) and friction playing secondary roles. In realistic simulations, nonlinearities also generally drive kinetic energy to low frequencies and low wavenumbers. In some, but not all, regions of the gridded altimeter product, surface kinetic energy is also found to cascade toward low frequencies. Exercises conducted with the realistic model suggest that the spatial and temporal filtering inherent in the construction of gridded satellite altimeter maps may contribute to the discrepancies seen in some regions between the direction of frequency cascade in models versus gridded altimeter maps. Finally, the range of frequencies that are highly energized and engaged in these cascades appears much greater than the range of highly energized and engaged wavenumbers. Global eddying simulations, performed in the context of the CHAOCEAN project in collaboration with the CAREER project, provide estimates of the range of timescales that these oceanic nonlinearities are likely to feed without external variability.

  9. Renewable Fuels-to-Grid Integration | Energy Systems Integration Facility |

    Science.gov Websites

    hydrogen, other than electrolysis. Partnerships: Giner. NREL helped evaluate a large-scale polymer electrolyte

  10. Elimination of artificial grid distortion and hourglass-type motions by means of Lagrangian subzonal masses and pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caramana, E.J.; Shashkov, M.J.

    1997-12-31

    The bane of Lagrangian hydrodynamics calculations is premature breakdown of the grid topology that results in severe degradation of accuracy and run termination, often long before the assumption of Lagrangian zonal mass ceases to be valid. At short spatial grid scales this is usually referred to by the terms hourglass mode or keystone motion, associated in particular with underconstrained grids such as quadrilaterals and hexahedrons in two and three dimensions, respectively. At longer spatial scales relative to the grid spacing there is what is referred to ubiquitously as spurious vorticity, or the long-thin zone problem. In both cases the result is anomalous grid distortion and tangling that has nothing to do with the actual solution, as would be the case for turbulent flow. In this work the authors show how such motions can be eliminated by the proper use of subzonal Lagrangian masses, and associated densities and pressures. These subzonal masses arise in a natural way from the requirement that the mass associated with each nodal grid point be constant in time, in addition to the usual assumption of constant Lagrangian zonal mass in a staggered grid hydrodynamics scheme. The authors show that with proper discretization of subzonal forces resulting from subzonal pressures, hourglass motion and spurious vorticity can be eliminated for a very large range of problems. Finally, the authors present results of calculations of many test problems.

  11. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is able to generate fine scale soil moisture fields across large extents based on coarse scale observations. A likely application of this approach is generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
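    The nudging construction described above can be sketched numerically. The following is a minimal illustration of the CDA idea, assuming a 1D periodic diffusion model as a stand-in for the vadose-zone physics; the grid sizes, relaxation coefficient `mu`, and diffusivity `nu` are illustrative choices, not values from the study.

```python
import numpy as np

def interp_coarse(xc, obs, xf):
    """Periodic piecewise-linear interpolant of coarse observations onto the fine grid."""
    return np.interp(xf, np.append(xc, 1.0), np.append(obs, obs[0]))

def run_cda(u_true, n_coarse=8, mu=50.0, nu=0.05, dt=1e-4, steps=20000):
    """Nudge a 1D periodic diffusion model toward coarse observations of u_true."""
    n = u_true.size
    xf = np.linspace(0.0, 1.0, n, endpoint=False)
    idx = np.arange(0, n, n // n_coarse)
    # The truth is static in this toy setup, so the interpolated observations
    # (the target of the nudging term) are fixed in time.
    target = interp_coarse(xf[idx], u_true[idx], xf)
    u = np.zeros(n)                          # model starts from the wrong state
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) * n * n
        u = u + dt * (nu * lap + mu * (target - u))   # diffusion + nudging misfit
    return u

x = np.linspace(0.0, 1.0, 64, endpoint=False)
truth = np.sin(2.0 * np.pi * x)
est = run_cda(truth)
err = float(np.max(np.abs(est - truth)))
```

    Starting from a zero state, the nudging term pulls the fine-grid field toward the interpolated coarse observations, recovering the fine-scale truth to within the interpolation error of the coarse sampling.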

  12. Flow characteristics and scaling past highly porous wall-mounted fences

    NASA Astrophysics Data System (ADS)

    Rodríguez-López, Eduardo; Bruce, Paul J. K.; Buxton, Oliver R. H.

    2017-07-01

    An extensive characterization of the flow past wall-mounted highly porous fences based on single- and multi-scale geometries has been performed using hot-wire anemometry in a low-speed wind tunnel. Whilst drag properties (estimated from the time-averaged momentum equation) seem to depend mostly on the grids' blockage ratio, bars of different size and orientation seem to generate distinct wake behaviours regarding turbulence properties. Far from the near-grid region, the flow is dominated by the presence of two well-differentiated layers: one close to the wall, dominated by the near-wall behaviour, and another corresponding to the grid's wake and the shear layer originating between this and the freestream. It is proposed that the effective thickness of the wall layer can be inferred from the wall-normal profile of root-mean-square streamwise velocity or, alternatively, from the wall-normal profile of streamwise velocity correlation. Using these definitions of wall-layer thickness enables us to collapse different trends of the turbulence behaviour inside this layer. In particular, the root-mean-square level of the wall shear stress fluctuations, the longitudinal integral length scale, and the spanwise turbulent structure are shown to scale satisfactorily with this thickness rather than with the whole thickness of the grid's wake. Moreover, it is shown that certain grids destroy the spanwise arrangement of large turbulence structures in the logarithmic region, which are then re-formed after a particular streamwise extent. It is finally shown that for fences subject to a boundary layer of thickness comparable to their height, the effective thickness of the wall layer scales with the incoming boundary layer thickness. Analogously, it is hypothesized that the growth rate of the internal layer is also partly dependent on the incoming boundary layer thickness.

  13. Using Unsupervised Learning to Unlock the Potential of Hydrologic Similarity

    NASA Astrophysics Data System (ADS)

    Chaney, N.; Newman, A. J.

    2017-12-01

    By clustering environmental data into representative hydrologic response units (HRUs), hydrologic similarity aims to harness the covariance between a system's physical environment and its hydrologic response to create reduced-order models. This is the primary approach through which sub-grid hydrologic processes are represented in large-scale models (e.g., Earth System Models). Although the possibilities of hydrologic similarity are extensive, its practical implementations have been limited to 1-d bins of oversimplistic metrics of hydrologic response (e.g., the topographic index); this is a missed opportunity. In this presentation we will show how unsupervised learning is unlocking the potential of hydrologic similarity; clustering methods enable generalized frameworks to effectively and efficiently harness the petabytes of global environmental data to robustly characterize sub-grid heterogeneity in large-scale models. To illustrate the potential that unsupervised learning has towards advancing hydrologic similarity, we introduce a hierarchical clustering algorithm (HCA) that clusters very high resolution (30-100 meters) elevation, soil, climate, and land cover data to assemble a domain's representative HRUs. These HRUs are then used to parameterize the sub-grid heterogeneity in land surface models; for this study we use the GFDL LM4 model, the land component of the GFDL Earth System Model. To explore HCA and its impacts on the hydrologic system, we use a ¼° grid cell in southeastern California as a test site. HCA is used to construct an ensemble of 9 different HRU configurations, each with a different number of HRUs; for each ensemble member, LM4 is run between 2002 and 2014 with a 26-year spinup.
    The analysis of the ensemble of model simulations shows that: 1) clustering the high-dimensional environmental data space leads to a robust representation of the role of the physical environment in the coupled water, energy, and carbon cycles at a relatively low number of HRUs; 2) the reduced-order model with around 300 HRUs effectively reproduces the fully distributed model simulation (30 meters) at less than 1/1000 of the computational expense; 3) assigning each grid cell of the fully distributed grid to an HRU via HCA enables novel visualization methods for large-scale models; this has significant implications for how these models are applied and evaluated. We will conclude by outlining the potential that this work has within operational prediction systems, including numerical weather prediction, Earth System models, and Early Warning systems.
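    The clustering step can be sketched with a toy example. This is not the authors' HCA; it substitutes a minimal k-means written in NumPy, and the three "environmental" features are synthetic, purely to show how per-pixel data are reduced to a handful of HRU labels that a land model can then be run over.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Minimal k-means; stands in here for the paper's hierarchical clustering."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical per-pixel features (e.g., normalized elevation, soil property,
# mean annual precipitation). Each cluster becomes one HRU; the model is then
# run once per HRU instead of once per pixel.
n_pix = 2000
X = rng.normal(size=(n_pix, 3))
labels, centers = kmeans(X, k=8)
```

    Running the land model once per HRU and mapping results back through `labels` is what yields the 1/1000 cost reduction described above.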

  14. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is described. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations illustrate some of the novel features of the code.

  15. The Surface Pressure Response of a NACA 0015 Airfoil Immersed in Grid Turbulence. Volume 1; Characteristics of the Turbulence

    NASA Technical Reports Server (NTRS)

    Bereketab, Semere; Wang, Hong-Wei; Mish, Patrick; Devenport, William J.

    2000-01-01

    Two grids have been developed for the Virginia Tech 6 ft x 6 ft Stability wind tunnel for the purpose of generating homogeneous isotropic turbulent flows for the study of unsteady airfoil response. The first, a square bi-planar grid with a 12" mesh size and an open area ratio of 69.4%, was mounted in the wind tunnel contraction. The second grid, a metal weave with a 1.2 in. mesh size and an open area ratio of 68.2%, was mounted in the tunnel test section. Detailed statistical and spectral measurements of the turbulence generated by the two grids are presented for wind tunnel free stream speeds of 10, 20, 30 and 40 m/s. These measurements show the flows to be closely homogeneous and isotropic. Both grids produce flows with a turbulence intensity of about 4% at the location planned for the airfoil leading edge. Turbulence produced by the large grid has an integral scale of some 3.2 inches at this location. Turbulence produced by the small grid is an order of magnitude smaller. For wavenumbers below the upper limit of the inertial subrange, the spectra and correlations measured with both grids at all speeds can be represented using the von Karman interpolation formula with a single velocity and length scale. The spectra may be accurately represented over the entire wavenumber range by a modification of the von Karman interpolation formula that includes the effects of dissipation. These models are most accurate at the higher speeds (30 and 40 m/s).
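    The von Kármán interpolation formula mentioned above bridges the energy-containing range and the inertial subrange with a single velocity scale and length scale. A common textbook form of the one-dimensional longitudinal spectrum is sketched below; the constant 1.339 and the normalization are the standard ones and may differ from those used in the report.

```python
import numpy as np

def von_karman_f11(k1, sigma2, L):
    """One-dimensional von Karman longitudinal spectrum (textbook form):
    flat at low wavenumber, rolling off as k^(-5/3) in the inertial subrange.
    sigma2 is the velocity variance, L the integral length scale."""
    return (sigma2 * L / np.pi) / (1.0 + (1.339 * L * k1) ** 2) ** (5.0 / 6.0)

# The high-wavenumber logarithmic slope should approach the -5/3 inertial-range law.
slope = (np.log(von_karman_f11(1e3, 1.0, 1.0)) -
         np.log(von_karman_f11(1e2, 1.0, 1.0))) / np.log(10.0)
```

    Fitting `sigma2` and `L` to measured spectra is the single-velocity-and-length-scale representation the abstract describes; the dissipation-range modification adds an exponential cutoff at high wavenumber.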

  16. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components, and the caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: .

  17. Scaling effects on spring phenology detections from MODIS data at multiple spatial resolutions over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin

    2017-10-01

    Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected the enhanced vegetation index (EVI, 250 m) from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple spatial resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of spring season (SOS) at various spatial resolutions. Further, the SOS variation across scales was examined at each coarse resolution grid (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation within each reference grid. The results revealed the influences of satellite spatial resolutions on SOS retrievals and the related impact factors. Specifically, SOS varied significantly, linearly or logarithmically, across scales, although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m at large ecosystem regions were generally similar, with a difference of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in the northeastern than in the southern and western regions. The SOS scaling effect was significantly associated with the heterogeneity of vegetation properties characterized using landscape fragmentation, the proportion of the primary land cover type, and the spatial variability of seasonal greenness variation, with the latter being the most important impact factor.
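    The resampling from 250 m to n × 250 m can be illustrated with simple block averaging, which is one plausible aggregation rule (an assumption here, since the abstract does not state which rule was used) for producing the coarser EVI grids:

```python
import numpy as np

def block_average(field, factor):
    """Aggregate a fine-resolution field to factor x factor coarser pixels,
    mimicking a 250 m -> n x 250 m resampling. Edge rows/columns that do not
    fill a complete block are trimmed."""
    ny, nx = field.shape
    ny, nx = ny - ny % factor, nx - nx % factor
    blocks = field[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))
```

    Applying this per time step to the EVI series, then re-running the SOS detection at each factor, is the kind of multi-resolution experiment the study describes.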

  18. Assessment of zero-equation SGS models for simulating indoor environment

    NASA Astrophysics Data System (ADS)

    Taghinia, Javad; Rahman, Md Mizanur; Tse, Tim K. T.

    2016-12-01

    The understanding of air-flow in enclosed spaces plays a key role to designing ventilation systems and indoor environment. The computational fluid dynamics aspects dictate that the large eddy simulation (LES) offers a subtle means to analyze complex flows with recirculation and streamline curvature effects, providing more robust and accurate details than those of Reynolds-averaged Navier-Stokes simulations. This work assesses the performance of two zero-equation sub-grid scale models: the Rahman-Agarwal-Siikonen-Taghinia (RAST) model with a single grid-filter and the dynamic Smagorinsky model with grid-filter and test-filter scales. This in turn allows a cross-comparison of the effect of two different LES methods in simulating indoor air-flows with forced and mixed (natural + forced) convection. A better performance against experiments is indicated with the RAST model in wall-bounded non-equilibrium indoor air-flows; this is due to its sensitivity toward both the shear and vorticity parameters.

  19. Vehicle-to-grid power implementation: From stabilizing the grid to supporting large-scale renewable energy

    NASA Astrophysics Data System (ADS)

    Kempton, Willett; Tomić, Jasna

    Vehicle-to-grid power (V2G) uses electric-drive vehicles (battery, fuel cell, or hybrid) to provide power for specific electric markets. This article examines the systems and processes needed to tap energy in vehicles and implement V2G. It quantitatively compares today's light vehicle fleet with the electric power system. The vehicle fleet has 20 times the power capacity, less than one-tenth the utilization, and one-tenth the capital cost per prime mover kW. Conversely, utility generators have 10-50 times longer operating life and lower operating costs per kWh. To tap V2G is to synergistically use these complementary strengths and to reconcile the complementary needs of the driver and grid manager. This article suggests strategies and business models for doing so, and the steps necessary for the implementation of V2G. After the initial high-value V2G markets saturate and production costs drop, V2G can provide storage for renewable energy generation. Our calculations suggest that V2G could stabilize large-scale (one-half of US electricity) wind power with 3% of the fleet dedicated to regulation for wind, plus 8-38% of the fleet providing operating reserves or storage for wind. Jurisdictions more likely to take the lead in adopting V2G are identified.

  20. Parallelization Issues and Particle-In-Cell Codes.

    NASA Astrophysics Data System (ADS)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. 
    A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters that lead to predictable phenomena, including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
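    The dual-pointer bookkeeping itself is not detailed in the abstract, but the locality idea it exploits, keeping the particle arrays ordered by cell index so that grid accesses stream through memory, can be sketched. The full sort below stands in for the incremental partial sort; all sizes are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the locality idea: order particles by owning cell so that charge
# deposition and field gather touch the grid arrays sequentially. The paper's
# dual-pointer scheme maintains this ordering incrementally; here we simply
# re-sort once for clarity.

n_cells, n_part = 64, 10000
x = rng.uniform(0.0, 1.0, n_part)          # particle positions on [0, 1)
cell = (x * n_cells).astype(int)           # owning cell of each particle

order = np.argsort(cell, kind="stable")    # reorder particle arrays by cell
x, cell = x[order], cell[order]

# Nearest-grid-point charge deposition now streams through the density array.
density = np.bincount(cell, minlength=n_cells).astype(float)
```

    In a production PIC code the same reordering is applied to velocities and any other per-particle arrays, and only particles that crossed a cell boundary need to be moved each step.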

  1. A grid of MHD models for stellar mass loss and spin-down rates of solar analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.; Drake, J. J.

    2014-03-01

    Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with increasing rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω_* ∝ t^(-1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω_*^2.
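    The quoted Skumanich behaviour follows from a torque law of the form dΩ/dt ∝ -Ω³, which is what B_p ∝ Ω² implies for magnetized-wind braking. The check below integrates that ODE numerically and recovers the t^(-1/2) decline; the rate coefficient and initial angular velocity are arbitrary illustrative values.

```python
import numpy as np

def spin_down(omega0=1.0, k=1.0, dt=1e-3, steps=200000):
    """Forward-Euler integration of dOmega/dt = -k * Omega**3."""
    ts = np.empty(steps)
    oms = np.empty(steps)
    omega = omega0
    for i in range(steps):
        omega += dt * (-k * omega**3)
        ts[i] = (i + 1) * dt
        oms[i] = omega
    return ts, oms

ts, oms = spin_down()
# Late-time log-log slope; analytically Omega = Omega0 / sqrt(1 + 2*k*Omega0**2*t),
# which tends to t**(-1/2) once 2*k*Omega0**2*t >> 1.
sl = np.polyfit(np.log(ts[-1000:]), np.log(oms[-1000:]), 1)[0]
```

    The fitted slope approaches -1/2, the Skumanich exponent, independent of the chosen coefficient.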

  2. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  3. Integrating Residential Photovoltaics With Power Lines

    NASA Technical Reports Server (NTRS)

    Borden, C. S.

    1985-01-01

    Report finds that rooftop solar-cell arrays feeding excess power to the electric-utility grid for a fee are a potentially attractive large-scale application of photovoltaic technology. Presents an assessment of the breakeven costs of these arrays under a variety of technological and economic assumptions.

  4. The global gridded crop model intercomparison: Data and modeling protocols for Phase 1 (v1.0)

    DOE PAGES

    Elliott, J.; Müller, C.; Deryng, D.; ...

    2015-02-11

    We present protocols and input data for Phase 1 of the Global Gridded Crop Model Intercomparison, a project of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The project consists of global simulations of yields, phenologies, and many land-surface fluxes by 12–15 modeling groups for many crops, climate forcing data sets, and scenarios over the historical period from 1948 to 2012. The primary outcomes of the project include (1) a detailed comparison of the major differences and similarities among global models commonly used for large-scale climate impact assessment, (2) an evaluation of model and ensemble hindcasting skill, (3) quantification of key uncertainties from climate input data, model choice, and other sources, and (4) a multi-model analysis of the agricultural impacts of large-scale climate extremes from the historical record.

  5. Large-Eddy Simulation of Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Sochacki, James S.

    1999-01-01

    This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no-cost until October 6, 1999. Its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.
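    A time-domain filter of the kind discussed can be illustrated with the causal exponential filter, a common choice in the temporally filtered LES literature; this is a generic sketch of time-domain filtering, not the specific SGS model developed under the grant.

```python
import numpy as np

def exp_time_filter(u, dt, Delta):
    """Causal exponential time-domain filter,
        d(ubar)/dt = (u - ubar) / Delta,
    discretized with forward Euler. Delta is the filter time scale; frequencies
    well above 1/Delta are attenuated, and slowly varying content passes through."""
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    a = dt / Delta
    for n in range(1, len(u)):
        ubar[n] = ubar[n - 1] + a * (u[n] - ubar[n - 1])
    return ubar
```

    Because the filter is causal, it can be evaluated on the fly during a simulation, which is one of the computational advantages of time-domain over spatial filtering for aeroacoustic post-processing.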

  6. Sandia and NJ TRANSIT Authority Developing Resilient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanley, Charles J.; Ellis, Abraham

    2014-11-01

    Through the memorandum of understanding between the Department of Energy (DOE), the New Jersey Transit Authority (NJ Transit), and the New Jersey Board of Public Utilities, Sandia National Laboratories is assisting NJ Transit in developing NJ TransitGrid: an electric microgrid that will include a large-scale gas-fired generation facility and distributed energy resources (photovoltaics [PV], energy storage, electric vehicles, combined heat and power [CHP]) to supply reliable power during storms or other times of significant power failure. NJ TransitGrid was awarded $410M from the Department of Transportation to develop a first-of-its-kind electric microgrid capable of supplying highly reliable power.

  7. Tools and Techniques for Measuring and Improving Grid Performance

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.

  8. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become a focus of the information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, in which SDN is adopted as the enabling technology of the data communication network (DCN) in place of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated until now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for future network deployment.

  9. Scaling between reanalyses and high-resolution land-surface modelling in mountainous areas - enabling better application and testing of reanalyses in heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Gruber, S.; Fiddes, J.

    2013-12-01

    In mountainous topography, the difference in scale between atmospheric reanalyses (typically tens of kilometres) and relevant processes and phenomena near the Earth surface, such as permafrost or snow cover (meters to tens of meters), is most obvious. This contrast of scales is one of the major obstacles to using reanalysis data for the simulation of surface phenomena and to confronting reanalyses with independent observations. Using the example of modelling permafrost in mountain areas (though the approach generalises readily to other phenomena and heterogeneous environments), we present and test methods against measurements for (A) scaling atmospheric data from the reanalysis to the ground level and (B) smart sampling of the heterogeneous landscape in order to set up a lumped model simulation that represents the high-resolution land surface. TopoSCALE (Part A, see http://dx.doi.org/10.5194/gmdd-6-3381-2013) is a scheme which scales coarse-grid climate fields to fine-grid topography using pressure-level data. In addition, it applies the necessary topographic corrections, e.g. for the variables required to compute radiation fields. This provides the necessary driving fields to the land surface model (LSM). Tested against independent ground data, this scheme has been shown to improve the scaling and distribution of meteorological parameters in complex terrain, as compared to conventional methods such as lapse-rate-based approaches. TopoSUB (Part B, see http://dx.doi.org/10.5194/gmd-5-1245-2012) is a surface pre-processor designed to sample a fine-grid domain (defined by a digital elevation model) along important topographical (or other) dimensions through a clustering scheme. This allows constructing a lumped model representing the main sources of fine-grid variability and applying a 1D LSM efficiently over large areas.
    Results can be processed to derive (i) summary statistics at the coarse-scale reanalysis grid resolution, (ii) high-resolution data fields spatialized to, e.g., the fine-scale digital elevation model grid, or (iii) validation products only for locations at which measurements exist. The ability of TopoSUB to approximate results simulated by a 2D distributed numerical LSM with a factor of ~10,000 fewer computations is demonstrated by comparison of 2D and lumped simulations. Successful application of the combined scheme in the European Alps is reported and, based on its results, open issues for future research are outlined.
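    The simplest ingredient of such scaling, an elevation correction of coarse-grid air temperature, can be sketched as below. TopoSCALE itself derives the vertical structure from pressure-level data rather than a fixed lapse rate; the constant rate and all numbers here are illustrative only.

```python
import numpy as np

def downscale_temperature(t_coarse, z_coarse, z_fine, lapse=-6.5e-3):
    """Redistribute a coarse-cell air temperature (deg C) onto a fine DEM using a
    fixed lapse rate (K per m). A constant-rate stand-in for the pressure-level
    interpolation that TopoSCALE performs."""
    return t_coarse + lapse * (z_fine - z_coarse)

# One coarse cell at 1500 m elevation and 2 deg C, downscaled to a hypothetical
# fine DEM spanning 900-2700 m.
z_fine = np.array([900.0, 1500.0, 2100.0, 2700.0])
t_fine = downscale_temperature(2.0, 1500.0, z_fine)
```

    TopoSUB's contribution is orthogonal: instead of evaluating such corrections at every DEM pixel, it clusters the pixels and evaluates the model once per cluster.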

  10. Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1999-01-01

    The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.
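    The coarse-grid-correction idea at the heart of multigrid can be shown in miniature. The sketch below is a structured 1D two-grid cycle for -u'' = f, not the paper's unstructured agglomeration scheme: the coarse level is every other point, restriction is injection, prolongation is linear interpolation, and damped Jacobi is the smoother.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Damped Jacobi smoothing for -u'' = f with homogeneous Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1.0 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(rc, hc):
    """Direct solve of the coarse Poisson problem (small, so dense is fine)."""
    m = rc.size - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = jacobi(u, f, h)                       # pre-smooth
    rc = residual(u, f, h)[::2].copy()        # restrict residual by injection
    ec = coarse_solve(rc, 2.0 * h)            # coarse-grid correction
    xp = np.arange(0.0, u.size, 2.0)
    u = u + np.interp(np.arange(float(u.size)), xp, ec)   # prolongate, correct
    return jacobi(u, f, h)                    # post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)
u = np.zeros(n)
for _ in range(50):
    u = two_grid(u, f, h)
```

    On unstructured grids the same skeleton applies with agglomerated coarse levels in place of the every-other-point grid, and FAS in place of the linear correction when the underlying system is nonlinear.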

  11. Decay of grid turbulence in superfluid helium-4: Mesh dependence

    NASA Astrophysics Data System (ADS)

    Yang, J.; Ihas, G. G.

    2018-03-01

    Temporal decay of grid turbulence is experimentally studied in superfluid 4He in a large square channel. The second sound attenuation method is used to measure the turbulent vortex line density (L), with a phase-locked tracking technique to minimize frequency-shift effects induced by temperature fluctuations. Two different grids (0.8 mm and 3.0 mm mesh) are pulled to generate turbulence. Theory predicts different power laws for the decay: L should decay as t^(-11/10) while the length scale of the energy-containing eddies grows from the grid mesh size to the size of the channel; at later times, after the energy-containing eddy size becomes comparable to the channel, L should follow t^(-3/2). Our recent experimental data exhibit evidence for t^(-11/10) at early times, but t^(-2) instead of t^(-3/2) at later times. Moreover, a consistent bump/plateau feature is prominent between the two decay regimes for the smaller (0.8 mm) grid mesh but absent for the 3.0 mm grid mesh. This implies that in the large channel different types of turbulence are generated, depending on the mesh hole size (mesh Reynolds number) compared to the channel Reynolds number.
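    The competing decay exponents above can be checked against data by fitting the slope of log L versus log t in each regime; a minimal sketch on synthetic decay curves (the prefactors and time windows are invented for illustration, not the experiment's values):

    ```python
    import numpy as np

    def decay_exponent(t, L):
        """Least-squares slope of log L vs log t, i.e. n in L ~ t^n."""
        slope, _intercept = np.polyfit(np.log(t), np.log(L), 1)
        return slope

    # synthetic vortex-line-density decay with the two predicted regimes
    t_early = np.linspace(0.1, 1.0, 50)
    t_late = np.linspace(10.0, 100.0, 50)
    L_early = 1e3 * t_early ** (-11 / 10)   # grid-sized eddies growing
    L_late = 1e2 * t_late ** (-3 / 2)       # channel-sized eddies

    n_early = decay_exponent(t_early, L_early)
    n_late = decay_exponent(t_late, L_late)
    ```

    On real measurements one would fit each window separately and look for the bump/plateau between the regimes as a break in the local slope.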

  12. Energy Spectra of Higher Reynolds Number Turbulence by the DNS with up to 12288³ Grid Points

    NASA Astrophysics Data System (ADS)

    Ishihara, Takashi; Kaneda, Yukio; Morishita, Koji; Yokokawa, Mitsuo; Uno, Atsuya

    2014-11-01

    Large-scale direct numerical simulations (DNS) of forced incompressible turbulence in a periodic box with up to 12288³ grid points have been performed using the K computer. The maximum Taylor-microscale Reynolds number Rλ and the maximum Reynolds number Re based on the integral length scale are over 2000 and 10⁵, respectively. Our previous DNS with Rλ up to 1100 showed that the energy spectrum has a slope steeper than -5/3 (the Kolmogorov scaling law) by about 0.1 in the wavenumber range kη < 0.03, where η is the Kolmogorov length scale. Our present DNS at higher resolutions show that the energy spectra at different Reynolds numbers (Rλ > 1000) are well normalized not by the integral length scale but by the Kolmogorov length scale in the wavenumber range of the steeper slope. This result indicates that the steeper slope is not an inherent character of the inertial subrange, but is affected by viscosity.
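    The Kolmogorov normalization discussed above can be sketched numerically: form the Kolmogorov length η = (ν³/ε)^(1/4) and compensate a model inertial-range spectrum by ε^(-2/3) k^(5/3), so that a pure -5/3 range collapses to a constant. The viscosity, dissipation rate, and Kolmogorov constant below are illustrative values, not the DNS parameters:

    ```python
    import numpy as np

    nu = 1e-6        # kinematic viscosity (illustrative)
    eps = 1e-2       # mean energy dissipation rate (illustrative)
    eta = (nu**3 / eps) ** 0.25          # Kolmogorov length scale

    k = np.logspace(0, 4, 200)           # wavenumbers
    C = 1.6                              # Kolmogorov constant (typical value)
    E = C * eps ** (2 / 3) * k ** (-5 / 3)   # model inertial-range spectrum

    # compensated spectrum: flat (equal to C) wherever the -5/3 law holds;
    # a steeper slope would appear as a downward tilt versus k * eta
    comp = E * eps ** (-2 / 3) * k ** (5 / 3)
    k_eta = k * eta                      # abscissa for Kolmogorov scaling
    ```

    Plotting `comp` against `k_eta` for several Reynolds numbers is the usual way to test whether spectra collapse in Kolmogorov units, as the abstract describes.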

  13. [Analysis on difference of richness of traditional Chinese medicine resources in Chongqing based on grid technology].

    PubMed

    Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    After the end of the national and local medicine resources census work, a large amount of data on Chinese medicine resources and their distribution will be summarized. Species richness between regions is a valid indicator for objectively reflecting inter-regional resources of Chinese medicine. Because county areas differ greatly in size, assessing the richness of traditional Chinese medicine resources with the county as the statistical unit leads to biased regional richness statistics. Regular-grid statistical methods can reduce the differences in apparent richness of traditional Chinese medicine resources that are caused by differing sizes of the statistical unit. Taking Chongqing as an example, and based on the existing survey data, the differences in richness of traditional Chinese medicine resources at different grid scales were compared and analyzed. The results showed that a 30 km grid can be selected, at which the richness of Chinese medicine resources in Chongqing better reflects the objective situation of resource richness in traditional Chinese medicine. Copyright© by the Chinese Pharmaceutical Association.

  14. How well do terrestrial biosphere models simulate coarse-scale runoff in the contiguous United States?

    DOE PAGES

    Schwalm, C.; Huntzinger, Deborah N.; Cook, Robert B.; ...

    2015-03-11

    Significant changes in the water cycle are expected under current global environmental change. Robust assessment of present-day water cycle dynamics at continental to global scales is confounded by shortcomings in the observed record. Modeled assessments also yield conflicting results, which are linked to differences in model structure and simulation protocol. Here we compare simulated gridded (1° spatial resolution) runoff from six terrestrial biosphere models (TBMs), seven reanalysis products, and one gridded surface station product in the contiguous United States (CONUS) from 2001 to 2005. We evaluate the consistency of these 14 estimates with stream gauge data, both as depleted flow and corrected for net withdrawals (2005 only), at the CONUS and water resource region scale, as well as examining similarity across TBMs and reanalysis products at the grid cell scale. Mean runoff across all simulated products and regions varies widely (range: 71 to 356 mm yr⁻¹) relative to observed continental-scale runoff (209, or 280 mm yr⁻¹ when corrected for net withdrawals). Across all 14 products, eight exhibit Nash-Sutcliffe efficiency values in excess of 0.8 and three are within 10% of the observed value. Region-level mismatch exhibits a weak pattern of overestimation in western and underestimation in eastern regions, although two products are systematically biased across all regions, and the mismatch largely scales with water use. Although gridded composite TBM and reanalysis runoff show some regional similarities, individual product values are highly variable. At the coarse scales used here, we find that progress in better constraining simulated runoff requires standardized forcing data and the explicit incorporation of human effects (e.g., water withdrawals by source, fire, and land use change). (C) 2015 Elsevier B.V. All rights reserved.

  15. A manganese-hydrogen battery with potential for grid-scale energy storage

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Li, Guodong; Pei, Allen; Li, Yuzhang; Liao, Lei; Wang, Hongxia; Wan, Jiayu; Liang, Zheng; Chen, Guangxu; Zhang, Hao; Wang, Jiangyan; Cui, Yi

    2018-05-01

    Batteries including lithium-ion, lead-acid, redox-flow and liquid-metal batteries show promise for grid-scale storage, but they are still far from meeting the grid's storage needs, such as low cost, long cycle life, reliable safety and reasonable energy density for cost and footprint reduction. Here, we report a rechargeable manganese-hydrogen battery, where the cathode is cycled between soluble Mn²⁺ and solid MnO₂ with a two-electron reaction, and the anode is cycled between H₂ gas and H₂O through well-known catalytic reactions of hydrogen evolution and oxidation. This battery chemistry exhibits a discharge voltage of 1.3 V, a rate capability of 100 mA cm⁻² (36 s of discharge) and a lifetime of more than 10,000 cycles without decay. We achieve a gravimetric energy density of 139 Wh kg⁻¹ (volumetric energy density of 210 Wh l⁻¹), with a theoretical gravimetric energy density of 174 Wh kg⁻¹ (volumetric energy density of 263 Wh l⁻¹) in a 4 M MnSO₄ electrolyte. The manganese-hydrogen battery involves low-cost abundant materials and has the potential to be scaled up for large-scale energy storage.

  16. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.

  17. The Center for Multiscale Plasma Dynamics, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gombosi, Tamas I.

    The University of Michigan participated in the joint UCLA/Maryland fusion science center focused on plasma physics problems for which the traditional separation of the dynamics into microscale and macroscale processes breaks down. These processes involve large-scale flows and magnetic fields tightly coupled to the small-scale, kinetic dynamics of turbulence, particle acceleration and energy cascade. The interaction between these vastly disparate scales controls the evolution of the system. The enormous range of temporal and spatial scales associated with these problems renders direct simulation intractable, even in computations that use the largest existing parallel computers. Our efforts focused on two main problems: the development of Hall MHD solvers on solution-adaptive grids, and the development of solution-adaptive grids using generalized coordinates so that the proper geometry of inertial confinement can be taken into account and efficient refinement strategies can be obtained.

  18. A high-resolution European dataset for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta

    2013-04-01

    There is an increasing demand for large-scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large-scale models need to be calibrated and verified against large amounts of observations in order to judge their capabilities to predict the future. However, the creation of large-scale datasets is challenging, for it requires the collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim of driving a large-scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide a similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated using a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years.
    The dataset variables are used as inputs to the hydrological calibration and validation of EFAS as well as for establishing long-term discharge "proxy" climatologies, which can in turn be used for statistical analysis to derive return periods or other time-series derivatives. In addition, this dataset will be used to assess climatological trends in Europe. Unfortunately, to date no baseline dataset at the European scale exists against which to test the quality of the herein presented data. Hence, a comparison against other existing datasets can only be an indication of data quality. Due to data availability, a comparison was made for precipitation and temperature only, arguably the most important meteorological drivers for hydrologic models. A variety of analyses was undertaken at country scale against data reported to EUROSTAT and against the E-OBS dataset. The comparison revealed that while the datasets showed overall similar temporal and spatial patterns, there were some differences in magnitudes, especially for precipitation. It is not straightforward to define the specific cause for these differences. However, in most cases the comparatively low observation station density appears to be the principal reason for the differences in magnitude.
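    The potential evapotranspiration above comes from the Penman-Monteith equation; a minimal sketch of the widely used FAO-56 daily reference form follows. EFAS-Meteo may use a different variant, and the input values in the example are purely illustrative:

    ```python
    import math

    def svp(T):
        """Saturation vapour pressure (kPa) at air temperature T (degC)."""
        return 0.6108 * math.exp(17.27 * T / (T + 237.3))

    def penman_monteith_et0(T, Rn, u2, ea, G=0.0, z=0.0):
        """FAO-56 reference evapotranspiration (mm/day).

        T:  mean air temperature (degC)
        Rn: net radiation (MJ m-2 day-1);  G: soil heat flux (same units)
        u2: wind speed at 2 m (m/s);  ea: actual vapour pressure (kPa)
        z:  elevation above sea level (m)
        """
        es = svp(T)
        delta = 4098.0 * es / (T + 237.3) ** 2                 # slope of SVP curve
        P = 101.3 * ((293.0 - 0.0065 * z) / 293.0) ** 5.26     # air pressure (kPa)
        gamma = 0.665e-3 * P                                   # psychrometric constant
        num = (0.408 * delta * (Rn - G)
               + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea))
        return num / (delta + gamma * (1.0 + 0.34 * u2))

    # illustrative mid-latitude summer day, not an EFAS-Meteo grid value
    et0_example = penman_monteith_et0(T=20.0, Rn=15.0, u2=2.0, ea=1.4)
    ```

    In a gridded setting this function would be applied cell by cell to the daily temperature, radiation, wind, and vapour-pressure maps.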

  19. Multigrid calculation of three-dimensional viscous cascade flows

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Liou, M.-S.; Povinelli, L. A.

    1991-01-01

    A 3-D code for viscous cascade flow prediction was developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full multigrid method. The Baldwin-Lomax eddy viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.

  1. High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2): Large Eddy Simulation Study Over Germany

    NASA Astrophysics Data System (ADS)

    Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.

    2014-12-01

    The effect of clouds on large-scale dynamics is represented in climate models through parameterization of various processes, of which the parameterizations of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface and defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on an O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100 m) grid resolution one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterizations have a sounder theoretical foundation. Substantial initiatives have been taken internationally to approach this threshold. For example, Miura et al. (2007) and Mirakawa et al. (2014) approach it with global simulations at gradually decreasing grid spacing, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m) and gradually increasing the domain size. We believe that breaking this threshold would greatly help in improving parameterization schemes and reducing the uncertainty in climate predictions. To this end, the German Federal Ministry of Education and Research has initiated the HD(CP)2 project, which aims for a limited-area LES at O(100 m) resolution using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results from the HD(CP)2 evaluation simulation will be shown, targeting high-resolution simulation over a small domain around Jülich, Germany. This site was chosen because the high-resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1 April 2013 to 31 May 2013, in order to critically evaluate the model.
The nesting capabilities of ICON are used to gradually increase the resolution from the outermost domain, which is forced with COSMO-DE data, to the innermost and finest-resolution domain centered around Jülich (see Fig. 1, top panel). Furthermore, detailed analyses of the simulation results against the observational data will be presented. A representative figure showing the time series of column-integrated water vapor (IWV) for both model and observation on 24 April 2013 is shown in the bottom panel of Fig. 1.

  2. GridPix detectors: Production and beam test results

    NASA Astrophysics Data System (ADS)

    Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-12-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. Using wafer post-processing techniques, an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron-induced avalanches, which are detected by the pixels. The time-to-digital converter (TDC) records the drift time, enabling the reconstruction of high-precision 3D track segments. Recently, GridPixes were produced at full wafer scale to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test, the contributions of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap are studied in detail. In addition, long-term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.

  3. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to help the rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. The CLIs of the UGAPI offer typical functionalities required by end users for job management and file access to the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources across the different infrastructures, with technical details and practical experiences.

  4. Research on wind power grid-connected operation and dispatching strategies of Liaoning power grid

    NASA Astrophysics Data System (ADS)

    Han, Qiu; Qu, Zhi; Zhou, Zhi; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Li, Jinze; Ling, Zhaowei

    2018-02-01

    As a kind of clean energy, wind power has developed rapidly in recent years. Liaoning Province has abundant wind resources, and its total installed wind power capacity is among the highest. With large-scale grid-connected operation of wind power, the contradiction between wind power utilization and peak load regulation of the power grid has become more prominent. To address this, starting from the power structure and installed capacity of the Liaoning power grid, the distribution and spatio-temporal output characteristics of wind farms, the prediction accuracy, the curtailment, and the off-grid behavior of wind power are analyzed. Based on a deep analysis of the seasonal characteristics of the power network load, the composition and distribution of the main load are presented. Aiming at the conflict between wind power acceptance and power grid regulation, dispatching strategies are given, including unit maintenance scheduling, spinning reserve, and energy storage settings, derived from analysis of the operating characteristics and response times of thermal and hydroelectric units; these strategies can meet the demand for wind power acceptance and provide a solution to improve the level of power grid dispatching.

  5. Large-Eddy Simulations of Atmospheric Flows Over Complex Terrain Using the Immersed-Boundary Method in the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Ma, Yulong; Liu, Heping

    2017-12-01

    Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.

  6. Influence of topographic heterogeneity on the abundance of larch forest in eastern Siberia

    NASA Astrophysics Data System (ADS)

    Sato, H.; Kobayashi, H.

    2016-12-01

    In eastern Siberia, larches (Larix spp.) often exist in pure stands, forming the world's largest coniferous forest, whose changes can significantly affect the earth's albedo and the global carbon balance. We have conducted simulation studies of this vegetation, aiming to forecast its structure and function under a changing climate (1, 2). In previous studies simulating vegetation at large geographical scales, the examined area is divided into coarse grid cells, such as 0.5 * 0.5 degree resolution, and topographical heterogeneity within each grid cell is simply ignored. However, in the Siberian larch area, which is located on the environmental edge of forest ecosystem existence, the abundance of larch trees largely depends on topographic conditions at the scale of tens to hundreds of meters. We therefore analyzed patterns of within-grid-scale heterogeneity of larch LAI as a function of topographic condition, and examined its underlying cause. For this analysis, larch LAI was estimated for each 1/112 degree from the SPOT-VEGETATION data, and topographic properties such as angularity and aspect direction were estimated from the ASTER-GDEM data. Through this analysis, we found, for example, that the sign of the correlation between angularity and larch LAI depends on the hydrological condition of the grid cell. We then refined the hydrological sub-model of our vegetation model SEIB-DGVM, tested whether the modified model can reconstruct these patterns, and examined its impact on the estimation of biomass and vegetation productivity over the entire larch region. -- References -- 1. Sato, H., et al. (2010). "Simulation study of the vegetation structure and function in eastern Siberian larch forests using the individual-based vegetation model SEIB-DGVM." Forest Ecology and Management 259(3): 301-311. 2. Sato, H., et al. (2016). "Endurance of larch forest ecosystems in eastern Siberia under warming trends." Ecology and Evolution.

  7. Water balance model for Kings Creek

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1990-01-01

    Particular attention is given to the spatial variability that affects the representation of water balance at the catchment scale in the context of macroscale water-balance modeling. Remotely sensed data are employed for parameterization, and the resulting model is developed so that subgrid spatial variability is preserved and therefore influences the grid-scale fluxes of the model. The model permits the quantitative evaluation of the surface-atmospheric interactions related to the large-scale hydrologic water balance.

  8. Building continental-scale 3D subsurface layers in the Digital Crust project: constrained interpolation and uncertainty estimation.

    NASA Astrophysics Data System (ADS)

    Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.

    2015-12-01

    The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of an arbitrary unstructured 3D grid, to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.

  9. Integration of HTS Cables in the Future Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future transmission grid will receive electrical power generated by decentralized renewable sources, together with large-scale generation units located in the coastal region. Electrical power will thus have to be distributed and transmitted over longer distances from generation to end user. Potential grid issues such as the amount of distributed power, grid stability and electrical loss dissipation merit particular attention. High temperature superconductors (HTS) can play an important role in solving these grid problems. The advantages of integrating HTS components at transmission voltages are numerous: more transmittable power together with lower emissions, intrinsic fault current limiting capability, lower AC loss, better control of power flow, reduced footprint, less magnetic field emission, etc. The main obstacle at present is the relatively high price of HTS conductor. However, as the price goes down, initial market penetration of several HTS components (e.g. cables, fault current limiters) is expected by the year 2015. In the full paper we present selected ways to integrate EHV AC HTS cables depending on particular future grid scenarios in the Netherlands.

  10. A secure and efficiently searchable health information architecture.

    PubMed

    Yasnoff, William A

    2016-06-01

    Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be searched sequentially, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed. Copyright © 2016 Elsevier Inc. All rights reserved.
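    The quoted search times follow from simple parallel-scan arithmetic: the records are divided evenly across the servers, and each server pays a per-record access-and-decrypt cost. A sketch, where the per-record times are back-calculated assumptions for illustration, not figures from the paper:

    ```python
    def search_minutes(n_records, n_servers, seconds_per_record):
        """Wall-clock time to scan every record, records split evenly across servers."""
        per_server = n_records / n_servers          # records each server must scan
        return per_server * seconds_per_record / 60.0

    # 10 million records on 500 servers = 20,000 records per server.
    # The abstract's 7-33 minute range then implies roughly 21-99 ms
    # of per-record access + decrypt + match time (assumed values below).
    t_simple = search_minutes(10_000_000, 500, 0.021)   # simple query
    t_complex = search_minutes(10_000_000, 500, 0.099)  # complex query
    ```

    The linear scaling also shows why the architecture stays practical: doubling the server count halves the search time, at the cost of more on-demand instances.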

  11. Regional climates in the GISS general circulation model: Surface air temperature

    NASA Technical Reports Server (NTRS)

    Hewitson, Bruce

    1994-01-01

    One of the more viable research techniques into global climate change for the purpose of understanding the consequent environmental impacts is based on the use of general circulation models (GCMs). However, GCMs are currently unable to reliably predict the regional climate change resulting from global warming, and it is at the regional scale that predictions are required for understanding human and environmental responses. Regional climates in the extratropics are in large part governed by the synoptic-scale circulation and the feasibility of using this interscale relationship is explored to provide a way of moving to grid cell and sub-grid cell scales in the model. The relationships between the daily circulation systems and surface air temperature for points across the continental United States are first developed in a quantitative form using a multivariate index based on principal components analysis (PCA) of the surface circulation. These relationships are then validated by predicting daily temperature using observed circulation and comparing the predicted values with the observed temperatures. The relationships predict surface temperature accurately over the major portion of the country in winter, and for half the country in summer. These relationships are then applied to the surface synoptic circulation of the Goddard Institute for Space Studies (GISS) GCM control run, and a set of surface grid cell temperatures are generated. These temperatures, based on the larger-scale validated circulation, may now be used with greater confidence at the regional scale. The generated temperatures are compared to those of the model and show that the model has regional errors of up to 10 C in individual grid cells.
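    The PCA-plus-regression downscaling idea described above can be sketched on synthetic data: extract principal-component scores of a gridded circulation field and regress a point temperature on them. The field sizes, noise levels, and coefficients below are invented for illustration and are not the GISS setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic daily circulation anomalies: 365 days x 100 grid cells,
    # built from three spatial patterns plus small noise
    days, cells = 365, 100
    modes = rng.standard_normal((3, cells))
    scores_true = rng.standard_normal((days, 3))
    slp = scores_true @ modes + 0.1 * rng.standard_normal((days, cells))

    # surface temperature driven linearly by the circulation scores
    temp = scores_true @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(days)

    # PCA via SVD of the anomaly matrix; keep the leading components
    slp_anom = slp - slp.mean(axis=0)
    U, S, Vt = np.linalg.svd(slp_anom, full_matrices=False)
    pcs = U[:, :3] * S[:3]                  # principal-component scores

    # multivariate linear fit: temperature ~ circulation PCs (+ intercept)
    X = np.column_stack([pcs, np.ones(days)])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    pred = X @ coef
    corr = float(np.corrcoef(pred, temp)[0, 1])
    ```

    In the validation step described in the abstract, the fit would be trained on observed circulation and temperature, then applied to GCM circulation to generate grid-cell temperatures.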

  12. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.

    2008-12-01

    NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow.
A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 K; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
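One of the core Data Grid services mentioned above is the pairwise space/time matchup between instrument swaths. A minimal brute-force version of that idea might look as follows; the observation tuples, tolerances, and the `matchup` function are illustrative assumptions, not SciFlo's API (a real Grid service would use spatial indexing rather than an all-pairs scan).

```python
import math

def matchup(swath_a, swath_b, max_km=50.0, max_hours=1.0):
    """Pair each observation in swath_a with the nearest swath_b observation
    inside the space/time tolerances. Observations are (lat, lon, t_hours)."""
    def dist_km(p, q):
        # Equirectangular approximation, adequate for small separations.
        x = math.radians(q[1] - p[1]) * math.cos(math.radians((p[0] + q[0]) / 2))
        y = math.radians(q[0] - p[0])
        return 6371.0 * math.hypot(x, y)

    pairs = []
    for i, a in enumerate(swath_a):
        best = None
        for j, b in enumerate(swath_b):
            if abs(a[2] - b[2]) > max_hours:
                continue  # outside the time window: skip before computing distance
            d = dist_km(a, b)
            if d <= max_km and (best is None or d < best[1]):
                best = (j, d)
        if best:
            pairs.append((i, best[0]))
    return pairs

# Two toy "swaths" (e.g., AIRS-like and MODIS-like footprints).
airs = [(34.0, -118.0, 10.0), (35.0, -117.5, 10.1)]
modis = [(34.1, -118.1, 10.3), (60.0, 20.0, 10.0)]
print(matchup(airs, modis))
```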

  13. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.

    2009-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow.
A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 K; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinnon, Archibald D.; Thompson, Seth R.; Doroshchuk, Ruslan A.

    Smart grid technologies are transforming the electric power grid into a grid with bi-directional flows of both power and information. Operating millions of new smart meters and smart appliances will significantly impact electric distribution systems resulting in greater efficiency. However, the scale of the grid and the new types of information transmitted will potentially introduce several security risks that cannot be addressed by traditional, centralized security techniques. We propose a new bio-inspired cyber security approach. Social insects, such as ants and bees, have developed complex-adaptive systems that emerge from the collective application of simple, light-weight behaviors. The Digital Ants framework is a bio-inspired framework that uses mobile light-weight agents. Sensors within the framework use digital pheromones to communicate with each other and to alert each other of possible cyber security issues. All communication and coordination is both localized and decentralized thereby allowing the framework to scale across the large numbers of devices that will exist in the smart grid. Furthermore, the sensors are light-weight and therefore suitable for implementation on devices with limited computational resources. This paper will provide a brief overview of the Digital Ants framework and then present results from test bed-based demonstrations that show that Digital Ants can identify a cyber attack scenario against smart meter deployments.
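The localized, decentralized pheromone coordination described above can be caricatured in a few lines. The sketch below is a toy model, not the Digital Ants implementation: the constants, the four-node ring of "meters", and the evaporate/diffuse/deposit update are all illustrative assumptions. Each node only reads its neighbors' levels, so there is no central coordinator.

```python
# Illustrative constants, not tuned values from the framework.
EVAPORATION, DIFFUSION, THRESHOLD = 0.5, 0.25, 1.0

def step(pheromone, neighbors, anomalies):
    """One decentralized update: evaporate, receive shares from neighbors,
    and deposit fresh pheromone where a local sensor flags an anomaly."""
    new = {}
    for node, level in pheromone.items():
        shared = sum(pheromone[n] * DIFFUSION / len(neighbors[n])
                     for n in neighbors[node])
        new[node] = level * EVAPORATION + shared + (1.0 if node in anomalies else 0.0)
    return new

# A tiny ring of four smart meters; meter 0 keeps misbehaving.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
levels = {n: 0.0 for n in neighbors}
for _ in range(5):
    levels = step(levels, neighbors, anomalies={0})
alerts = [n for n, v in levels.items() if v >= THRESHOLD]
print(alerts)
```

Only the persistently anomalous meter accumulates enough pheromone to cross the alert threshold; transient noise would evaporate away.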

  15. On the Uses of Full-Scale Schlieren Flow Visualization

    NASA Astrophysics Data System (ADS)

    Settles, G. S.; Miller, J. D.; Dodson-Dreibelbis, L. J.

    2000-11-01

    A lens-and-grid-type schlieren system using a very large grid as a light source was described at earlier APS/DFD meetings. With a field-of-view of 2.3x2.9 m (7.5x9.5 feet), it is the largest indoor schlieren system in the world. Still and video examples of several full-scale airflows and heat-transfer problems visualized thus far will be shown. These include: heating and ventilation airflows, flows due to appliances and equipment, the thermal plumes of people, the aerodynamics of an explosive trace detection portal, gas leak detection, shock wave motion associated with aviation security problems, and heat transfer from live crops. Planned future projects include visualizing fume-hood and grocery display freezer airflows and studying the dispersion of insect repellent plumes at full scale.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  17. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising avenues for improving climate simulations of water cycle processes.

  18. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  19. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work we show how the phase-field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model of Karma and Rappel.

  20. Modelling tidal current energy extraction in large area using a three-dimensional estuary model

    NASA Astrophysics Data System (ADS)

    Chen, Yaling; Lin, Binliang; Lin, Jie

    2014-11-01

    This paper presents a three-dimensional modelling study for simulating tidal current energy extraction in large areas, with a momentum sink term added into the momentum equations. Because of limits on computational capacity, the grid size of the numerical model is generally much larger than the turbine rotor diameter. Two models, i.e. a local grid refinement model and a coarse grid model, are employed and an idealized estuary is set up. The local grid refinement model is constructed to simulate the power generation of an isolated turbine and its impacts on hydrodynamics. The model is then used to determine the deployment of the turbine farm and to quantify a combined thrust coefficient for multiple turbines located in a grid element of the coarse grid model. The model results indicate that the performance of power extraction is affected by array deployment, with more power generated by outer rows than inner rows owing to the velocity deficit caused by upstream turbines. Model results also demonstrate that a large-scale turbine farm has significant effects on the hydrodynamics. The tidal currents are attenuated within the turbine swept area, and both upstream and downstream of the array, while the currents are accelerated above and below the turbines, which contributes to speeding up the wake mixing process behind the arrays. Water levels are raised at both low and high water when the turbine array spans the full width of the estuary. The magnitude of water level change is found to increase as the array expands, especially at low water.
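The momentum sink approach described above amounts to adding a retarding term of the form -0.5·Ct·(N·A/V)·|u|·u to the momentum equation of a grid element containing N turbines. A minimal explicit-update sketch, with made-up coefficients rather than the paper's calibrated combined thrust coefficient:

```python
def momentum_sink_update(u, n_turbines, ct=0.8, rotor_area=314.0,
                         cell_volume=200.0 * 200.0 * 20.0, dt=1.0):
    """One explicit time step of du/dt = -0.5 * Ct * (N * A / V) * |u| * u.
    All coefficient values here are illustrative placeholders."""
    accel = -0.5 * ct * n_turbines * rotor_area / cell_volume * abs(u) * u
    return u + dt * accel

u = 2.0  # m/s tidal current entering the turbine cell
for _ in range(10):
    u = momentum_sink_update(u, n_turbines=4)
print(round(u, 3))
```

The |u|·u form keeps the sink opposing the flow on both flood and ebb tides, which matters for a bidirectional tidal current.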

  1. Three-dimensional constrained variational analysis: Approach and application to analysis of atmospheric diabatic heating and derivative fields during an ARM SGP intensive observational period

    NASA Astrophysics Data System (ADS)

    Tang, Shuaiqi; Zhang, Minghua

    2015-08-01

    Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs), and large-eddy simulations (LESs). However, they cannot be directly measured from field measurements or easily calculated with great accuracy. In the Atmospheric Radiation Measurement Program (ARM), a constrained variational algorithm (1-D constrained variational analysis (1DCVA)) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). The 1DCVA algorithm is now extended into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data, diabatic heating sources (Q1), and moisture sinks (Q2). Results are presented for a midlatitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains site. These results are used to evaluate the diabatic heating fields in the available products such as Rapid Update Cycle, ERA-Interim, National Centers for Environmental Prediction Climate Forecast System Reanalysis, Modern-Era Retrospective Analysis for Research and Applications, Japanese 55-year Reanalysis, and North American Regional Reanalysis. We show that although the analysis/reanalysis generally captures the atmospheric state of the cyclone, their biases in the derivative terms (Q1 and Q2) at the regional scale of a few hundred kilometers are large, and all analyses/reanalyses tend to underestimate the subgrid-scale upward transport of moist static energy in the lower troposphere. The 3DCVA-gridded large-scale forcing data are physically consistent with the spatial distribution of surface and TOA measurements of radiation, precipitation, latent and sensible heat fluxes, and clouds, and are thus better suited to force SCMs, CRMs, and LESs. Possible applications of the 3DCVA are discussed.

  2. Need for speed: An optimized gridding approach for spatially explicit disease simulations.

    PubMed

    Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom

    2018-04-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degrees of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large-scale, spatially explicit simulations, such as for the entire continental USA, without sacrificing realism or predictive power.
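The hierarchical cell-filtering idea can be sketched as follows: bound every farm in a cell by the kernel value at the cell's closest point, decide with cheap draws how many farms are even candidates, and evaluate the true distance kernel only for those candidates, thinning to recover the exact per-farm probabilities. The code below is an illustrative simplification, not the authors' optimized implementation; in particular, the kernel, landscape, and function names are made up, and a real version would draw the candidate count with a single Binomial sample instead of a per-farm loop.

```python
import math, random

random.seed(2)

def kernel(d, k0=0.3, scale=5.0):
    # Toy transmission kernel, monotonically decreasing with distance.
    return k0 / (1.0 + (d / scale) ** 2)

def spread_from(source, farms, cells):
    infected = set()
    for farm_ids, d_min in cells:
        p_max = kernel(d_min)  # bound: no farm in the cell is closer than d_min
        # One Bernoulli per farm at the bound rate (a real implementation
        # replaces this loop with a single Binomial(n, p_max) draw).
        k = sum(random.random() < p_max for _ in farm_ids)
        for j in random.sample(farm_ids, k):
            d = math.dist(source, farms[j])
            if random.random() < kernel(d) / p_max:  # thin to the true rate
                infected.add(j)
    return infected

# Illustrative landscape: a near cell (ids 0-99) around the source, and a far
# cell (ids 100-199) whose closest point is 30 distance units away.
farms = {j: (random.uniform(10, 30), random.uniform(10, 30)) for j in range(100)}
farms.update({j: (random.uniform(50, 70), random.uniform(10, 30))
              for j in range(100, 200)})
cells = [(list(range(100)), 0.0), (list(range(100, 200)), 30.0)]
print(len(spread_from((20.0, 20.0), farms, cells)))
```

The thinning step is what makes the scheme exact rather than approximate: each farm still ends up infected with probability kernel(d), just evaluated lazily.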

  3. Need for speed: An optimized gridding approach for spatially explicit disease simulations

    PubMed Central

    Tildesley, Michael J.; Brommesson, Peter; Webb, Colleen T.; Wennergren, Uno; Lindström, Tom

    2018-01-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degrees of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large-scale, spatially explicit simulations, such as for the entire continental USA, without sacrificing realism or predictive power. PMID:29624574

  4. A Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF)

    NASA Astrophysics Data System (ADS)

    Trotta, Francesco; Fenu, Elisa; Pinardi, Nadia; Bruciaferri, Diego; Giacomelli, Luca; Federico, Ivan; Coppini, Giovanni

    2016-11-01

    We present a numerical platform named Structured and Unstructured grid Relocatable ocean platform for Forecasting (SURF). The platform is developed for short-term forecasts and is designed to be embedded in any region of the large-scale Mediterranean Forecasting System (MFS) via downscaling. We employ CTD data collected during a campaign around Elba Island to calibrate and validate SURF. The model requires an initial spin-up period of a few days in order to adapt the initial interpolated fields and the subsequent solutions to the higher-resolution nested grids adopted by SURF. Through a comparison with the CTD data, we quantify the improvement obtained by the SURF model compared to the coarse-resolution MFS model.

  5. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  6. Very-high-Reynolds-number vortex dynamics via Coherent-vorticity-Preserving (CvP) Large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Chapelier, Jean-Baptiste; Wasistho, Bono; Scalo, Carlo

    2017-11-01

    A new approach to Large-Eddy Simulation (LES) is introduced, where subgrid-scale (SGS) dissipation is applied proportionally to the degree of local spectral broadening, hence mitigated in regions dominated by large-scale vortical motion. The proposed CvP-LES methodology is based on the evaluation of the ratio of the test-filtered to resolved (or grid-filtered) enstrophy, σ = ξ̂/ξ. Values of σ = 1 indicate low sub-test-filter turbulent activity, justifying local deactivation of any subgrid-scale model. Values of σ < 1 span conditions ranging from incipient spectral broadening, σ ≲ 1, to equilibrium turbulence, σ = σ_eq < 1, where σ_eq is solely a function of the test-to-grid filter-width ratio Δ̂/Δ, derived assuming a Kolmogorov spectrum. Eddy viscosity is fully restored for σ ≤ σ_eq. The proposed approach removes unnecessary SGS dissipation, can be applied to any eddy-viscosity model, is algorithmically simple, and is computationally inexpensive. A CvP-LES of a pair of unstable helical vortices, representative of rotor-blade wake dynamics, shows the ability of the method to sort the coherent motion from the small-scale dynamics. This work is funded by subcontract KSC-17-001 between Purdue University and Kord Technologies, Inc. (Huntsville), under US Navy Contract N68335-17-C-0159 STTR-Phase II, Purdue Proposal No. 00065007, Topic N15A-T002.

  7. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    Background: For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results: We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions: Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486

  8. Solar Energy Grid Integration Systems (SEGIS): adding functionality while maintaining reliability and economics

    NASA Astrophysics Data System (ADS)

    Bower, Ward

    2011-09-01

    An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation, which sought to add functionality while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to intelligent utility grids and micro-grids of the future. The new capabilities are complemented by "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation rather than just unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and control developed for today meets existing standards and codes AND provides for future connections to a "smart grid" mode that enables utility control and optimized performance.

  9. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    PubMed

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.
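The second stage of the hybrid scheme, redistributing pre-laid-out nodes onto free square grid points, can be sketched greedily. This is an illustrative simplification, not the authors' approximate pattern matching algorithm: each node tries its nearest grid point and, if that point is taken, searches outward in rings until a free point is found.

```python
def snap_to_grid(layout, spacing=1.0):
    """layout: {node: (x, y)} continuous coords -> {node: (i, j)} grid coords."""
    taken, placed = set(), {}
    for node, (x, y) in sorted(layout.items()):
        ci, cj = round(x / spacing), round(y / spacing)
        r = 0
        while True:  # search square rings of grid points around the ideal cell
            ring = [(ci + di, cj + dj)
                    for di in range(-r, r + 1) for dj in range(-r, r + 1)
                    if max(abs(di), abs(dj)) == r]
            free = [c for c in ring if c not in taken]
            if free:
                # Of the free points in this ring, keep the one closest to the
                # node's continuous position, preserving relative placement.
                best = min(free, key=lambda c: (c[0] - x / spacing) ** 2
                                             + (c[1] - y / spacing) ** 2)
                taken.add(best)
                placed[node] = best
                break
            r += 1
    return placed

nodes = {"A": (0.1, 0.2), "B": (0.3, 0.1), "C": (2.2, 1.9), "D": (0.2, 0.3)}
print(snap_to_grid(nodes))
```

Because collisions are resolved at the nearest available ring, nearby nodes stay nearby after snapping, which is the property the grid layout needs to preserve map topology.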

  10. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information-based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks; and the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  11. Divide-and-conquer density functional theory on hierarchical real-space grids: Parallel implementation and applications

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-02-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which large system sizes bring in excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with the available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.

  12. Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation.

    PubMed

    Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B

    2006-04-15

    Modeling air pollutant transport and dispersion in urban environments is especially challenging due to complex ground topography. In this study, we describe a large eddy simulation (LES) tool including a new dynamic subgrid closure and boundary treatment to model urban dispersion problems. The numerical model is developed, validated, and extended to a realistic urban layout. In such applications fairly coarse grids must be used in which each building can be represented using relatively few grid-points only. By carrying out LES of flow around a square cylinder and of flow over surface-mounted cubes, the coarsest resolution required to resolve the bluff body's cross section while still producing meaningful results is established. Specifically, we perform grid refinement studies showing that at least 6-8 grid points across the bluff body are required for reasonable results. The performance of several subgrid models is also compared. Although effects of the subgrid models on the mean flow are found to be small, dynamic Lagrangian models give a physically more realistic subgrid-scale (SGS) viscosity field. When scale-dependence is taken into consideration, these models lead to more realistic resolved fluctuating velocities and spectra. These results set the minimum grid resolution and subgrid model requirements needed to apply LES in simulations of neutral atmospheric boundary layer flow and scalar transport over a realistic urban geometry. The results also illustrate the advantages of LES over traditional modeling approaches, particularly its ability to take into account the complex boundary details and the unsteady nature of atmospheric boundary layer flow. Thus LES can be used to evaluate probabilities of extreme events (such as probabilities of exceeding threshold pollutant concentrations). Some comments about computer resources required for LES are also included.

  13. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  14. Bridging the scales in an Eulerian air quality model to assess megacity export of pollution

    NASA Astrophysics Data System (ADS)

    Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.

    2013-08-01

    In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large and small scale models. However, those nested configurations cannot account for the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting, or model nudging, but these approaches remain relatively costly. We present here the development and results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists of introducing local zooms in a single chemistry-transport simulation. It bridges the spatial scales online, from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.
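The "stretching" idea rests on a coordinate whose cell widths grow geometrically away from the refined area. A minimal one-dimensional sketch with illustrative values (not CHIMERE's actual grid definition):

```python
def stretched_spacings(n_cells, fine=1.0, coarse=50.0, ratio=1.2):
    """Build cell widths (e.g. in km) that grow geometrically from a
    fine zoom resolution out to a coarse far-field resolution.
    Illustrative values; not CHIMERE's actual grid definition."""
    widths, w = [], fine
    for _ in range(n_cells):
        widths.append(min(w, coarse))   # cap growth at the coarse limit
        w *= ratio                      # geometric stretching outward
    return widths
```

With these defaults the spacing reaches the 50 km far-field limit after roughly 22 cells, so a single simulation spans city-scale and continental-scale resolution.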

  15. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

    Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care, and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques, using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation; it permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of the grid operation.

  16. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    PubMed Central

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

    In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges in constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and a distributed data mining algorithm as solutions to address these challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining results to examine the effectiveness of the algorithm. PMID:27879895

  17. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.

  18. Air Pollution Monitoring and Mining Based on Sensor Grid in London.

    PubMed

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-06-01

    In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges in constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and a distributed data mining algorithm as solutions to address these challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining results to examine the effectiveness of the algorithm.

  19. ADMS Evaluation Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2018-01-23

    Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.

  20. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied using stationary or moving unstructured grids. Two main engineering problems are investigated. The first is the unsteady simulation of a ship airwake, in which helicopter operations become even more challenging, using stationary unstructured grids. The second is the unsteady simulation of wind turbine rotor flow fields using moving unstructured grids that rotate with the whole three-dimensional rigid rotor geometry. The three-dimensional, unsteady, parallel, unstructured, finite volume flow solver PUMA2 is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which aim to provide new tools and insights to the aerospace and wind energy scientific communities, are carried out during this research by focusing on the coupling of ship airwake CFD simulations with helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with aeroacoustic analysis, and the analysis of these time-dependent, large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  1. A tool for optimization of the production and user analysis on the Grid, C. Grigoras for the ALICE Collaboration

    NASA Astrophysics Data System (ADS)

    Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores

    2011-12-01

    With the LHC and ALICE entering full operation and production modes, the amounts of simulation, RAW data processing, and end-user analysis tasks are increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data, and methods of analyzing the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). LPM automatically submits jobs to the Grid based on triggers and conditions, for example after a physics run completes. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM also offers a fully authenticated interface to the AliEn Grid catalogue, to browse and download files, and in the near future will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.

  2. Anisotropy of Observed and Simulated Turbulence in Marine Stratocumulus

    NASA Astrophysics Data System (ADS)

    Pedersen, J. G.; Ma, Y.-F.; Grabowski, W. W.; Malinowski, S. P.

    2018-02-01

    Anisotropy of turbulence near the top of the stratocumulus-topped boundary layer (STBL) is studied using large-eddy simulation (LES) and measurements from the POST and DYCOMS-II field campaigns. Focusing on turbulence ~100 m below the cloud top, we see remarkable similarity between daytime and nocturnal flight data covering different inversion strengths and free-tropospheric conditions. With λ denoting wavelength and z_t cloud-top height, we find that turbulence at λ/z_t ≃ 0.01 is weakly dominated by horizontal fluctuations, while turbulence at λ/z_t > 1 becomes strongly dominated by horizontal fluctuations. In between are scales at which vertical fluctuations dominate. Typical-resolution LES of the STBL (based on POST flight 13 and DYCOMS-II flight 1) captures observed characteristics of below-cloud-top turbulence reasonably well. However, using a fixed vertical grid spacing of 5 m, decreasing the horizontal grid spacing and increasing the subgrid-scale mixing length leads to increased dominance of vertical fluctuations, increased entrainment velocity, and decreased liquid water path. Our analysis supports the notion that entrainment parameterizations (e.g., in climate models) could potentially be improved by accounting more accurately for anisotropic deformation of turbulence in the cloud-top region. While LES has the potential to facilitate improved understanding of anisotropic cloud-top turbulence, sensitivity to grid spacing, grid-box aspect ratio, and subgrid-scale model needs to be addressed.

  3. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales, but implementing these models across large areas on fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling under various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the EPIC simulation is divided into jobs across a user-defined number of CPU threads. Using the EPIC input data formatters, the raw database is formatted into EPIC inputs, which move into the EPIC simulation jobs; 28 EPIC jobs then run simultaneously, and only the result files of interest are parsed and moved into the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
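The scheduling pattern described above can be sketched with Python's standard library. Here a thread pool stands in for the paper's 28-way parallelism, and run_epic_job is a mock placeholder for an actual EPIC model run (its formula and names are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_epic_job(scenario):
    """Placeholder for one EPIC model run; it just echoes a mock
    'yield' so the fan-out/fan-in pattern can be demonstrated."""
    slope, fert = scenario
    return (slope, fert, round(2.0 + 0.01 * fert - 0.1 * slope, 3))

def run_all(slopes, fert_rates, workers=4):
    """Expand the scenario list (7 slopes x 24 fertilizer ranges in the
    paper) and divide it across a pool of workers."""
    scenarios = list(product(slopes, fert_rates))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_epic_job, scenarios))
```

For CPU-bound model executions like EPIC, a process pool (or separate model subprocesses, as the framework actually launches) would replace the thread pool; the fan-out/fan-in structure is the same.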

  4. A principle of economy predicts the functional architecture of grid cells.

    PubMed

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-09-03

    Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.
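Predictions (ii) and (iii) can be stated as a short computation: grid periods form a geometric progression with ratio √e, and the number of modules needed follows from the range-to-resolution ratio. A worked sketch (illustrative helper names, not the authors' code):

```python
import math

def grid_scales(largest_period, n_modules, ratio=math.sqrt(math.e)):
    """Grid periods as a geometric progression: each module's scale is
    the previous one divided by ~sqrt(e) (idealized-neuron prediction)."""
    return [largest_period / ratio ** k for k in range(n_modules)]

def modules_needed(largest_period, resolution, ratio=math.sqrt(math.e)):
    """Number of modules so the smallest scale reaches the desired
    spatial resolution across a range equal to the largest period."""
    return math.ceil(math.log(largest_period / resolution) / math.log(ratio))
```

For example, covering a 10 m range at 10 cm resolution with the idealized ratio √e ≈ 1.65 (within the observed 1.4-1.7 band) requires ten modules.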

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Thomas M.; Boudreau, Marie-Claude; Helsen, Lieve

    Recent advances in information and communications technology (ICT) have initiated development of a smart electrical grid and smart buildings. Buildings consume a large portion of the total electricity production worldwide, and to fully develop a smart grid they must be integrated with that grid. Buildings can now be 'prosumers' on the grid (both producers and consumers), and the continued growth of distributed renewable energy generation is raising new challenges in terms of grid stability over various time scales. Buildings can contribute to grid stability by managing their overall electrical demand in response to current conditions. Facility managers must balance demand response requests by grid operators with the energy needed to maintain smooth building operations. For example, maintaining thermal comfort within an occupied building requires energy, and thus an optimized solution balancing energy use with indoor environmental quality (adequate thermal comfort, lighting, etc.) is needed. Successful integration of buildings and their systems with the grid also requires interoperable data exchange. However, the adoption and integration of newer control and communication technologies into buildings can be problematic with older legacy HVAC and building control systems. Public policy and economic structures have not kept up with the technical developments that have given rise to the budding smart grid, and further developments are needed in both technical and non-technical areas.

  6. Heating and Large Scale Dynamics of the Solar Corona

    NASA Technical Reports Server (NTRS)

    Schnack, Dalton D.

    2000-01-01

    The effort was concentrated in the following areas: coronal heating mechanisms; unstructured adaptive grid algorithms; and numerical modeling of magnetic reconnection in the MRX experiment, including the effects of toroidal magnetic field and finite pressure, of Ohmic heating and vertical magnetic field, and of dynamic mesh adaptation.

  7. Transforming Power Systems; 21st Century Power Partnership

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-05-20

    The 21st Century Power Partnership - a multilateral effort of the Clean Energy Ministerial - serves as a platform for public-private collaboration to advance integrated solutions for the large-scale deployment of renewable energy in combination with deep energy efficiency and smart grid solutions.

  8. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    USDA-ARS?s Scientific Manuscript database

    The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from North American Land Data Assimilation System (NLDAS). This data provides large-scale spatial representation of ETref, which i...

  9. A Review of Control Strategy of the Large-scale of Electric Vehicles Charging and Discharging Behavior

    NASA Astrophysics Data System (ADS)

    Kong, Lingyu; Han, Jiming; Xiong, Wenting; Wang, Hao; Shen, Yaqi; Li, Ying

    2017-05-01

    Large-scale access of electric vehicles will bring huge challenges to the safe operation of the power grid, so it is important to control the charging and discharging of electric vehicles. First, starting from power quality and network losses, this paper points out the influence of electric vehicle charging behaviour on the grid. Control strategies for electric vehicle charging and discharging are then reviewed and summarized under direct and indirect approaches: direct control strategies regulate charging behaviour by controlling the vehicle's charging and discharging power, while indirect strategies do so by means of charging and discharging prices. Finally, for the convenience of the reader, the paper also proposes a complete research methodology for studying such control strategies, taking into consideration their adaptability and the possibility of failure, and puts forward suggestions on the key areas for future research.

  10. FDTD method for laser absorption in metals for large scale problems.

    PubMed

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid cells. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
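The matching condition behind such a scheme can be illustrated with normal-incidence Fresnel reflectivity: if the wavelength is artificially enlarged, an effective extinction coefficient can be chosen so the surface reflects the same fraction of power. This is an illustrative consistency check under assumed names (normal_incidence_R, effective_k), not the paper's exact procedure:

```python
import math

def normal_incidence_R(n, k):
    """Fresnel reflectivity of a surface at normal incidence for
    complex refractive index n + ik."""
    return ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)

def effective_k(n_eff, R):
    """Extinction coefficient to assign at the enlarged wavelength so
    that a material with real index n_eff reproduces the target
    normal-incidence reflectivity R (requires a physically attainable
    R for the chosen n_eff)."""
    return math.sqrt(((n_eff + 1.0) ** 2 * R - (n_eff - 1.0) ** 2) / (1.0 - R))
```

Solving the Fresnel formula for k in this way preserves the absorbed fraction 1 - R, which is the quantity that matters for laser heating, even though the propagation wavelength has been rescaled.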

  11. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  12. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-06-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  13. Vehicle-to-Grid Automatic Load Sharing with Driver Preference in Micro-Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yubo; Nazaripouya, Hamidreza; Chu, Chi-Cheng

    Integration of electric vehicles (EVs) with the power grid not only brings new challenges for load management, but also opportunities for distributed storage and generation. This paper comprehensively models and analyzes distributed Vehicle-to-Grid (V2G) for automatic load sharing with driver preference. In a micro-grid with limited communications, V2G EVs need to decide load sharing based on their own power and voltage profiles. A droop-based controller that takes driver preference into account is proposed to address the distributed control of EVs. Simulations are designed for three fundamental V2G automatic load sharing scenarios that include all system dynamics of such applications. Simulation results demonstrate that active power sharing is achieved proportionally among V2G EVs with consideration of driver preference. In addition, the results verify the system stability and reactive power sharing analysis in the system modelling, which sheds light on large-scale V2G automatic load sharing in more complicated cases.
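A minimal sketch of a preference-weighted droop rule of the kind described above. The gain form, the frequency-droop variable, and all numbers are hypothetical; the paper's controller operates on the EVs' own power and voltage profiles.

```python
# Illustrative droop-based active-power sharing among V2G EVs, with a
# driver-preference weight scaling each vehicle's droop gain.

def droop_setpoints(ev_list, grid_freq, f_nom=60.0):
    """Each EV injects P_i = k_i * (f_nom - grid_freq), with gain
    k_i = preference_i * capacity_i (kW per Hz of deviation).
    Under-frequency -> discharge (V2G); output clipped to capacity."""
    out = {}
    for name, capacity_kw, pref in ev_list:
        k = pref * capacity_kw
        p = k * (f_nom - grid_freq)
        out[name] = max(-capacity_kw, min(capacity_kw, p))
    return out

evs = [("ev1", 10.0, 1.0),    # fully willing to participate
       ("ev2", 10.0, 0.5),    # driver allows half participation
       ("ev3", 20.0, 1.0)]
p = droop_setpoints(evs, grid_freq=59.9)   # 0.1 Hz under-frequency
# Sharing is proportional to preference * capacity: ev1:ev2:ev3 = 2:1:4
```

Because each setpoint depends only on the locally measured deviation and the vehicle's own gain, no communication between EVs is needed, which is the point of droop control in a micro-grid with limited communications.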

  14. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
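The mixed averaging idea above can be sketched in a few lines: cell-block averages represent storage inside a coarse cell, while a statistic taken along the shared face preserves narrow channels that connect coarse cells. The 4x coarsening factor and the use of a face minimum are illustrative choices, not the paper's exact upscaling operators.

```python
# Hedged sketch of cell-block vs. cell-face upscaling of elevation data.

def block_average(fine, n):
    """Average n-by-n blocks of a 2-D list 'fine' (dimensions multiples of n)."""
    rows, cols = len(fine), len(fine[0])
    return [[sum(fine[i + di][j + dj] for di in range(n) for dj in range(n)) / n**2
             for j in range(0, cols, n)]
            for i in range(0, rows, n)]

def face_min(fine, n, i_coarse, j_coarse):
    """Minimum elevation along the right-hand face of coarse cell (i, j):
    the lowest point controls when flow can pass to the neighbour."""
    col = (j_coarse + 1) * n - 1
    return min(fine[i_coarse * n + di][col] for di in range(n))

# 4x4 fine grid with a low channel (elevation 1.0) crossing high ground (5.0)
fine = [[5.0, 5.0, 5.0, 5.0],
        [1.0, 1.0, 1.0, 1.0],
        [5.0, 5.0, 5.0, 5.0],
        [5.0, 5.0, 5.0, 5.0]]
coarse = block_average(fine, 4)        # one coarse cell, mean elevation 4.0
channel = face_min(fine, 4, 0, 0)      # face minimum 1.0 keeps the channel
```

A pure block average would bury the channel under the 4.0 m mean; the face statistic is what lets a sub-grid channel still connect coarse cells at their interface.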

  15. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    NASA Astrophysics Data System (ADS)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe a methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field whose spatial variability is in better agreement with observations than that of the standard model. Although the large-scale model biases are not corrected, for selected model parameters the downscaling can yield better overall performance than the standard version on both the high-resolution grid and the native grid. Foreseen applications of this new model feature include improved ice sheet model coupling and high-resolution land surface models.
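The conservation constraint mentioned above can be made concrete with a toy redistribution step: a coarse-cell total is spread over the fine cells it covers with normalised weights, so the coarse-cell budget is unchanged. The elevation-based weighting is a hypothetical stand-in, not iLOVECLIM's actual scheme.

```python
# Minimal sketch of a moisture-conserving downscaling step.

def downscale_cell(coarse_total, fine_weights):
    """Redistribute a coarse-cell total over fine cells; weights are
    normalised, so sum(result) == coarse_total by construction."""
    s = sum(fine_weights)
    return [coarse_total * w / s for w in fine_weights]

# One T21 cell split over four 40 km cells; wetter on higher terrain.
elev = [200.0, 800.0, 1500.0, 500.0]
weights = [1.0 + z / 1000.0 for z in elev]     # hypothetical orographic weighting
fine = downscale_cell(12.0, weights)           # 12 mm of coarse-cell precipitation
assert abs(sum(fine) - 12.0) < 1e-12           # moisture is conserved
```

Whatever weighting is chosen, the normalisation is what guarantees the downscaling cannot create or destroy moisture relative to the native-grid model.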

  16. Characterization of an Ionization Readout Tile for nEXO

    DOE PAGES

    Jewell, M.; Schubert, A.; Cen, W. R.; ...

    2018-01-10

    Here, a new design for the anode of a time projection chamber, consisting of a charge-detecting "tile", is investigated for use in large scale liquid xenon detectors. The tile is produced by depositing 60 orthogonal metal charge-collecting strips, 3 mm wide, on a 10 cm × 10 cm fused-silica wafer. These charge tiles may be employed by large detectors, such as the proposed tonne-scale nEXO experiment to search for neutrinoless double-beta decay. Modular by design, an array of tiles can cover a sizable area. The width of each strip is small compared to the size of the tile, so a Frisch grid is not required. A grid-less, tiled anode design is beneficial for an experiment such as nEXO, where a wire tensioning support structure and Frisch grid might contribute radioactive backgrounds and would have to be designed to accommodate cycling to cryogenic temperatures. The segmented anode also reduces some degeneracies in signal reconstruction that arise in large-area crossed-wire time projection chambers. A prototype tile was tested in a cell containing liquid xenon. Very good agreement is achieved between the measured ionization spectrum of a 207Bi source and simulations that include the microphysics of recombination in xenon and a detailed modeling of the electrostatic field of the detector. An energy resolution σ/E = 5.5% is observed at 570 keV, comparable to the best intrinsic ionization-only resolution reported in literature for liquid xenon at 936 V/cm.

  17. Characterization of an Ionization Readout Tile for nEXO

    NASA Astrophysics Data System (ADS)

    Jewell, M.; Schubert, A.; Cen, W. R.; Dalmasson, J.; DeVoe, R.; Fabris, L.; Gratta, G.; Jamil, A.; Li, G.; Odian, A.; Patel, M.; Pocar, A.; Qiu, D.; Wang, Q.; Wen, L. J.; Albert, J. B.; Anton, G.; Arnquist, I. J.; Badhrees, I.; Barbeau, P.; Beck, D.; Belov, V.; Bourque, F.; Brodsky, J. P.; Brown, E.; Brunner, T.; Burenkov, A.; Cao, G. F.; Cao, L.; Chambers, C.; Charlebois, S. A.; Chiu, M.; Cleveland, B.; Coon, M.; Craycraft, A.; Cree, W.; Côté, M.; Daniels, T.; Daugherty, S. J.; Daughhetee, J.; Delaquis, S.; Der Mesrobian-Kabakian, A.; Didberidze, T.; Dilling, J.; Ding, Y. Y.; Dolinski, M. J.; Dragone, A.; Fairbank, W.; Farine, J.; Feyzbakhsh, S.; Fontaine, R.; Fudenberg, D.; Giacomini, G.; Gornea, R.; Hansen, E. V.; Harris, D.; Hasan, M.; Heffner, M.; Hoppe, E. W.; House, A.; Hufschmidt, P.; Hughes, M.; Hößl, J.; Ito, Y.; Iverson, A.; Jiang, X. S.; Johnston, S.; Karelin, A.; Kaufman, L. J.; Koffas, T.; Kravitz, S.; Krücken, R.; Kuchenkov, A.; Kumar, K. S.; Lan, Y.; Leonard, D. S.; Li, S.; Li, Z.; Licciardi, C.; Lin, Y. H.; MacLellan, R.; Michel, T.; Mong, B.; Moore, D.; Murray, K.; Newby, R. J.; Ning, Z.; Njoya, O.; Nolet, F.; Odgers, K.; Oriunno, M.; Orrell, J. L.; Ostrovskiy, I.; Overman, C. T.; Ortega, G. S.; Parent, S.; Piepke, A.; Pratte, J.-F.; Radeka, V.; Raguzin, E.; Rao, T.; Rescia, S.; Retiere, F.; Robinson, A.; Rossignol, T.; Rowson, P. C.; Roy, N.; Saldanha, R.; Sangiorgio, S.; Schmidt, S.; Schneider, J.; Sinclair, D.; Skarpaas, K.; Soma, A. K.; St-Hilaire, G.; Stekhanov, V.; Stiegler, T.; Sun, X. L.; Tarka, M.; Todd, J.; Tolba, T.; Tsang, R.; Tsang, T.; Vachon, F.; Veeraraghavan, V.; Visser, G.; Vuilleumier, J.-L.; Wagenpfeil, M.; Weber, M.; Wei, W.; Wichoski, U.; Wrede, G.; Wu, S. X.; Wu, W. H.; Yang, L.; Yen, Y.-R.; Zeldovich, O.; Zhang, X.; Zhao, J.; Zhou, Y.; Ziegler, T.

    2018-01-01

    A new design for the anode of a time projection chamber, consisting of a charge-detecting "tile", is investigated for use in large scale liquid xenon detectors. The tile is produced by depositing 60 orthogonal metal charge-collecting strips, 3 mm wide, on a 10 cm × 10 cm fused-silica wafer. These charge tiles may be employed by large detectors, such as the proposed tonne-scale nEXO experiment to search for neutrinoless double-beta decay. Modular by design, an array of tiles can cover a sizable area. The width of each strip is small compared to the size of the tile, so a Frisch grid is not required. A grid-less, tiled anode design is beneficial for an experiment such as nEXO, where a wire tensioning support structure and Frisch grid might contribute radioactive backgrounds and would have to be designed to accommodate cycling to cryogenic temperatures. The segmented anode also reduces some degeneracies in signal reconstruction that arise in large-area crossed-wire time projection chambers. A prototype tile was tested in a cell containing liquid xenon. Very good agreement is achieved between the measured ionization spectrum of a 207Bi source and simulations that include the microphysics of recombination in xenon and a detailed modeling of the electrostatic field of the detector. An energy resolution σ/E=5.5% is observed at 570 keV, comparable to the best intrinsic ionization-only resolution reported in literature for liquid xenon at 936 V/cm.

  18. Characterization of an Ionization Readout Tile for nEXO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jewell, M.; Schubert, A.; Cen, W. R.

    Here, a new design for the anode of a time projection chamber, consisting of a charge-detecting "tile", is investigated for use in large scale liquid xenon detectors. The tile is produced by depositing 60 orthogonal metal charge-collecting strips, 3 mm wide, on a 10 cm × 10 cm fused-silica wafer. These charge tiles may be employed by large detectors, such as the proposed tonne-scale nEXO experiment to search for neutrinoless double-beta decay. Modular by design, an array of tiles can cover a sizable area. The width of each strip is small compared to the size of the tile, so a Frisch grid is not required. A grid-less, tiled anode design is beneficial for an experiment such as nEXO, where a wire tensioning support structure and Frisch grid might contribute radioactive backgrounds and would have to be designed to accommodate cycling to cryogenic temperatures. The segmented anode also reduces some degeneracies in signal reconstruction that arise in large-area crossed-wire time projection chambers. A prototype tile was tested in a cell containing liquid xenon. Very good agreement is achieved between the measured ionization spectrum of a 207Bi source and simulations that include the microphysics of recombination in xenon and a detailed modeling of the electrostatic field of the detector. An energy resolution σ/E = 5.5% is observed at 570 keV, comparable to the best intrinsic ionization-only resolution reported in literature for liquid xenon at 936 V/cm.

  19. Grid Computing for Disaster Mitigation

    NASA Astrophysics Data System (ADS)

    Koh, Hock Lye; Teh, Su Yean; Majid, Taksiah A.; Aziz, Hamidi Abdul

    The infamous 2004 Andaman tsunami highlighted the need to be prepared for and resilient to such disasters. Further, recent episodes of infectious disease epidemics worldwide underline the urgency of controlling and managing infectious diseases. Universiti Sains Malaysia (USM) has recently formed the Disaster Research Nexus (DRN) within the School of Civil Engineering to spearhead research and development programs that mitigate the adverse effects of natural disasters. This paper presents a brief exposition of the aspirations of DRN towards achieving resilience in communities affected by these natural disasters. A brief review of simulations of the 2004 Andaman tsunami using grid applications is presented. Finally, the application of grid technology to large-scale simulations of disease transmission dynamics is discussed.

  20. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.
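The modeling idea above, reconstructing an unfiltered gas-phase value at a droplet location as the filtered value plus an SGS-scaled correction, can be sketched in toy form. The random-sign fluctuation model below is an illustrative stand-in for the calibrated Smagorinsky/gradient/scale-similarity closures assessed in the study.

```python
# Toy sketch: unfiltered value at a droplet ~ filtered value + a term
# scaled by the subgrid-scale (SGS) standard deviation.
import random

def sgs_std(cell_values):
    """SGS standard deviation and mean, estimated from the unresolved
    scatter of fine-grained values inside one LES filter volume."""
    m = sum(cell_values) / len(cell_values)
    var = sum((v - m) ** 2 for v in cell_values) / len(cell_values)
    return var ** 0.5, m

def value_at_droplet(filtered, sigma, rng):
    """Filtered value plus an SGS-scaled stochastic correction."""
    return filtered + sigma * rng.gauss(0.0, 1.0)

rng = random.Random(0)
fine = [1.0, 1.4, 0.7, 1.1]          # fine-grained values inside one cell
sigma, filt = sgs_std(fine)
t_drop = value_at_droplet(filt, sigma, rng)
# Interpolating only 'filt' ignores subgrid fluctuations; the sigma-scaled
# term restores their magnitude statistically, which is the paper's point.
```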

  1. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)

  2. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
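The lognormal representation of an aerosol number size distribution mentioned above amounts to summarizing a diameter sample by two parameters. A simple way to recover them is from the log-moments of the sample; the synthetic diameters below are illustrative, not RACORO data, and the method-of-moments fit is a sketch rather than the campaign's fitting procedure.

```python
# Sketch: recover lognormal parameters (geometric mean diameter D_g and
# geometric standard deviation sigma_g) from sampled aerosol diameters.
import math
import random

def lognormal_params(diams):
    logs = [math.log(d) for d in diams]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))   # (D_g, sigma_g)

# Synthetic sample generated with D_g = 0.1 um, sigma_g = 1.6
rng = random.Random(1)
sample = [math.exp(rng.gauss(math.log(0.1), math.log(1.6)))
          for _ in range(20000)]
d_g, sigma_g = lognormal_params(sample)
# d_g and sigma_g closely recover the generating parameters, so the whole
# distribution can be carried into a model as just two numbers (per mode).
```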

  3. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
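The concentration statistic quoted above ("the top 20% of failures interrupted 84% of services") is a Lorenz-style computation: sort failures by impact and take the cumulative share of the largest ones. The failure impacts below are made up for illustration, not the study's data.

```python
# Share of total customer-interruption impact carried by the largest
# top_frac of failures (heavy-tail / Lorenz-curve style statistic).

def top_share(impacts, top_frac=0.2):
    ordered = sorted(impacts, reverse=True)
    k = max(1, int(round(top_frac * len(ordered))))
    return sum(ordered[:k]) / sum(ordered)

# Hypothetical customer-interruption counts for ten failures
failures = [5000, 2000, 900, 300, 120, 80, 40, 25, 20, 15]
share = top_share(failures)   # top 2 of 10 failures carry most of the impact
```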

  4. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
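A migration policy of the kind described, using system availability and speed, network bandwidth, and job data volume, can be sketched as a completion-time estimate per site. The cost model and field names are hypothetical simplifications of the paper's algorithms.

```python
# Hedged sketch of a grid job-migration decision: estimate each site's
# completion time as queue wait + data transfer + compute, pick the minimum.

def best_site(job_gb, run_s, sites):
    """sites: list of (name, queue_wait_s, bandwidth_gb_per_s, speed_factor)."""
    def eta(site):
        name, wait, bw, speed = site
        return wait + job_gb / bw + run_s / speed
    return min(sites, key=eta)[0]

sites = [("local",  600.0, 1e9, 1.0),   # data already here (effectively no transfer)
         ("remote", 30.0,  0.1, 1.2)]   # short queue, faster CPUs, slow network
choice = best_site(job_gb=50.0, run_s=3600.0, sites=sites)
# Here the remote site wins: 30 + 500 + 3000 s beats 600 + 3600 s locally.
```

The trade-off the example exposes, queue savings versus transfer cost, is exactly why the abstract's migration policies must weigh input/output data volume alongside machine availability.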

  5. Filter and Grid Resolution in DG-LES

    NASA Astrophysics Data System (ADS)

    Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman

    2017-11-01

    The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is set to be proportional to the grid spacing. In this work, the DG method is combined with a subgrid scale (SGS) closure which is equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of spectral approximation increases. Different cases for LES of a three-dimensional temporally developing mixing layer are appraised and a systematic parametric study is conducted to investigate the effects of grid resolution, filter width, and the order of spectral discretization. Comparative assessments are also made via the use of high resolution direct numerical simulation (DNS) data.

  6. Technical Analysis Feasibility Study on Smart Microgrid System in Sekolah Tinggi Teknik PLN

    NASA Astrophysics Data System (ADS)

    Suyanto, Heri

    2018-02-01

    Nowadays, the application of new and renewable energy as a main resource for power plants has greatly increased. High penetration of renewable energy into the grid will influence the quality and reliability of the electricity system, owing to the intermittent character of new and renewable energy resources. Smart grid or microgrid technology can manage this intermittency, especially when renewable energy resources are integrated into the grid at large scale, and can thereby improve the reliability and efficiency of the grid. We plan to implement a smart microgrid system at Sekolah Tinggi Teknik PLN as a pilot project. Before the pilot project starts, a feasibility study must be conducted. In this feasibility study, the renewable energy resources and the load characteristics at the site are measured, and the technical aspects are then analyzed. This paper presents the analysis of this feasibility study.

  7. Barriers to Achieving Textbook Multigrid Efficiency (TME) in CFD

    NASA Technical Reports Server (NTRS)

    Brandt, Achi

    1998-01-01

    As a guide to attaining this optimal performance for general CFD problems, the table below lists every foreseen kind of computational difficulty for achieving that goal, together with the possible ways for resolving that difficulty, their current state of development, and references. Included in the table are staggered and nonstaggered, conservative and nonconservative discretizations of viscous and inviscid, incompressible and compressible flows at various Mach numbers, as well as a simple (algebraic) turbulence model and comments on chemically reacting flows. The listing of associated computational barriers involves: non-alignment of streamlines or sonic characteristics with the grids; recirculating flows; stagnation points; discretization and relaxation on and near shocks and boundaries; far-field artificial boundary conditions; small-scale singularities (meaning important features, such as the complete airplane, which are not visible on some of the coarse grids); large grid aspect ratios; boundary layer resolution; and grid adaption.

  8. National Offshore Wind Energy Grid Interconnection Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, John P.; Liu, Shu; Ibanez, Eduardo

    2014-07-30

    The National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) considers the availability and potential impacts of interconnecting large amounts of offshore wind energy into the transmission system of the lower 48 contiguous United States. A total of 54 GW of offshore wind was assumed to be the target for the analyses conducted. A variety of issues are considered, including: the anticipated staging of offshore wind; the offshore wind resource availability; offshore wind energy power production profiles; offshore wind variability; present and potential technologies for collection and delivery of offshore wind energy to the onshore grid; potential impacts to existing utility systems most likely to receive large amounts of offshore wind; and regulatory influences on offshore wind development. The technology assessment considered the reliability of various high-voltage ac (HVAC) and high-voltage dc (HVDC) options and configurations. The utility system impacts of GW-scale integration of offshore wind are considered from an operational steady-state perspective and from a regional and national production cost perspective.

  9. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  10. Temporally structured replay of neural activity in a model of entorhinal cortex, hippocampus and postsubiculum

    PubMed Central

    Hasselmo, Michael E.

    2008-01-01

    The spiking activity of hippocampal neurons during REM sleep exhibits temporally structured replay of spiking occurring during previously experienced trajectories (Louie and Wilson, 2001). Here, temporally structured replay of place cell activity during REM sleep is modeled in a large-scale network simulation of grid cells, place cells and head direction cells. During simulated waking behavior, the movement of the simulated rat drives activity of a population of head direction cells that updates the activity of a population of entorhinal grid cells. The population of grid cells drives the activity of place cells coding individual locations. Associations between location and movement direction are encoded by modification of excitatory synaptic connections from place cells to speed modulated head direction cells. During simulated REM sleep, the population of place cells coding an experienced location activates the head direction cells coding the associated movement direction. Spiking of head direction cells then causes frequency shifts within the population of entorhinal grid cells to update a phase representation of location. Spiking grid cells then activate new place cells that drive new head direction activity. In contrast to models that perform temporally compressed sequence retrieval similar to sharp wave activity, this model can simulate data on temporally structured replay of hippocampal place cell activity during REM sleep at time scales similar to those observed during waking. These mechanisms could be important for episodic memory of trajectories. PMID:18973557

  11. The Mass-loss Return from Evolved Stars to the Large Magellanic Cloud. IV. Construction and Validation of a Grid of Models for Oxygen-rich AGB Stars, Red Supergiants, and Extreme AGB Stars

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin A.; Srinivasan, S.; Meixner, M.

    2011-02-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the "Grid of Red supergiant and Asymptotic giant branch star ModelS" (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10³ to 10⁶ L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10⁻⁴ to 26. From an initial grid of ~1200 2Dust models, we create a larger grid of ~69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.
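The step from ~1200 base models to ~69,000 by luminosity scaling can be sketched as follows. The assumption that the emergent spectral energy distribution simply scales linearly with luminosity at fixed shape is an illustrative simplification for this sketch, not a statement of the GRAMS scaling procedure.

```python
# Sketch: expand a base model grid by scaling each toy SED in luminosity.

def scale_sed(base_flux, base_lum, target_lum):
    """Hypothetical linear-in-L scaling of a model SED."""
    return [f * target_lum / base_lum for f in base_flux]

base = [1.0, 2.5, 4.0]                  # toy SED computed at L = 1e3 Lsun
grid = {L: scale_sed(base, 1e3, L) for L in (1e3, 1e4, 1e5, 1e6)}
# One radiative-transfer run yields several luminosity grid points, which
# is how a modest base grid can be expanded to cover the observed range.
```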

  12. ''A Parallel Adaptive Simulation Tool for Two Phase Steady State Reacting Flows in Industrial Boilers and Furnaces''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael J. Bockelie

    2002-01-04

    This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two phase turbulent reacting flow. The target applications for the new tool are full scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The type of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid can not contain too many grid points - the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture both fine flow field detail within a very large domain with a ''reasonable'' number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL).
Specifically, we combined: -physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International; -mesh adaption, data management, and parallelization software and technology being developed by users of the BoxLib library at LBNL; and -solution methods for problems formulated on block structured grids that were being developed in collaboration with technical staff members at the University of Utah Center for High Performance Computing (CHPC) and at LBNL. The combustion modeling software used by Reaction Engineering International represents an investment of over fifty man-years of development, conducted over a period of twenty years. Thus, it was impractical to achieve our objective by starting from scratch. The research program resulted in an adaptive grid, reacting CFD flow solver that can be used only on limited problems. In current form the code is appropriate for use on academic problems with simplified geometries. The new solver is not sufficiently robust or sufficiently general to be used in a ''production mode'' for industrial applications. The principle difficulty lies with the multi-level solver technology. The use of multi-level solvers on adaptive grids with embedded boundaries is not yet a mature field and there are many issues that remain to be resolved. From the lessons learned in this SBIR program, we have started work on a new flow solver with an AMR capability. The new code is based on a conventional cell-by-cell mesh refinement strategy used in unstructured grid solvers that employ hexahedral cells. The new solver employs several of the concepts and solution strategies developed within this research program. The formulation of the composite grid problem for the new solver has been designed to avoid the embedded boundary complications encountered in this SBIR project. 
This follow-on effort will result in a reacting flow CFD solver with localized mesh capability that can be used to perform engineering calculations on industrial problems in a production mode.

  13. Downscaling RCP8.5 daily temperatures and precipitation in Ontario using localized ensemble optimal interpolation (EnOI) and bias correction

    NASA Astrophysics Data System (ADS)

    Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping

    2017-10-01

    A novel method for daily temperature and precipitation downscaling is proposed in this study, which combines Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of the high-resolution temperature of the NCEP Climate Forecast System Reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle, together with information from global climate models (GCMs), is used to construct a gain matrix for each calendar day. Consequently, the relationship between large- and local-scale processes represented by the gain matrix changes accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method maintains spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high-resolution details without changing large-scale characteristics. It results in much lower absolute errors in local-scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. 
The downscaled ensemble projects significant warming, with amplitudes of 3.9 and 6.5 °C for the 2050s and 2080s relative to the 1990s in Ontario, respectively. Cooling degree days and hot days will increase significantly over southern Ontario, and heating degree days and cold days will decrease significantly in northern Ontario. Annual total precipitation will increase over Ontario, and heavy precipitation events will increase as well. These results are consistent with conclusions in many other studies in the literature.
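The distribution-mapping step used above for the precipitation bias correction can be sketched as a minimal empirical quantile-mapping routine (the function name and test data below are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical distribution mapping: replace each future model value with
    the observed value at the same empirical quantile that the value holds
    in the historical model distribution."""
    # Empirical CDF of the historical model values at the future values
    q = np.searchsorted(np.sort(model_hist), model_future, side="right") / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # Invert the observed CDF at those quantiles
    return np.quantile(np.sort(obs_hist), q)
```

Mapping through the empirical CDFs corrects systematic distributional biases while preserving the rank ordering of the model's daily values.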

  14. Interaction of grid generated turbulence with expansion waves

    NASA Astrophysics Data System (ADS)

    Xanthos, Savvas Steliou

    2004-11-01

    The interaction of traveling expansion waves with grid-generated turbulence was investigated in a large-scale shock tube research facility. The incident shock and the induced flow behind it passed through a rectangular grid, which generated a nearly homogeneous and nearly isotropic turbulent flow. As the shock wave exited the open end of the shock tube, a system of expansion waves was generated which traveled upstream and interacted with the grid-generated turbulence. The Mach number of the incoming flows investigated was about 0.3; hence, the interactions are considered interactions with an almost incompressible flow. Mild interactions with expansion waves, which generated expansion ratios of the order of 1.8, were achieved in the present investigations. In that respect the compressibility effects started to become important during the interaction. A custom-designed vorticity probe was used to measure, for the first time, the rate-of-strain, rate-of-rotation, and velocity-gradient tensors in several of the present flows. Custom-made x-hotwire probes were initially used to measure the flow quantities simultaneously at different locations inside the flow field. Although the strength of the generated expansion waves was mild, S = (∂U/∂x)_EW = 50 to 100 s⁻¹, the damping effect on turbulence fluctuations was clear. Vorticity fluctuations were reduced dramatically more than velocity or pressure fluctuations. Attenuation of longitudinal velocity fluctuations has been observed in all experiments. It appears that the attenuation increases in interactions with higher Reynolds number. The data of velocity fluctuations in the lateral directions show no consistent behavior change or some minor attenuation through the interaction. The present results clearly show that in most of the cases, attenuation occurs at large x/M distances, where the length scales of the incoming flow are large and turbulence intensities are low. 
Thus large eddies with low velocity fluctuations are affected the most by the interaction with the expansion waves. Spectral analysis indicated that spectral energy is shifted after the interaction to lower wave numbers, suggesting that the typical length scales of turbulence are increased after the interaction.

  15. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable-resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.

  16. Spatial Variability of Snowpack Properties On Small Slopes

    NASA Astrophysics Data System (ADS)

    Pielmeier, C.; Kronholm, K.; Schneebeli, M.; Schweizer, J.

    The spatial variability of alpine snowpacks is created by a variety of processes such as deposition, wind erosion, sublimation, melting, temperature, radiation and metamorphism of the snow. Spatial variability is thought to strongly control the avalanche initiation and failure propagation processes. Local snowpack measurements are currently the basis for avalanche warning services, and contradictory hypotheses exist about the spatial continuity of avalanche-active snow layers and interfaces. Very little is known so far about the spatial variability of the snowpack; we have therefore developed a systematic and objective method to measure the spatial variability of snowpack properties, layering and its relation to stability. For a complete coverage, the analysis of the spatial variability has to entail all scales from mm to km. In this study the small- to medium-scale spatial variability is investigated, i.e., the range from centimeters to tens of meters. During the winter 2000/2001 we took systematic measurements in lines and grids on a flat snow test field with grid distances from 5 cm to 0.5 m. Furthermore, we measured systematic grids with grid distances between 0.5 m and 2 m in undisturbed flat fields and on small slopes above the tree line at the Choerbschhorn, in the region of Davos, Switzerland. On 13 days we measured the spatial pattern of the snowpack stratigraphy with more than 110 snow micro penetrometer measurements at slopes and flat fields. Within this measuring grid we placed one rutschblock and 12 stuffblock tests to measure the stability of the snowpack. With the large number of measurements we are able to use geostatistical methods to analyse the spatial variability of the snowpack. Typical correlation lengths are calculated from semivariograms. Discerning the systematic trends from random spatial variability is analysed using statistical models. Scale dependencies are shown and recurring scaling patterns are outlined. 
The importance of the small and medium scale spatial variability for the larger (kilometer) scale spatial variability as well as for the avalanche formation are discussed. Finally, an outlook on spatial models for the snowpack variability is given.
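The geostatistical step described above, estimating correlation lengths from semivariograms, can be sketched with a generic empirical semivariogram (names and binning below are illustrative, not from the study):

```python
import numpy as np

def semivariogram(coords, values, bin_edges):
    """Empirical semivariogram: gamma(h) = 0.5 * mean squared difference of
    all point pairs whose separation distance falls in each lag bin."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)                  # all unordered pairs
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        mask = (d >= bin_edges[b]) & (d < bin_edges[b + 1])
        if mask.any():
            gamma[b] = sq[mask].mean()
    return gamma
```

A correlation length is then read off as the lag at which the fitted semivariogram levels out at its sill.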

  17. The impact of simulated mesoscale convective systems on global precipitation: A multiscale modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo; Chern, Jiun-Dar

    2017-06-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain of several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model, a CRM, and the Goddard MMF (GMMF), which uses the GCE as its embedded CRM. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.

  18. An engineering closure for heavily under-resolved coarse-grid CFD in large applications

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Yu, Fujiang; Jordan, Thomas

    2016-11-01

    Even though high performance computation allows very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up the simulation. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns as often observed in heat exchangers or porous structures. It is proposed to use inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics of the smaller scales, cheap subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained that provide, for each individual coarse mesh cell, a volume force vector and a volume porosity. Moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of Euler equations. We will describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
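The subgrid-model construction described above, fitting correlations from well-resolved representative simulations, might be sketched as a least-squares fit of a drag-type volume-force closure per coarse cell (the quadratic form f = -c·|u|·u is our assumption for illustration, not the method's actual correlation):

```python
import numpy as np

def fit_volume_force(u_samples, f_samples):
    """Least-squares fit of a quadratic drag-type closure f = -c * |u| * u
    for one coarse cell, using velocity/force samples extracted from a
    well-resolved representative run at varied flow conditions."""
    x = -np.abs(u_samples) * u_samples       # regressor for the drag law
    return (x @ f_samples) / (x @ x)         # closed-form 1-parameter LSQ
```

In the full methodology such correlations, together with volume and surface porosities, re-introduce the unresolved-scale physics on the coarse Euler mesh.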

  19. Spatio-Temporal Variability of Groundwater Storage in India

    NASA Technical Reports Server (NTRS)

    Bhanja, Soumendra; Rodell, Matthew; Li, Bailing; Mukherjee, Abhijit

    2016-01-01

    Groundwater level measurements from 3907 monitoring wells, distributed within 22 major river basins of India, are assessed to characterize their spatial and temporal variability. Groundwater storage (GWS) anomalies (relative to the long-term mean) exhibit strong seasonality, with annual maxima observed during the monsoon season and minima during the pre-monsoon season. Spatial variability of GWS anomalies increases with the extent of measurements, following the power law relationship, i.e., log-(spatial variability) is linearly dependent on log-(spatial extent). In addition, the impact of well spacing on spatial variability and the power law relationship is investigated. We found that the mean GWS anomaly sampled at a 0.25 degree grid scale is close to the unweighted average over all wells. The absolute error corresponding to each basin grows with increasing scale, i.e., from 0.25 degree to 1 degree. It was observed that small changes in extent could create very large changes in spatial variability at large grid scales. Spatial variability of the GWS anomaly has been found to vary with climatic conditions. To our knowledge, this is the first study of the effects of well spacing on groundwater spatial variability. The results may be useful for interpreting large scale groundwater variations from unevenly spaced or sparse groundwater well observations or for siting and prioritizing wells in a network for groundwater management. The output of this study could be used to maintain a cost effective groundwater monitoring network in the study region and the approach can also be used in other parts of the globe.
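The reported power-law relationship, log-(spatial variability) linear in log-(spatial extent), can be estimated by an ordinary least-squares fit in log-log space (a generic sketch; the function name and test data are ours, not the study's):

```python
import numpy as np

def power_law_fit(extent_km, variability):
    """Fit log(variability) = a*log(extent) + b and return the exponent a
    and the prefactor exp(b); power-law behaviour means the points fall
    close to a straight line in log-log space."""
    a, b = np.polyfit(np.log(extent_km), np.log(variability), 1)
    return a, np.exp(b)
```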

  20. Spatio-temporal variability of groundwater storage in India.

    PubMed

    Bhanja, Soumendra N; Rodell, Matthew; Li, Bailing; Mukherjee, Abhijit

    2017-01-01

    Groundwater level measurements from 3907 monitoring wells, distributed within 22 major river basins of India, are assessed to characterize their spatial and temporal variability. Groundwater storage (GWS) anomalies (relative to the long-term mean) exhibit strong seasonality, with annual maxima observed during the monsoon season and minima during the pre-monsoon season. Spatial variability of GWS anomalies increases with the extent of measurements, following the power law relationship, i.e., log-(spatial variability) is linearly dependent on log-(spatial extent). In addition, the impact of well spacing on spatial variability and the power law relationship is investigated. We found that the mean GWS anomaly sampled at a 0.25 degree grid scale is close to the unweighted average over all wells. The absolute error corresponding to each basin grows with increasing scale, i.e., from 0.25 degree to 1 degree. It was observed that small changes in extent could create very large changes in spatial variability at large grid scales. Spatial variability of the GWS anomaly has been found to vary with climatic conditions. To our knowledge, this is the first study of the effects of well spacing on groundwater spatial variability. The results may be useful for interpreting large scale groundwater variations from unevenly spaced or sparse groundwater well observations or for siting and prioritizing wells in a network for groundwater management. The output of this study could be used to maintain a cost effective groundwater monitoring network in the study region and the approach can also be used in other parts of the globe.

  1. A principle of economy predicts the functional architecture of grid cells

    PubMed Central

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-01-01

    Grid cells in the brain respond when an animal occupies a periodic lattice of ‘grid fields’ during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths. DOI: http://dx.doi.org/10.7554/eLife.08362.001 PMID:26335200
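The predicted scale ratio of √e can be recovered from a simple cost model (our reconstruction of the economy argument, not the paper's full derivation): with the resolution R fixed, a constant scale ratio r requires about ln(R)/ln(r) modules of roughly r^d neurons each in d dimensions, so the total neuron count scales as r^d/ln(r).

```python
import numpy as np

# Assumed cost model (a reconstruction for illustration): total neurons
# N(r) proportional to r**d / ln(r) for a fixed overall resolution.
d = 2                                   # navigation in two dimensions
r = np.linspace(1.01, 3.0, 100000)
N = r**d / np.log(r)
r_opt = r[np.argmin(N)]
# Analytic minimum: d*ln(r) = 1  =>  r = e**(1/d), i.e. sqrt(e) for d = 2,
# which lies inside the 1.4-1.7 range quoted for realistic neurons.
```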

  2. Uncertainty quantification in LES of channel flow

    DOE PAGES

    Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...

    2016-07-12

    In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for.

  3. Conceptual Design of the Everglades Depth Estimation Network (EDEN) Grid

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    INTRODUCTION The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). Ground elevation data for the greater Everglades and the digital ground elevation models derived from them form the foundation for all EDEN water depth and associated ecologic/hydrologic modeling (Jones, 2004; Jones and Price, 2007). To use EDEN water depth and duration information most effectively, it is important to be able to view and manipulate information on elevation data quality and other land cover and habitat characteristics across the Everglades region. These requirements led to the development of the geographic data layer described in this techniques and methods report. Drawing on extensive experience in GIS data development, distribution, and analysis, a great deal of forethought was put into the design of the geographic data layer used to index elevation and other surface characteristics for the Greater Everglades region. To allow for simplicity of design and use, the EDEN area was broken into a large number of equal-sized rectangles ('cells'), referred to collectively here as the 'grid'. Some characteristics of this grid, such as the size of its cells, its origin, the area of Florida it is designed to represent, and individual grid cell identifiers, could not be changed once the grid database was developed. Therefore, these characteristics were selected to design as robust a grid as possible and to ensure the grid's long-term utility. It is desirable to include all pertinent information known about elevation and elevation data collection as grid attributes. Also, it is very important to allow for efficient grid post-processing, sub-setting, analysis, and distribution. 
This document details the conceptual design of the EDEN grid spatial parameters and cell attribute-table content.

  4. GLOFRIM v1.0 - A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.

    2017-10-01

    We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin, not only to test the framework thoroughly but also to perform a first-ever benchmark of flexible and regular grids at large scale. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area, a critical success index of 0.46 was obtained, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent, as we control for model forcing and boundary conditions. 
This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed as open access and easily extendable, and thus we hope that other large-scale hydrological and hydrodynamic models will be added, eventually capturing more locally relevant processes and enabling more robust model inter-comparison, benchmarking, and large-scale ensemble simulations of flood hazard.
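The two benchmark scores quoted above, the Kling-Gupta efficiency for discharge and the critical success index for inundation extent, have standard definitions that can be sketched directly (generic implementations, not GLOFRIM code):

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    correlation, alpha the ratio of standard deviations, and beta the ratio
    of the means (bias). A perfect simulation scores 1."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def critical_success_index(sim_flooded, obs_flooded):
    """CSI = hits / (hits + misses + false alarms) for boolean flood maps;
    0.5 means the models agree as often as they disagree."""
    hits = np.sum(sim_flooded & obs_flooded)
    misses = np.sum(~sim_flooded & obs_flooded)
    false_alarms = np.sum(sim_flooded & ~obs_flooded)
    return hits / (hits + misses + false_alarms)
```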

  5. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    NASA Astrophysics Data System (ADS)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national-scale flood risk analyses using high-resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large-scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms, and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local-scale features. Global flood frameworks are enabling flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution, a resolution increase of multiple orders of magnitude over previous countrywide datasets. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
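The core integration step described above, overlaying modelled water depths with gridded population, reduces to a masked sum once both rasters share a grid (a deliberately minimal sketch; real analyses also handle reprojection, nodata values, and depth-damage relationships):

```python
import numpy as np

def exposed_population(depth_m, population, threshold_m=0.0):
    """Sum the population in cells where modelled water depth exceeds a
    threshold. Assumes both rasters are co-registered on the same grid."""
    return population[depth_m > threshold_m].sum()
```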

  6. NREL Supercomputer Tackles Grid Challenges | News | NREL

    Science.gov Websites

    Collaboration is key, and it is hard-wired into the ESIF's core.

  7. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1981-01-01

    Progress is reported on reading MAGSAT tapes and on the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
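A linear-current-element representation of the kind described computes the field at satellite altitude by summing Biot-Savart contributions from straight segments; a single-segment sketch by direct numerical integration (illustrative, not the study's code):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def segment_field(p, a, b, current, n=2000):
    """Magnetic field at point p from a straight current element running
    from a to b, by numerical integration of the Biot-Savart law."""
    t = np.linspace(0.0, 1.0, n)
    dl = (b - a) / n                          # element vector
    r = p - (a + np.outer(t, b - a))          # vectors from elements to p
    r3 = np.linalg.norm(r, axis=1) ** 3
    integrand = np.cross(np.tile(dl, (n, 1)), r) / r3[:, None]
    return MU0 * current / (4 * np.pi) * integrand.sum(axis=0)
```

Summing `segment_field` over every element of a space-current model yields the model field at each satellite position.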

  8. “Developing Regional Modeling Techniques Applicable for Simulating Future Climate Conditions in the Carolinas”

    EPA Science Inventory

    Global climate models (GCMs) are currently used to obtain information about future changes in the large-scale climate. However, such simulations are typically done at coarse spatial resolutions, with model grid boxes on the order of 100 km on a horizontal side. Therefore, techniq...

  9. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    PubMed

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
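The MGCG structure can be illustrated with a generic preconditioned conjugate-gradient loop; here a Jacobi (diagonal) preconditioner stands in for the multigrid cycle, which is the component that delivers the reported O(n log n) cost (an illustrative sketch, not the authors' reconstructor):

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive-definite
    matrix A; M_inv is any callable approximating the action of A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def jacobi_preconditioner(A):
    """Diagonal preconditioner: a cheap stand-in for a multigrid cycle."""
    d = np.diag(A)
    return lambda r: r / d
```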

  10. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    NASA Astrophysics Data System (ADS)

    Xiao, Xudong

    Three LES/RANS hybrid schemes have been proposed for the prediction of high speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25 degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improved agreement with experiment in the recovery region. In the 20 degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
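The abstract stresses that the blending function must be monotonic and that two of the schemes base it on the turbulence dissipation length scale and the grid size. A generic monotonic blend of that form (an illustrative construction, not the thesis' exact definition):

```python
import numpy as np

def blend(l_turb, delta, sharpness=4.0):
    """Generic monotonic RANS-to-LES blending function (illustrative):
    returns 0 (pure RANS) where the grid cannot resolve the turbulence
    length scale l_turb, and 1 (pure LES) where it can."""
    ratio = l_turb / delta
    return 0.5 * (1.0 + np.tanh(sharpness * (ratio - 1.0)))
```

Monotonicity in the ratio l_turb/delta is exactly the property the grid-refinement studies found necessary to keep RANS regions from reappearing inside the LES region.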

  11. A critical remark on the applicability of E-OBS European gridded temperature data set for validating control climate simulations

    NASA Astrophysics Data System (ADS)

    Kyselý, Jan; Plavcová, Eva

    2010-12-01

    The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are too warm by almost 2°C in E-OBS on average. A large bias is found also for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and limitations with respect to its applicability for evaluating RCMs stem primarily from (1) insufficient density of information from station observations used for the interpolation, including the fact that the stations available may not be representative for a wider area, and (2) inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essentially needed for more reliable validation of climate models against recent climate on a continental scale.

  12. Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations

    NASA Astrophysics Data System (ADS)

    Linders, Viktor; Kupiainen, Marco; Nordström, Jan

    2017-07-01

    We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
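
The dispersion error targeted by such optimisation can be illustrated with a standard modified-wavenumber analysis of plain central-difference stencils (an elementary sketch only; the optimised SBP operators of the paper are not reproduced here):

```python
import numpy as np

# Modified wavenumber of 2nd- and 4th-order central differences: for
# u = exp(i*k*x), D u = i*k_mod(k) u. The gap between k_mod and k is the
# dispersion error that operator optimisation tries to minimise.
k = np.linspace(0.0, np.pi, 200)           # wavenumber times grid spacing

kmod2 = np.sin(k)                          # 2nd order: (u[j+1]-u[j-1])/2h
kmod4 = (8*np.sin(k) - np.sin(2*k)) / 6    # 4th order, 5-point stencil

err2 = np.abs(kmod2 - k)
err4 = np.abs(kmod4 - k)
# The higher-order stencil tracks k further before diverging, i.e. it
# resolves larger wavenumbers on the same grid.
print("max error up to k*h = pi/2:",
      err2[k <= np.pi/2].max(), err4[k <= np.pi/2].max())
```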

  13. Spatiotemporal Variability of Turbulence Kinetic Energy Budgets in the Convective Boundary Layer over Both Simple and Complex Terrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Raj K.; Berg, Larry K.; Pekour, Mikhail

    The assumption of sub-grid scale (SGS) horizontal homogeneity within a model grid cell, which forms the basis of SGS turbulence closures used by mesoscale models, becomes increasingly tenuous as grid spacing is reduced to a few kilometers or less, such as in many emerging high-resolution applications. Herein, we use the turbulence kinetic energy (TKE) budget equation to study the spatio-temporal variability in two types of terrain: complex (Columbia Basin Wind Energy Study [CBWES] site, north-eastern Oregon) and flat (Scaled Wind Farm Technologies [SWiFT] site, west Texas), using the Weather Research and Forecasting (WRF) model. In each case six nested domains (three each for the mesoscale and large-eddy simulation [LES]) are used to downscale the horizontal grid spacing from 10 km to 10 m within the WRF model framework. The model output was used to calculate the values of the TKE budget terms in vertical and horizontal planes, as well as the averages of grid cells contained in the four quadrants (a quarter area) of the LES domain. The budget terms calculated along the planes and the mean profiles of the budget terms show larger spatial variability at the CBWES site than at the SWiFT site. The horizontal-derivative contribution to the shear production term was found to be 45% and 15% of the total shear production at the CBWES and SWiFT sites, respectively, indicating that the horizontal derivatives in the budget equation should not be ignored in mesoscale model parameterizations, especially over complex terrain at scales of <10 km.
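
The split between vertical- and horizontal-derivative contributions to shear production can be sketched with a toy two-term evaluation of the TKE budget, using entirely synthetic covariances and mean-wind gradients (not the WRF output of the study):

```python
# Toy evaluation of TKE shear production split into vertical- and
# horizontal-derivative parts, P = -<u'w'> dU/dz - <u'u'> dU/dx
# (a 2-D subset of the full budget; all numbers below are invented).
uw_flux, uu_var = -0.15, 0.60      # m^2/s^2, hypothetical (co)variances
dUdz, dUdx = 0.02, 0.004           # 1/s, mean-wind gradients

p_vert = -uw_flux * dUdz           # vertical-derivative term
p_horiz = -uu_var * dUdx           # horizontal-derivative term
frac = abs(p_horiz) / (abs(p_vert) + abs(p_horiz))
print(f"horizontal share of shear production: {frac:.0%}")
```

Over complex terrain the horizontal share can become a large fraction of the total, which is why dropping these terms in a closure becomes questionable at fine grid spacing.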

  14. Insights into the physico-chemical evolution of pyrogenic organic carbon emissions from biomass burning using coupled Lagrangian-Eulerian simulations

    NASA Astrophysics Data System (ADS)

    Suciu, L. G.; Griffin, R. J.; Masiello, C. A.

    2017-12-01

    Wildfires and prescribed burning are important sources of particulate and gaseous pyrogenic organic carbon (PyOC) emissions to the atmosphere. These emissions impact atmospheric chemistry, air quality and climate, but the spatial and temporal variability of these impacts is poorly understood, primarily because small and fresh fire plumes are not well predicted by three-dimensional Eulerian chemical transport models owing to their coarse grid size. Generally, this results in underestimation of downwind deposition of PyOC, hydroxyl radical reactivity, secondary organic aerosol formation and ozone (O3) production. However, such models are very good for simulating the multiple atmospheric processes that could affect the lifetimes of PyOC emissions over large spatiotemporal scales. Finer-resolution models, such as Lagrangian reactive plume models (or plume-in-grid models), could be used to trace fresh emissions at the sub-grid level of the Eulerian model. Moreover, Lagrangian plume models need the background chemistry predicted by the Eulerian models to accurately simulate the interactions of the plume material with the background air during plume aging. Therefore, by coupling the two models, the physico-chemical evolution of biomass burning plumes can be tracked from local to regional scales. In this study, we focus on the physico-chemical changes of PyOC emissions from sub-grid to grid levels using an existing chemical mechanism. We hypothesize that finer-scale Lagrangian-Eulerian simulations of several prescribed burns in the U.S. will allow more accurate downwind predictions (validated by airborne observations from smoke plumes) of PyOC emissions (i.e., submicron particulate matter, organic aerosols, refractory black carbon) as well as O3 and other trace gases. Simulation results could be used to optimize the implementation of additional PyOC speciation in the existing chemical mechanism.

  15. First look at changes in flood hazard in the Inter-Sectoral Impact Model Intercomparison Project ensemble

    PubMed Central

    Dankers, Rutger; Arnell, Nigel W.; Clark, Douglas B.; Falloon, Pete D.; Fekete, Balázs M.; Gosling, Simon N.; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik

    2014-01-01

    Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20–45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5–30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies. PMID:24344290
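
The 30-y return level used as the hazard indicator can be sketched with a Gumbel fit by the method of moments, a common simplification of full GEV analysis, applied here to synthetic annual-maximum 5-d flows (not the study's model output):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual-maximum 5-day-average flows (m^3/s) for one grid
# point; 60 "years" drawn from a Gumbel distribution for illustration.
ann_max = rng.gumbel(loc=800.0, scale=120.0, size=60)

def gumbel_return_level(x, T):
    """T-year return level from a moments-based Gumbel fit."""
    beta = np.std(x) * np.sqrt(6.0) / np.pi       # scale from the variance
    mu = np.mean(x) - 0.5772 * beta               # location (Euler gamma)
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

rl30 = gumbel_return_level(ann_max, 30)
print(f"30-y return level: {rl30:.0f} m^3/s")
```

Comparing this level between control and RCP8.5 periods, grid point by grid point, gives the sign-of-change maps discussed in the abstract.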

  16. Towards retrieving critical relative humidity from ground-based remote sensing observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Weverberg, Kwinten; Boutle, Ian; Morcrette, Cyril J.

    2016-08-22

    Nearly all parameterisations of large-scale cloud require the specification of the critical relative humidity (RHcrit). This is the gridbox-mean relative humidity at which the subgrid fluctuations in temperature and water vapour become so large that part of a subsaturated gridbox becomes saturated and cloud starts to form. Until recently, the lack of high-resolution observations of temperature and moisture variability has hindered a reasonable estimate of RHcrit from observations. However, with the advent of ground-based measurements from Raman lidar, it becomes possible to obtain long records of temperature and moisture (co-)variances with sub-minute sample rates. Lidar observations are inherently noisy, and any analysis of higher-order moments is very dependent on the ability to quantify and remove this noise. We present an exploratory study aimed at understanding whether current noise levels of lidar-retrieved temperature and water vapour are low enough to obtain a reasonable estimate of RHcrit. We show that vertical profiles of RHcrit can be derived for a gridbox length of up to about 30 km (120 km) with an uncertainty of about 4% (2%). RHcrit tends to be smallest near the scale height and seems to be fairly insensitive to the horizontal grid spacing at the scales investigated here (30-120 km). However, larger sensitivity was found to the vertical grid spacing. As the vertical grid spacing decreases from 400 to 100 m, RHcrit is observed to increase by about 6%, which is more than the uncertainty in the RHcrit retrievals.
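
The definition of RHcrit above can be sketched under a simple assumed subgrid distribution: if RH fluctuations are uniform with standard deviation sigma, saturation first occurs at a gridbox-mean RH of 1 - sqrt(3)*sigma (the numbers below are invented, not lidar retrievals):

```python
import numpy as np

rng = np.random.default_rng(2)

# Saturation first occurs in the moistest part of a gridbox, so cloud
# onset is at a gridbox-mean RH of 1 - d, where d is the maximum positive
# subgrid RH fluctuation. For uniform fluctuations, d = sqrt(3)*sigma.
sigma = 0.04                     # hypothetical subgrid std of RH
rh_crit = 1.0 - np.sqrt(3.0) * sigma
print(f"RHcrit ~ {rh_crit:.2f}")

# Monte-Carlo check: with uniform fluctuations of this sigma, a gridbox
# at mean RH just above rh_crit contains saturated subcolumns.
fluct = rng.uniform(-np.sqrt(3)*sigma, np.sqrt(3)*sigma, size=100_000)
assert np.max(rh_crit + 0.001 + fluct) >= 1.0
```

The (co-)variances retrieved from Raman lidar play the role of sigma here, which is why their noise level directly limits the accuracy of the RHcrit estimate.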

  17. Differences in Visual-Spatial Input May Underlie Different Compression Properties of Firing Fields for Grid Cell Modules in Medial Entorhinal Cortex

    PubMed Central

    Raudies, Florian; Hasselmo, Michael E.

    2015-01-01

    Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432

  18. Mechanics of Flapping Flight: Analytical Formulations of Unsteady Aerodynamics, Kinematic Optimization, Flight Dynamics, and Control

    NASA Astrophysics Data System (ADS)

    Taneja, Jayant Kumar

    Electricity is an indispensable commodity to modern society, yet it is delivered via a grid architecture that remains largely unchanged over the past century. A host of factors are conspiring to topple this dated yet venerated design: developments in renewable electricity generation technology, policies to reduce greenhouse gas emissions, and advances in information technology for managing energy systems. Modern electric grids are emerging as complex distributed systems in which a portfolio of power generation resources, often incorporating fluctuating renewable resources such as wind and solar, must be managed dynamically to meet uncontrolled, time-varying demand. Uncertainty in both supply and demand makes control of modern electric grids fundamentally more challenging, and growing portfolios of renewables exacerbate the challenge. We study three electricity grids: the state of California, the province of Ontario, and the country of Germany. To understand the effects of increasing renewables, we develop a methodology to scale renewables penetration. Analyzing these grids yields key insights about rigid limits to renewables penetration and their implications in meeting long-term emissions targets. We argue that to achieve deep penetration of renewables, the operational model of the grid must be inverted, changing the paradigm from load-following supplies to supply-following loads. To alleviate the challenge of supply-demand matching on deeply renewable grids, we first examine well-known techniques, including altering management of existing supply resources, employing utility-scale energy storage, targeting energy efficiency improvements, and exercising basic demand-side management. Then, we create several instantiations of supply-following loads -- including refrigerators, heating and cooling systems, and laptop computers -- by employing a combination of sensor networks, advanced control techniques, and enhanced energy storage. 
We examine the capacity of each load for supply-following and study the behaviors of populations of these loads, assessing their potential at various levels of deployment throughout the California electricity grid. Using combinations of supply-following strategies, we can reduce peak natural gas generation by 19% on a model of the California grid with 60% renewables. We then assess remaining variability on this deeply renewable grid incorporating supply-following loads, characterizing additional capabilities needed to ensure supply-demand matching in future sustainable electricity grids.
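
The supply-following idea can be sketched with a toy thermostatic load that defers its cooling to hours of renewable surplus, while a temperature deadband can override the supply signal (all signals, thresholds, and parameters are hypothetical):

```python
# Minimal sketch of a supply-following load: a refrigerator-like thermal
# load chases hours of renewable surplus while staying inside a deadband.
supply = [0.2, 0.9, 1.0, 0.3, 0.1, 0.8, 1.0, 0.4]   # renewable availability
temp, t_min, t_max = 4.0, 2.0, 6.0                  # deg C deadband
schedule = []
for s in supply:
    must_run = temp >= t_max                        # constraint overrides
    run = must_run or (s > 0.7 and temp > t_min)    # chase surplus hours
    temp += -1.0 if run else 0.8                    # cool vs passive drift
    schedule.append(run)
print(schedule)
```

Aggregating many such loads shifts demand toward hours of renewable generation, which is the population-level effect assessed in the study.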

  19. VisIVO: A Library and Integrated Tools for Large Astrophysical Dataset Exploration

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Costa, A.; Ersotelos, N.; Krokos, M.; Massimino, P.; Petta, C.; Vitello, F.

    2012-09-01

    VisIVO provides an integrated suite of tools and services that can be used in many scientific fields. VisIVO development started within the Virtual Observatory framework. VisIVO allows users to create meaningful visualizations of highly complex, large-scale datasets and movies of these visualizations based on distributed infrastructures. VisIVO supports high-performance, multi-dimensional visualization of large-scale astrophysical datasets. Users can rapidly obtain meaningful visualizations while preserving full and intuitive control of the relevant parameters. VisIVO consists of VisIVO Desktop, a stand-alone application for interactive visualization on standard PCs; VisIVO Server, a platform for high-performance visualization; VisIVO Web, a custom-designed web portal; VisIVO Smartphone, an application to exploit the VisIVO Server functionality; and the latest VisIVO feature, the VisIVO Library, which allows a job running on a computational system (grid, HPC, etc.) to produce movies directly from the code's internal data arrays without the need to produce intermediate files. This is particularly important when running on large computational facilities, where the user wants to look at the results during the data production phase. For example, in grid computing facilities, images can be produced directly in the grid catalogue while the user code is running on a system that cannot be directly accessed by the user (a worker node). The deployment of VisIVO on the DG and gLite is carried out with the support of the EDGI and EGI-Inspire projects. Depending on the structure and size of the datasets under consideration, the data exploration process could take several hours of CPU time for creating customized views, and the production of movies could potentially last several days. For this reason an MPI parallel version of VisIVO could play a fundamental role in increasing performance, e.g. it could be automatically deployed on nodes that are MPI aware. 
A central concept in our development is thus to produce unified code that can run either on serial nodes or in parallel by using HPC oriented grid nodes. Another important aspect, to obtain as high performance as possible, is the integration of VisIVO processes with grid nodes where GPUs are available. We have selected CUDA for implementing a range of computationally heavy modules. VisIVO is supported by EGI-Inspire, EDGI and SCI-BUS projects.

  20. Western Wind and Solar Integration Study Phase 3A: Low Levels of Synchronous Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Nicholas W.; Leonardi, Bruno; D'Aquila, Robert

    The stability of the North American electric power grids under conditions of high penetrations of wind and solar is a significant concern and possible impediment to reaching renewable energy goals. The 33% wind and solar annual energy penetration considered in this study results in substantial changes to the characteristics of the bulk power system. This includes different power flow patterns, different commitment and dispatch of existing synchronous generation, and different dynamic behavior from wind and solar generation. The Western Wind and Solar Integration Study (WWSIS), sponsored by the U.S. Department of Energy, is one of the largest regional solar and wind integration studies to date. In multiple phases, it has explored different aspects of the question: Can we integrate large amounts of wind and solar energy into the electric power system of the West? The work reported here focused on the impact of low levels of synchronous generation on the transient stability performance in one part of the region in which wind generation has displaced synchronous thermal generation under highly stressed, weak system conditions. It is essentially an extension of WWSIS-3. Transient stability, the ability of the power system to maintain synchronism among all elements following disturbances, is a major constraint on operations in many grids, including the western U.S. and Texas systems. These constraints primarily concern the performance of the large-scale bulk power system. But grid-wide stability concerns with high penetrations of wind and solar are still not thoroughly understood. This work focuses on 'traditional' fundamental frequency stability issues, such as maintaining synchronism, frequency, and voltage. 
The objectives of this study are to better understand the implications of low levels of synchronous generation and a weak grid on overall system performance by: 1) Investigating the Western Interconnection under conditions of both high renewable generation (e.g., wind and solar) and low synchronous generation (e.g., significant coal power plant decommitment or retirement); and 2) Analyzing both the large-scale stability of the Western Interconnection and regional stability issues driven by more geographically dispersed renewable generation interacting with a transmission grid that evolved with large, central station plants at key nodes. As noted above, the work reported here is an extension of the research performed in WWSIS-3.

  1. Utilizing data grid architecture for the backup and recovery of clinical image data.

    PubMed

    Liu, Brent J; Zhou, M Z; Documet, J

    2005-01-01

    Grid computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models. However, there has been limited investigation into the impact of this emerging technology in medical imaging and informatics. In particular, PACS technology, an established clinical image repository system, while having matured significantly during the past ten years, still remains weak in the area of clinical image data backup. Current solutions are expensive or time-consuming, and the technology is far from foolproof. Many large-scale PACS archive systems still encounter downtime for hours or days, which has the critical effect of crippling daily clinical operations. In this paper, a review of current backup solutions will be presented along with a brief introduction to grid technology. Finally, research and development utilizing the grid architecture for the recovery of clinical image data, in particular PACS image data, will be presented. The focus of this paper is centered on applying a grid computing architecture to a DICOM environment, since DICOM has become the standard for clinical image data and PACS utilizes this standard. A federation of PACS can be created, allowing a failed PACS archive to recover its image data from others in the federation in a seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid is composed of one research laboratory and two clinical sites. The Globus 3.0 Toolkit (co-developed by Argonne National Laboratory and the Information Sciences Institute, USC) is utilized to develop the core and user-level middleware and to achieve grid connectivity. 
The successful implementation and evaluation of utilizing data grid architecture for clinical PACS data backup and recovery will provide an understanding of the methodology for using Data Grid in clinical image data backup for PACS, as well as establishment of benchmarks for performance from future grid technology improvements. In addition, the testbed can serve as a road map for expanded research into large enterprise and federation level data grids to guarantee CA (Continuous Availability, 99.999% up time) in a variety of medical data archiving, retrieval, and distribution scenarios.

  2. The Impact of Sika Deer on Vegetation in Japan: Setting Management Priorities on a National Scale

    NASA Astrophysics Data System (ADS)

    Ohashi, Haruka; Yoshikawa, Masato; Oono, Keiichi; Tanaka, Norihisa; Hatase, Yoriko; Murakami, Yuhide

    2014-09-01

    Irreversible shifts in ecosystems caused by large herbivores are becoming widespread around the world. We analyzed data derived from the 2009-2010 Sika Deer Impact Survey, which assessed the geographical distribution of deer impacts on vegetation through a questionnaire, on a scale of 5-km grid-cells. Our aim was to identify areas facing irreversible ecosystem shifts caused by deer overpopulation and in need of management prioritization. Our results demonstrated that the areas with heavy impacts on vegetation were widely distributed across Japan from north to south and from the coastal to the alpine areas. Grid-cells with heavy impacts are especially expanding in the southwestern part of the Pacific side of Japan. The intensity of deer impacts was explained by four factors: (1) the number of 5-km grid-cells with sika deer in neighboring 5 km-grid-cells in 1978 and 2003, (2) the year sika deer were first recorded in a grid-cell, (3) the number of months in which maximum snow depth exceeded 50 cm, and (4) the proportion of urban areas in a particular grid-cell. Based on our model, areas with long-persistent deer populations, short snow periods, and fewer urban areas were predicted to be the most vulnerable to deer impact. Although many areas matching these criteria already have heavy deer impact, there are some areas that remain only slightly impacted. These areas may need to be designated as having high management priority because of the possibility of a rapid intensification of deer impact.

  3. The impact of Sika deer on vegetation in Japan: setting management priorities on a national scale.

    PubMed

    Ohashi, Haruka; Yoshikawa, Masato; Oono, Keiichi; Tanaka, Norihisa; Hatase, Yoriko; Murakami, Yuhide

    2014-09-01

    Irreversible shifts in ecosystems caused by large herbivores are becoming widespread around the world. We analyzed data derived from the 2009-2010 Sika Deer Impact Survey, which assessed the geographical distribution of deer impacts on vegetation through a questionnaire, on a scale of 5-km grid-cells. Our aim was to identify areas facing irreversible ecosystem shifts caused by deer overpopulation and in need of management prioritization. Our results demonstrated that the areas with heavy impacts on vegetation were widely distributed across Japan from north to south and from the coastal to the alpine areas. Grid-cells with heavy impacts are especially expanding in the southwestern part of the Pacific side of Japan. The intensity of deer impacts was explained by four factors: (1) the number of 5-km grid-cells with sika deer in neighboring 5 km-grid-cells in 1978 and 2003, (2) the year sika deer were first recorded in a grid-cell, (3) the number of months in which maximum snow depth exceeded 50 cm, and (4) the proportion of urban areas in a particular grid-cell. Based on our model, areas with long-persistent deer populations, short snow periods, and fewer urban areas were predicted to be the most vulnerable to deer impact. Although many areas matching these criteria already have heavy deer impact, there are some areas that remain only slightly impacted. These areas may need to be designated as having high management priority because of the possibility of a rapid intensification of deer impact.

  4. Progress in the Development of a Global Quasi-3-D Multiscale Modeling Framework

    NASA Astrophysics Data System (ADS)

    Jung, J.; Konor, C. S.; Randall, D. A.

    2017-12-01

    The Quasi-3-D Multiscale Modeling Framework (Q3D MMF) is a second-generation MMF with the following advances over the first-generation MMF: 1) the cloud-resolving models (CRMs) that replace conventional parameterizations are not confined to the large-scale dynamical-core grid cells, and are seamlessly connected to each other; 2) the CRMs sense the three-dimensional large- and cloud-scale environment; 3) two perpendicular sets of CRM channels are used; and 4) the CRMs can resolve steep surface topography along the channel direction. The basic design of the Q3D MMF has been developed and successfully tested in a limited-area modeling framework. Currently, global versions of the Q3D MMF are being developed for both weather and climate applications. The dynamical cores governing the large-scale circulation in the global Q3D MMF are selected from two cube-based global atmospheric models. The CRM used in the model is the 3-D nonhydrostatic anelastic Vector-Vorticity Model (VVM), which has been tested with the limited-area version for its suitability for this framework. As a first step of the development, the VVM has been reconstructed on the cubed-sphere grid so that it can be applied to global channel domains and also easily fitted to the large-scale dynamical cores. We have successfully tested the new VVM by advecting a bell-shaped passive tracer and by simulating the evolution of waves resulting from idealized barotropic and baroclinic instabilities. To improve the model, we have also modified the tracer advection scheme to yield positive-definite results and plan to implement a new physics package that includes double-moment microphysics and aerosol physics. The interface coupling the large-scale dynamical core and the VVM is under development. In this presentation, we shall describe the recent progress in the development and show some test results.

  5. The Electrochemical Flow Capacitor: Capacitive Energy Storage in Flowable Media

    NASA Astrophysics Data System (ADS)

    Dennison, Christopher R.

    Electrical energy storage (EES) has emerged as a necessary aspect of grid infrastructure to address the increasing problem of grid instability imposed by the large scale implementation of renewable energy sources (such as wind or solar) on the grid. Rapid energy recovery and storage is critically important to enable immediate and continuous utilization of these resources, and provides other benefits to grid operators and consumers as well. In past decades, there has been significant progress in the development of electrochemical EES technologies which has had an immense impact on the consumer and micro-electronics industries. However, these advances primarily address small-scale storage, and are often not practical at the grid-scale. A new energy storage concept called "the electrochemical flow capacitor (EFC)" has been developed at Drexel which has significant potential to be an attractive technology for grid-scale energy storage. This new concept exploits the characteristics of both supercapacitors and flow batteries, potentially enabling fast response rates with high power density, high efficiency, and long cycle lifetime, while decoupling energy storage from power output (i.e., scalable energy storage capacity). The unique aspect of this concept is the use of flowable carbon-electrolyte slurry ("flowable electrode") as the active material for capacitive energy storage. This dissertation work seeks to lay the scientific groundwork necessary to develop this new concept into a practical technology, and to test the overarching hypothesis that energy can be capacitively stored and recovered from a flowable media. In line with these goals, the objectives of this Ph.D. work are to: i) perform an exploratory investigation of the operating principles and demonstrate the technical viability of this new concept and ii) establish a scientific framework to assess the key linkages between slurry composition, flow cell design, operating conditions and system performance. 
To achieve these goals, a combined experimental and computational approach is undertaken. The technical viability of the technology is demonstrated, and in-depth studies are performed to understand the coupling between flow rate and slurry conductivity, and localized effects arising within the cell. The outlook of EFCs and other flowable electrode technologies is assessed, and opportunities for future work are discussed.

  6. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    NASA Astrophysics Data System (ADS)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. 
Central to StagBL is the notion of an uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.

  7. Large-scale ground motion simulation using GPGPU

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of various simulations are necessary to evaluate the variability of the assessment caused by uncertainty of the assumptions of the source models for future earthquakes. To conquer the problem of restricted computational resources, we introduced the use of GPGPU (General purpose computing on graphics processing units) which is the technique of using a GPU as an accelerator of the computation which has been traditionally conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on 3-D FDM scheme using discontinuous grids (Aoi&Fujiwara, 1999), which includes the solver as well as the preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on the TSUBAME2.0 which is one of the Japanese fastest supercomputer operated by the Tokyo Institute of Technology. First we have performed a strong scaling test using the model with about 22 million grids and achieved 3.2 and 7.3 times of the speed-up by using 4 and 16 GPUs. Next, we have examined a weak scaling test where the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs). 
The results showed almost perfect linearity up to a simulation with 22 billion grid points on 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number of cores. Finally, we applied the GPU calculation to a simulation of the 2011 Tohoku-oki earthquake. The model was constructed using a slip model from inversion of strong motion data (Suzuki et al., 2012) and a geologically and geophysically based velocity structure model covering all of the Tohoku and Kanto regions as well as the large source area, consisting of about 1.9 billion grid points. The overall characteristics of observed velocity seismograms at periods longer than 8 s were successfully reproduced (Maeda et al., 2012 AGU meeting). The turnaround time for the 50,000-step calculation (corresponding to 416 s of seismogram time) using 100 GPUs was 52 minutes, which is fairly short, especially considering that this is the performance for a realistic and complex model.
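The strong- and weak-scaling figures quoted in this abstract can be turned into parallel-efficiency numbers with two one-line formulas; a minimal sketch (the GPU counts and speed-ups are taken from the abstract, everything else is illustrative):

```python
def strong_scaling_efficiency(speedup, n_workers):
    """Fraction of ideal linear speedup achieved when the problem
    size is held fixed and workers are added."""
    return speedup / n_workers

def weak_scaling_efficiency(t_base, t_scaled):
    """Ideal weak scaling keeps runtime constant as problem size and
    worker count grow together; 1.0 means perfect linearity."""
    return t_base / t_scaled

# Speed-ups reported in the abstract: 3.2x on 4 GPUs, 7.3x on 16 GPUs
print(strong_scaling_efficiency(3.2, 4))   # 0.8
print(strong_scaling_efficiency(7.3, 16))  # 0.45625
```

The drop from 80% to ~46% efficiency between 4 and 16 GPUs is typical when halo-exchange communication starts to dominate a fixed-size problem, which is why the weak-scaling test (constant work per GPU) is the more relevant metric for the 22-billion-point runs.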

  8. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a large-eddy simulation (LES) model and a cloud-tracking algorithm, followed by conditional sampling of clouds at cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of the shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on these empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory.
The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds important for the ensemble statistics? We also test for the minimal information that must be given as input to the stochastic model to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
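The compound random process described in this abstract can be sketched in a few lines; a hedged illustration (parameter names and values are hypothetical, and the generalized ensemble distribution is simplified here to a plain exponential):

```python
import math
import random

def sample_subgrid_mass_flux(mean_n_clouds, mean_flux, rng):
    """One compound-Poisson draw of total cloud-base mass flux in a
    grid column: N ~ Poisson(mean_n_clouds) convective elements, each
    with an exponentially (Boltzmann-like) distributed mass flux."""
    # Poisson draw via Knuth's method (adequate for modest means)
    limit, n, p = math.exp(-mean_n_clouds), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        n += 1
    return sum(rng.expovariate(1.0 / mean_flux) for _ in range(n))

rng = random.Random(42)
# Scale-adaptivity: a smaller grid box holds fewer clouds on average,
# so the relative spread of its total flux is broader.
small_box = [sample_subgrid_mass_flux(4.0, 2.0, rng) for _ in range(5000)]
large_box = [sample_subgrid_mass_flux(64.0, 2.0, rng) for _ in range(5000)]
```

For a compound Poisson process the relative spread of the total scales roughly as 1/sqrt(N), which is the scale-adaptive broadening the abstract describes as the grid size shrinks.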

  9. EverVIEW: a visualization platform for hydrologic and Earth science gridded data

    USGS Publications Warehouse

    Romañach, Stephanie S.; McKelvy, James M.; Suir, Kevin J.; Conzelmann, Craig

    2015-01-01

    The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open source libraries to help users to explore spatially-explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.

  10. Advanced batteries for load-leveling - The utility perspective on system integration

    NASA Astrophysics Data System (ADS)

    Delmonaco, J. L.; Lewis, P. A.; Roman, H. T.; Zemkoski, J.

    1982-09-01

    Rechargeable battery systems for use as utility load-leveling units, particularly in urban areas, are discussed. Particular attention is given to advanced lead-acid, zinc-halogen, sodium-sulfur, and lithium-iron sulfide battery systems, noting that battery charging can proceed during light-load hours and requires no on-site fuel. Each battery site will have a master site controller and the related subsystems necessary to ensure grid-quality power output from the batteries and charging when feasible. The actual interconnection with the grid, at the transmission, subtransmission, or distribution level, is envisioned as similar to cogeneration or wind-energy interconnections. Analyses are presented of factors influencing planning economics, impacts on existing grids through solid-state converters, and operational and maintenance considerations. Finally, research directions toward large-scale battery implementation are outlined.

  11. CdS thin film solar cells for terrestrial power

    NASA Technical Reports Server (NTRS)

    Shirland, F. A.

    1975-01-01

    The development of very low-cost, long-lived Cu2S/CdS thin film solar cells for large-scale energy conversion is reported. Excellent evaporated metal grid patterns were obtained using a specially designed aperture mask. Vacuum-evaporated gold and copper grids of 50 lines per inch and 1 micron thickness were electrically adequate for the fine-mesh contacting grid. Real-time rooftop sunlight exposure tests of encapsulated CdS cells showed no loss in output after 5 months. Accelerated life testing of encapsulated cells showed no loss of output power after 6 months of 12-hour dark/12-hour AM1 illumination cycles at 40 C, 60 C, 80 C, and 100 C. However, the cells' basic parameters, such as series and shunt resistance and junction capacitance, did change.

  12. TopoSCALE v.1.0: downscaling gridded climate data in complex terrain

    NASA Astrophysics Data System (ADS)

    Fiddes, J.; Gruber, S.

    2014-02-01

    Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and due to the lack of accurate meteorological forcing data at the site or scale at which they are required. Gridded data products produced by atmospheric models can fill this gap, though often not at a spatial resolution appropriate for driving land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe its variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps, in comparison with unscaled grid-level data and a set of reference methods. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation.
This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to a lack of observations (e.g. remote areas or future periods).
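The core pressure-level interpolation step described above can be sketched simply; a minimal illustration (the column values below are a hypothetical ~6.5 K/km profile, not TopoSCALE output):

```python
import numpy as np

def downscale_to_elevation(level_heights_m, level_values, dem_elev_m):
    """Interpolate a pressure-level variable (e.g. air temperature)
    to the elevation of fine-scale DEM pixels, in the spirit of the
    pressure-level interpolation step described in the abstract.
    level_heights_m must be increasing."""
    return np.interp(dem_elev_m, level_heights_m, level_values)

heights = np.array([0.0, 1000.0, 2000.0, 3000.0])  # level heights (m)
temps = np.array([288.0, 281.5, 275.0, 268.5])     # temperature (K)
t_1500 = downscale_to_elevation(heights, temps, 1500.0)
print(t_1500)  # 278.25
```

Because the variable is taken from the resolved atmospheric column rather than extrapolated from a single surface value, the same call works for humidity or wind at any DEM elevation between the levels.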

  13. Expanding the user base beyond HEP for the Ganga distributed analysis user interface

    NASA Astrophysics Data System (ADS)

    Currie, R.; Egede, U.; Richards, A.; Slater, M.; Williams, M.

    2017-10-01

    This document presents the results of recent developments within the Ganga [1] project to support users from new communities outside of HEP. In particular, we examine the case of users from the Large Synoptic Survey Telescope (LSST) group looking to use resources provided by the UK-based GridPP [2][3] DIRAC [4][5] instance. An example use case is work performed with users from the LSST Virtual Organisation (VO) to distribute the workflow used for galaxy shape identification analyses. This work highlighted some LSST-specific challenges that are well solved by common tools within the HEP community. As a result of this work, the LSST community was able to take advantage of GridPP resources to perform large computing tasks within the UK.

  14. Application of a predator-prey overlap metric to determine the impact of sub-grid scale feeding dynamics on ecosystem productivity

    NASA Astrophysics Data System (ADS)

    Greer, A. T.; Woodson, C. B.

    2016-02-01

    Because of the complexity and extremely large size of marine ecosystems, research attention focuses strongly on modelling the system through space and time to elucidate the processes driving ecosystem state. One of the major weaknesses of current modelling approaches is the reliance on a particular grid cell size (usually tens of km in the horizontal, and water-column mean in the vertical) to capture the relevant processes, even though empirical research has shown that marine systems are highly structured on fine scales, and this structure can persist over relatively long time scales (days to weeks). Fine-scale features can have a strong influence on the predator-prey interactions driving trophic transfer. Here we apply a statistic, the AB ratio, to quantify the increased predator production due to predator-prey overlap on fine scales in a manner that is computationally feasible for larger-scale models. We calculated the AB ratio for predator-prey distributions throughout the scientific literature, as well as for data obtained with a towed plankton imaging system, demonstrating that averaging across a typical model grid cell neglects the fine-scale predator-prey overlap that is an essential component of ecosystem productivity. Organisms from a range of trophic levels and oceanographic regions tended to overlap with their prey in both the horizontal and vertical dimensions. When predator swimming over a diel cycle was incorporated, the amount of production indicated by the AB ratio increased substantially. For the plankton image data, the AB ratio was higher with increasing sampling resolution, especially when prey were highly aggregated. We recommend that ecosystem models incorporate more fine-scale information, both to more accurately capture trophic transfer processes and to capitalize on the increasing sampling resolution and data volume from empirical studies.
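The AB ratio itself is defined in the underlying paper; a generic covariance-type overlap statistic of the same flavour can illustrate why grid-cell averaging loses fine-scale overlap (the statistic's form and all numbers here are illustrative assumptions, not the paper's definition):

```python
def overlap_ratio(pred, prey):
    """<P*Z> / (<P><Z>) over co-located samples of predator and prey
    concentration.  Equals 1 when the two fields are well mixed (the
    grid-cell-mean assumption) and exceeds 1 when predator and prey
    co-aggregate on fine scales, implying enhanced encounter rates."""
    n = len(pred)
    mean_product = sum(p * z for p, z in zip(pred, prey)) / n
    return mean_product / ((sum(pred) / n) * (sum(prey) / n))

# Predator and prey concentrated in the same thin layer
layered = overlap_ratio([0, 0, 4, 4, 0, 0], [0, 0, 3, 3, 0, 0])
# The same biomass smeared uniformly over the "grid cell"
mixed = overlap_ratio([4 / 3.0] * 6, [1.0] * 6)
print(layered, mixed)  # 3.0 1.0
```

The layered case carries three times the encounter potential of the mixed case with identical cell-mean concentrations, which is exactly the signal a water-column-mean grid cell throws away.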

  15. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures, enabling an increase in the three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual runtimes from numerical tests.
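The abstract's runtime-prediction model is not given; a toy surrogate of the usual form for domain-decomposed stencil solves (compute time proportional to local grid volume, halo-exchange cost proportional to subdomain surface area; all coefficients hypothetical) shows the expected shape:

```python
def predicted_runtime(n, p, t_cell, t_face):
    """Toy runtime model for one step of a 3-D domain-decomposed
    phase-field solve on an n^3 grid over p processes: volume work
    plus surface (halo) communication.  Coefficients are illustrative,
    not the paper's fitted values."""
    local_cells = n**3 / p
    halo = 6.0 * local_cells ** (2.0 / 3.0)
    return local_cells * t_cell + halo * t_face

t1 = predicted_runtime(512, 1, 1e-8, 1e-7)
t8 = predicted_runtime(512, 8, 1e-8, 1e-7)
print(t1 / t8)  # speed-up on 8 processes, < 8 due to communication
```

Because the surface-to-volume ratio of each subdomain shrinks as the global grid grows, this kind of model also reproduces the reported behavior that scalability improves with increasing problem size.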

  16. Impact of Variable SST on Simulated Warm Season Precipitation

    NASA Astrophysics Data System (ADS)

    Saleeby, S. M.; Cotton, W. R.

    2007-05-01

    The Colorado State University Regional Atmospheric Modeling System (CSU-RAMS) is being used to examine the variability in monsoon-related warm-season precipitation over Mexico and the United States due to variability in SST. Given recent improvements and increased resolution in satellite-derived SSTs, it is pertinent to examine the sensitivity of the RAMS model to the variety of SST data sources that are available. In particular, we examine this dependence across continental scales over the full warm season, as well as across the regional scale centered on the Gulf of California on the time scales of individual surge events. In this study we performed an ensemble of simulations covering the 2002, 2003, and 2004 warm seasons using the Climatology, Reynolds, AVHRR, and MODIS SSTs. From the seasonal 90-day simulations with 30-km grid spacing, it was found that variations in surface latent heat flux are directly linked to differences in SST. Regions with cooler (warmer) SST have decreased (increased) moisture flux from the ocean, in proportion to the magnitude of the SST difference. Over the eastern Pacific, differences in low-level horizontal moisture flux show a general trend toward reduced fluxes over cooler waters and very little inland impact. Over the Gulf of Mexico, however, there is substantial variability for each dataset comparison, despite only limited variability among the SST data. The causes of this unexpected variability are not straightforward. Precipitation impacts are greatest near the southern coast of Mexico and along the Sierra Madres. Precipitation variability over the CONUS is rather chaotic and is limited to areas affected by the Gulf of Mexico or monsoon convection. Another unexpected outcome is the lack of variability in areas near the northern Gulf of California, where SST and latent heat flux variability is at a maximum.
From the 7-day surge-period simulations at 7-km grid spacing, we found that SST differences on the higher-resolution nested grid reveal fine-scale variability that is otherwise smoothed out or unapparent on the coarser grid. Unlike the coarse grid, the latent heat flux, temperature, and moisture transport differences on the fine grid reveal an inland impact. This is likely due to fine-scale variability in onshore moisture transport and sea-breeze circulations, which may alter monsoonal convection and precipitation. However, only the largest SST differences (spatially and in magnitude) tend to invoke large, coherent responses in moisture flux. The SST variability at high resolution produces relatively large differences in precipitation that are focused along the slopes of the SMO, with a tendency toward greater variability along the western slope adjacent to the coast. The precipitation differences are of fine resolution, with variability of +/- 30 mm (over 5 days) along the length of the SMO. Variability on the fine grid also invokes precipitation changes over AZ/NM that are not resolved on the coarse grid. Vertical cross-sections examined along the Gulf of California during the surge episode revealed variations in the moisture and temperature structure of the surge. The cooler SSTs in the climatological dataset produced the greatest variability compared to the other datasets. The surge produced from climatology SSTs was nearly 5 g/kg drier and up to 4°C cooler compared to surges influenced by the other SST datasets. The overall northward propagation of the surge appeared unaffected by the SSTs.

  17. Distributed and grid computing projects with research focus in human health.

    PubMed

    Diomidous, Marianna; Zikos, Dimitrios

    2012-01-01

    Distributed systems and grid computing systems are used to connect several computers to obtain a higher level of performance in order to solve a problem. During the last decade, projects have used the World Wide Web to aggregate individuals' CPU power for research purposes. This paper presents the existing active large-scale distributed and grid computing projects with a research focus on human health. Eleven active projects with more than 2000 Processing Units (PUs) each were found and are presented. The research focus for most of them is molecular biology, specifically on understanding or predicting protein structure through simulation, comparing proteins, genomic analysis for disease-provoking genes, and drug design. Though not in all cases explicitly stated, common targets include research to find cures for HIV, dengue, Duchenne dystrophy, Parkinson's disease, various types of cancer, and influenza. Other target diseases include malaria, anthrax, and Alzheimer's disease. The need for national initiatives and European collaboration on larger-scale projects is stressed, to raise citizens' awareness and encourage participation, creating a culture of internet volunteering altruism.

  18. Coastal ocean forecasting with an unstructured grid model in the southern Adriatic and northern Ionian seas

    NASA Astrophysics Data System (ADS)

    Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Lecci, Rita; Mossa, Michele

    2017-01-01

    SANIFS (Southern Adriatic Northern Ionian coastal Forecasting System) is a coastal-ocean operational system based on the unstructured-grid finite-element three-dimensional hydrodynamic SHYFEM model, providing short-term forecasts. The operational chain is based on a downscaling approach starting from the large-scale system for the entire Mediterranean Basin (MFS, Mediterranean Forecasting System), which provides initial and boundary condition fields to the nested system. The model is configured to provide hydrodynamics and active tracer forecasts in both the open ocean and the coastal waters of southeastern Italy, using a variable horizontal resolution from the open sea (3-4 km) to coastal areas (50-500 m). Given that the coastal fields are driven by a combination of both local (also known as coastal) and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified in both forecast and simulation mode, first (i) on the large and shelf-coastal scales by comparison with a large-scale CTD (conductivity-temperature-depth) survey in the Gulf of Taranto and then (ii) on the coastal-harbour scale (Mar Grande of Taranto) by comparison with CTD, ADCP (acoustic Doppler current profiler) and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolutions (12.5 and 6.5 km). The SANIFS forecasts at a lead time of 1 day were compared with the MFS forecasts, highlighting that SANIFS is able to retain the large-scale dynamics of MFS. The large-scale dynamics of MFS are correctly propagated to the shelf-coastal scale, improving the forecast accuracy (+17 % for temperature and +6 % for salinity compared to MFS).
Moreover, the added value of SANIFS was assessed on the coastal-harbour scale, which is not covered by the coarse resolution of MFS, where the fields forecasted by SANIFS reproduced the observations well (temperature RMSE equal to 0.11 °C). Furthermore, SANIFS simulations were compared with hourly time series of temperature, sea level and velocity measured on the coastal-harbour scale, showing a good agreement. Simulations in the Gulf of Taranto described a circulation mainly characterized by an anticyclonic gyre with the presence of cyclonic vortexes in shelf-coastal areas. A surface water inflow from the open sea to Mar Grande characterizes the coastal-harbour scale.

  19. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
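Lagrangian averaging along pathlines is typically implemented as an exponential relaxation (the Meneveau-Lund-Cabot form); a minimal sketch of one update step, where the abstract's contribution corresponds to computing the time scale T dynamically from the surrogate Germano-identity-error correlation rather than from a tuned parameter θ:

```python
def lagrangian_average_step(phi_upstream, phi_new, dt, T):
    """Advance a pathline-averaged quantity (e.g. the Germano-identity
    products used by the dynamic Smagorinsky model) by one time step
    dt using exponential relaxation with Lagrangian time scale T."""
    eps = (dt / T) / (1.0 + dt / T)
    return eps * phi_new + (1.0 - eps) * phi_upstream

# With dt == T the average moves halfway to the new sample;
# a long time scale (strong memory) barely updates it.
print(lagrangian_average_step(0.0, 1.0, 1.0, 1.0))    # 0.5
print(lagrangian_average_step(0.0, 1.0, 1.0, 100.0))  # ~0.0099
```

The upstream value would be taken at the previous point on the pathline; the abstract's material-derivative approximation sidesteps explicit particle tracking or multi-linear interpolation when evaluating it on an unstructured grid.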

  20. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; et al.

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
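The disaggregation step described above, allocating a national total across grid cells in proportion to a spatial proxy, can be sketched in a few lines (cell names and numbers are hypothetical):

```python
def disaggregate_total(national_total, proxy_by_cell):
    """Allocate a national emission total for one source type to grid
    cells in proportion to a spatial proxy (e.g. county or point-source
    activity data), preserving the national total by construction."""
    total_proxy = sum(proxy_by_cell.values())
    return {cell: national_total * w / total_proxy
            for cell, w in proxy_by_cell.items()}

gridded = disaggregate_total(100.0, {"cell_a": 1.0, "cell_b": 3.0})
print(gridded)  # {'cell_a': 25.0, 'cell_b': 75.0}
```

In practice each GHGI source type would use its own proxy dataset and its own temporal profile, but every allocation shares this mass-conserving form, which is what keeps the gridded product consistent with the national totals.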

  1. Review of the development of multi-terminal HVDC and DC power grid

    NASA Astrophysics Data System (ADS)

    Chen, Y. X.

    2017-11-01

    Traditional power equipment, power-grid structures, and operation technologies are becoming increasingly inadequate as large-scale renewable energy is connected to the grid. Thus, we must adopt new technologies, new equipment, and new grid structures to satisfy the requirements of future energy patterns. Accordingly, the multiterminal direct current (MTDC) transmission system is receiving increasing attention. This paper starts with a brief description of current developments in MTDC worldwide. An MTDC project that has been placed into practical operation is introduced through the Italy-Corsica-Sardinia three-terminal high-voltage DC (HVDC) project. We then describe the basic characteristics and regulation of multiterminal DC transmission, and several current mainstream control methods are described. In the third chapter, the hardware and software technologies that are key to, or currently restrict, the development of MTDC systems are discussed. This chapter focuses on comparing double-ended HVDC and multiterminal HVDC in most aspects and subsequently elaborates on the key points and difficulties of MTDC development. Finally, this paper summarizes the prospects of a DC power grid. In a few decades, China could build a strong cross-strait AC-DC hybrid power grid.

  2. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  3. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids, allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with a comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 minutes on standard PC hardware. The subgrid technique is therefore a promising framework for performing accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
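The key bookkeeping of the subgrid approach, computing a coarse cell's wet volume directly from its high-resolution bathymetry so the cell can be wet, partially wet, or dry without a threshold, can be sketched as follows (bed levels and pixel areas are hypothetical):

```python
def cell_wet_volume(bed_levels, pixel_areas, water_level):
    """Wet volume of one coarse cell from its subgrid bathymetry, in
    the spirit of the Casulli (2009) method: each subgrid pixel
    contributes max(0, eta - z_b) * area, so the volume-to-level
    relation honours the measured bathymetry exactly."""
    return sum(max(0.0, water_level - zb) * a
               for zb, a in zip(bed_levels, pixel_areas))

# Four subgrid pixels of unit area; at eta = 0.5 m two are dry
vol = cell_wet_volume([0.0, 0.2, 1.0, 2.0], [1.0] * 4, 0.5)
print(vol)  # 0.8: the cell is partially wet
```

Because the volume is a monotone piecewise-linear function of the water level built from the subgrid pixels, mass balance is preserved to the accuracy of the bathymetric data even on a coarse computational grid.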

  4. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimation of ET implies greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties and also neglect lateral transfer of water across heterogeneous landscapes at the global scale. We use a Budyko framework to express ET as a function of P and PET and to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at the global scale shows that the effects of sub-grid heterogeneity are most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM, and quantify whether, and by how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at the global scale.
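The averaging bias described above can be demonstrated with any concave Budyko-type curve; a sketch using the Turc-Pike form (one common choice, with hypothetical patch values exhibiting the inverse P-PET correlation the abstract highlights for steep terrain):

```python
def budyko_et(p, pet, n=2.0):
    """Turc-Pike form of the Budyko curve:
    ET = P * PET / (P**n + PET**n)**(1/n)."""
    return p * pet / (p**n + pet**n) ** (1.0 / n)

# Two equal-area subgrid patches: wet/cool vs dry/warm (mm/yr)
patches = [(1200.0, 400.0), (400.0, 1200.0)]
et_true = sum(budyko_et(p, pet) for p, pet in patches) / 2
et_grid = budyko_et(800.0, 800.0)  # ET of the grid-mean forcing

print(et_grid > et_true)  # True: averaging the forcing inflates ET
```

This is Jensen's inequality at work: because ET is a concave function of the forcing, evaluating the curve at grid-mean P and PET (as an ESM grid cell effectively does) always returns at least the true patch-averaged ET, with the gap largest when P and PET anti-correlate across the landscape.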

  5. Part 2 of a Computational Study of a Drop-Laden Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2004-01-01

    This second of three reports on a computational study of a mixing layer laden with evaporating liquid drops presents the evaluation of Large Eddy Simulation (LES) models. The LES models were evaluated on an existing database that had been generated using Direct Numerical Simulation (DNS). The DNS method and the database are described in the first report of this series, Part 1 of a Computational Study of a Drop-Laden Mixing Layer (NPO-30719), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 59. The LES equations, which are derived by applying a spatial filter to the DNS set, govern the evolution of the larger scales of the flow and can therefore be solved on a coarser grid. Consistent with the reduction in grid points, the DNS drops would be represented by fewer drops, called computational drops in the LES context. The LES equations contain terms that cannot be directly computed on the coarser grid and must instead be modeled. Two types of models are necessary: (1) models for the filtered source terms representing the effects of drops on the filtered flow field and (2) models for the sub-grid scale (SGS) fluxes arising from filtering the convective terms in the DNS equations. All of the filtered-source-term models that were developed were found to overestimate the filtered source terms. For modeling the SGS fluxes, constant-coefficient Smagorinsky, gradient, and scale-similarity models were assessed and calibrated on the DNS database. The Smagorinsky model correlated poorly with the SGS fluxes, whereas the gradient and scale-similarity models were well correlated with the SGS quantities that they represented.
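
    For reference, the constant-coefficient Smagorinsky model mentioned above closes the SGS fluxes through an eddy viscosity ν_t = (C_s Δ)² |S̄| built from the filtered strain rate. A minimal sketch, with an assumed C_s = 0.17 (the coefficient value and names are illustrative, not calibrated against the paper's DNS database):

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, C_s=0.17):
    """Constant-coefficient Smagorinsky eddy viscosity nu_t = (C_s*delta)^2 * |S|.

    grad_u : 3x3 array of filtered velocity gradients dU_i/dx_j
    delta  : filter width
    """
    S = 0.5 * (grad_u + grad_u.T)         # filtered strain-rate tensor S_ij
    S_mag = np.sqrt(2.0 * np.sum(S * S))  # |S| = sqrt(2 S_ij S_ij)
    return (C_s * delta) ** 2 * S_mag

# Simple shear dU/dy = 1 with filter width 0.1
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, delta=0.1))  # (0.17 * 0.1)**2 * 1.0 = 2.89e-4
```

    The modeled SGS stress is then τ_ij = −2 ν_t S̄_ij; the gradient and scale-similarity models instead build the stress from products of filtered fields, which is why they correlate better with the exact SGS terms.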

  6. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of models in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets in a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments.
At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.

  7. Autonomous Energy Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, are extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  8. Grid-cell representations in mental simulation

    PubMed Central

    Bellmund, Jacob LS; Deuker, Lorena; Navarro Schröder, Tobias; Doeller, Christian F

    2016-01-01

    Anticipating the future is a key motif of the brain, possibly supported by mental simulation of upcoming events. Rodent single-cell recordings suggest the ability of spatially tuned cells to represent subsequent locations. Grid-like representations have been observed in the human entorhinal cortex during virtual and imagined navigation. However, it has remained unknown whether grid-like representations contribute to mental simulation in the absence of imagined movement. Participants imagined directions between building locations in a large-scale virtual-reality city while undergoing fMRI, without re-exposure to the environment. Using multi-voxel pattern analysis, we provide evidence for representations of absolute imagined direction at a resolution of 30° in the parahippocampal gyrus, consistent with the head-direction system. Furthermore, we capitalize on the six-fold rotational symmetry of grid-cell firing to demonstrate a 60° periodic pattern-similarity structure in the entorhinal cortex. Our findings imply a role of the entorhinal grid system in mental simulation and future thinking beyond spatial navigation. DOI: http://dx.doi.org/10.7554/eLife.17089.001 PMID:27572056
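
    The six-fold analysis exploits the fact that grid-cell firing repeats every 60°: a hexadirectional regressor such as cos(6(θ − φ)) predicts pattern similarity as a function of imagined direction θ and grid orientation φ. A minimal sketch of that regressor (a simplified illustration of the analysis logic, not the study's multi-voxel pipeline):

```python
import numpy as np

def hexadir_regressor(theta_deg, phase_deg=0.0):
    """Six-fold symmetric regressor cos(6*(theta - phase)), used to test
    60-degree periodic pattern-similarity structure.

    theta_deg : imagined direction(s) in degrees
    phase_deg : putative grid orientation (illustrative; estimated from data
                in practice)
    """
    return np.cos(np.radians(6.0 * (theta_deg - phase_deg)))

angles = np.arange(0, 360, 30)
print(hexadir_regressor(angles))  # peaks at 0, 60, 120, ... degrees
```

    Directions aligned with the grid axes (multiples of 60° from the orientation φ) get regressor value +1, misaligned directions −1; correlating this with measured pattern similarity tests for the 60° periodicity.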

  9. Reference evapotranspiration from coarse-scale and dynamically downscaled data in complex terrain: Sensitivity to interpolation and resolution

    NASA Astrophysics Data System (ADS)

    Strong, Courtenay; Khatri, Krishna B.; Kochanski, Adam K.; Lewis, Clayton S.; Allen, L. Niel

    2017-05-01

    The main objective of this study was to investigate whether dynamically downscaled high-resolution (4-km) climate data from the Weather Research and Forecasting (WRF) model provide physically meaningful additional information for reference evapotranspiration (E) calculation compared to the recently published GridET framework, which interpolates from coarser-scale simulations run at 32-km resolution. The analysis focuses on the complex terrain of Utah in the western United States for the years 1985-2010, and comparisons were made statewide with supplemental analyses specifically for regions with irrigated agriculture. E was calculated from hourly data using the standardized equation and procedures proposed by the American Society of Civil Engineers, and climate inputs from WRF and GridET were debiased relative to the same set of observations. For annual mean values, E from WRF (EW) and E from GridET (EG) both agreed well with E derived from observations (r² = 0.95, bias < 2 mm). Domain-wide, EW and EG were well correlated spatially (r² = 0.89); however, local differences ΔE = EW − EG were as large as +439 mm year⁻¹ (+26%) in some locations, and ΔE averaged +36 mm year⁻¹. After linearly removing the effects of contrasts in solar radiation and wind speed, which are characteristically less reliable under downscaling in complex terrain, approximately half the residual variance was accounted for by contrasts in temperature and humidity between GridET and WRF. These contrasts stemmed from GridET interpolating with an assumed lapse rate of Γ = 6.5 K km⁻¹, whereas WRF produced a thermodynamically driven lapse rate closer to 5 K km⁻¹, as observed in mountainous terrain. The primary conclusions are that observed lapse rates in complex terrain differ markedly from the commonly assumed Γ = 6.5 K km⁻¹, that these lapse rates can be realistically resolved via dynamical downscaling, and that use of a constant Γ produces differences in E of order 10² mm year⁻¹.
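
    The temperature contrast described above follows directly from the lapse-rate assumption: over a 1.5 km elevation gain, Γ = 6.5 versus 5 K km⁻¹ changes the interpolated temperature by 2.25 K. A minimal sketch (the elevations and reference temperature are invented for illustration):

```python
def downscale_temperature(T_ref, z_ref, z_target, lapse=6.5e-3):
    """Interpolate temperature to a target elevation with a fixed lapse rate.

    lapse is in K/m: GridET-style interpolation assumes 6.5 K/km, while WRF
    resolved a rate closer to 5 K/km in this terrain.
    """
    return T_ref - lapse * (z_target - z_ref)

T_ref, z_ref, z_target = 293.15, 1500.0, 3000.0  # K, m, m (illustrative)
t_assumed  = downscale_temperature(T_ref, z_ref, z_target, lapse=6.5e-3)
t_resolved = downscale_temperature(T_ref, z_ref, z_target, lapse=5.0e-3)
print(t_resolved - t_assumed)  # +2.25 K warmer with the resolved lapse rate
```

    A systematically warmer (and correspondingly drier) mountain atmosphere raises the vapor pressure deficit, which propagates into the EW − EG differences reported above.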

  10. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx⁺ and Δz⁺, with strongly anisotropic cells whose first off-wall cell has Δy⁺ ≈ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz⁺, while the error in the turbulent kinetic energy is mostly influenced by Δx⁺. Recommendations for grid resolution requirements are given, together with quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and choices of SGS model.
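
    The gPCE metamodel idea can be sketched as a least-squares fit of tensor-product Legendre polynomials over the normalized (Δx⁺, Δz⁺) parameter space. The sketch below uses synthetic data, not the paper's actual error surfaces; the sensitivity coefficients are invented:

```python
import numpy as np
from numpy.polynomial import legendre

def gpce_fit(x1, x2, y, order=2):
    """Least-squares fit of a tensor-product Legendre (gPCE) metamodel,
    y ~ sum_ij c_ij P_i(x1) P_j(x2), with x1, x2 scaled to [-1, 1]."""
    cols = []
    for i in range(order + 1):
        for j in range(order + 1):
            Pi = legendre.legval(x1, [0] * i + [1])  # Legendre polynomial P_i
            Pj = legendre.legval(x2, [0] * j + [1])
            cols.append(Pi * Pj)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, coef

# Synthetic "error measure" over normalized (dx+, dz+) resolutions
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 50)
x2 = rng.uniform(-1, 1, 50)
y = 0.3 + 0.8 * x2 + 0.2 * x1 * x2  # error more sensitive to dz+, as in the paper
A, coef = gpce_fit(x1, x2, y)
print(np.max(np.abs(A @ coef - y)))  # tiny residual: the truth lies in the basis
```

    Once fitted from a small number of LES runs, such a metamodel can be evaluated everywhere in the parameter range, which is how the campaign avoids simulating every resolution combination.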

  11. Scientific Grid activities and PKI deployment in the Cybermedia Center, Osaka University.

    PubMed

    Akiyama, Toyokazu; Teranishi, Yuuichi; Nozaki, Kazunori; Kato, Seiichi; Shimojo, Shinji; Peltier, Steven T; Lin, Abel; Molina, Tomas; Yang, George; Lee, David; Ellisman, Mark; Naito, Sei; Koike, Atsushi; Matsumoto, Shuichi; Yoshida, Kiyokazu; Mori, Hirotaro

    2005-10-01

    The Cybermedia Center (CMC), Osaka University, is a research institution that offers knowledge and technology resources obtained from advanced research in the areas of large-scale computation, information and communication, multimedia content, and education. Currently, CMC is involved in Japanese national Grid projects such as JGN II (Japan Gigabit Network), NAREGI and BioGrid. Not limited to Japan, CMC also actively takes part in international activities such as PRAGMA. In these projects and international collaborations, CMC has developed a Grid system that allows scientists to perform their analyses by remote-controlling the world's largest ultra-high-voltage electron microscope, located at Osaka University. In another undertaking, CMC has assumed a leadership role in BioGrid by sharing its experience and knowledge of system development for the field of biology. In this paper, we give an overview of the BioGrid project and introduce the progress of the Telescience unit, which collaborates with the Telescience Project led by the National Center for Microscopy and Imaging Research (NCMIR). Furthermore, CMC collaborates with seven computing centers in Japan, NAREGI, and the National Institute of Informatics to deploy a PKI-based authentication infrastructure. The current status of this project and future collaboration with Grid projects are also delineated.

  12. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, application failure probability has become an important indicator of application quality and an important consideration for operators. This paper presents a task-based method for analyzing application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the effectiveness of different backup strategies in reducing application failure probability can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm guarantees the required failure probability, improves network resource utilization, and realizes a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
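
    The core reliability calculation behind such an analysis is simple: if all tasks of an application must succeed and each fails independently, adding backup replicas squares (or higher-powers) each task's failure probability. A minimal sketch (independence and uniform replication are assumptions of this illustration; MDSA itself is not reproduced here):

```python
from math import prod

def app_failure_prob(task_fail_probs, replicas=1):
    """Failure probability of an application whose tasks must all succeed.

    Each task is assumed to fail independently; with k replicas a task
    fails only if all k copies fail (probability p**k).
    """
    p_task = [p ** replicas for p in task_fail_probs]
    return 1.0 - prod(1.0 - p for p in p_task)

tasks = [0.01, 0.02, 0.05]                  # per-task failure probabilities
print(app_failure_prob(tasks))              # no backup:       ~0.078
print(app_failure_prob(tasks, replicas=2))  # one backup each: ~0.0030
```

    Comparing such numbers across backup strategies (which also differ in resource use and completion time) is exactly the multi-objective trade-off the scheduling algorithm negotiates.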

  13. High-Resolution Subtropical Summer Precipitation Derived from Dynamical Downscaling of the NCEP-DOE Reanalysis: How Much Small-Scale Information Is Added by a Regional Model?

    NASA Technical Reports Server (NTRS)

    Lim, Young-Kwon; Stefanova, Lydia B.; Chan, Steven C.; Schubert, Siegfried D.; O'Brien, James J.

    2010-01-01

    This study assesses the regional-scale summer precipitation produced by dynamical downscaling of analyzed large-scale fields. The main goal is to investigate how much smaller-scale precipitation information the regional model adds that the large-scale fields do not resolve. The modeling region covers the southeastern United States (Florida, Georgia, Alabama, South Carolina, and North Carolina), where the summer climate is subtropical in nature, with a heavy influence of regional-scale convection. The coarse-resolution (2.5° latitude/longitude) large-scale atmospheric variables from the National Centers for Environmental Prediction (NCEP)/DOE reanalysis (R2) are downscaled using the NCEP Environmental Climate Prediction Center regional spectral model (RSM) to produce precipitation at 20 km resolution for 16 summer seasons (1990-2005). The RSM produces realistic details in the regional summer precipitation at 20 km resolution. Compared to R2, the RSM-produced monthly precipitation shows better agreement with observations: there is a reduced wet bias and a more realistic spatial pattern of the precipitation climatology compared with the interpolated R2 values. The root mean square errors of the monthly R2 precipitation are reduced over 93% (1,697) of the 1,821 grid points in the five states. The temporal correlation also improves over 92% (1,675) of the grid points, such that the domain-averaged correlation increases from 0.38 (R2) to 0.55 (RSM). The RSM accurately reproduces the first two observed eigenmodes, whereas the R2 product does not properly reproduce the second mode. The spatial patterns for wet versus dry summer years are also successfully simulated by the RSM. On shorter time scales, the RSM resolves heavy rainfall events and their frequency better than R2. Correlation and categorical classification (above/near/below average) for the monthly frequency of heavy precipitation days are also significantly improved by the RSM.

  14. Highly turbulent solutions of the Lagrangian-averaged Navier-Stokes alpha model and their large-eddy-simulation potential.

    PubMed

    Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick

    2007-11-01

    We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l³ scaling; it corresponds to a k⁻¹ scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k⁺¹. This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l³ [E(k) ≈ k⁻¹] scaling is subdominant to k⁺¹ in the energy spectrum, but the l³ scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-α model, or for obtaining a formulation of large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using LANS-α instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large α. These small-scale features, however, do not preclude the LANS-α model from reproducing correctly the intermittency properties of high-Reynolds-number flow.

  15. From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact

    PubMed Central

    Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael

    2005-01-01

    General circulation models (GCMs) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa, where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, which involve the partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data are aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17°N climatic gradient and a 31-year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield.
It is concluded that coupling of GCM outputs with plot level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
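
    The rainfall-aggregation bias has a simple Jensen's-inequality flavor: because runoff losses grow nonlinearly with rainfall intensity, smearing rain over a grid cell understates runoff and overstates the water left for the crop. A toy illustration (the threshold runoff rule is a deliberate simplification of SARRA-H's water balance, and the numbers are invented):

```python
import numpy as np

def available_water(P, runoff_threshold=20.0):
    """Water remaining for infiltration after simple threshold runoff:
    daily rain above the threshold runs off (a convex loss)."""
    runoff = np.maximum(P - runoff_threshold, 0.0)
    return P - runoff

# Daily rainfall on two plots inside one GCM grid cell (mm): one storm, one dry
P_plots = np.array([60.0, 0.0])

true_avail = available_water(P_plots).mean()  # plot-scale truth: (20 + 0)/2
aggregated = available_water(P_plots.mean())  # grid-cell-mean forcing: 30 - 10
print(true_avail, aggregated)  # 10.0 vs 20.0: aggregation overestimates
```

    This is why disaggregating grid-cell means into realistic local rain patterns (as LGO does) removes most of the yield bias.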

  16. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.

  17. Challenges and opportunities of power systems from smart homes to super-grids.

    PubMed

    Kuhn, Philipp; Huber, Matthias; Dorfner, Johannes; Hamacher, Thomas

    2016-01-01

    The world's power systems are facing a structural change including liberalization of markets and integration of renewable energy sources. This paper describes the challenges that lie ahead in this process and points out avenues for overcoming different problems at different scopes, ranging from individual homes to international super-grids. We apply energy system models at those different scopes and find a trade-off between technical and social complexity. Small-scale systems would require technological breakthroughs, especially for storage, but individual agents can and do already start to build and operate such systems. In contrast, large-scale systems could potentially be more efficient from a techno-economic point of view. However, new political frameworks are required that enable long-term cooperation among sovereign entities through mutual trust. Which scope first achieves its breakthrough is not clear yet.

  18. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.

  19. Spatial application of WEPS for estimating wind erosion in the Pacific Northwest

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on cropland and was originally designed to run simulations on a field-scale size. This study extended WEPS to run on multiple fields (grids) independently to cover a large region and to conduct an initial investigation to ass...

  20. Transforming Power Systems Through Global Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-06-01

    Ambitious and integrated policy and regulatory frameworks are crucial to achieve power system transformation. The 21st Century Power Partnership -- a multilateral initiative of the Clean Energy Ministerial -- serves as a platform for public-private collaboration to advance integrated solutions for the large-scale deployment of renewable energy in combination with energy efficiency and grid modernization.

  1. 78 FR 7464 - Large Scale Networking (LSN)-Middleware And Grid Interagency Coordination (MAGIC) Team

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-01

    ... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD... (703) 292-4873. Date/Location: The MAGIC Team meetings are held on the first Wednesday of each month, 2... basis. WebEx participation is available for each meeting. Please reference the MAGIC Team Web site for...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.

    With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is a virtually untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. The control framework is composed of two layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives, and a device layer that controls individual resources so that the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and the Arion modeling capability, are detailed. A large-scale integrated transmission system model coupled with multiple distribution systems yields insights into the interdependencies of controls across a complex system and how they must be tuned, and validates the effectiveness of the proposed control framework.

  3. Effectively Transparent Front Contacts for Optoelectronic Devices

    DOE PAGES

    Saive, Rebecca; Borsuk, Aleca M.; Emmer, Hal S.; ...

    2016-06-10

    Effectively transparent front contacts for optoelectronic devices achieve a measured transparency of up to 99.9% and a measured sheet resistance of 4.8 Ω sq⁻¹. These 3D microscale triangular-cross-section grid fingers redirect incoming photons efficiently to the active semiconductor area and can replace standard grid fingers as well as transparent conductive oxide layers in optoelectronic devices. Optoelectronic devices such as light emitting diodes, photodiodes, and solar cells play an important and expanding role in modern technology. Photovoltaics is one of the largest optoelectronic industry sectors and an ever-increasing component of the world's rapidly growing renewable carbon-free electricity generation infrastructure. In recent years, the photovoltaics field has dramatically expanded owing to the large-scale manufacture of inexpensive crystalline Si and thin film cells and modules. The current record-efficiency (η = 25.6%) Si solar cell utilizes a heterostructure intrinsic thin layer (HIT) design[1] to enable increased open-circuit voltage, while more mass-manufacturable solar cell architectures feature front contacts.[2, 3] Thus improved solar cell front contact designs are important for future large-scale photovoltaics with even higher efficiency.

  4. On the improvement for charging large-scale flexible electrostatic actuators

    NASA Astrophysics Data System (ADS)

    Liao, Hsu-Ching; Chen, Han-Long; Su, Yu-Hao; Chen, Yu-Chi; Ko, Wen-Ching; Liou, Chang-Ho; Wu, Wen-Jong; Lee, Chih-Kung

    2011-04-01

    Recently, the development of flexible electret-based electrostatic actuators has been widely discussed. These devices have been shown to offer high sound quality, energy savings, and a flexible structure, and they can be cut to any shape. However, achieving a uniform charge on the electret diaphragm is one of the most critical processes needed to make the speaker ready for large-scale production. In this paper, corona discharge equipment containing multiple corona probes and a grid bias was set up to inject spatial charges into the electret diaphragm. The multi-corona-probe system was adjusted to achieve a uniform charge distribution on the electret diaphragm; the processing conditions include the distance between the corona probes and the voltages of the corona probes and grid bias. We assembled the flexible electret loudspeakers first and then measured their sound pressure and beam pattern. The uniform charge distribution within the electret diaphragm provided the opportunity to shape the loudspeaker arbitrarily and to tailor the sound distribution to specification. Some potential futuristic applications for this device, such as sound posters, smart clothes, and sound wallpaper, are discussed as well.

  5. Comparison of Grid Nudging and Spectral Nudging Techniques for Dynamical Climate Downscaling within the WRF Model

    NASA Astrophysics Data System (ADS)

    Fan, X.; Chen, L.; Ma, Z.

    2010-12-01

    Climate downscaling has been an active research and application area in the past several decades, focusing on regional climate studies. Dynamical downscaling, in addition to statistical methods, has been widely used as advanced modern numerical weather and regional climate models have emerged. The use of numerical models ensures that a full set of climate variables is generated in the process of downscaling, and these variables are dynamically consistent due to the constraints of physical laws. While generating high-resolution regional climate, the large-scale climate patterns should be retained. To serve this purpose, nudging techniques, including grid analysis nudging and spectral nudging, have been used in different models. There are studies demonstrating the benefits and advantages of each nudging technique; however, the results are sensitive to many factors, such as the nudging coefficients and the amount of information nudged to, and the conclusions are therefore controversial. Complementing a companion work that develops approaches for the quantitative assessment of downscaled climate, in this study the two nudging techniques are subjected to extensive experiments in the Weather Research and Forecasting (WRF) model. Using the same model provides fair comparability, and applying the quantitative assessments provides an objective comparison. Three types of downscaling experiments were performed for a selected month. The first type serves as a baseline, in which large-scale information is communicated through the lateral boundary conditions only; the second uses grid analysis nudging; and the third uses spectral nudging. Emphasis is given to experiments with different nudging coefficients and nudging of different variables in grid analysis nudging, while for spectral nudging we focus on testing the nudging coefficients and different wave numbers on different model levels.
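
    The difference between the two techniques can be illustrated in one dimension: grid nudging relaxes the full model-minus-analysis difference at every grid point, while spectral nudging relaxes only the retained large-scale wavenumbers, leaving the model's fine-scale detail free to develop. A minimal sketch (the nudging coefficient α and the wavenumber cutoff are illustrative; WRF's actual implementation is more involved):

```python
import numpy as np

def grid_nudge(u, u_ref, alpha):
    """Grid (analysis) nudging tendency: relax every point toward the analysis."""
    return -alpha * (u - u_ref)

def spectral_nudge(u, u_ref, alpha, k_max):
    """Spectral nudging tendency: relax only wavenumbers |k| <= k_max."""
    du = np.fft.rfft(u - u_ref)
    k = np.arange(du.size)
    du[k > k_max] = 0.0  # small scales are left untouched
    return -alpha * np.fft.irfft(du, n=u.size)

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_ref = np.sin(x)                    # large-scale driving (analysis) field
u = np.sin(x) + 0.5 * np.sin(8 * x)  # model field with fine-scale detail added

g = grid_nudge(u, u_ref, alpha=0.1)
s = spectral_nudge(u, u_ref, alpha=0.1, k_max=3)
print(np.abs(g).max(), np.abs(s).max())  # spectral tendency ~0: detail preserved
```

    Here the model differs from the analysis only at wavenumber 8, so grid nudging damps that detail while the spectral tendency is essentially zero, which is the behavior the two sets of experiments above are designed to probe.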

  6. Nested mesoscale-to-LES modeling of the atmospheric boundary layer in the presence of under-resolved convective structures

    DOE PAGES

    Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.; ...

    2017-07-06

Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s⁻¹ in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size. Furthermore, based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.

  7. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina R.; Georgiadis, Nicholas J.; DeBonis, James R.

    2015-01-01

The objective of this work is to compare a high-order solver with a low-order solver for performing Large-Eddy Simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.

  9. Nested mesoscale-to-LES modeling of the atmospheric boundary layer in the presence of under-resolved convective structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.

Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s⁻¹ in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size. Furthermore, based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.

  10. Grid and subgrid-scale interactions in viscoelastic turbulent flow and implications for modelling

    NASA Astrophysics Data System (ADS)

    Masoudian, M.; da Silva, C. B.; Pinho, F. T.

    2016-06-01

Using direct numerical simulations of turbulent plane channel flow of homogeneous polymer solutions, described by the Finitely Extensible Nonlinear Elastic-Peterlin (FENE-P) rheological constitutive model, a priori analyses of the filtered momentum and FENE-P constitutive equations are performed. The influence of the polymer additives on the subgrid-scale (SGS) energy is evaluated by comparing the Newtonian and viscoelastic flows, and a severe suppression of SGS stresses and energy is observed in the viscoelastic flow. All the terms of the transport equation of the SGS kinetic energy for FENE-P fluids are analysed, and an approximated version of this equation for use in future large eddy simulation closures is suggested. The terms responsible for kinetic energy transfer between grid-scale (GS) and SGS energy (split into forward/backward energy transfer) are evaluated in the presence of polymers. It is observed that the probability and intensity of forward-scatter events tend to decrease in the presence of polymers.

  11. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  12. The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang

    2018-02-01

The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge to the large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogeneous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering process in FDNS leads to a non-monotonic variation of the particle collision statistics, including the radial distribution function, the radial relative velocity, and the collision kernel. The peak of the radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and analysis of the local flow-structure characteristic variable at particle positions indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shifting has an obvious effect on the odd-order moments of the probability density function of the radial relative velocity, i.e., the skewness, which exhibits a strong correlation with the variance of the radial distribution function in FDNS. As a whole, the radial distribution function, together with the radial relative velocity, can compensate for the SGS effects on the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale is greater than 3.0. However, considerable errors remain for St_k < 3.0.
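
The radial distribution function quantifies how much more likely particle pairs are to be found at separation r than in a uniform suspension. A naive O(N²) NumPy estimator for particles in a periodic cubic box (an illustrative sketch, not the post-processing code of the study):

```python
import numpy as np

def radial_distribution(positions, box, r_edges):
    """Estimate g(r) for N particles in a periodic cubic box of side `box`.

    Counts pair separations in spherical shells and normalizes by the
    pair count expected for a uniform suspension; valid for r < box/2.
    """
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                     # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, _ = np.histogram(r, bins=r_edges)
    shell_vol = 4.0 / 3.0 * np.pi * (r_edges[1:] ** 3 - r_edges[:-1] ** 3)
    expected = shell_vol * (n * (n - 1) / 2) / box ** 3
    return counts / expected

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(400, 3))           # uniformly random particles
g = radial_distribution(pos, box=1.0, r_edges=np.linspace(0.05, 0.45, 9))
```

For uniformly random positions g(r) ≈ 1 at all separations; inertial particles in turbulence cluster preferentially, pushing g(r) above 1 at small r, which is the statistic the filtering in FDNS distorts.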

  13. Numerical simulations of Hurricane Katrina (2005) in the turbulent gray zone

    NASA Astrophysics Data System (ADS)

    Green, Benjamin W.; Zhang, Fuqing

    2015-03-01

Current numerical simulations of tropical cyclones (TCs) use a horizontal grid spacing as small as Δx = 10³ m, with all boundary layer (BL) turbulence parameterized. Eventually, TC simulations can be conducted at Large Eddy Simulation (LES) resolution, which requires Δx to fall in the inertial subrange (often <10² m) to adequately resolve the large, energy-containing eddies. Between the two lies the so-called "terra incognita" because some of the assumptions used by mesoscale models and LES to treat BL turbulence are invalid. This study performs several 4-6 h simulations of Hurricane Katrina (2005) without a BL parameterization at extremely fine Δx [333, 200, and 111 m, hereafter "Large Eddy Permitting (LEP) runs"] and compares with mesoscale simulations with BL parameterizations (Δx = 3 km, 1 km, and 333 m, hereafter "PBL runs"). There are profound differences in the hurricane BL structure between the PBL and LEP runs: the former have a deeper inflow layer and secondary eyewall formation, whereas the latter have a shallow inflow layer without a secondary eyewall. Among the LEP runs, decreased Δx yields weaker subgrid-scale vertical momentum fluxes, but the sum of subgrid-scale and "grid-scale" fluxes remains similar. There is also evidence that the size of the prevalent BL eddies depends upon Δx, suggesting that convergence to true LES has not yet been reached. Nevertheless, the similarities in the storm-scale BL structure among the LEP runs indicate that the net effect of the BL on the rest of the hurricane may be somewhat independent of Δx.

  14. High resolution modeling of reservoir storage and extent dynamics at the continental scale

    NASA Astrophysics Data System (ADS)

    Shin, S.; Pokhrel, Y. N.

    2017-12-01

Over the past decade, significant progress has been made in developing reservoir schemes in large-scale hydrological models to better simulate hydrological fluxes and storages in highly managed river basins. These schemes have been successfully used to study the impact of reservoir operation on global river basins. However, improvements to the existing schemes are needed, especially at the spatial resolutions to be used in hyper-resolution hydrological modeling. In this study, we developed a reservoir routing scheme with explicit representation of reservoir storage and extent at a grid scale of 5 km or less. Instead of setting the reservoir area to a fixed value or diagnosing it from an area-storage equation, the approach commonly used in existing reservoir schemes, we explicitly simulate the inundated storage and area for all grid cells that fall within the reservoir extent. This approach enables a better simulation of river-floodplain-reservoir storage by considering both natural flooding and man-made reservoir storage. Results for the seasonal dynamics of reservoir storage, river discharge downstream of dams, and reservoir inundation extent are evaluated against various datasets from ground observations and satellite measurements. The new model captures the dynamics of these variables with good accuracy for most of the large reservoirs in the western United States. It is expected that incorporating the newly developed reservoir scheme into large-scale land surface models (LSMs) will lead to improved simulation of river flow and terrestrial water storage in highly managed river basins.
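
The explicit storage-extent idea can be illustrated by filling grid cells from the lowest bed elevation upward until the storage volume is exhausted; the resulting water level over the wet cells gives the inundated area directly, rather than diagnosing it from an area-storage curve. A toy sketch with uniform cell area (function and variable names are illustrative, not the paper's scheme):

```python
import numpy as np

def reservoir_extent(bed_elev, cell_area, volume):
    """Water level and inundated area for a given storage volume.

    Fills cells from the lowest bed elevation upward: for each candidate
    set of wet cells, the level satisfies
        sum over wet cells of (level - z_j) * cell_area == volume,
    and the first level not overtopping the next-higher cell is accepted.
    """
    z = np.sort(np.asarray(bed_elev, dtype=float))
    for i in range(len(z)):
        level = volume / (cell_area * (i + 1)) + z[: i + 1].mean()
        if i == len(z) - 1 or level <= z[i + 1]:
            return level, cell_area * (i + 1)

# Four cells with bed elevations 0..3 m and 1 m^2 area; store 3 m^3 of water.
level, area = reservoir_extent([0.0, 1.0, 2.0, 3.0], cell_area=1.0, volume=3.0)
```

Here the two lowest cells are inundated and the surface sits at 2.0 m, since (2.0 − 0.0) + (2.0 − 1.0) = 3.0 m³ over 1 m² cells; a real scheme would add per-cell areas and the river-floodplain coupling described above.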

  15. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  16. Large-scale Density Structures in Magneto-rotational Disk Turbulence

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew; Johansen, A.; Klahr, H.

    2009-01-01

Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near-geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles), with important implications for planet formation. Resolved disk images at mm wavelengths (e.g., from ALMA) will verify or constrain the existence of these structures.

  17. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming (EP), (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with SVMs for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary experiments with EP-derived support vector machines designed to operate on distributed systems have produced accurate classification results. In addition, the distributed training architectures are 50 times faster than standard iterative training methods.

  18. Impact of cloud horizontal inhomogeneity and directional sampling on the retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-11-01

The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases into the CDR (~1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (>15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.

  19. Managing large-scale workflow execution from resource provisioning to provenance tracking: The CyberShake example

    USGS Publications Warehouse

    Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.

    2006-01-01

This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis, which included over 250,000 jobs, using resources available through SCEC and the TeraGrid. © 2006 IEEE.

  20. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed-scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared with experiments and with statistical information extracted from the computer-generated data.
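
The Leonard stress arises because filtering does not commute with products: L = bar(uu) − bar(u)·bar(u). A 1-D periodic sketch using a spectral Gaussian filter (the exp(−k²Δ²/24) transfer function is one common convention, not necessarily the paper's; names are illustrative):

```python
import numpy as np

def gaussian_filter_1d(f, delta, dx):
    """Gaussian filter of width `delta` applied spectrally on a periodic 1-D grid."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.fft.ifft(np.fft.fft(f) * np.exp(-(k * delta) ** 2 / 24.0)).real

def leonard_stress(u, delta, dx):
    """Computed-scale (Leonard) stress: bar(u*u) - bar(u)*bar(u).

    This is the extra second-order term that appears when the Gaussian
    filter is applied to the nonlinear advection term.
    """
    return gaussian_filter_1d(u * u, delta, dx) - gaussian_filter_1d(u, delta, dx) ** 2

# Large-scale mode plus a small-scale motion; the small scales generate
# a nonzero Leonard stress once the field is filtered.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.3 * np.sin(20.0 * x)
L = leonard_stress(u, delta=8.0 * dx, dx=dx)
```

For a constant velocity field the Leonard stress vanishes identically, which is a quick sanity check on any filter implementation.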

  1. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

Cloud computing is becoming increasingly relevant, as it will enable the companies spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services are provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  2. Sub-grid-scale description of turbulent magnetic reconnection in magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widmer, F., E-mail: widmer@mps.mpg.de; Institut für Astrophysik, Georg-August-Universität, Friedrich-Hund-Platz 1, 37077 Göttingen; Büchner, J.

Magnetic reconnection requires, at least locally, a non-ideal plasma response. In collisionless space and astrophysical plasmas, turbulence could transport energy from large to small scales where binary particle collisions are rare. We have investigated the influence of small-scale magnetohydrodynamic (MHD) turbulence on the reconnection rate in the framework of a compressible MHD approach including sub-grid-scale (SGS) turbulence. For this sake, we considered Harris-type and force-free current sheets with finite guide magnetic fields directed out of the reconnection plane. The goal is to find out whether MHD turbulence unresolved by conventional simulations can enhance the reconnection process in high-Reynolds-number astrophysical plasmas. Together with the MHD equations, we solve evolution equations for the SGS energy and cross-helicity due to turbulence according to a Reynolds-averaged turbulence model. The SGS turbulence is self-generated and self-sustained through the inhomogeneities of the mean fields. In this way, the feedback of the unresolved turbulence into the MHD reconnection process is taken into account. It is shown that the turbulence controls the regimes of reconnection through its characteristic timescale τ_t. The dependence on resistivity was investigated for large-Reynolds-number plasmas for Harris-type as well as force-free current sheets with guide field. We found that magnetic reconnection depends on the relation between the molecular and the apparent effective turbulent resistivity, and that the turbulence timescale τ_t decides whether fast reconnection takes place or whether the stored energy is just diffused away into small-scale turbulence. If the amount of energy transferred from large to small scales is enhanced, fast reconnection can take place. Energy spectra allowed us to characterize the different regimes of reconnection. It was found that reconnection is even faster for larger Reynolds numbers controlled by the molecular resistivity η, as long as the initial level of turbulence is not too large. This implies that turbulence plays an important role in reaching the limit of fast reconnection in large-Reynolds-number plasmas, even for smaller amounts of turbulence.

  3. Trends and natural variability of North American spring onset as evaluated by a new gridded dataset of spring indices

    USGS Publications Warehouse

    Ault, Toby R.; Schwartz, Mark D.; Zurita-Milla, Raul; Weltzin, Jake F.; Betancourt, Julio L.

    2015-01-01

Climate change is expected to modify the timing of seasonal transitions this century, impacting wildlife migrations, ecosystem function, and agricultural activity. Tracking seasonal transitions in a consistent manner across space and through time requires indices that can be used for monitoring and managing biophysical and ecological systems during the coming decades. Here a new gridded dataset of spring indices is described and used to understand interannual, decadal, and secular trends across the coterminous United States. This dataset is derived from daily interpolated meteorological data, and the results are compared with historical station data to ensure the trends and variations are robust. Regional trends in the first leaf index range from −0.8 to −1.6 days decade⁻¹, while first bloom index trends are between −0.4 and −1.2 days decade⁻¹ for most regions. However, these trends are modulated by interannual to multidecadal variations, which are substantial throughout the regions considered here. These findings emphasize the important role large-scale climate modes of variability play in modulating spring onset on interannual to multidecadal time scales. Finally, there is some potential for successful subseasonal forecasts of spring onset, as indices from most regions are significantly correlated with antecedent large-scale modes of variability.

  4. Research on Grid Size Suitability of Gridded Population Distribution in Urban Area: A Case Study in Urban Area of Xuanzhou District, China.

    PubMed

    Dong, Nan; Yang, Xiaohuan; Cai, Hongyan; Xu, Fengjiao

    2017-01-01

Research on grid size suitability is important for improving the accuracy of gridded population distributions and helps reveal the actual spatial distribution of population; however, little research has been done in this area to date. Many well-modeled gridded population datasets are built at a single grid scale, and if the grid cell size is not appropriate, spatial information is lost or data become redundant. Therefore, in order to capture the desired spatial variation of population within the area of interest, it is necessary to study grid size suitability. This study summarized three expression levels for analyzing grid size suitability: the location expression level, the numeric information expression level, and the spatial relationship expression level. It also elaborated the reasons for choosing five indexes to explore expression suitability: the consistency measure, the shape index rate, the standard deviation of population density, the patch diversity index, and the average local variance. The suitable grid size was determined by constructing grid size-indicator value curves and a suitable grid size scheme. Results revealed that all three expression levels are satisfied at the 10 m grid scale, and the population distribution raster data with 10 m grid size provide excellent accuracy without information loss. The 10 m grid size is therefore recommended as the appropriate scale for generating a high-quality gridded population distribution in our study area. This preliminary study indicates that the five indexes are mutually consistent and are reasonable and effective for assessing grid size suitability. We also suggest choosing these five indexes across the three expression levels when carrying out research on grid size suitability of gridded population distributions.
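
One of the five indexes, the standard deviation of population density, can be traced across candidate grid sizes by block-aggregating a fine raster; the resulting grid size-indicator value curve shows how spatial variation washes out as cells coarsen. A synthetic NumPy sketch (the Poisson data and the 10 m base cell are illustrative, not the study's census data):

```python
import numpy as np

def aggregate(pop, k):
    """Sum a fine population raster into k x k blocks (coarser grid cells)."""
    n = pop.shape[0] // k * k
    p = pop[:n, :n]
    return p.reshape(n // k, k, n // k, k).sum(axis=(1, 3))

def density_std(pop, cell_size, k):
    """Standard deviation of population density at grid size k * cell_size,
    one point on the grid size-indicator value curve."""
    area = (k * cell_size) ** 2
    return (aggregate(pop, k) / area).std()

# Synthetic 120 x 120 raster of 10 m population counts.
rng = np.random.default_rng(1)
fine = rng.poisson(5.0, size=(120, 120)).astype(float)
curve = {k: density_std(fine, cell_size=10.0, k=k) for k in (1, 2, 4, 8)}
```

For spatially uncorrelated counts the index falls monotonically with cell size; on real population data the curve's shape (e.g., where it stops dropping steeply) is what signals a suitable grid size.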

  5. Research on Grid Size Suitability of Gridded Population Distribution in Urban Area: A Case Study in Urban Area of Xuanzhou District, China

    PubMed Central

    Dong, Nan; Yang, Xiaohuan; Cai, Hongyan; Xu, Fengjiao

    2017-01-01

Research on grid size suitability is important for improving the accuracy of gridded population distributions and helps reveal the actual spatial distribution of population; however, little research has been done in this area to date. Many well-modeled gridded population datasets are built at a single grid scale, and if the grid cell size is not appropriate, spatial information is lost or data become redundant. Therefore, in order to capture the desired spatial variation of population within the area of interest, it is necessary to study grid size suitability. This study summarized three expression levels for analyzing grid size suitability: the location expression level, the numeric information expression level, and the spatial relationship expression level. It also elaborated the reasons for choosing five indexes to explore expression suitability: the consistency measure, the shape index rate, the standard deviation of population density, the patch diversity index, and the average local variance. The suitable grid size was determined by constructing grid size-indicator value curves and a suitable grid size scheme. Results revealed that all three expression levels are satisfied at the 10 m grid scale, and the population distribution raster data with 10 m grid size provide excellent accuracy without information loss. The 10 m grid size is therefore recommended as the appropriate scale for generating a high-quality gridded population distribution in our study area. This preliminary study indicates that the five indexes are mutually consistent and are reasonable and effective for assessing grid size suitability. We also suggest choosing these five indexes across the three expression levels when carrying out research on grid size suitability of gridded population distributions. PMID:28122050

  6. Simulation of Deep Convective Clouds with the Dynamic Reconstruction Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Shi, X.; Chow, F. K.; Street, R. L.; Bryan, G. H.

    2017-12-01

The terra incognita (TI), or gray zone, in simulations is the range of grid spacings comparable to the diameter of the most energetic eddies. Grid spacing in mesoscale simulations is much larger than these eddies, and turbulence is parameterized with one-dimensional vertical-mixing schemes. Large eddy simulations (LES) use grid spacings much smaller than the energetic eddies, together with three-dimensional turbulence models. Studies of convective weather use convection-permitting resolutions, which fall in the TI; because neither mesoscale turbulence models nor LES closures are designed for the TI, turbulence parameterization in this regime needs to be examined. Here, the effects of sub-filter scale (SFS) closure schemes on the simulation of deep tropical convection are evaluated by comparing three closures: the Smagorinsky model, a Deardorff-type TKE model, and the dynamic reconstruction model (DRM), which partitions SFS turbulence into resolvable sub-filter scales (RSFS) and unresolved sub-grid scales (SGS). The RSFS are reconstructed, and the SGS are modeled with a dynamic eddy viscosity/diffusivity model. The RSFS stresses/fluxes allow backscatter of energy/variance via counter-gradient stresses/fluxes. In high-resolution (100 m) simulations of tropical convection, the choice of turbulence model did not lead to significant differences in cloud water/ice distribution, precipitation flux, or vertical fluxes of momentum and heat. When the model resolution is coarsened, however, the Smagorinsky and TKE models overestimate cloud ice and produce a large-amplitude downward heat flux in the middle troposphere that is not found in the high-resolution simulations. This error results from unrealistically large eddy diffusivities: in the coarse-resolution simulations, the eddy diffusivity of the DRM is on the order of 1, while that of the Smagorinsky and TKE models is on the order of 100. Splitting the eddy viscosity/diffusivity into vertical and horizontal components, using different length scales and strain-rate components, reduces the errors but does not completely remedy the problem. In contrast, the coarse-resolution simulations using the DRM produce results that are more consistent with the high-resolution results, suggesting that the DRM is a more appropriate turbulence model for simulating convection in the TI.
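The eddy-diffusivity contrast described above follows directly from how Smagorinsky-type closures scale with the grid. The sketch below is a generic illustration, not the study's configuration: with a fixed resolved strain-rate magnitude (an assumed value), the eddy viscosity ν_t = (C_s Δ)² |S| grows with the square of the filter width Δ, so coarsening from 100 m to 1 km inflates it by a factor of 100.

```python
def smagorinsky_viscosity(strain_rate, delta, cs=0.17):
    """Classic Smagorinsky eddy viscosity: nu_t = (Cs * Delta)^2 * |S|.
    strain_rate: magnitude of the resolved strain-rate tensor |S| (1/s)
    delta: filter/grid length scale (m); Cs value is illustrative."""
    return (cs * delta) ** 2 * strain_rate

# Assumed strain magnitude; coarsening the grid by 10x inflates nu_t by 100x,
# matching the order-of-magnitude gap quoted in the abstract.
S = 0.01  # 1/s (illustrative)
for delta in (100.0, 1000.0):
    print(delta, smagorinsky_viscosity(S, delta))
```

This quadratic dependence on Δ is why a closure that behaves well at LES resolutions can over-diffuse badly once the same form is applied at gray-zone spacings.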

  7. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    NASA Astrophysics Data System (ADS)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

Fractal square grids offer a unique solution for passive flow control as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects turbulence production and decay in the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with an increasing number of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.

  8. RACORO Continental Boundary Layer Cloud Investigations: 1. Case Study Development and Ensemble Large-Scale Forcings

    NASA Technical Reports Server (NTRS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; hide

    2015-01-01

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which is lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset and important tight gradients and high-resolution transients. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  9. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    NASA Astrophysics Data System (ADS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which is lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset and important tight gradients and high-resolution transients. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  10. Design of energy storage system to improve inertial response for large scale PV generation

    DOE PAGES

    Wang, Xiaoyu; Yue, Meng

    2016-07-01

With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to use battery energy storage systems (BESS) as the remediation approach to accommodate the degrading inertial response when high penetrations of PV generation are integrated into the existing power grid. A sample power system was adopted and simulated using PSS/E software. Here, impacts of different penetration levels of PV generation on the system inertial response were investigated and then BESS was incorporated to improve the frequency dynamics.
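The inertial-response problem described above is often reasoned about with the swing equation, which relates a power imbalance to the initial rate of change of frequency (ROCOF). The sketch below is a textbook-style illustration with assumed numbers, not the paper's PSS/E model: lower aggregate inertia (PV displacing synchronous machines) steepens the frequency decline, and fast BESS injection offsets part of the imbalance.

```python
def rocof(power_imbalance_pu, inertia_h, f0=60.0):
    """Initial rate of change of frequency (Hz/s) after a sudden power
    imbalance, from the swing equation: df/dt = -dP * f0 / (2 * H).
    power_imbalance_pu: lost generation in per unit of system capacity
    inertia_h: aggregate inertia constant H in seconds (values illustrative)"""
    return -power_imbalance_pu * f0 / (2.0 * inertia_h)

loss = 0.1                                     # 10% generation loss (assumed)
base = rocof(loss, inertia_h=5.0)              # conventional system
high_pv = rocof(loss, inertia_h=2.5)           # PV has displaced half the inertia
with_bess = rocof(loss - 0.05, inertia_h=2.5)  # BESS instantly injects 5% (assumed)
```

In this toy setting, halving the inertia doubles the initial frequency decline, and a fast BESS injection covering half the lost power restores the original ROCOF, which is the qualitative effect the paper investigates.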

  11. Organic electronics for high-resolution electrocorticography of the human brain.

    PubMed

    Khodagholy, Dion; Gelinas, Jennifer N; Zhao, Zifang; Yeh, Malcolm; Long, Michael; Greenlee, Jeremy D; Doyle, Werner; Devinsky, Orrin; Buzsáki, György

    2016-11-01

    Localizing neuronal patterns that generate pathological brain signals may assist with tissue resection and intervention strategies in patients with neurological diseases. Precise localization requires high spatiotemporal recording from populations of neurons while minimizing invasiveness and adverse events. We describe a large-scale, high-density, organic material-based, conformable neural interface device ("NeuroGrid") capable of simultaneously recording local field potentials (LFPs) and action potentials from the cortical surface. We demonstrate the feasibility and safety of intraoperative recording with NeuroGrids in anesthetized and awake subjects. Highly localized and propagating physiological and pathological LFP patterns were recorded, and correlated neural firing provided evidence about their local generation. Application of NeuroGrids to brain disorders, such as epilepsy, may improve diagnostic precision and therapeutic outcomes while reducing complications associated with invasive electrodes conventionally used to acquire high-resolution and spiking data.

  12. Data grid: a distributed solution to PACS

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

In a hospital, various kinds of medical images acquired from different modalities are used and stored in different departments, and each modality usually has several attached workstations to display or process images. For better diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious: besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to developing a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for whole-hospital applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirror, cluster, and grid.

  13. Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow

    NASA Technical Reports Server (NTRS)

    Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.

    1977-01-01

An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation with reasonable accuracy in a reasonable amount of computer time, a third-order time-integration scheme was developed that runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space differencing is used.
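The abstract does not give the scheme itself, so as a stand-in the sketch below uses a standard third-order Adams-Bashforth step, which shares the key property described: after startup it needs only one new derivative evaluation per step, reusing stored values from earlier steps, so it runs at roughly the cost of a first-order scheme.

```python
import numpy as np

def ab3_integrate(f, u0, t_end, n_steps):
    """Integrate du/dt = f(u) with third-order Adams-Bashforth (AB3).
    Like the scheme described in the abstract, each step after startup
    costs a single new evaluation of f, reusing stored derivative history."""
    dt = t_end / n_steps
    u = u0
    hist = []  # stored derivative values f(u_n) from recent steps
    for _ in range(n_steps):
        fn = f(u)
        hist.append(fn)
        if len(hist) < 3:
            # second-order (Heun) startup steps until the history is filled
            u_pred = u + dt * fn
            u = u + 0.5 * dt * (fn + f(u_pred))
        else:
            u = u + dt * (23 * hist[-1] - 16 * hist[-2] + 5 * hist[-3]) / 12.0
            hist.pop(0)
    return u

# Accuracy check on du/dt = -u with exact solution e^{-t}
approx = ab3_integrate(lambda u: -u, 1.0, 1.0, 200)
error = abs(approx - np.exp(-1.0))
```

AB3 is only an illustrative analogue; the paper's actual scheme stores the velocity field and its first time derivative rather than a multi-step derivative history.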

  14. Flow topologies and turbulence scales in a jet-in-cross-flow

    DOE PAGES

    Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem

    2015-04-03

This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of [Su and Mungal JFM 2004]. The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time-step and well controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near-field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful to design optimized grids and improve numerical predictions in similar configurations.

  15. Evaluating GCM land surface hydrology parameterizations by computing river discharges using a runoff routing model: Application to the Mississippi basin

    NASA Technical Reports Server (NTRS)

    Liston, G. E.; Sud, Y. C.; Wood, E. F.

    1994-01-01

To relate general circulation model (GCM) hydrologic output to readily available river hydrographic data, a runoff routing scheme that routes gridded runoffs through regional- or continental-scale river drainage basins is developed. By following the basin overland flow paths, the routing model generates river discharge hydrographs that can be compared to observed river discharges, thus allowing an analysis of the GCM representation of monthly, seasonal, and annual water balances over large regions. The runoff routing model consists of two linear reservoirs, a surface reservoir and a groundwater reservoir, which store and transport water. The water transport mechanisms operating within these two reservoirs are differentiated by their time scales; the groundwater reservoir transports water much more slowly than the surface reservoir. The groundwater reservoir feeds the corresponding surface store, and the surface stores are connected via the river network. The routing model is implemented over the Global Energy and Water Cycle Experiment (GEWEX) Continental-Scale International Project Mississippi River basin on a rectangular grid of 2 deg X 2.5 deg. Two land surface hydrology parameterizations provide the gridded runoff data required to run the runoff routing scheme: the variable infiltration capacity model, and the soil moisture component of the simple biosphere model. These parameterizations are driven with 4 deg X 5 deg gridded climatological potential evapotranspiration and 1979 First Global Atmospheric Research Program (GARP) Global Experiment precipitation. These investigations have quantified the importance of physically realistic soil moisture holding capacities, evaporation parameters, and runoff mechanisms in land surface hydrology formulations.
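The two-reservoir idea described above can be sketched with a toy single-cell version; the time constants and split fraction below are assumptions for illustration, not the model's calibrated settings. Each time step, runoff is split between a fast linear surface store and a slow linear groundwater store whose outflow feeds the surface store, producing a peaked hydrograph with a long recession tail.

```python
def route_runoff(runoff_series, k_surface=2.0, k_ground=50.0, frac_ground=0.3):
    """Toy single-cell version of the two-linear-reservoir routing idea.
    Each reservoir is linear (outflow = storage / k); the slow groundwater
    store drains into the fast surface store, which feeds the river.
    Time constants (in time steps) and the split fraction are illustrative."""
    s_surf = s_gw = 0.0
    discharge = []
    for r in runoff_series:
        s_gw += frac_ground * r
        q_gw = s_gw / k_ground          # slow release into the surface store
        s_gw -= q_gw
        s_surf += (1.0 - frac_ground) * r + q_gw
        q = s_surf / k_surface          # fast release to the river
        s_surf -= q
        discharge.append(q)
    return discharge

# A single runoff pulse yields an attenuated hydrograph whose long tail
# is sustained by the slowly draining groundwater reservoir.
hydrograph = route_runoff([10.0] + [0.0] * 99)
```

In the full model, the surface stores of neighboring cells would additionally be chained along the river network; the single-cell sketch shows only the time-scale separation between the two reservoirs.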

  16. The collaborative historical African rainfall model: description and evaluation

    USGS Publications Warehouse

    Funk, Christopher C.; Michaelsen, Joel C.; Verdin, James P.; Artan, Guleid A.; Husak, Gregory; Senay, Gabriel B.; Gadain, Hussein; Magadazire, Tamuka

    2003-01-01

In Africa the variability of rainfall in space and time is high, and the general availability of historical gauge data is low. This makes many food security and hydrologic preparedness activities difficult. In order to help overcome this limitation, we have created the Collaborative Historical African Rainfall Model (CHARM). CHARM combines three sources of information: climatologically aided interpolated (CAI) rainfall grids (monthly/0.5°), National Centers for Environmental Prediction reanalysis precipitation fields (daily/1.875°) and orographic enhancement estimates (daily/0.1°). The first set of weights scales the daily reanalysis precipitation fields to match the gridded CAI monthly rainfall time series. This produces data with a daily/0.5° resolution. A diagnostic model of orographic precipitation, VDELB—based on the dot-product of the surface wind V and terrain gradient (DEL) and atmospheric buoyancy B—is then used to estimate the precipitation enhancement produced by complex terrain. Although the data are produced on 0.1° grids to facilitate integration with satellite-based rainfall estimates, the ‘true’ resolution of the data will be less than this value, and varies with station density, topography, and precipitation dynamics. The CHARM is best suited, therefore, to applications that integrate rainfall or rainfall-driven model results over large regions. The CHARM time series is compared with three independent datasets: dekadal satellite-based rainfall estimates across the continent, dekadal interpolated gauge data in Mali, and daily interpolated gauge data in western Kenya. These comparisons suggest reasonable accuracies (standard errors of about half a standard deviation) when data are aggregated to regional scales, even at daily time steps. Thus constrained, numerical weather prediction precipitation fields do a reasonable job of representing large-scale diurnal variations.
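The first CHARM weighting step, scaling daily reanalysis fields so they match the CAI monthly totals, can be sketched for a single grid cell. The function below is an illustrative reading of that step, not CHARM's actual code: it rescales the daily sequence multiplicatively so the monthly sum matches while the day-to-day pattern is preserved.

```python
import numpy as np

def scale_daily_to_monthly(daily, monthly_total):
    """Rescale one grid cell's daily reanalysis precipitation so its sum
    matches the CAI monthly total, preserving the daily temporal pattern.
    (Illustrative sketch of the first CHARM weighting step.)"""
    daily = np.asarray(daily, dtype=float)
    s = daily.sum()
    if s == 0.0:
        # No reanalysis rain that month: spread the monthly total evenly.
        return np.full_like(daily, monthly_total / daily.size)
    return daily * (monthly_total / s)

days = np.array([0.0, 5.0, 0.0, 15.0])   # toy 4-"day" month (mm/day)
scaled = scale_daily_to_monthly(days, monthly_total=60.0)
```

Because the scaling is a single multiplicative weight per month, the relative sizes of wet days (here 15 mm vs. 5 mm, a 3:1 ratio) are unchanged; only the monthly amount is corrected toward the gauge-constrained CAI grid.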

  17. A stand-alone tree demography and landscape structure module for Earth system models: integration with global forest data

    NASA Astrophysics Data System (ADS)

    Haverd, V.; Smith, B.; Nieradzik, L. P.; Briggs, P. R.

    2014-02-01

Poorly constrained rates of biomass turnover are a key limitation of Earth system models (ESM). In light of this, we recently proposed a new approach encoded in a model called Populations-Order-Physiology (POP), for the simulation of woody ecosystem stand dynamics, demography and disturbance-mediated heterogeneity. POP is suitable for continental to global applications and designed for coupling to the terrestrial ecosystem component of any ESM. POP bridges the gap between first-generation Dynamic Vegetation Models (DVMs) with simple large-area parameterisations of woody biomass (typically used in current ESMs) and complex second-generation DVMs that explicitly simulate demographic processes and landscape heterogeneity of forests. The key simplification in the POP approach, compared with second-generation DVMs, is to compute physiological processes such as assimilation at grid scale (with CABLE or a similar land surface model), but to partition the grid-scale biomass increment among age classes defined at sub-grid scale, each subject to its own dynamics. POP was successfully demonstrated along a savanna transect in northern Australia, replicating the effects of strong rainfall and fire disturbance gradients on observed stand productivity and structure. Here, we extend the application of POP to a range of forest types around the globe, employing paired observations of stem biomass and density from forest inventory data to calibrate model parameters governing stand demography and biomass evolution. The calibrated POP model is then coupled to the CABLE land surface model and the combined model (CABLE-POP) is evaluated against leaf-stem allometry observations from forest stands ranging in age from 3 to 200 yr. Results indicate that simulated biomass pools conform well with observed allometry. We conclude that POP represents a preferable alternative to large-area parameterisations of woody biomass turnover, typically used in current ESMs.

  18. Factorial inferential grid grouping and representativeness analysis for a systematic selection of representative grids

    NASA Astrophysics Data System (ADS)

    Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Yao, Yao

    2017-08-01

    A factorial inferential grid grouping and representativeness analysis (FIGGRA) approach is developed to achieve a systematic selection of representative grids in large-scale climate change impact assessment and adaptation (LSCCIAA) studies and other fields of Earth and space sciences. FIGGRA is applied to representative-grid selection for temperature (Tas) and precipitation (Pr) over the Loess Plateau (LP) to verify methodological effectiveness. FIGGRA is effective at and outperforms existing grid-selection approaches (e.g., self-organizing maps) in multiple aspects such as clustering similar grids, differentiating dissimilar grids, and identifying representative grids for both Tas and Pr over LP. In comparison with Pr, the lower spatial heterogeneity and higher spatial discontinuity of Tas over LP lead to higher within-group similarity, lower between-group dissimilarity, lower grid grouping effectiveness, and higher grid representativeness; the lower interannual variability of the spatial distributions of Tas results in lower impacts of the interannual variability on the effectiveness of FIGGRA. For LP, the spatial climatic heterogeneity is the highest in January for Pr and in October for Tas; it decreases from spring, autumn, summer to winter for Tas and from summer, spring, autumn to winter for Pr. Two parameters, i.e., the statistical significance level (α) and the minimum number of grids in every climate zone (Nmin), and their joint effects are significant for the effectiveness of FIGGRA; normalization of a nonnormal climate-variable distribution is helpful for the effectiveness only for Pr. For FIGGRA-based LSCCIAA studies, a low value of Nmin is recommended for both Pr and Tas, and a high and medium value of α for Pr and Tas, respectively.

  19. Heterogeneous collaborative sensor network for electrical management of an automated house with PV energy.

    PubMed

Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Alvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefanía; Masa-Bote, Daniel; Jiménez-Leube, Javier

    2011-01-01

    In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid" which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox" equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.

  20. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumkehr, Andrew; Hilton, Tim; Whelan, Mary

Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that is over three decades old and are not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980–2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y⁻¹ (range of 223–586 Gg S y⁻¹), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Lastly, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  1. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    DOE PAGES

    Zumkehr, Andrew; Hilton, Tim; Whelan, Mary; ...

    2018-03-31

Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that is over three decades old and are not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980–2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y⁻¹ (range of 223–586 Gg S y⁻¹), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Lastly, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  2. Global gridded anthropogenic emissions inventory of carbonyl sulfide

    NASA Astrophysics Data System (ADS)

    Zumkehr, Andrew; Hilton, Tim W.; Whelan, Mary; Smith, Steve; Kuai, Le; Worden, John; Campbell, J. Elliott

    2018-06-01

Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that is over three decades old and are not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980-2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y⁻¹ (range of 223-586 Gg S y⁻¹), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990s and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Finally, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.

  3. Grid-enabled mammographic auditing and training system

    NASA Astrophysics Data System (ADS)

    Yap, M. H.; Gale, A. G.

    2008-03-01

Effective use of new technologies to support healthcare initiatives is important, and current research is moving towards implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI: Generic Infrastructures for Medical Informatics), which is concerned with the development of a secure IT infrastructure to support very widespread medical research across the country, is underway. There are some 109 UK breast screening centers and a growing number of individuals (circa 650) nationally performing approximately 1.5 million screening examinations per year. At the same time, there is a serious and ongoing national workforce issue in screening, which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. Thus there is a need to offer effective and efficient mammographic training so as to maintain high levels of screening skills. Consequently, a grid-based system has been proposed, which has the benefit of offering very large volumes of training cases that mammographers can access anytime and anywhere. A database of screening cases, spread geographically across three university systems, is used as a test set of known cases. The GIMI mammography training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the cases are utilized for training in the grid-based system which has been developed. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of such a grid-based approach.

  4. Structure and modeling of turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, E.A.

The "vortex strings" scale l_s ≈ L·Re^(-3/10) (where L is the external scale and Re the Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments, and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES).
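The suggested grid-scale relation can be made concrete with a short calculation; the O(1) proportionality constant is taken as 1 purely for illustration.

```python
def vortex_string_scale(L, Re):
    """Suggested LES grid scale from the abstract: l_s ~ L * Re^(-3/10).
    The O(1) prefactor is assumed to be 1 for illustration."""
    return L * Re ** (-3.0 / 10.0)

# For an external scale L = 1 m, the suggested grid scale shrinks slowly
# (as Re^(-3/10)) while the Kolmogorov scale shrinks faster (as Re^(-3/4)),
# which is what makes an l_s-resolving LES far cheaper than DNS.
scales = [vortex_string_scale(1.0, Re) for Re in (1e4, 1e6, 1e8)]
```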

  5. An Experimental Investigation of Unsteady Surface Pressure on an Airfoil in Turbulence

    NASA Technical Reports Server (NTRS)

    Mish, Patrick F.; Devenport, William J.

    2003-01-01

    Measurements of fluctuating surface pressure were made on a NACA 0015 airfoil immersed in grid-generated turbulence. The airfoil model has a 2 ft chord and spans the 6 ft Virginia Tech Stability Wind Tunnel test section. Two grids were used to investigate the effects of turbulence length scale on the surface pressure response: a large grid that produced turbulence with an integral scale 13% of the chord, and a smaller grid that produced turbulence with an integral scale 1.3% of the chord. Measurements were performed at angles of attack alpha from 0 to 20 degrees. An array of microphones mounted subsurface was used to measure the unsteady surface pressure. The goal of this measurement was to characterize the effects of angle of attack on the inviscid response. Lift spectra calculated from pressure measurements at each angle of attack revealed two distinct interaction regions: for reduced frequency omega_r = omega*b/U_infinity < 10, a reduction in unsteady lift of up to 7 decibels (dB) occurs as the angle of attack is increased, while an increase occurs for omega_r > 10. The reduction in unsteady lift at low omega_r with increasing angle of attack is a result that has never before been shown either experimentally or theoretically. The source of the reduction in lift spectral level appears to be closely related to the distortion of inflow turbulence, based on analysis of surface pressure spanwise correlation length scales. Furthermore, while the distortion of the inflow appears to be critical in this experiment, this effect does not seem to be significant in larger integral scale (relative to the chord) flows, based on the previous experimental work of McKeough, suggesting that the airfoil's size relative to the inflow integral scale is critical in defining how the airfoil will respond under variation of angle of attack.
A prediction scheme is developed that correctly accounts for the effects of distortion when the inflow integral scale is small relative to the airfoil chord. This scheme utilizes Rapid Distortion Theory to account for the distortion of the inflow with the distortion field modeled using a circular cylinder.

  6. Mapping Mars' northern plains: origins, evolution and response to climate change - an overview of the grid mapping method.

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason; Balme, Matthew; Conway, Susan

    2015-04-01

    An International Space Science Institute (ISSI) team project has been convened to study the northern plains of Mars. The northern plains are younger and at lower elevation than the majority of the martian surface and are thought to be the remnants of an ancient ocean. Understanding the surface geology and geomorphology of the Northern Plains is complex, because the surface has been subtly modified many times, making traditional unit-boundaries hard to define. Our ISSI team project aims to answer the following questions: 1) "What is the distribution of ice-related landforms in the northern plains, and can it be related to distinct latitude bands or different geological or geomorphological units?" 2) "What is the relationship between the latitude dependent mantle (LDM; a draping unit believed to comprise ice and dust deposited during periods of high axial obliquity) and (i) landforms indicative of ground ice, and (ii) other geological units in the northern plains?" 3) "What are the distributions and associations of recent landforms indicative of thaw of ice or snow?" With increasing coverage of high-resolution images of the martian surface, we are able to identify increasing numbers and varieties of small-scale landforms on Mars. Many such landforms are too small to represent on regional maps, yet determining their presence or absence across large areas can form the observational basis for developing hypotheses on the nature and history of an area. The combination of improved spatial resolution with near-continuous coverage increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre-scale landforms. Here, we describe an approach to mapping small features across large areas. Rather than traditional mapping with points, lines and polygons, we used a grid "tick box" approach to locate specific landforms.
The mapping strips were divided into a 15×150 grid of squares, each approximately 20×20 km, for each study area. Orbital images at 6-15 m/pixel were then viewed systematically for each grid square and the presence or absence of each of the basic suite of landforms recorded. The landforms were recorded as being either "present", "dominant", "possible", or "absent" in each grid square. The result is a series of coarse-resolution "rasters" showing the distribution of the different types of landforms across the strip. We have found this approach to be efficient, scalable and appropriate for teams of people mapping remotely. It is easily scalable because carrying the "absent" values forward from coarser grids to finer grids means that only areas with positive values for a given landform need to be examined to increase the resolution for the whole strip. Because each grid square requires only that the presence or absence of a landform be ascertained, the method also removes an individual's decision as to where to draw boundaries, making it efficient and repeatable.
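
The tick-box scheme described above can be sketched in a few lines; the state codes and the synthetic landform raster below are illustrative stand-ins, not data from the ISSI project:

```python
import numpy as np

# Hypothetical tick-box raster: a 15 x 150 grid of ~20 km squares, one
# presence/absence "raster" per landform class. State codes are invented.
STATES = {"absent": 0, "possible": 1, "present": 2, "dominant": 3}

rng = np.random.default_rng(0)
raster = rng.choice(list(STATES.values()), size=(15, 150))

def refine(grid, factor=2):
    """Split each square into factor x factor sub-squares, carrying
    'absent' forward so only positive squares need re-examination."""
    fine = np.repeat(np.repeat(grid, factor, axis=0), factor, axis=1)
    needs_review = fine > STATES["absent"]
    return fine, needs_review

fine, todo = refine(raster)
# Only the squares flagged in `todo` are re-inspected at the finer scale.
print(fine.shape, int(todo.sum()))
```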

  7. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales and unphysical numerical oscillations (e.g., Herrera et al, 2009; Bosso et al., 2012). In this work we will present Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM) based on Fup basis functions and collocation approach for spatial approximation and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2) which is especially well suited for stiff parabolic problems. Spatial adaptive strategy is based on Fup basis functions which are closely related to the wavelets and splines so that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is here performed via Fup Collocation Transform (FCT) so that at each time step concentration solution is decomposed using only a few significant Fup basis functions on adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocations points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. 
According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution of the last time step (or the initial conditions) and the advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we apply the explicit stabilized routine SERK2 to the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial or temporal approximation of conservative transport. This new Eulerian-Lagrangian-Collocation scheme also resolves all of the aforementioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach yields not only an accurate velocity field but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
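
The coefficient-thresholding idea behind the adaptive strategy can be illustrated with a simple 1-D sketch. This is not the ELAFCM implementation: the linear prediction rule and tolerance below are stand-ins for the Fup-based machinery, used only to show how points concentrate near sharp fronts:

```python
import numpy as np

def adapt_grid(x, c, tol=1e-3):
    """Keep only points whose detail coefficient (difference between the
    value and its prediction from coarser neighbours) is significant."""
    keep = [0, len(x) - 1]
    for i in range(1, len(x) - 1):
        predicted = 0.5 * (c[i - 1] + c[i + 1])   # coarse-scale prediction
        if abs(c[i] - predicted) > tol:            # significant detail
            keep.append(i)
    keep = sorted(set(keep))
    return x[keep], c[keep]

x = np.linspace(0.0, 1.0, 201)
c = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.01))       # sharp front at x = 0.5
xa, ca = adapt_grid(x, c)
# Points survive only near the front; smooth regions are coarsened.
print(len(x), len(xa))
```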

  8. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    PubMed Central

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie

    2018-01-01

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811

  9. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds.

    PubMed

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei

    2018-04-13

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
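
The multi-scale subsampling procedure can be sketched as follows. The synthetic "rock exposure" field is invented; only the mechanics (subsampling a fine grid at coarser steps and tracking mean, maximum and coefficient of variability) mirror the study:

```python
import numpy as np

# Synthetic rock-exposure field on a notional 150 m grid (values in %).
rng = np.random.default_rng(42)
exposure = rng.gamma(shape=2.0, scale=10.0, size=(120, 120))

def subsample(field, step):
    """Take every `step`-th sample, i.e. a (150*step) m grid scale."""
    return field[::step, ::step]

for step in (1, 2, 3, 4, 5, 6):          # 150 m up to 900 m grid scales
    s = subsample(exposure, step)
    cv = s.std() / s.mean()              # coefficient of variability
    print(f"{150*step:4d} m  mean={s.mean():6.2f}  max={s.max():6.2f}  CV={cv:.3f}")
```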

  10. Investigating the Impact of Surface Heterogeneity on the Convective Boundary Layer Over Urban Areas Through Coupled Large-Eddy Simulation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Dominguez, Anthony; Kleissl, Jan P.; Luvall, Jeffrey C.

    2011-01-01

    Large-eddy Simulation (LES) was used to study convective boundary layer (CBL) flow through suburban regions with both large and small scale heterogeneities in surface temperature. Constant remotely sensed surface temperatures were applied at the surface boundary at resolutions of 10 m, 90 m, 200 m, and 1 km. Increasing the surface resolution from 1 km to 200 m had the most significant impact on the mean and turbulent flow characteristics as the larger scale heterogeneities became resolved. While previous studies concluded that scales of heterogeneity much smaller than the CBL inversion height have little impact on the CBL characteristics, we found that further increasing the surface resolution (resolving smaller scale heterogeneities) results in an increase in the mean surface heat flux and thermal blending height and modifies the potential temperature profile. The results of this study will help to better inform sub-grid parameterization for meso-scale meteorological models. The simulation tool developed through this study (combining LES and high resolution remotely sensed surface conditions) is a significant step towards future studies of micro-scale meteorology in urban areas.

  11. Power control and management of the grid containing large-scale wind power systems

    NASA Astrophysics Data System (ADS)

    Aula, Fadhil Toufick

    The ever increasing demand for electricity has driven many countries toward the installation of new generation facilities. However, concerns such as environmental pollution and global warming, interest in clean energy sources, the high costs associated with installing new conventional power plants, and fossil fuel depletion have created much interest in finding alternatives to conventional fossil fuels for generating electricity. Wind energy is one of the most rapidly growing renewable power sources, and wind power generation has been increasingly demanded as an alternative to conventional fossil fuels. However, wind power fluctuates with variation of wind speed. Therefore, large-scale integration of wind energy conversion systems is a threat to the stability and reliability of utility grids containing these systems. They disturb the balance between power generation and consumption, affect the quality of the electricity, and complicate the management and planning of load sharing and load distribution. Moreover, wind power systems generally do not provide ancillary services, such as operating and regulating reserves, to the power grid. In order to resolve these issues, research has been conducted into utilizing weather forecasting data to improve the performance of wind power systems, reduce the influence of their fluctuations, and plan the power management of a grid containing large-scale wind power systems based on doubly-fed induction generator energy conversion systems. The aims of this research, my dissertation, are to provide new methods for: smoothing the output power of wind power systems and reducing the influence of their fluctuations; power management and planning of a grid containing these systems and other conventional power plants; and a new structure for implementing the latest microprocessor technology for controlling and managing the operation of the wind power system.
In this research, in order to reduce and smooth the fluctuations, two methods are presented. The first method is based on a de-loaded technique, while the other is based on utilizing multiple storage facilities. The de-loaded technique is based on the power characteristics of a wind turbine and estimation of the generated power according to weather forecasting data. The technique provides a reference power at which the wind power system will operate and generate smooth power. In contrast, utilizing storage facilities allows the wind power system to operate with a maximum power point tracking strategy. Two types of energy storage are considered in this research, a battery energy storage system (BESS) and a pumped-hydropower storage system (PHSS), to suppress the output fluctuations and to support the wind power system in following the system load demands. Furthermore, this method provides the ability to store energy when there is a surplus of generated power and to reuse it when there is a shortage of power generation from wind power systems. Both methods are novel in their use of these techniques and of wind speed data. A microprocessor embedded system using an Intel Atom processor is presented for controlling the wind power system and for providing the remote communication that enhances the operation of individual wind power systems in a wind farm. The embedded system helps the wind power system respond to and follow the commands of the central control of the power system. Moreover, it enhances the performance of the wind power system through self-managing, self-functioning, and self-correcting. Finally, a method of system power management and planning is modeled and studied for a grid containing large-scale wind power systems. The method is based on a new technique of constructing a new load demand curve (NLDC) by merging the estimate of generated power from wind power systems with the forecast of the load.
To summarize, the methods and results presented in this dissertation enhance the operation of large-scale wind power systems and reduce their drawbacks on the operation of the power grid.
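
The NLDC construction described above amounts to subtracting the estimated wind generation from the forecast load to obtain the net demand the conventional plants must plan for. A minimal sketch with invented numbers:

```python
import numpy as np

# Hypothetical hourly profiles (MW); the shapes are invented for illustration.
hours = np.arange(24)
load_forecast = 800 + 200 * np.sin((hours - 6) * np.pi / 12)   # daily load cycle
wind_estimate = 150 + 100 * np.sin(hours * np.pi / 8)          # fluctuating wind

# New load demand curve: net load the conventional units must cover.
nldc = load_forecast - wind_estimate
peak_hour = int(np.argmax(nldc))
print(f"peak net demand {nldc.max():.0f} MW at hour {peak_hour}")
```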

  12. Horizontal Residual Mean Circulation: Evaluation of Spatial Correlations in Coarse Resolution Ocean Models

    NASA Astrophysics Data System (ADS)

    Li, Y.; McDougall, T. J.

    2016-02-01

    Coarse resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Some researchers have shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated using model data directly. In this study, data from a high resolution ocean model have been used to estimate the accuracy of this horizontal residual mean (HRM) approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine grid. The transport calculated on the coarse grid is then compared to the transport on the original high resolution grid accumulated over the corresponding number of grid boxes. The preliminary results show that the estimates on coarse resolution grids roughly match the corresponding transports on high resolution grids.
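
The coarse-grid construction by box-car averaging, and the sub-grid spatial correlation it discards, can be sketched as follows (field sizes and names are hypothetical; the synthetic fields only illustrate the mechanics):

```python
import numpy as np

def boxcar_coarsen(field, factor):
    """Average non-overlapping factor x factor boxes of a 2-D field."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
v = rng.normal(size=(64, 64))       # fine-grid velocity component
T = rng.normal(size=(64, 64))       # fine-grid temperature

# The product of box means misses the sub-box correlation term;
# that missing term is what the HRM correction targets.
coarse_product = boxcar_coarsen(v, 8) * boxcar_coarsen(T, 8)
fine_product = boxcar_coarsen(v * T, 8)
eddy_part = fine_product - coarse_product   # sub-grid spatial correlation
print(eddy_part.shape)
```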

  13. An Examination of the Sea Ice Rheology for Seasonal Ice Zones Based on Ice Drift and Thickness Observations

    NASA Astrophysics Data System (ADS)

    Toyota, Takenobu; Kimura, Noriaki

    2018-02-01

    The validity of the sea ice rheological model formulated by Hibler (1979), which is widely used in present numerical sea ice models, is examined for the Sea of Okhotsk as an example of the seasonal ice zone (SIZ), based on satellite-derived sea ice velocity, concentration and thickness. Our focus was the formulation of the yield curve, the shape of which can be estimated from the ice drift pattern based on the energy equation of deformation, while the strength of the ice cover that determines its magnitude was evaluated using ice concentration and thickness data. Ice drift was obtained with a grid spacing of 37.5 km from the AMSR-E 89 GHz brightness temperature using a maximum cross-correlation method. The ice thickness was obtained with a spatial resolution of 100 m from a regression of the PALSAR backscatter coefficients with ice thickness. To assess scale dependence, the ice drift data derived from a coastal radar covering a 70 km range in the southernmost Sea of Okhotsk were similarly analyzed. The results obtained were mostly consistent with Hibler's formulation, which was based on the Arctic Ocean, on both scales and with no dependence on time scale, and justify to some extent the treatment of sea ice as a plastic material with an elliptical yield curve. However, the results also highlight the difficulty of parameterizing sub-grid scale ridging in the model, because grid scale ice velocities reduce the deformation magnitude by half due to the large variation of the deformation field in the SIZ.

  14. Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices

    Treesearch

    Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling

    2008-01-01

    The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...

  15. NREL Research Team Wins R&D 100 Award | News | NREL

    Science.gov Websites

    The National Renewable Energy Laboratory (NREL) and First Solar have been selected to receive a 2003 R&D 100 award from R&D Magazine for developing a new process for depositing semiconductor layers onto photovoltaic (PV) modules: high-performance PV modules for large-scale solar power plants, commercial and residential buildings, and off-grid applications.

  16. 77 FR 58416 - Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD.... Dates/Location: The MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00pm, at... participation is available for each meeting. Please reference the MAGIC Team Web site for updates. Magic Web...

  17. 78 FR 70076 - Large Scale Networking (LSN)-Middleware and Grid Interagency Coordination (MAGIC) Team

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-22

    ... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD... MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00 p.m., at the National... for each meeting. Please reference the MAGIC Team Web site for updates. Magic Web site: The agendas...

  18. Multigrid Equation Solvers for Large Scale Nonlinear Finite Element Simulations

    DTIC Science & Technology

    1999-01-01

    ... The purpose of the second partitioning phase, on each SMP, is to minimize the communication within the SMP, even if a multi-threaded matrix vector product ... Comparison of model with experimental data for send phase of matrix vector product on fine grid ... Matrix vector product phase times ...

  19. Fast Grid Frequency Support from Distributed Inverter-Based Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson F

    This presentation summarizes power hardware-in-the-loop testing performed to evaluate the ability of distributed inverter-coupled generation to support grid frequency on the fastest time scales. The research found that distributed PV inverters and other DERs can effectively support the grid on sub-second time scales.

  20. Development of Nuclear Renewable Oil Shale Systems for Flexible Electricity and Reduced Fossil Fuel Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel Curtis; Charles Forsberg; Humberto Garcia

    2015-05-01

    We propose the development of Nuclear Renewable Oil Shale Systems (NROSS) in northern Europe, China, and the western United States to provide large supplies of flexible, dispatchable, very-low-carbon electricity and fossil fuel production with reduced CO2 emissions. NROSS are a class of large hybrid energy systems in which base-load nuclear reactors provide the primary energy used to produce shale oil from kerogen deposits and simultaneously provide flexible, dispatchable, very-low-carbon electricity to the grid. Kerogen is solid organic matter trapped in sedimentary shale, and large reserves of this resource, called oil shale, are found in northern Europe, China, and the western United States. NROSS couples electricity generation and transportation fuel production in a single operation, reduces lifecycle carbon emissions from the fuel produced, improves revenue for the nuclear plant, and enables a major shift toward a very-low-carbon electricity grid. NROSS will require a significant development effort in the United States, where kerogen resources have never been developed on a large scale. In Europe, however, nuclear plants have been used for process heat delivery (district heating), and kerogen use is familiar in certain countries. Europe, China, and the United States all have the opportunity to use large-scale NROSS development to enable major growth in renewable generation and either substantially reduce or eliminate their dependence on foreign fossil fuel supplies, accelerating their transitions to cleaner, more efficient, and more reliable energy systems.

  1. The influence of misrepresenting the nocturnal boundary layer on idealized daytime convection in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    van Stratum, Bart J. H.; Stevens, Bjorn

    2015-06-01

    The influence of poorly resolving mixing processes in the nocturnal boundary layer (NBL) on the development of the convective boundary layer the following day is studied using large-eddy simulation (LES). Guided by measurement data from meteorological sites in Cabauw (Netherlands) and Hamburg (Germany), the typical summertime NBL conditions for Western Europe are characterized and used to design idealized (absence of moisture and large-scale forcings) numerical experiments of the diel cycle. Using the UCLA-LES code with a traditional Smagorinsky-Lilly subgrid model and a simplified land-surface scheme, a sensitivity study to grid spacing is performed. At horizontal grid spacings ranging from 3.125 m, at which we are capable of resolving most turbulence in the cases of interest, to 100 m, which is clearly insufficient to resolve the NBL, the ability of LES to represent the NBL and the influence of NBL biases on the subsequent daytime development of the convective boundary layer are examined. Although the low-resolution experiments produce substantial biases in the NBL, the influence on daytime convection is shown to be small, with biases in the afternoon boundary layer depth and temperature of approximately 100 m and 0.5 K, which partially cancel each other in terms of the mixed-layer top relative humidity.

  2. A Priori Analyses of Three Subgrid-Scale Models for One-Parameter Families of Filters

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Adams, Nikolaus A.

    1998-01-01

    The decay of isotropic turbulence in a compressible flow is examined by direct numerical simulation (DNS). A priori analyses of the DNS data are then performed to evaluate three subgrid-scale (SGS) models for large-eddy simulation (LES): a generalized Smagorinsky model (M1), a stress-similarity model (M2), and a gradient model (M3). The models exploit one-parameter second- or fourth-order filters of Padé type, which permit the cutoff wavenumber k_c to be tuned independently of the grid increment delta-x. The modeled (M) and exact (E) SGS stresses are compared component-wise by correlation coefficients of the form C(E,M) computed over the entire three-dimensional fields. In general, M1 correlates poorly against exact stresses (C < 0.2), M3 correlates moderately well (C ≈ 0.6), and M2 correlates remarkably well (0.8 < C < 1.0). Specifically, correlations C(E,M2) are high provided the grid and test filters are of the same order. Moreover, the highest correlations (C ≈ 1.0) result whenever the grid and test filters are identical (in both order and cutoff). Finally, the present results reveal that the exact SGS stresses obtained by grid filters of differing orders are only moderately well correlated. Thus, in LES the model should not be specified independently of the filter.
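
    The component-wise correlation test can be sketched as follows. The "exact" and modeled stress fields here are synthetic stand-ins, constructed so that one pairing correlates strongly (similarity-like behaviour) and the other weakly (eddy-viscosity-like behaviour); only the C(E,M) computation itself mirrors the a priori analysis:

```python
import numpy as np

def corr(E, M):
    """Correlation coefficient C(E, M) over an entire 3-D field."""
    e = E - E.mean()
    m = M - M.mean()
    return float((e * m).sum() / np.sqrt((e * e).sum() * (m * m).sum()))

rng = np.random.default_rng(7)
exact = rng.normal(size=(32, 32, 32))          # stand-in for an exact SGS stress
noise = rng.normal(size=(32, 32, 32))
model_good = exact + 0.3 * noise               # strongly correlated model
model_poor = 0.1 * exact + noise               # weakly correlated model
print(round(corr(exact, model_good), 2), round(corr(exact, model_poor), 2))
```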

  3. Multiple Scales in Fluid Dynamics and Meteorology: The DFG Priority Programme 1276 MetStröm

    NASA Astrophysics Data System (ADS)

    von Larcher, Th; Klein, R.

    2012-04-01

    Geophysical fluid motions are characterized by a very wide range of length and time scales, and by a rich collection of varying physical phenomena. The mathematical description of these motions reflects this multitude of scales and mechanisms in that it involves strong non-linearities and various scale-dependent singular limit regimes. Considerable progress has been made in recent years in the mathematical modelling and numerical simulation of such flows in detailed process studies, numerical weather forecasting, and climate research. One task of outstanding importance in this context has been, and will remain for the foreseeable future, the subgrid scale parameterization of the net effects of non-resolved processes that take place on spatio-temporal scales not resolvable even by the largest current supercomputers. Since the advent of numerical weather forecasting some 60 years ago, one simple but efficient means to achieve improved forecasting skill has been increased spatio-temporal resolution. This seems quite consistent with the concept of convergence of numerical methods in Applied Mathematics and Computational Fluid Dynamics (CFD) at first glance. Yet, the very notion of increased resolution in atmosphere-ocean science is very different from the one used in Applied Mathematics: for the mathematician, increased resolution provides the benefit of getting closer to the ideal of a converged solution of some given partial differential equations. On the other hand, the atmosphere-ocean scientist would naturally refine the computational grid and adjust his mathematical model, such that it better represents the relevant physical processes that occur at smaller scales. This conceptual contradiction remains largely irrelevant as long as geophysical flow models operate with fixed computational grids and time steps and with subgrid scale parameterizations being optimized accordingly.
The picture changes fundamentally when modern CFD techniques involving spatio-temporal grid adaptivity are invoked to further improve the net efficiency of exploiting the given computational resources. In the setting of geophysical flow simulation one must then employ subgrid-scale parameterizations that dynamically adapt to the changing grid sizes and time steps, implement ways to judiciously control and steer the newly available flexibility of resolution, and invent novel ways of quantifying the remaining errors. The DFG priority programme MetStröm brings together expertise from Meteorology, Fluid Dynamics, and Applied Mathematics to develop model- as well as grid-adaptive numerical simulation concepts in multidisciplinary projects. The goal of this priority programme is to provide simulation models which combine scale-dependent (mathematical) descriptions of key physical processes with adaptive flow discretization schemes. Deterministic continuous approaches, discrete and/or stochastic closures, and their possible interplay are taken into consideration. Research focuses on the theory and methodology of multiscale meteorological-fluid mechanics modelling. Accompanying reference experiments support model validation.

  4. Three-dimensional computational fluid dynamics modeling of particle uptake by an occupational air sampler using manually-scaled and adaptive grids

    PubMed Central

    Landázuri, Andrea C.; Sáez, A. Eduardo; Anthony, T. Renée

    2016-01-01

This work presents fluid flow and particle trajectory simulation studies to determine the aspiration efficiency of a horizontally oriented occupational air sampler using computational fluid dynamics (CFD). Grid adaption and manual scaling of the grids were applied to two sampler prototypes based on a 37-mm cassette. The standard k–ε model was used to simulate the turbulent air flow, and a second-order streamline-upwind discretization scheme was used to stabilize the convective terms of the Navier–Stokes equations. Successively scaled grids for each configuration were created manually and by means of grid adaption using the velocity gradient in the main flow direction. Solutions were verified to assess iterative convergence, grid independence and monotonic convergence. Particle aspiration efficiencies determined for the two prototype samplers were indistinguishable, indicating that the porous filter does not play a noticeable role in particle aspiration. The results show that grid adaption is a powerful tool for refining specific regions that require greater detail, and therefore for better resolving the flow. It was verified that adaptive grids provided a higher number of locations with monotonic convergence than the manual grids and required the least computational effort. PMID:26949268
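The gradient-based grid adaption described above can be illustrated with a minimal one-dimensional sketch (purely illustrative; the study performed 3D adaption on the velocity gradient in the main flow direction, and the function below is a hypothetical analogue): cells whose velocity gradient exceeds a fraction of the maximum are split at their midpoint.

```python
import numpy as np

def adapt(x, u, frac=0.5):
    """Flag cells whose |du/dx| exceeds frac * max gradient, then split them."""
    g = np.abs(np.diff(u) / np.diff(x))
    flag = g > frac * g.max()
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if flag[i]:
            new_x.append(0.5 * (x[i] + x[i + 1]))  # insert a midpoint node
        new_x.append(x[i + 1])
    return np.array(new_x)

x = np.linspace(0.0, 1.0, 11)
u = np.tanh((x - 0.5) / 0.05)  # sharp internal layer near x = 0.5
x2 = adapt(x, u)               # refined only where the gradient is steep
```

Only the two cells straddling the layer are refined; the smooth regions keep their original spacing, which is the efficiency argument for adaption over uniform scaling.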

  5. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability, which tends to damp out the small-scale structures in the flow. Recently, some low-dissipation methods have been proposed that selectively eliminate the artificial viscosity in regions which do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion in order to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated, based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is performed, based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region, and a comparison is made to experimental data. In the CFD, a fully turbulent boundary layer is observed downstream.
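The dissipation portion that such low-dissipation schemes isolate can be made explicit for a linear hyperbolic system. The sketch below is an illustrative analogue of Steger-Warming splitting on a 2x2 wave system, not the compressible-flow implementation of the thesis: the upwind interface flux equals a central average minus a dissipation term proportional to R|Λ|R⁻¹(uR − uL), and it is that second term a low-dissipation scheme would switch off selectively.

```python
import numpy as np

# Illustrative linear hyperbolic system u_t + A u_x = 0 (1D wave system).
A = np.array([[0.0, 1.0], [1.0, 0.0]])  # eigenvalues +1 and -1
lam, R = np.linalg.eig(A)
Rinv = np.linalg.inv(R)

def split_flux(u):
    """Steger-Warming-style split: F± = R Λ± R⁻¹ u with Λ± = (Λ ± |Λ|)/2."""
    lam_p = 0.5 * (lam + np.abs(lam))
    lam_m = 0.5 * (lam - np.abs(lam))
    return R @ np.diag(lam_p) @ Rinv @ u, R @ np.diag(lam_m) @ Rinv @ u

def interface_flux(uL, uR):
    """Upwind flux: right-going part from the left state, left-going from the right."""
    FpL, _ = split_flux(uL)
    _, FmR = split_flux(uR)
    return FpL + FmR

uL, uR = np.array([1.0, 0.0]), np.array([0.0, 0.0])
F_upwind = interface_flux(uL, uR)
F_central = 0.5 * (A @ uL + A @ uR)
dissipation = F_central - F_upwind  # equals 0.5 * R|Λ|R⁻¹ (uR - uL)
```

Identifying `dissipation` as a separate additive term is what makes it possible to eliminate it region by region.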

  6. Wall-Resolved Large-Eddy Simulation of Flow Separation Over NASA Wall-Mounted Hump

    NASA Technical Reports Server (NTRS)

    Uzun, Ali; Malik, Mujeeb R.

    2017-01-01

    This paper reports the findings from a study that applies wall-resolved large-eddy simulation to investigate flow separation over the NASA wall-mounted hump geometry. Despite its conceptually simple flow configuration, this benchmark problem has proven to be a challenging test case for various turbulence simulation methods that have attempted to predict flow separation arising from the adverse pressure gradient on the aft region of the hump. The momentum-thickness Reynolds number of the incoming boundary layer has a value that is near the upper limit achieved by recent direct numerical simulation and large-eddy simulation of incompressible turbulent boundary layers. The high Reynolds number of the problem necessitates a significant number of grid points for wall-resolved calculations. The present simulations show a significant improvement in the separation-bubble length prediction compared to Reynolds-Averaged Navier-Stokes calculations. The current simulations also provide good overall prediction of the skin-friction distribution, including the relaminarization observed over the front portion of the hump due to the strong favorable pressure gradient. We discuss a number of problems that were encountered during the course of this work and present possible solutions. A systematic study regarding the effect of domain span, subgrid-scale model, tunnel back pressure, upstream boundary layer conditions and grid refinement is performed. The predicted separation-bubble length is found to be sensitive to the span of the domain. Despite the large number of grid points used in the simulations, some differences between the predictions and experimental observations still exist (particularly for Reynolds stresses) in the case of the wide-span simulation, suggesting that additional grid resolution may be required.

  7. Grid Computing Environment using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Alanis, Fransisco; Mahmood, Akhtar

    2003-10-01

Custom-made Beowulf clusters built from PCs are currently replacing expensive supercomputers for carrying out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf cluster for HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. The Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  8. Sub-grid drag model for immersed vertical cylinders in fluidized beds

    DOE PAGES

    Verma, Vikrant; Li, Tingwen; Dietiker, Jean -Francois; ...

    2017-01-03

Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large-scale systems with bundles of cylinders. Therefore, sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag. A correction to the gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. Sarkar et al. previously developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations; however, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model which can then be implemented in coarse-grid simulations. The gas-solid drag was filtered for different solids fractions, and a significant reduction in drag was identified when compared with simulations without cylinders and simulations with horizontal cylinders. Slip velocities increase significantly when vertical cylinders are present. Lastly, the vertical suspension drag due to vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the finding for horizontal cylinders.

  9. Grid Research | Grid Modernization | NREL

    Science.gov Websites

NREL addresses the challenges of today's electric grid through research in integrated devices and systems (developing and evaluating grid technologies) and in controls (developing methods for real-time operations and control of power systems at any scale).

  10. Initial conditions and modeling for simulations of shock driven turbulent material mixing

    DOE PAGES

    Grinstein, Fernando F.

    2016-11-17

Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable-density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.

  11. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    PubMed

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
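A minimal sketch of the "unit cell" reorganization might look as follows (names and layout are assumptions for illustration, not the paper's actual on-disk format): Lagrangian particles are sorted by their containing Eulerian grid cell, so a query such as "which particles are in cell (i, j)?" becomes a contiguous slice lookup rather than a scan.

```python
import numpy as np

def build_unit_cells(positions, origin, cell_size, shape):
    """Sort particles by containing cell; return (sorted order, per-cell offsets)."""
    idx = np.floor((positions - origin) / cell_size).astype(int)
    idx = np.clip(idx, 0, np.array(shape) - 1)
    flat = np.ravel_multi_index(idx.T, shape)           # flat cell index per particle
    order = np.argsort(flat, kind="stable")             # particles grouped by cell
    counts = np.bincount(flat, minlength=int(np.prod(shape)))
    offsets = np.concatenate(([0], np.cumsum(counts)))  # offsets[c]:offsets[c+1] = cell c
    return order, offsets

def particles_in_cell(order, offsets, shape, cell):
    """O(1) lookup of the particle indices inside one Eulerian cell."""
    flat = np.ravel_multi_index(cell, shape)
    return order[offsets[flat]:offsets[flat + 1]]

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(1000, 2))
order, offsets = build_unit_cells(pos, origin=0.0, cell_size=0.25, shape=(4, 4))
members = particles_in_cell(order, offsets, (4, 4), (0, 0))
```

The same offsets table supports out-of-core access: each cell's slice can be read independently, which is the property the paper exploits for sampling and querying both representations together.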

  12. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. 
However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
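The autocorrelation-based assessment mentioned above can be sketched generically for a 1D time signal. The estimator below (FFT-based via the Wiener-Khinchin relation, with an integral time scale truncated at the first zero crossing) is a common textbook form and an assumption here, not necessarily the exact procedure used in the study; comparing the integral time scale against the averaging window is one way to judge whether time-averaged statistics are converged.

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation via FFT, zero-padded to avoid circular wrap-around."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

def integral_time_scale(acf, dt):
    """Approximate trapezoidal integral of the ACF up to its first zero crossing."""
    zero = np.argmax(acf <= 0.0)
    stop = zero if zero > 0 else len(acf)
    return dt * (0.5 * acf[0] + np.sum(acf[1:stop]))

# Example: exponentially correlated noise (AR(1) process with coefficient a).
rng = np.random.default_rng(0)
a, n = 0.9, 100_000
x = np.empty(n); x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.standard_normal()

acf = autocorrelation(x)
T = integral_time_scale(acf, dt=1.0)  # for AR(1), roughly of order 1/(1-a)
```

A record many integral time scales long is needed before the time-averaged statistics can be called trustworthy, which is the spirit of the assessment described in the abstract.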

  13. HiPEP Ion Optics System Evaluation Using Gridlets

    NASA Technical Reports Server (NTRS)

Williams, John D.; Farnell, Cody C.; Laufer, D. Mark; Martinez, Rafael A.

    2004-01-01

Experimental measurements are presented for sub-scale ion optics systems comprising 7 and 19 aperture pairs with geometrical features that are similar to the HiPEP ion optics system. Effects of hole diameter and grid-to-grid spacing are presented as functions of applied voltage and beamlet current. Recommendations are made for the beamlet current range where the ion optics system can be safely operated without experiencing direct impingement of high-energy ions on the accelerator grid surface. Measurements are also presented of the accelerator grid voltage where beam plasma electrons backstream through the ion optics system. Results of numerical simulations obtained with the ffx code are compared to both the impingement limit and backstreaming measurements. An emphasis is placed on identifying differences between measurements and simulation predictions to highlight areas where more research is needed. Relatively large effects are observed in simulations when the discharge chamber plasma properties and ion optics geometry are varied. Parameters investigated using simulations include the applied voltages, grid spacing, hole-to-hole spacing, doubles-to-singles ratio, plasma potential, and electron temperature; and estimates are provided for the sensitivity of impingement limits to these parameters.

  14. A practical approach to virtualization in HEP

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.

    2011-01-01

In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated great interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.

  15. Sub-grid drag models for horizontal cylinder arrays immersed in gas-particle multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2013-09-08

Immersed cylindrical tube arrays are often used as heat exchangers in gas-particle fluidized beds. In multiphase computational fluid dynamics (CFD) simulations of large fluidized beds, explicit resolution of small cylinders is computationally infeasible. Instead, the cylinder array may be viewed as an effective porous medium in coarse-grid simulations. The cylinders' influence on the suspension as a whole, manifested as an effective drag force, and on the relative motion between gas and particles, manifested as a correction to the gas-particle drag, must be modeled via suitable sub-grid constitutive relationships. In this work, highly resolved unit-cell simulations of flow around an array of horizontal cylinders, arranged in a staggered configuration, are filtered to construct sub-grid, or 'filtered', drag models, which can be implemented in coarse-grid simulations. The force on the suspension exerted by the cylinders comprises, as expected, a buoyancy contribution and a kinetic component analogous to fluid drag on a single cylinder. Furthermore, the introduction of tubes is also found to enhance segregation at the scale of the cylinder size, which, in turn, leads to a reduction in the filtered gas-particle drag.
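The filtering step itself amounts to block-averaging the resolved fields onto coarse filter cells. The sketch below is purely schematic (a random stand-in field, not a real drag law; the actual constitutive model is expressed in terms of filtered solids fraction and slip velocity, which are omitted here):

```python
import numpy as np

def box_filter(fine, factor):
    """Average factor x factor blocks of a 2D field: the 'filtering' operation."""
    ny, nx = fine.shape
    return fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(2)
drag_fine = rng.uniform(0.5, 1.5, size=(64, 64))  # stand-in for a resolved drag field
drag_filtered = box_filter(drag_fine, factor=8)   # one value per coarse grid cell
# Sub-grid correction factor: filtered drag relative to the homogeneous mean drag.
correction = drag_filtered / drag_fine.mean()
```

Tabulating such correction factors against filtered flow variables is what turns highly resolved unit-cell data into a sub-grid model usable on a coarse grid.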

  16. Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers

    NASA Astrophysics Data System (ADS)

    Sandbach, S. D.; Lane, S. N.; Hardy, R. J.; Amsler, M. L.; Ashworth, P. J.; Best, J. L.; Nicholas, A. P.; Orfeo, O.; Parsons, D. R.; Reesink, A. J. H.; Szupiany, R. N.

    2012-12-01

Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time-consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers.
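Roughness-length treatments rest on the logarithmic law of the wall, u(z) = (u*/κ) ln(z/z0), in which the roughness length z0 absorbs the effect of unmeasured bed topography. A minimal sketch of this standard form (the paper's exact closure and coefficients may differ):

```python
import math

KAPPA = 0.41  # von Kármán constant

def log_law_velocity(u_star, z, z0):
    """Law-of-the-wall velocity profile: u(z) = (u*/kappa) * ln(z / z0)."""
    return (u_star / KAPPA) * math.log(z / z0)

# Increasing z0 (a rougher, less-resolved bed) slows the near-bed flow for the
# same shear velocity, which is how subgrid roughness extracts momentum.
u_smooth = log_law_velocity(u_star=0.05, z=1.0, z0=0.001)
u_rough = log_law_velocity(u_star=0.05, z=1.0, z0=0.01)
```

Because the predicted velocity depends only logarithmically on z0, calibrating the roughness length against measured profiles is feasible but, as the abstract notes, can be ambiguous when large bedforms dominate.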

  17. Opportunity to Plug Your Car Into the Electric Grid is Arriving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griego, G.

    2010-06-01

Plug-in hybrid electric vehicles are hitting the U.S. market for the first time this year. Similar to hybrid electric vehicles, they feature a larger battery and plug-in charger that allows consumers to replace a portion of their fossil fuel by simply plugging their cars into standard 110-volt outlets at home or wherever outlets are available. If these vehicles become widely accepted, consumers and the environment will benefit, according to a computer modeling study by Xcel Energy and the Department of Energy's National Renewable Energy Laboratory. Researchers found that each PHEV would cut carbon dioxide emissions in half and save owners up to $450 in annual fuel costs and up to 240 gallons of gasoline. The study also looked at the impact of PHEVs on the electric grid in Colorado if used on a large scale. Integrating large numbers of these vehicles will depend on the adoption of smart-grid technology - adding digital elements to the electric power system to improve efficiency and enable more dynamic communication between consumers and producers of electricity. Using an intelligent monitoring system that keeps track of all electricity flowing in the system, a smart grid could enable optimal PHEV battery-charging much the same way it would enable users to manage their energy use in household appliances and factory processes to reduce energy costs. When a smart grid is implemented, consumers will have many low-cost opportunities to charge PHEVs at different times of the day. Plug-in vehicles could contribute electricity at peak times, such as summer evenings, while taking electricity from the grid at low-use times such as the middle of the night. Electricity rates could offer incentives for drivers to 'give back' electricity when it is most needed and to 'take' it when it is plentiful. The integration of PHEVs, solar arrays and wind turbines into the grid at larger scales will require a more modern electricity system.
Technology already exists to allow customers to feed excess power from their own renewable energy systems back to the grid. As more homes and businesses find opportunities to plan power flows to and from the grid for economic gain using their renewable energy systems and PHEVs, more sophisticated systems will be needed. A smart grid will improve the efficiency of energy consumption, manage real-time power flows and provide the two-way metering needed to compensate small power producers. Many states are working toward the smart-grid concept, particularly to incorporate renewable sources into their utility grids. According to the Department of Energy, 30 states have developed and adopted renewable portfolio standards, which require up to 20 percent of a state's energy portfolio to come exclusively from renewable sources by this year, and up to 30 percent in the future. NREL has been laying the foundation for both PHEVs and the smart grid for many years with work including modifying hybrid electric cars with plug-in technology; studying fuel economy, batteries and power electronics; exploring options for recharging batteries with solar and wind technologies; and measuring reductions in greenhouse gas emissions. The laboratory participated in development of smart-grid implementation standards with industry, utilities, government and others to guide the integration of renewable and other small electricity generation and storage sources. Dick DeBlasio, principal program manager for electricity programs, is now leading the Institute of Electrical and Electronics Engineers standards efforts to connect the dots regarding power generation, communication and information technologies.

  18. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background of high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support, to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases.
With regard to interoperability aspects, the talk will present the contributions provided to both the RDA Working Group on Array Databases and the Earth System Grid Federation (ESGF) Compute Working Team. Also highlighted will be the results of large-scale climate model intercomparison data analysis experiments, for example: (1) defined in the context of the EU H2020 INDIGO-DataCloud project; (2) implemented in a real geographically distributed environment involving CMCC (Italy) and LLNL (US) sites; (3) exploiting Ophidia as a server-side, parallel analytics engine; and (4) applied on real CMIP5 data sets available through ESGF.
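The kind of server-side datacube reduction a declarative analytics operator performs can be sketched schematically with an in-memory array (illustrative only; Ophidia's hierarchical storage model and actual operators are not reproduced here):

```python
import numpy as np

# A tiny (time, lat, lon) "datacube"; a real deployment would hold this as
# fragmented, distributed storage and push the reductions to the server side.
cube = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)

time_mean = cube.mean(axis=0)        # reduce along time: one 2D climatological map
field_max = cube.max(axis=(1, 2))    # reduce along space: one scalar per time step
```

The point of a declarative, server-side framework is that the user states the reduction (axis, operator) and the engine parallelizes it over the stored fragments, instead of the desktop-based download-then-analyze pattern criticized above.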

  19. Global-scale regionalization of hydrological model parameters using streamflow data from many small catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard

    2015-04-01

Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (&lt;10,000 km²) around the globe was used to set up and evaluate five model parameterization schemes at the global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
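The donor-to-grid-cell transfer of a scheme like EXP1 can be sketched as a nearest-neighbour lookup in normalized attribute space. The descriptor names and values below are invented for illustration; the study's similarity measure combines several climatic and physiographic characteristics.

```python
import numpy as np

def transfer_parameters(donor_attrs, donor_params, cell_attrs):
    """Give each grid cell the calibrated parameters of its most similar donor."""
    # Standardize each descriptor so no single one dominates the distance.
    mu, sd = donor_attrs.mean(axis=0), donor_attrs.std(axis=0)
    d = (donor_attrs - mu) / sd
    c = (cell_attrs - mu) / sd
    # Euclidean distance in attribute space; pick the nearest donor per cell.
    dists = np.linalg.norm(c[:, None, :] - d[None, :, :], axis=2)
    return donor_params[dists.argmin(axis=1)]

donor_attrs = np.array([[800.0, 10.0], [300.0, 25.0]])  # e.g. precipitation, slope
donor_params = np.array([[0.9, 4.0], [0.4, 1.5]])       # calibrated model parameters
cells = np.array([[750.0, 12.0], [280.0, 30.0]])        # attributes of two grid cells
params = transfer_parameters(donor_attrs, donor_params, cells)
```

Repeating this lookup for every land grid cell yields the global parameter maps described in the abstract.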

  20. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment, one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described, and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
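A threshold-based elasticity policy of the general kind discussed can be sketched as follows. The function name, thresholds, and hysteresis rule are illustrative assumptions, not the strategies evaluated in the paper: grow when the job queue backs up, shrink only when the queue is empty and more than a few VMs sit idle.

```python
def scaling_decision(queued_jobs, idle_vms, running_vms, max_vms,
                     grow_threshold=5, shrink_idle=2):
    """Return the number of VMs to add (positive) or release (negative)."""
    if queued_jobs >= grow_threshold and running_vms < max_vms:
        # Scale out proportionally to the backlog, capped by the resource limit.
        return min(queued_jobs // grow_threshold, max_vms - running_vms)
    if queued_jobs == 0 and idle_vms > shrink_idle:
        # Scale in, but keep a small idle buffer (hysteresis) to absorb bursts.
        return -(idle_vms - shrink_idle)
    return 0
```

In a saturated Grid environment the interesting part is the release side: idle capacity handed back promptly can be reclaimed by the large, non-elastic tenant.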

  1. Dynamically induced cascading failures in power grids.

    PubMed

    Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito

    2018-05-17

    Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures for the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.
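    The event-based dynamic cascade described above can be caricatured in a few lines: integrate the swing equation on a small network and trip any line whose transient flow exceeds its capacity. This is a toy sketch under generic assumptions (unit inertia, uniform damping, semi-implicit Euler), not the authors' framework:

    ```python
    import numpy as np

    def swing_cascade(K, P, theta0, omega0, capacity, dt=0.01, steps=2000, damping=0.5):
        """Toy event-based cascade: swing-equation transients plus line tripping.
        K: symmetric coupling matrix (zero entry = no line), P: power injections."""
        K = K.copy()
        theta, omega = theta0.copy(), omega0.copy()
        failed = []
        for _ in range(steps):
            # line flows F_ij = K_ij * sin(theta_j - theta_i)
            flows = K * np.sin(theta[None, :] - theta[:, None])
            accel = P - damping * omega + flows.sum(axis=1)
            omega += dt * accel          # semi-implicit Euler step
            theta += dt * omega
            # event: trip every line whose transient flow exceeds its capacity
            over = np.argwhere(np.triu(np.abs(flows) > capacity, 1))
            for i, j in over:
                K[i, j] = K[j, i] = 0.0
                failed.append((i, j))
        return theta, failed
    ```

    The point the abstract makes is visible here: whether a line trips depends on the transient overshoot of `flows`, not only on the post-event steady state.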

  2. The QUANTGRID Project (RO)—Quantum Security in GRID Computing Applications

    NASA Astrophysics Data System (ADS)

    Dima, M.; Dulea, M.; Petre, M.; Petre, C.; Mitrica, B.; Stoica, M.; Udrea, M.; Sterian, R.; Sterian, P.

    2010-01-01

    The QUANTGRID Project, financed through the National Center for Programme Management (CNMP-Romania), is the first attempt at using Quantum Crypted Communications (QCC) in large-scale operations, such as GRID computing, and conceivably in the years ahead in the banking sector and other security-critical communications. In connection with the GRID activities of the Center for Computing & Communications (Nat.'l Inst. Nucl. Phys.—IFIN-HH), the Quantum Optics Lab. (Nat.'l Inst. Plasma and Lasers—INFLPR) and the Physics Dept. (University Polytechnica—UPB), the project will build a demonstrator infrastructure for this technology. The status of the project in its incipient phase is reported, featuring tests of communications in classical security mode: socket-level communications under AES (Advanced Encryption Std.), implemented as proprietary C++ code. An outline of the planned undertaking of the project is given, highlighting its impact on quantum physics, coherent optics and information technology.

  3. Numerical analysis of a high-order unstructured overset grid method for compressible LES of turbomachinery

    NASA Astrophysics Data System (ADS)

    de Laborderie, J.; Duchaine, F.; Gicquel, L.; Vermorel, O.; Wang, G.; Moreau, S.

    2018-06-01

    Large-Eddy Simulation (LES) is recognized as a promising method for high-fidelity flow predictions in turbomachinery applications. The presented approach consists of the coupling of several instances of the same unstructured LES solver through an overset grid method. A high-order interpolation, implemented within this coupling method, is introduced and evaluated on several test cases. It is shown to be third-order accurate, to preserve the accuracy of various second- and third-order convective schemes, and to ensure the continuity of diffusive fluxes and subgrid-scale tensors even in detrimental interface configurations. In this analysis, three types of spurious waves generated at the interface are identified; they are significantly reduced by the high-order interpolation. Since the high-order interpolation has the same cost as the original lower-order method, the high-order overset grid method appears to be a promising alternative for these applications.

  4. High-resolution wavefront reconstruction using the frozen flow hypothesis

    NASA Astrophysics Data System (ADS)

    Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping

    2017-10-01

    This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the FFH, slope data from the WFS can be connected to a finer, composite slope grid through translation and downsampling, with the elements of the transformation matrices determined by wind information. Frames of slopes are then combined, and slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least-squares problem. Using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust to detector noise and inaccurate wind information, and that under bad seeing conditions high-frequency information in the wavefronts can be recovered more accurately than when correlations between WFS frames are ignored.
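    The translation-plus-downsampling structure lends itself to a compact sparse formulation. Below is a toy one-dimensional sketch, assuming circulant shifts and simple block-averaging (the real system is two-dimensional, with wind-derived shifts); `scipy.sparse.linalg.lsqr` is used because its `damp` argument supplies the Tikhonov regularization an ill-posed system needs:

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import lsqr

    def downsample_matrix(n_fine, factor):
        """Sparse operator averaging `factor` consecutive fine-grid slopes into one WFS sample."""
        n_coarse = n_fine // factor
        rows = np.repeat(np.arange(n_coarse), factor)
        cols = np.arange(n_coarse * factor)
        data = np.full(n_coarse * factor, 1.0 / factor)
        return sparse.csr_matrix((data, (rows, cols)), shape=(n_coarse, n_fine))

    def shift_matrix(n, s):
        """Circulant translation by s fine-grid samples (the frozen-flow wind shift)."""
        return sparse.csr_matrix(np.roll(np.eye(n), -s, axis=1))

    def reconstruct_fine_slopes(frames, shifts, n_fine, factor, damp=1e-6):
        """Stack translated, downsampled frames into one sparse system and solve it
        as a damped (Tikhonov-regularized) least-squares problem."""
        D = downsample_matrix(n_fine, factor)
        A = sparse.vstack([D @ shift_matrix(n_fine, s) for s in shifts])
        b = np.concatenate(frames)
        return lsqr(A, b, damp=damp)[0]
    ```

    Even this toy system is rank-deficient for some shift combinations, which is exactly why the paper treats the problem as ill-posed; the damping selects a minimum-norm solution.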

  5. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  6. Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.

  7. The compression–error trade-off for large gridded data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silver, Jeremy D.; Zender, Charles S.

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. The relative performance, in terms of compression and errors, of bit-groomed and layer-packed data was strongly predicted by the entropy of the exponent array, while lossless compression was well predicted by the entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. These compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.

  8. The compression–error trade-off for large gridded data sets

    DOE PAGES

    Silver, Jeremy D.; Zender, Charles S.

    2017-01-27

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. The relative performance, in terms of compression and errors, of bit-groomed and layer-packed data was strongly predicted by the entropy of the exponent array, while lossless compression was well predicted by the entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. These compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
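    The layer-packing idea described above can be sketched in a few lines of NumPy: quantize each layer with its own scale/offset pair instead of one scalar pair for the whole array. This is an illustration of the concept, not the authors' implementation; a real archive would additionally deflate the packed integers:

    ```python
    import numpy as np

    def layer_pack(arr, axis=0, nbits=16):
        """Lossy pack: one (scale, offset) pair per layer along `axis`,
        instead of a single scalar pair for the whole array."""
        arr = np.moveaxis(arr, axis, 0)
        reduce_axes = tuple(range(1, arr.ndim))
        lo = arr.min(axis=reduce_axes, keepdims=True)       # per-layer offset
        hi = arr.max(axis=reduce_axes, keepdims=True)
        scale = (hi - lo) / (2**nbits - 1)                  # per-layer scale
        scale[scale == 0] = 1.0                             # constant layers
        packed = np.round((arr - lo) / scale).astype(np.uint16)
        return packed, scale, lo

    def layer_unpack(packed, scale, lo, axis=0):
        """Invert the quantization (up to half a quantization step per layer)."""
        return np.moveaxis(packed * scale + lo, 0, axis)
    ```

    On a field whose dynamic range varies strongly with the vertical layer, the per-layer scale keeps the relative error roughly uniform, which is exactly where a single scalar pair loses precision.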

  9. An immersed boundary method for direct and large eddy simulation of stratified flows in complex geometry

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha R.; Sarkar, Sutanu

    2016-10-01

    A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver to simulate high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere, and in channel flow with and without stratification. Since topographically generated internal waves are believed to result in a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results in laboratory-scale (on the order of a few meters) simulations show that the wave energy flux, mean flow properties and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field, which is associated with either increasing steepness of the topography relative to the internal wave propagation angle or with the amplitude of the oscillatory forcing. LES is performed on a large-scale ridge, on the order of a few kilometers in length, that has the same geometrical shape and the same non-dimensional values for the governing flow and environmental parameters as the laboratory-scale topography, but a significantly larger Reynolds number.
A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large scale problem exhibits qualitatively similar behavior to the laboratory scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.
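    The non-linear drag law mentioned above is typically of the quadratic form τ = ρ C_d |u| u; a minimal sketch under that assumption (the coefficient value is a generic ocean bottom-drag choice, not taken from the paper):

    ```python
    def bottom_drag(u, rho=1025.0, c_d=2.5e-3):
        """Quadratic bottom-drag stress tau = rho * C_d * |u| * u (N/m^2).
        rho: seawater density (kg/m^3); c_d: a typical dimensionless drag coefficient."""
        return rho * c_d * abs(u) * u
    ```

    The |u| factor makes the stress grow quadratically with the near-bottom velocity while keeping its sign opposed to the flow when applied with a minus sign in the momentum budget.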

  10. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study, a new method was developed to bias-correct GCMs so as to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias-corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al, 2002; Wood et al, 2004). In this method, spatial random fields were generated that preserve both the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall.
    The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic responses of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions were compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.

  11. Variations/Changes in Daily Precipitation Extremes Derived from Satellite-Based Products

    NASA Astrophysics Data System (ADS)

    Gu, G.; Adler, R. F.

    2017-12-01

    Interannual/decadal-scale variations/changes in daily precipitation extremes are investigated by means of satellite-based high-spatiotemporal-resolution precipitation products, including TRMM-TMPA, PERSIANN-CDR-Daily, GPCP 1DD, etc. Extreme precipitation indices at grid points are first defined, including the maximum daily precipitation amount (Rx1day), the simple precipitation intensity index (SDII), the conditional (Rcond) daily precipitation rate (Pr > 0 mm day-1), and monthly frequencies of rainy (FOCc) and wet (FOCw) days. Two other precipitation intensity indices, i.e., the mean daily precipitation rates for Pr ≥ 10 mm day-1 (Pr10II) and for Pr ≥ 20 mm day-1 (Pr20II), are also constructed. Consistency analyses of the daily extreme indices among these data sets are then performed by comparing the corresponding averages over large domains such as tropical (30oN-30oS) land, ocean, and land+ocean for their common period (post-1997). This provides a preliminary uncertainty analysis of these data sets in describing daily extreme precipitation events. Discrepancies can readily be found among these products regarding the magnitudes of area-averaged extreme indices. However, generally consistent temporal variations can be found among the indices derived from different satellite products. Interannual variability in daily precipitation extremes is then examined and compared at grid points by exploring its relations with the El Nino-Southern Oscillation (ENSO). Linear correlation and composite analyses are used to examine the impact of ENSO on these extreme indices at grid points and over large domains during the post-1997 period. Decadal-scale variability/change in daily extreme events is further examined using the PERSIANN-CDR-Daily record, which covers the entire post-1983 period, based on its general consistency with the other two products during the post-1979 period.
We specifically focus on exploring and discriminating the effects of decadal-scale internal variability, such as the Pacific Decadal Oscillation (PDO), and anthropogenic forcings, including greenhouse-gas (GHG) related warming. Comparisons are also made over global land with the results from two gridded daily rain-gauge products, GPCC Full-record daily (1988-2013) and NOAA/CPC Unified daily (1979-present).
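    Several of the indices named above have simple grid-cell definitions; a sketch for a single cell's daily series (the 1 mm day-1 wet-day threshold is a common convention and an assumption here, not quoted from the paper):

    ```python
    import numpy as np

    WET = 1.0  # wet-day threshold, mm/day (a common convention; the paper's may differ)

    def extreme_indices(pr):
        """Daily-precipitation extreme indices for one grid cell.
        pr: 1-D array of daily totals (mm/day) for one month or year."""
        wet = pr >= WET
        heavy = pr >= 10.0
        return {
            "Rx1day": pr.max(),                               # max daily amount
            "SDII": pr[wet].mean() if wet.any() else 0.0,     # simple daily intensity index
            "FOCw": int(wet.sum()),                           # frequency of wet days
            "Pr10II": pr[heavy].mean() if heavy.any() else 0.0,
        }
    ```

    Applying this cell-by-cell and then averaging over a domain such as tropical land reproduces the kind of area-averaged index series the abstract compares across products.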

  12. Evaluation of grid generation technologies from an applied perspective

    NASA Technical Reports Server (NTRS)

    Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.

    1995-01-01

    An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation, and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. The conclusion is that a single grid generation methodology is not universally suited for all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.

  13. Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Pirzadeh, S.

    1999-01-01

    A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for the complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a workstation and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as is the scalability of the solver on machines using up to 1450 processors.

  14. A Design of Irregular Grid Map for Large-Scale Wi-Fi LAN Fingerprint Positioning Systems

    PubMed Central

    Kim, Jae-Hoon; Min, Kyoung Sik; Yeo, Woon-Young

    2014-01-01

    The rapid growth of mobile communication and the proliferation of smartphones have drawn significant attention to location-based services (LBSs). One of the most important factors in the vitalization of LBSs is the accurate position estimation of a mobile device. The Wi-Fi positioning system (WPS) is a new positioning method that measures received signal strength indication (RSSI) data from all Wi-Fi access points (APs) and stores them in a large database in the form of a radio fingerprint map. Because of the millions of APs in urban areas, radio fingerprints are seriously contaminated and confused. Moreover, algorithmic advances for positioning face computational limitations. Therefore, we present a novel irregular grid structure and data analytics for efficient fingerprint map management. The usefulness of the proposed methodology is demonstrated using actual radio fingerprint measurements taken throughout Seoul, Korea. PMID:25302315

  15. A design of irregular grid map for large-scale Wi-Fi LAN fingerprint positioning systems.

    PubMed

    Kim, Jae-Hoon; Min, Kyoung Sik; Yeo, Woon-Young

    2014-01-01

    The rapid growth of mobile communication and the proliferation of smartphones have drawn significant attention to location-based services (LBSs). One of the most important factors in the vitalization of LBSs is the accurate position estimation of a mobile device. The Wi-Fi positioning system (WPS) is a new positioning method that measures received signal strength indication (RSSI) data from all Wi-Fi access points (APs) and stores them in a large database in the form of a radio fingerprint map. Because of the millions of APs in urban areas, radio fingerprints are seriously contaminated and confused. Moreover, algorithmic advances for positioning face computational limitations. Therefore, we present a novel irregular grid structure and data analytics for efficient fingerprint map management. The usefulness of the proposed methodology is demonstrated using actual radio fingerprint measurements taken throughout Seoul, Korea.
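    A fingerprint map of the kind described above is typically queried with a weighted nearest-neighbour match in RSSI space. A generic WPS baseline for illustration (the paper's irregular-grid indexing, which is its actual contribution, is not reproduced here):

    ```python
    import numpy as np

    def knn_position(fingerprints, positions, rssi, k=3):
        """Weighted k-nearest-neighbour position estimate from an RSSI fingerprint map.
        fingerprints: (n_points, n_aps) stored RSSI vectors; positions: (n_points, 2)."""
        d = np.linalg.norm(fingerprints - rssi, axis=1)   # Euclidean RSSI distance
        idx = np.argsort(d)[:k]                           # k closest stored fingerprints
        w = 1.0 / (d[idx] + 1e-9)                         # closer fingerprints weigh more
        return (positions[idx] * w[:, None]).sum(axis=0) / w.sum()
    ```

    The computational cost grows with the number of stored fingerprints, which is precisely the limitation the irregular grid structure is designed to manage.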

  16. Study on reasonable curtailment rate of large scale renewable energy

    NASA Astrophysics Data System (ADS)

    Li, Nan; Yuan, Bo; Zhang, Fuqiang

    2018-02-01

    The energy curtailment rate of renewable generation is an important indicator of renewable energy consumption and an important parameter for arranging other power sources and grids at the planning stage. In general, consuming the spike power of renewable energy, which is only a small proportion of the total, requires dispatching a large number of peaking resources, which reduces the safety and stability of the system. In planning, if a certain amount of renewable energy may be given up, the overall peaking demand of the system is reduced and the construction of peaking power supply can be deferred, avoiding the expensive cost of marginal absorption. In this paper, we introduce a reasonable energy curtailment rate into power system planning and, using the GESP power planning software, conclude that the reasonable energy curtailment rate for the regional grids in China is 3%-10% in 2020.
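    The planning trade-off above hinges on the curtailment rate itself: the fraction of available renewable energy given up when generation exceeds what the system can absorb. A minimal sketch of that accounting (hourly energy balance, illustrative numbers only; the GESP model is far more detailed):

    ```python
    import numpy as np

    def curtailment_rate(renewable, absorbable):
        """Share of available renewable energy curtailed when hourly generation
        exceeds the absorbable level (toy energy balance, not the GESP model)."""
        renewable = np.asarray(renewable, dtype=float)
        curtailed = np.maximum(renewable - absorbable, 0.0)
        return curtailed.sum() / renewable.sum()
    ```

    Accepting a modest rate here means the rare generation spikes need not be backed by dedicated peaking capacity, which is the deferral argument the abstract makes.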

  17. Large-eddy simulations of turbulent flow for grid-to-rod fretting in nuclear reactors

    DOE PAGES

    Bakosi, J.; Christon, M. A.; Lowrie, R. B.; ...

    2013-07-12

    The grid-to-rod fretting (GTRF) problem in pressurized water reactors is a flow-induced vibration problem that results in wear and failure of the fuel rods in nuclear assemblies. In order to understand the fluid dynamics of GTRF and to build an archival database of turbulence statistics for various configurations, implicit large-eddy simulations of time-dependent single-phase turbulent flow have been performed in 3 × 3 and 5 × 5 rod bundles with a single grid spacer. To assess the computational mesh and resolution requirements, a method for quantitative assessment of unstructured meshes with no-slip walls is described. The calculations have been carried out using Hydra-TH, a thermal-hydraulics code developed at Los Alamos for the Consortium for Advanced Simulation of Light Water Reactors, a United States Department of Energy Innovation Hub. Hydra-TH uses a second-order implicit incremental projection method to solve the single-phase incompressible Navier-Stokes equations. The simulations explicitly resolve the large-scale motions of the turbulent flow field using first principles and rely on a monotonicity-preserving numerical technique to represent the unresolved scales. Each series of simulations for the 3 × 3 and 5 × 5 rod-bundle geometries is an analysis of the flow field statistics combined with a mesh-refinement study and validation with available experimental data. Our primary focus is the time history and statistics of the forces loading the fuel rods. These hydrodynamic forces are believed to be the key player in rod vibration and GTRF wear, one of the leading causes of leaking nuclear fuel, which costs power utilities millions of dollars in preventive measures. We demonstrate that implicit large-eddy simulation of rod-bundle flows is a viable way to calculate the excitation forces for the GTRF problem.

  18. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Purba, Victor; Jafarpour, Saber

    Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, an LCL filter at the point of common coupling, and a control architecture comprising a current controller, a power controller, and a phase-locked loop (PLL) for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure-preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, a current controller, and a PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit, and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.
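    The linear power-rating scaling described above implies a simple rule for the lumped filter parameters of the aggregated unit: series impedances divide by the number of parallel inverters while the shunt capacitance multiplies by it. A sketch under that assumption (parameter names are illustrative, and the paper's full model also aggregates the controller gains):

    ```python
    def aggregate_lcl(n, L_f, R_f, C_f, L_g, R_g):
        """Lumped LCL parameters for n identical parallel inverters under linear
        power-rating scaling: n parallel inductive branches combine as L/n and R/n,
        while n shunt filter capacitors combine as n*C. Illustrative sketch only."""
        return {"L_f": L_f / n, "R_f": R_f / n, "C_f": C_f * n,
                "L_g": L_g / n, "R_g": R_g / n}
    ```

    The key structural point survives even in this sketch: the aggregate is again a single LCL filter, so the system-level model keeps the state count of one inverter.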

  19. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    NASA Astrophysics Data System (ADS)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching the soil grid unit resolution with the polygon unit map scale is important to minimize the uncertainty of regional soil organic carbon (SOC) pool simulations, on which both have a strong influence. A series of soil grid units at varying cell sizes was derived from soil polygon units at six map scales, namely 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), in the Tai lake region of China. Soil units in both formats were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices, namely the soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from all the soil polygon and grid units. Relative to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess dataset accuracy and redundancy, which reflect uncertainty in the simulation of SOC. Optimal soil grid unit resolutions, matching the soil polygon unit map scales, were generated and suggested for DNDC simulation of the regional SOC pool. With the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is taken as the assessment criterion. A quadratic regression model, y = -8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05), describes the relationship between the optimal soil grid unit resolution (y, km) and the soil polygon unit map scale (1:x). This knowledge may serve for grid partitioning of regions in investigations and simulations of SOC pool dynamics at a given map scale.
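    The quoted quadratic fit can be evaluated directly; a small helper (the units in which x enters the fit follow the paper and are not restated in the abstract, so no unit conversion is assumed here):

    ```python
    def optimal_grid_resolution(x):
        """Optimal soil-grid resolution y (km) for a polygon map scale 1:x,
        from the quadratic fit quoted above: y = -8.0e-6 x^2 + 0.228 x + 0.211.
        x is taken in the same units as the paper's fit."""
        return -8.0e-6 * x**2 + 0.228 * x + 0.211
    ```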

  20. Impact Detection for Characterization of Complex Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Chan, Wai Hong Ronald; Urzay, Javier; Mani, Ali; Moin, Parviz

    2016-11-01

    Multiphase flows often involve a wide range of impact events, such as liquid droplets impinging on a liquid pool or gas bubbles coalescing in a liquid medium. These events contribute to a myriad of large-scale phenomena, including breaking waves on ocean surfaces. As impacts between surfaces necessarily occur at isolated points, numerical simulations of impact events require the resolution of molecular scales near the impact points for accurate modeling. This can be prohibitively expensive unless subgrid impact and breakup models are formulated to capture the effects of the interactions. The first step in a large-eddy simulation (LES) based computational methodology for complex multiphase flows, such as air-sea interactions, is the effective detection of these impact events. The starting point of this work is a collision detection algorithm for structured grids in a coupled level set / volume of fluid (CLSVOF) solver, adapted from an earlier cloth-animation algorithm, which triangulates the interface with the marching cubes method. We explore the extension of collision detection to a geometric VOF solver and to unstructured grids. Supported by ONR/A*STAR. Agency of Science, Technology and Research, Singapore; Office of Naval Research, USA.

  1. Large-scale optimal control of interconnected natural gas and electrical transmission systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-04-01

    We present a detailed optimal control model that captures spatiotemporal interactions between gas and electric transmission networks. We use the model to study flexibility and economic opportunities provided by coordination. A large-scale case study in the Illinois system reveals that coordination can enable the delivery of significantly larger amounts of natural gas to the power grid. In particular, under a coordinated setting, gas-fired generators act as distributed demand response resources that can be controlled by the gas pipeline operator. This enables more efficient control of pressures and flows in space and time and overcomes delivery bottlenecks. We demonstrate that the additional flexibility not only can benefit the gas operator but can also lead to more efficient power grid operations and increased revenue for gas-fired power plants. We also use the optimal control model to analyze computational issues arising in these complex models. We demonstrate that the interconnected Illinois system with full physical resolution gives rise to a highly nonlinear optimal control problem with 4400 differential and algebraic equations and 1040 controls that can be solved with a state-of-the-art sparse optimization solver. © 2016 Elsevier Ltd. All rights reserved.

  2. Boundedness of the mixed velocity-temperature derivative skewness in homogeneous isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Tang, S. L.; Antonia, R. A.; Djenidi, L.; Danaila, L.; Zhou, Y.

    2016-09-01

    The transport equation for the mean scalar dissipation rate ε̄_θ is derived by applying the limit at small separations to the generalized form of Yaglom's equation in two types of flows: those dominated mainly by a decay of energy in the streamwise direction, and those which are forced through a continuous injection of energy at large scales. In grid turbulence, the imbalance between the production of ε̄_θ due to stretching of the temperature field and the destruction of ε̄_θ by the thermal diffusivity is governed by the streamwise advection of ε̄_θ by the mean velocity. This imbalance is intrinsically different from that in stationary forced periodic box turbulence (SFPBT), which is virtually negligible. In essence, the different types of imbalance represent different constraints imposed by the large-scale motion on the relation between the so-called mixed velocity-temperature derivative skewness S_T and the scalar enstrophy destruction coefficient G_θ in different flows, thus resulting in non-universal approaches of S_T towards a constant value as Re_λ increases. The data for S_T collected in grid turbulence and in SFPBT indicate that the magnitude of S_T is bounded, this limit being close to 0.5.

  3. Large-eddy simulation of sand dune morphodynamics

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under complex interaction between turbulent flow and bed morphodynamics. We employ a fully coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel, in order to investigate the initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate suspended-sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and simulated about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grant EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  4. Developing survey grids to substantiate freedom from exotic pests

    Treesearch

    John W. Coulston; Frank H. Koch; William D. Smith

    2009-01-01

    Systematic, hierarchical intensification of the Environmental Monitoring and Assessment Program hexagon for North America yields a simple procedure for developing national-scale survey grids. In this article, we describe the steps to create a national-scale survey grid using a risk map as the starting point. We illustrate the steps using an exotic pest example in which...

  5. Atmospheric mechanisms governing the spatial and temporal variability of phenological phases in central Europe

    NASA Astrophysics Data System (ADS)

    Scheifinger, Helfried; Menzel, Annette; Koch, Elisabeth; Peter, Christian; Ahas, Rein

    2002-11-01

    A data set of 17 phenological phases from Germany, Austria, Switzerland and Slovenia spanning the time period from 1951 to 1998 has been made available for analysis together with a gridded temperature data set (1° × 1° grid) and the North Atlantic Oscillation (NAO) index time series. The disturbances of the westerlies constitute the main atmospheric source for the temporal variability of phenological events in Europe. The trend, the standard deviation and the discontinuity of the phenological time series at the end of the 1980s can, to a great extent, be explained by the NAO. A number of factors modulate the influence of the NAO in time and space. The seasonal northward shift of the westerlies overlaps with the sequence of phenological spring phases, thereby gradually reducing its influence on the temporal variability of phenological events with progression of spring (temporal loss of influence). This temporal process is reflected by a pronounced decrease in trend and standard deviation values and common variability with the NAO with increasing year-day. The reduced influence of the NAO with increasing distance from the Atlantic coast is not only apparent in studies based on the data set of the International Phenological Gardens, but also in the data set of this study with a smaller spatial extent (large-scale loss of influence). The common variance between phenological and NAO time series displays a discontinuous drop from the European Atlantic coast towards the Alps. On a local and regional scale, mountainous terrain reduces the influence of the large-scale atmospheric flow from the Atlantic via a proposed decoupling mechanism. Valleys in mountainous terrain have the inclination to harbour temperature inversions over extended periods of time during the cold season, which isolate the valley climate from the large-scale atmospheric flow at higher altitudes. 
Most phenological stations reside at valley bottoms and are thus largely decoupled in their temporal variability from the influence of the westerly flow regime (local-scale loss of influence). This study corroborates a growing number of similar investigations finding that vegetation reacts sensitively to variations in its atmospheric environment across various temporal and spatial scales.

  6. Consistency of Aquarius version-4 sea surface salinity with Argo products on various spatial and temporal scales

    NASA Astrophysics Data System (ADS)

    Lee, Tong

    2017-04-01

    Understanding the accuracies of satellite-derived sea surface salinity (SSS) measurements in depicting temporal changes, and the dependence of those accuracies on spatiotemporal scale, is important for capability assessment, future mission design, and applications to study oceanic phenomena of different spatiotemporal scales. This study quantifies the consistency of Aquarius Version-4 monthly gridded SSS (released in late 2015) with two widely used Argo monthly gridded near-surface salinity products. The analysis focused on their consistency in depicting temporal changes (seasonal and non-seasonal) on three spatial scales: 1°×1°, 3°×3°, and 10°×10°. Globally averaged standard deviation (STD) values for Aquarius-Argo salinity differences on these three spatial scales are 0.16, 0.14, and 0.09 psu, compared with 0.10, 0.09, and 0.04 psu between the two Argo products. Aquarius SSS compares better with Argo data on non-seasonal (e.g., interannual and intraseasonal) than on seasonal time scales. The seasonal Aquarius-Argo SSS differences are mostly concentrated at high latitudes. The Aquarius team is actively working to further reduce these high-latitude seasonal biases. The consistency between Aquarius and Argo salinity is similar to that between the two Argo products in the tropics and subtropics for non-seasonal signals, and in the tropics for seasonal signals. Therefore, the representativeness errors of the Argo products for various spatial scales (related to sampling and gridding) need to be taken into account when estimating the uncertainty of Aquarius SSS. The globally averaged uncertainty of large-scale (10°×10°) non-seasonal Aquarius SSS is approximately 0.04 psu. These estimates reflect the significant improvements of Aquarius Version-4 SSS over previous versions. The estimates can be used as baseline requirements for future ocean salinity missions from space. 
The spatial distribution of the uncertainty estimates is also useful for assimilation of Aquarius SSS.
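    Comparing gridded salinity products at coarser spatial scales, as done here, amounts to block-averaging both fields before differencing. A minimal NaN-aware sketch (function names are illustrative, not from the study):

```python
import numpy as np

def block_average(field, block):
    """Average a 2D lat-lon field onto coarser block x block cells,
    ignoring NaN (missing-data) grid points."""
    ny, nx = field.shape
    ny2, nx2 = ny // block, nx // block
    trimmed = field[:ny2 * block, :nx2 * block]
    blocks = trimmed.reshape(ny2, block, nx2, block)
    return np.nanmean(blocks, axis=(1, 3))

def diff_std(sss_a, sss_b, block=1):
    """STD of the gridded difference after averaging both fields
    to a coarser scale (e.g. block=3 for 1-degree data -> 3x3 degrees)."""
    d = block_average(sss_a, block) - block_average(sss_b, block)
    return float(np.nanstd(d))
```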

  7. Unstructured-grid coastal ocean modelling in Southern Adriatic and Northern Ionian Seas

    NASA Astrophysics Data System (ADS)

    Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo

    2016-04-01

    The Southern Adriatic Northern Ionian coastal Forecasting System (SANIFS) is a short-term forecasting system based on an unstructured-grid approach. The model component is built on the SHYFEM finite-element three-dimensional hydrodynamic model. The operational chain exploits a downscaling approach starting from the Mediterranean oceanographic-scale model MFS (Mediterranean Forecasting System, operated by INGV). The implementation has been designed to provide accurate hydrodynamics and active-tracer processes in the coastal waters of southeastern Italy (Apulia, Basilicata and Calabria regions), where the model resolution varies in the range of 50-500 m. The horizontal resolution is also high in open-sea areas, where element sizes are approximately 3 km. The model is forced: (i) at the lateral open boundaries, through a full nesting strategy, directly with the MFS (temperature, salinity, non-tidal sea surface height and currents) and OTPS (tidal forcing) fields; and (ii) at the surface, through two alternative atmospheric forcing datasets (ECMWF and COSMO-ME) via MFS bulk formulae. Given that the coastal fields are driven by a combination of local/coastal and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified first (i) at the large and shelf-coastal scales, by comparison with a large-scale CTD survey, and then (ii) at the coastal-harbour scale, by comparison with CTD, ADCP and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions, assessing the reliability of two alternative datasets at different horizontal resolutions (12.5 and 7 km). The present work highlights how downscaling can improve the simulation of the flow field, going from typical open-ocean scales of the order of several kilometres to coastal (and harbour) scales of tens to hundreds of metres.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.

    Small signal stability problems are one of the major threats to grid stability and reliability in the U.S. power grid. An undamped mode can cause large-amplitude oscillations and may result in system breakups and large-scale blackouts. There have been several incidents of system-wide oscillations. Of those incidents, the most notable is the August 10, 1996 western system breakup, a result of undamped system-wide oscillations. Significant efforts have been devoted to monitoring system oscillatory behaviors from measurements over the past 20 years. The deployment of phasor measurement units (PMUs) provides the high-precision, time-synchronized data needed for detecting oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to identify system oscillation modes and their damping. Low damping indicates potential system stability issues. Modal analysis has been demonstrated with phasor measurements to have the capability of estimating system modes from both oscillation signals and ambient data. With more and more phasor measurements available and ModeMeter techniques maturing, there is yet a need for methods to bring modal analysis from monitoring to actions. The methods should be able to associate low damping with grid operating conditions, so operators or automated operation schemes can respond when low damping is observed. The work presented in this report aims to develop such a method and establish a Modal Analysis for Grid Operation (MANGO) procedure to aid grid operation decision making to increase inter-area modal damping. The procedure can provide operation suggestions (such as increasing generation or decreasing load) for mitigating inter-area oscillations.
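    As an illustration of the kind of measurement-based damping estimation that ModeMeter-style tools perform, the sketch below estimates a damping ratio from a single ringdown signal using the classical logarithmic decrement. This is a simplified stand-in, not the ModeMeter or MANGO algorithm itself:

```python
import numpy as np

def damping_ratio_log_decrement(signal):
    """Estimate the damping ratio of a single-mode ringdown oscillation
    from the logarithmic decrement between its first and last peaks.
    Assumes the peaks found are consecutive cycles of one dominant mode."""
    # locate interior local maxima (one per oscillation cycle)
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    if len(peaks) < 2:
        raise ValueError("need at least two peaks")
    a0, a1 = signal[peaks[0]], signal[peaks[-1]]
    n = len(peaks) - 1
    delta = np.log(a0 / a1) / n            # log decrement per cycle
    # exact relation between log decrement and damping ratio
    return delta / np.sqrt(4 * np.pi**2 + delta**2)
```

Low values of the returned ratio would flag a poorly damped mode, which is the condition the MANGO procedure is designed to act on.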

  9. Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014.

    PubMed

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-05-23

    We describe a gridded daily meteorology dataset consisting of precipitation and minimum and maximum temperature over Java Island, Indonesia, at 0.125°×0.125° (~14 km) resolution, spanning the 30 years 1985-2014. Importantly, this dataset represents a marked improvement over existing gridded datasets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite- or reanalysis-based products. Gap infilling and gridding were performed via Inverse Distance Weighting (IDW) interpolation (radius r of 25 km and power of influence α of 3 as optimal parameters), restricted to stations with at least 3,650 days (~10 years) of valid data. Cross-validation against the MSWEP and CHIRPS rainfall products shows that the gridded rainfall presented here gives the most reasonable performance. Visual inspection reveals increasing performance of the gridded precipitation from grid to watershed to island scale. The dataset, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
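    A minimal sketch of the IDW scheme with the reported optimal parameters (25 km radius, power 3). Station coordinates are assumed to already be in kilometres, and the function name is illustrative:

```python
import numpy as np

def idw(x, y, values, xi, yi, radius_km=25.0, power=3.0):
    """Inverse Distance Weighting at target point (xi, yi), using only
    stations within radius_km. Returns NaN if no station is in range."""
    d = np.hypot(np.asarray(x, float) - xi, np.asarray(y, float) - yi)
    mask = d <= radius_km
    if not mask.any():
        return float("nan")
    vals = np.asarray(values, float)[mask]
    dm = d[mask]
    if (dm == 0).any():                 # target coincides with a station
        return float(vals[dm == 0][0])
    w = 1.0 / dm ** power               # weights fall off as 1/d^3
    return float(np.sum(w * vals) / np.sum(w))
```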

  10. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
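    The decision logic described above (repartition only when the mesh is sufficiently unbalanced, and accept the remap only when the improved balance compensates its cost) can be sketched as follows. The tolerance and the simple gain/cost model are illustrative assumptions, not the paper's exact criteria:

```python
def should_repartition(loads, imbalance_tol=1.10):
    """After mesh adaption, repartition only if the max/average
    processor load exceeds the tolerance (here, 10% imbalance)."""
    avg = sum(loads) / len(loads)
    return max(loads) / avg > imbalance_tol

def accept_remap(gain_per_step, remap_cost, steps_until_next_adaption):
    """Accept the new partitioning only if the projected load-balance
    gain over the coming solver steps outweighs the one-time
    data-redistribution cost."""
    return gain_per_step * steps_until_next_adaption > remap_cost
```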

  11. Factorizable Upwind Schemes: The Triangular Unstructured Grid Formulation

    NASA Technical Reports Server (NTRS)

    Sidilkover, David; Nielsen, Eric J.

    2001-01-01

    The upwind factorizable schemes for the equations of fluid dynamics were introduced recently. They facilitate achieving Textbook Multigrid Efficiency (TME) and are also expected to result in solvers of unparalleled robustness. The approach itself is very general and may well become a general framework for large-scale computational fluid dynamics. In this paper we outline the triangular-grid formulation of the factorizable schemes. The derivation is based on the fact that the factorizable schemes can be expressed entirely using vector notation, without explicitly mentioning a particular coordinate frame. We describe the resulting discrete scheme in detail and present some computational results verifying the basic properties of the scheme/solver.

  12. Last millennium Northern Hemisphere summer temperatures from tree rings: Part II, spatially resolved reconstructions

    NASA Astrophysics Data System (ADS)

    Anchukaitis, Kevin J.; Wilson, Rob; Briffa, Keith R.; Büntgen, Ulf; Cook, Edward R.; D'Arrigo, Rosanne; Davi, Nicole; Esper, Jan; Frank, David; Gunnarson, Björn E.; Hegerl, Gabi; Helama, Samuli; Klesse, Stefan; Krusic, Paul J.; Linderholm, Hans W.; Myglan, Vladimir; Osborn, Timothy J.; Zhang, Peng; Rydval, Milos; Schneider, Lea; Schurer, Andrew; Wiles, Greg; Zorita, Eduardo

    2017-05-01

    Climate field reconstructions from networks of tree-ring proxy data can be used to characterize regional-scale climate changes, reveal spatial anomaly patterns associated with atmospheric circulation changes, radiative forcing, and large-scale modes of ocean-atmosphere variability, and provide spatiotemporal targets for climate model comparison and evaluation. Here we use a multiproxy network of tree-ring chronologies to reconstruct spatially resolved warm season (May-August) mean temperatures across the extratropical Northern Hemisphere (40-90°N) using Point-by-Point Regression (PPR). The resulting annual maps of temperature anomalies (750-1988 CE) reveal a consistent imprint of volcanism, with 96% of reconstructed grid points experiencing colder conditions following eruptions. Solar influences are detected at the bicentennial (de Vries) frequency, although at other time scales the influence of insolation variability is weak. Approximately 90% of reconstructed grid points show warmer temperatures during the Medieval Climate Anomaly when compared to the Little Ice Age, although the magnitude varies spatially across the hemisphere. Estimates of field reconstruction skill through time and over space can guide future temporal extension and spatial expansion of the proxy network.

  13. The value of residential photovoltaic systems: A comprehensive assessment

    NASA Technical Reports Server (NTRS)

    Borden, C. S.

    1983-01-01

    Utility-interactive photovoltaic (PV) arrays on residential rooftops appear to be a potentially attractive, large-scale application of PV technology. Results of a comprehensive assessment of the value (i.e., break-even cost) of utility-grid-connected residential photovoltaic power systems under a variety of technological and economic assumptions are presented. A wide range of allowable PV system costs is calculated for small (4.34-kWp ac) residential PV systems in various locales across the United States. Primary factors in this variation are differences in local weather conditions, utility-specific electric generation capacity, fuel types, customer-load profiles that affect purchase and sell-back rates, and non-uniform state tax considerations. Additional results from this analysis are: locations having the highest insolation values are not necessarily the most economically attractive sites; residential PV systems connected in parallel to the utility sell a high percentage of their energy back to the grid; and owner financial and tax assumptions cause large variations in break-even costs. Significant cost reduction and aggressive resolution of potential institutional impediments (e.g., liability, standards, metering, and technical integration) are required for a residential PV market to become a major electric-grid-connected energy-generation source.

  14. The value of residential photovoltaic systems: A comprehensive assessment

    NASA Astrophysics Data System (ADS)

    Borden, C. S.

    1983-09-01

    Utility-interactive photovoltaic (PV) arrays on residential rooftops appear to be a potentially attractive, large-scale application of PV technology. Results of a comprehensive assessment of the value (i.e., break-even cost) of utility-grid-connected residential photovoltaic power systems under a variety of technological and economic assumptions are presented. A wide range of allowable PV system costs is calculated for small (4.34-kWp ac) residential PV systems in various locales across the United States. Primary factors in this variation are differences in local weather conditions, utility-specific electric generation capacity, fuel types, customer-load profiles that affect purchase and sell-back rates, and non-uniform state tax considerations. Additional results from this analysis are: locations having the highest insolation values are not necessarily the most economically attractive sites; residential PV systems connected in parallel to the utility sell a high percentage of their energy back to the grid; and owner financial and tax assumptions cause large variations in break-even costs. Significant cost reduction and aggressive resolution of potential institutional impediments (e.g., liability, standards, metering, and technical integration) are required for a residential PV market to become a major electric-grid-connected energy-generation source.

  15. Algebraic dynamic multilevel method for compositional flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cusini, Matteo; Fryer, Barnaby; van Kruijsdijk, Cor; Hajibeygi, Hadi

    2018-02-01

    This paper presents the algebraic dynamic multilevel method (ADM) for compositional flow in three-dimensional heterogeneous porous media in the presence of capillary and gravitational effects. As a significant advancement over the ADM for immiscible flows (Cusini et al., 2016) [33], here, mass conservation equations are solved along with k-value-based thermodynamic equilibrium equations using a fully implicit (FIM) coupling strategy. Two different fine-scale compositional formulations are considered: (1) the natural-variables formulation and (2) the overall-compositions formulation. At each Newton iteration the fine-scale FIM Jacobian system is mapped to a dynamically defined (in space and time) multilevel nested grid. The appropriate grid resolution is chosen based on the contrast of user-defined fluid properties and on the presence of specific features (e.g., well source terms). Consistent mapping between different resolutions is performed by means of sequences of restriction and prolongation operators. While finite-volume restriction operators are employed to ensure mass conservation at all resolutions, various prolongation operators are considered. In particular, different interpolation strategies can be used for the different primary variables, and multiscale basis functions are chosen as pressure interpolators so that fine-scale heterogeneities are accurately accounted for across resolutions. Several numerical experiments are conducted to analyse the accuracy, efficiency and robustness of the method for both 2D and 3D domains. Results show that ADM provides accurate solutions while employing only a fraction of the grid cells used in fine-scale simulations. As such, it is a promising approach for large-scale simulation of multiphase flow in heterogeneous reservoirs with complex nonlinear fluid physics.

  16. A test-bed modeling study for wave resource assessment

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.

    2016-02-01

    Hindcasts from phase-averaged wave models are commonly used to estimate the standard statistics used in wave energy resource assessments. However, the research community and the wave energy converter (WEC) industry lack a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development and at different spatial scales, e.g., from small-scale pilot studies to large-scale commercial deployments. It is therefore necessary to evaluate current wave model codes, as well as their limitations and knowledge gaps for predicting sea states, in order to establish best wave-modeling practices and to identify future research needs to improve wave prediction for resource assessment. This paper presents the first phase of an ongoing modeling study addressing these concerns. The modeling study is being conducted at a test-bed site off the central Oregon coast using two of the most widely used third-generation wave models, WaveWatchIII and SWAN. A nested-grid modeling approach, with domain dimensions ranging from global to regional scales, was used to provide wave spectral boundary conditions to a local-scale model domain, which has a spatial dimension of about 60 km by 60 km and a grid resolution of 250-300 m. Model results simulated by WaveWatchIII and SWAN in a structured-grid framework are compared to NOAA wave buoy data for six wave parameters: omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.
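    Of the six parameters listed, omnidirectional wave power is commonly estimated from significant wave height and energy period via the standard deep-water approximation, J = ρg²Hs²Te/(64π). The sketch below uses that textbook formula; the function name and default constants are assumptions, not taken from the paper:

```python
import math

def omnidirectional_wave_power(hs_m, te_s, rho=1025.0, g=9.81):
    """Deep-water omnidirectional wave power flux in W per metre of
    wave crest: J = rho * g^2 / (64 * pi) * Hs^2 * Te, with Hs the
    significant wave height (m) and Te the energy period (s)."""
    return rho * g**2 / (64.0 * math.pi) * hs_m**2 * te_s
```

For example, Hs = 2 m and Te = 8 s gives roughly 16 kW per metre of crest, a typical order of magnitude for the Oregon coast in moderate seas.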

  17. An efficient grid layout algorithm for biological networks utilizing various biological attributes

    PubMed Central

    Kojima, Kaname; Nagasaki, Masao; Jeong, Euna; Kato, Mitsuru; Miyano, Satoru

    2007-01-01

    Background: Clearly visualized biopathways are a great help in understanding biological systems. However, manual drawing of large-scale biopathways is time-consuming. We previously proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, the distance measure between nodes, and subcellular localization information from Gene Ontology. That layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i) the initial layout is often very far from any local optimum because nodes are initially placed at random; (ii) from a biological viewpoint, human layouts still surpass automatic layouts in comprehensibility because, apart from subcellular localization, the algorithm does not fully utilize the biological information of pathways; and (iii) it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, though such an extension risks worsening the time complexity. Results: We propose a new grid layout algorithm. To address problem (i), we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii), we considered that an appropriate alignment of nodes sharing the same biological attribute is one of the most important factors for comprehension, and we defined a new score function that favors such configurations. To solve problem (iii), we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Though a naïve implementation would increase the time complexity by one order, we overcame this difficulty by devising a method that caches the differences between the scores of a layout and its possible updates. 
Conclusion: Layouts of the new grid layout algorithm are compared with those of the previous algorithm and a human layout in an endothelial cell model three times as large as the apoptosis model. The total cost of the result from the new grid layout algorithm is similar to that of the human layout. In addition, its convergence time is drastically reduced (by 40%). PMID:17338825
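    A toy version of the move-or-swap local search described in the Results section. The score function here is a simplified stand-in for the paper's (total Manhattan edge length minus a bonus for row-aligning nodes that share a biological attribute), and it omits the score-difference caching; all names are illustrative:

```python
import random

def layout_cost(pos, edges, attrs, align_bonus=0.5):
    """Toy score: total Manhattan edge length, minus a bonus when nodes
    sharing an attribute land on the same grid row."""
    cost = sum(abs(pos[u][0] - pos[v][0]) + abs(pos[u][1] - pos[v][1])
               for u, v in edges)
    for a in set(attrs.values()):
        rows = [pos[n][1] for n in attrs if attrs[n] == a]
        cost -= align_bonus * max(rows.count(r) - 1 for r in set(rows))
    return cost

def local_search(pos, edges, attrs, grid, iters=2000, seed=0):
    """Hill climbing that tries both moving one node to a free cell and
    swapping two nodes, keeping any change that lowers the cost."""
    rng = random.Random(seed)
    nodes = list(pos)
    best = layout_cost(pos, edges, attrs)
    for _ in range(iters):
        n = rng.choice(nodes)
        if rng.random() < 0.5:            # move n to a random free cell
            cell = (rng.randrange(grid), rng.randrange(grid))
            if cell in pos.values():
                continue
            old, pos[n] = pos[n], cell
            new = layout_cost(pos, edges, attrs)
            if new < best:
                best = new
            else:
                pos[n] = old              # revert a worsening move
        else:                             # swap the cells of two nodes
            m = rng.choice(nodes)
            pos[n], pos[m] = pos[m], pos[n]
            new = layout_cost(pos, edges, attrs)
            if new < best:
                best = new
            else:
                pos[n], pos[m] = pos[m], pos[n]
    return pos, best
```

The paper's contribution of caching per-update score differences matters precisely because recomputing a full score like `layout_cost` on every candidate move is the dominant cost in this kind of search.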

  18. Modelling Pesticide Leaching At Column, Field and Catchment Scales I. Analysis of Soil Variability At Field and Catchment Scales

    NASA Astrophysics Data System (ADS)

    Gärdenäs, A.; Jarvis, N.; Alavi, G.

    The spatial variability of soil characteristics was studied in a small agricultural catchment (Vemmenhög, 9 km²) at the field and catchment scales. This analysis serves as a basis for assumptions concerning the upscaling approaches used to model pesticide leaching from the catchment with the MACRO model (Jarvis et al., this meeting). The work focused on the spatial variability of two key soil properties for pesticide fate in soil: organic carbon and clay content. The Vemmenhög catchment is formed in a glacial till deposit in southernmost Sweden. The landscape is undulating (30-65 m a.s.l.) and 95% of the area is used for crop production (winter rape, winter wheat, sugar beet and spring barley). The climate is warm temperate. Soil samples for organic C and texture were taken on a small regular grid at Näsby Farm (144 m × 144 m, sampling distance 6-24 m, 77 points) and on an irregular large grid covering the whole catchment (sampling distance 333 m, 46 points). At the field scale, the organic C content was shown to be strongly related to landscape position and height (R² = 73%, p < 0.001, n = 50). The organic C content of hollows in the landscape is so high that they contribute little to the total loss of pesticides (Jarvis et al., this meeting). Clay content is also related to landscape position, being larger at hilltop locations and resulting in lower near-saturated hydraulic conductivity; hence, macropore flow can be expected to be more pronounced there (see also Roulier & Jarvis, this meeting). The variability in organic C was similar for the field and catchment grids, which made it possible to krige the organic C content of the whole catchment using data from both grids and an uneven lag distance.

  19. Advanced Grid-Friendly Controls Demonstration Project for Utility-Scale PV Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorgian, Vahan; O'Neill, Barbara

    A typical photovoltaic (PV) power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. The availability and dissemination of actual test data showing the viability of advanced utility-scale PV controls among all industry stakeholders can leverage PV's value from being simply an energy resource to providing additional ancillary services that range from variability smoothing and frequency regulation to power quality. Strategically partnering with a selected utility and/or PV power plant operator is a key condition for a successful demonstration project. The U.S. Department of Energy's (DOE's) Solar Energy Technologies Office selected the National Renewable Energy Laboratory (NREL) to be the principal investigator in a two-year project with goals to (1) identify potential partners, (2) develop a detailed scope of work and test plan for a field project to demonstrate the grid-friendly capabilities of utility-scale PV power plants, (3) facilitate conducting actual demonstration tests, and (4) disseminate test results among industry stakeholders via a joint NREL/DOE publication and participation in relevant technical conferences. The project implementation took place in FY 2014 and FY 2015. In FY14, NREL established collaborations with AES and First Solar Electric, LLC, to conduct demonstration testing on their utility-scale PV power plants in Puerto Rico and Texas, respectively, and developed test plans for each partner. Both the Puerto Rico Electric Power Authority and the Electric Reliability Council of Texas expressed interest in this project because of the importance of such advanced controls for the reliable operation of their power systems under high penetration levels of variable renewable generation. 
During FY15, testing was completed on both plants, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to provide various types of new grid-friendly controls.

  20. Local curvature entropy-based 3D terrain representation using a comprehensive Quadtree

    NASA Astrophysics Data System (ADS)

    Chen, Qiyu; Liu, Gang; Ma, Xiaogang; Mariethoz, Gregoire; He, Zhenwen; Tian, Yiping; Weng, Zhengping

    2018-05-01

    Large-scale 3D digital terrain modeling is a crucial part of many real-time applications in geoinformatics. In recent years, improved speed and precision in spatial data collection have made raw terrain data larger and more complex, which poses challenges for data management, visualization, and analysis. In this work, we present an effective and comprehensive 3D terrain representation based on local curvature entropy and a dynamic Quadtree. Level-of-detail (LOD) models of significant terrain features are employed to generate hierarchical terrain surfaces. To reduce abrupt changes in grid density between adjacent LODs, the local entropy of terrain curvature is used as the criterion for subdividing terrain grid cells. An efficient approach is then presented to eliminate cracks among the different LODs by directly updating the Quadtree, using an edge-based structure proposed in this work. Furthermore, a threshold on the local entropy stored in each parent node of the Quadtree flexibly controls the tree depth and dynamically schedules large-scale LOD terrain. Several experiments were implemented to test the performance of the proposed method. The results demonstrate that our method can construct LOD 3D terrain models with good performance in terms of computational cost and the preservation of terrain features. Our method has already been deployed in a geographic information system (GIS) for practical use, and it supports the real-time dynamic scheduling of large-scale terrain models more easily and efficiently.
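    The entropy-driven subdivision criterion described in this abstract can be sketched as follows: a cell of the heightmap is split while the Shannon entropy of its curvature values exceeds a threshold, so flat regions stay coarse and feature-rich regions refine. This is a minimal illustration, not the paper's implementation; the wrap-around Laplacian curvature estimate, the bin count, and the threshold and minimum cell size are all assumed parameters chosen for the sketch.

    ```python
    import numpy as np

    def local_entropy(values, bins=8):
        """Shannon entropy (bits) of a histogram over a cell's curvature values."""
        hist, _ = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def subdivide(curvature, x, y, size, threshold, min_size, cells):
        """Recursively split a square cell while its curvature entropy exceeds threshold."""
        block = curvature[y:y + size, x:x + size]
        if size <= min_size or local_entropy(block.ravel()) <= threshold:
            cells.append((x, y, size))  # leaf cell of the LOD grid
            return
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            subdivide(curvature, x + dx, y + dy, half, threshold, min_size, cells)

    # Curvature approximated by the absolute discrete Laplacian of a toy heightmap
    # (np.roll wraps at the borders, which is acceptable for this sketch).
    rng = np.random.default_rng(0)
    height = rng.random((64, 64))
    lap = np.abs(4 * height
                 - np.roll(height, 1, 0) - np.roll(height, -1, 0)
                 - np.roll(height, 1, 1) - np.roll(height, -1, 1))

    cells = []
    subdivide(lap, 0, 0, 64, threshold=1.5, min_size=4, cells=cells)
    ```

    The leaf cells tile the heightmap exactly, which is the invariant a crack-free LOD mesh builds on; the paper's edge-based structure additionally tracks shared edges between leaves of different depths so T-junctions can be stitched when the Quadtree is updated.
    
    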

Top