Sample records for large regular grids

  1. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
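
    The ghost-cell exchange described above is the standard halo-update pattern of parallel grid codes. The sketch below is a minimal, hypothetical illustration of that pattern for a 1-D domain decomposition using mpi4py; it is not Schnek's C++ API. Run it with, e.g., mpirun -n 4 python halo.py.

    ```python
    # Minimal halo (ghost-cell) exchange sketch for a 1-D-decomposed regular grid.
    # Illustrative only, not Schnek's interface. Requires mpi4py and an MPI runtime.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nx_local, nghost = 8, 1                          # interior cells, ghost width
    u = np.full(nx_local + 2 * nghost, float(rank))  # local field with ghost layers

    left  = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Send interior edge cells, receive into ghost cells (non-periodic boundaries).
    comm.Sendrecv(sendbuf=u[nghost:2 * nghost], dest=left,
                  recvbuf=u[-nghost:], source=right)
    comm.Sendrecv(sendbuf=u[-2 * nghost:-nghost], dest=right,
                  recvbuf=u[:nghost], source=left)

    print(f"rank {rank}: ghosts = {u[:nghost]}, {u[-nghost:]}")
    ```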

  2. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.

  3. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
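
    The structural point of the abstract is that, on a regular grid, neighboring interactions with a translation-invariant Green's function are discrete convolutions and can therefore be applied in O(N log N) with FFTs. The numpy sketch below illustrates that principle with an invented free-space Helmholtz-like kernel and made-up parameters; it is not the authors' FMM code, and volume weighting is omitted.

    ```python
    # Sketch: applying a translation-invariant Green's function to source strengths
    # on a regular 3-D grid via FFT-based convolution. Parameters are illustrative.
    import numpy as np

    n, h, k = 32, 0.1, 2.0 * np.pi             # grid size per axis, spacing, wavenumber
    rng = np.random.default_rng(0)
    q = rng.standard_normal((n, n, n))         # source strengths on the regular grid

    # Tabulate the kernel on relative offsets, zero-padded to 2n per axis so that
    # the circular FFT convolution reproduces the linear (aperiodic) convolution.
    idx = np.fft.fftfreq(2 * n, d=1.0 / (2 * n))      # integer offsets 0..n-1, -n..-1
    dx, dy, dz = np.meshgrid(idx, idx, idx, indexing="ij")
    r = h * np.sqrt(dx**2 + dy**2 + dz**2)
    # Self-interaction (r = 0) set to zero; np.maximum avoids division by zero.
    g = np.where(r > 0, np.exp(1j * k * r) / (4.0 * np.pi * np.maximum(r, h)), 0.0)

    qpad = np.zeros((2 * n,) * 3, dtype=complex)
    qpad[:n, :n, :n] = q
    field = np.fft.ifftn(np.fft.fftn(g) * np.fft.fftn(qpad))[:n, :n, :n]
    print(field.shape, abs(field).mean())
    ```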

  4. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  5. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  6. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

    We present a new traveltime tomography algorithm that images the subsurface with variable grids adapted automatically to geological structures. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate-gradient method, is a conventional approach to near-surface imaging. However, model regularization on a regular, evenly spaced grid assumes uniform resolution, whereas from a geophysical point of view long-wavelength, large-scale structures can be resolved reliably while details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates the grid cells within them for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion can concentrate on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. Using the variable grid traveltime tomography, the resulting model recovers the top-layer velocity and the basalt blocks more accurately while using far fewer grid cells. The field data were collected in an oil field in China, in a survey area where the subsurface structures are predominantly layered. The data set includes 476 shots and 1735 receivers, both at 10 m spacing. First-arrival traveltimes were picked from the seismograms for tomography; the reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold yields a blocky model with a well-resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids within small structures. Variable grid traveltime tomography thus provides an alternative imaging solution for blocky subsurface structures and builds a good starting model for waveform inversion and statics.

  7. Global Static Indexing for Real-Time Exploration of Very Large Regular Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V; Frank, R

    2001-07-23

    In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048³ grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192³ grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
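
    The reordering described above is a hierarchical space-filling-curve index. As an illustration of the idea only, the sketch below computes the closely related Morton (Z-order) index for 3-D grid nodes by bit interleaving; the paper's actual scheme differs in its hierarchical details.

    ```python
    # Sketch: Morton (Z-order) indexing of a regular 3-D grid. Space-filling-curve
    # reorderings of this kind keep spatially close nodes close on disk, which is
    # the essence of out-of-core traversal schemes like the one described above.
    def morton3d(i: int, j: int, k: int) -> int:
        """Interleave the bits of (i, j, k) into a single Z-order index."""
        def spread(v: int) -> int:
            out = 0
            for bit in range(21):          # supports grids up to 2**21 per axis
                out |= (v >> bit & 1) << (3 * bit)
            return out
        return spread(i) | spread(j) << 1 | spread(k) << 2

    # Nodes that are neighbors in space map to nearby linear indices:
    for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]:
        print(p, "->", morton3d(*p))
    ```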

  8. Artificial intelligence approach to planning the robotic assembly of large tetrahedral truss structures

    NASA Technical Reports Server (NTRS)

    Homemdemello, Luiz S.

    1992-01-01

    An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as a production system. The global data base consists of a hexagonal grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. By maintaining the multiple hierarchies explicitly in the model, the choice of a particular hierarchy is only made when needed, thus allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph search and backtracking control strategies.

  9. High throughput profile-profile based fold recognition for the entire human proteome.

    PubMed

    McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T

    2006-06-07

    In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on-demand, high-quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.

  10. Membrane potential dynamics of grid cells

    PubMed Central

    Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.

    2014-01-01

    During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta-amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible depolarizing ramps of membrane potential that were tightly correlated with firing fields, forming their characteristic signature. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of the theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984

  11. netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data

    NASA Astrophysics Data System (ADS)

    Zender, C. S.

    2015-12-01

    Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. Remote sensing, weather and climate modeling, and analysis communities face similar problems in handling SLD, including how to easily: 1. Specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids. 2. Bin, interpolate, average, or re-map SLD to regular grids. 3. Derive secondary data from given quality levels of SLD. These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO users can 1. Quickly project SLD onto the most useful regular grids for intercomparison. 2. Access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality control metrics. These capabilities improve interoperability and software reuse, and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly.
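
    As a toy version of item 2 above (binning SLD onto a regular grid), the following numpy sketch bin-averages scattered swath samples onto a regular 1° latitude-longitude grid. It is illustrative only; NCO's regridders are conservative and far more capable, and the data here are synthetic.

    ```python
    # Sketch: bin-averaging swath-like samples (scattered lat/lon coordinates)
    # onto a regular lat-lon grid, one of the SLD operations described above.
    import numpy as np

    rng = np.random.default_rng(1)
    lat = rng.uniform(-30, 30, 10_000)        # swath sample coordinates
    lon = rng.uniform(100, 160, 10_000)
    val = np.cos(np.radians(lat)) + 0.1 * rng.standard_normal(lat.size)

    lat_edges = np.linspace(-30, 30, 61)      # 1-degree target grid
    lon_edges = np.linspace(100, 160, 61)

    num, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges], weights=val)
    cnt, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
    gridded = np.where(cnt > 0, num / np.maximum(cnt, 1), np.nan)  # cell means
    print(gridded.shape, np.nanmean(gridded))
    ```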

  12. Cascading failures in ac electricity grids.

    PubMed

    Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan

    2016-09-01

    Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
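
    The paper simulates cascades with a dynamic AC model. As a loosely related toy only, the sketch below runs a Motter-Lai-style static overload cascade on a regular square grid using networkx, with edge betweenness as a crude proxy for line load; it illustrates the generic domino mechanism, not the AC dynamics studied above.

    ```python
    # Toy overload cascade on a regular square grid (Motter-Lai style), standing
    # in for, but distinct from, the dynamic AC model used in the paper.
    import networkx as nx

    G = nx.grid_2d_graph(10, 10)
    load = nx.edge_betweenness_centrality(G)
    capacity = {frozenset(e): 1.5 * l for e, l in load.items()}  # tolerance 1.5

    G.remove_edge(*max(load, key=load.get))   # trigger: cut the most loaded line
    while True:
        load = nx.edge_betweenness_centrality(G)
        failed = [e for e, l in load.items() if l > capacity[frozenset(e)]]
        if not failed:
            break                             # no further overloads: cascade ends
        G.remove_edges_from(failed)

    giant = max(nx.connected_components(G), key=len)
    print(f"largest connected component: {len(giant)} of 100 nodes")
    ```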

  13. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
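
    A genuine Schwarz-Christoffel map requires a specialized solver, but the property the grid generation relies on, namely that conformal maps send a regular grid to an orthogonal curvilinear one, can be checked with any analytic conformal map. The sketch below uses w = z² as a stand-in and verifies grid orthogonality numerically.

    ```python
    # Sketch: why conformal maps suit orthogonal curvilinear grid generation.
    # An analytic map w = z**2 stands in for a Schwarz-Christoffel map; we check
    # that the image of a regular grid remains (numerically) orthogonal.
    import numpy as np

    u, v = np.meshgrid(np.linspace(0.2, 1.0, 20), np.linspace(0.2, 1.0, 20),
                       indexing="ij")
    w = (u + 1j * v) ** 2                  # conformally mapped grid nodes

    # Tangent vectors along the two grid directions (central differences).
    wu = w[2:, 1:-1] - w[:-2, 1:-1]
    wv = w[1:-1, 2:] - w[1:-1, :-2]
    dot = wu.real * wv.real + wu.imag * wv.imag
    print("max |cos angle|:", np.max(np.abs(dot) / (np.abs(wu) * np.abs(wv))))
    ```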

  14. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  15. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-06-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  16. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new stratified sampling procedure is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of spatially structured processes. Since the distribution of varroa mites on sticky boards is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.
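
    A toy numpy experiment in the spirit of the comparison above: estimating a total count on a synthetic, frame-structured board from a regular-grid (stratified) sample versus a simple random sample of the same size. The board model and sample sizes are invented for illustration and do not follow the authors' protocol.

    ```python
    # Sketch: regular-grid stratified sampling vs. simple random sampling for
    # estimating a count on a spatially structured board. Toy setup: mites
    # cluster under a few "frames" (stripes of elevated counts).
    import numpy as np

    rng = np.random.default_rng(2)
    board = rng.poisson(0.2, size=(100, 100)).astype(float)
    board[:, 10::25] += rng.poisson(3.0, size=(100, 4))  # frame-driven structure

    def estimate(cells):
        """Scale the mean count over sampled cells up to the whole board."""
        return board.size * np.mean([board[i, j] for i, j in cells])

    k = 16                                               # samples per estimate
    grid = [(i, j) for i in range(12, 100, 25) for j in range(12, 100, 25)]
    rand = [tuple(rng.integers(0, 100, 2)) for _ in range(k)]
    print("true:", board.sum(),
          "stratified:", estimate(grid), "random:", estimate(rand))
    ```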

  17. High Order Numerical Simulation of Waves Using Regular Grids and Non-conforming Interfaces

    DTIC Science & Technology

    2013-10-06

    We study the propagation of waves over large regions of space with smooth, but not necessarily constant, material characteristics, separated into sub-domains by interfaces of arbitrary shape.

  18. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques.
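
    The core iteration is easy to state. The sketch below is a dense-SVD toy of Soft-Impute on a small synthetic matrix; the paper's scalability comes from exploiting problem structure to compute low-rank SVDs cheaply, which this sketch deliberately does not do.

    ```python
    # Sketch of the Soft-Impute iteration: repeatedly replace missing entries
    # with values from a soft-thresholded SVD. Dense SVD kept for clarity.
    import numpy as np

    def soft_impute(X, mask, lam, n_iters=100):
        """X: data with zeros at missing entries; mask: True where observed."""
        Z = np.zeros_like(X)
        for _ in range(n_iters):
            # Fill missing entries from the current estimate, keep observed ones.
            filled = np.where(mask, X, Z)
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            Z = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-threshold singular values
        return Z

    rng = np.random.default_rng(3)
    A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))  # rank-5 truth
    mask = rng.random(A.shape) < 0.5
    Z = soft_impute(np.where(mask, A, 0.0), mask, lam=1.0)
    print("held-out RMSE:", np.sqrt(np.mean((Z - A)[~mask] ** 2)))
    ```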

  19. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  20. Incompressible flow simulations on regularized moving meshfree grids

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2017-11-01

    A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
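
    The grid-regularization step lends itself to a compact illustration. The sketch below, which is not the authors' GFD solver, shifts nodes apart wherever neighbors sit closer than a rest length, i.e., it relaxes compressed pseudosprings, using scipy's cKDTree for neighbor search.

    ```python
    # Sketch: grid regularization by relaxing compressed "pseudosprings"; each
    # node is shifted away from neighbors closer than a rest length r0.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(4)
    pts = rng.random((400, 2))             # a distorted, clumpy meshfree "grid"
    r0, step = 0.05, 0.3                   # rest length and relaxation factor

    for _ in range(50):
        tree = cKDTree(pts)
        pairs = tree.query_pairs(r0, output_type="ndarray")  # compressed springs
        if pairs.size == 0:
            break
        d = pts[pairs[:, 0]] - pts[pairs[:, 1]]
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        push = step * (r0 - dist) * d / np.maximum(dist, 1e-12)
        np.add.at(pts, pairs[:, 0],  0.5 * push)             # shift both endpoints
        np.add.at(pts, pairs[:, 1], -0.5 * push)

    print("min neighbor spacing:", cKDTree(pts).query(pts, k=2)[0][:, 1].min())
    ```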

  21. Simulations of the transport and deposition of ¹³⁷Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer ¹³⁷Cs after accidental releases. For that reason, two horizontal resolutions were deployed in the model: a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The best choice for model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of ¹³⁷Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories despite the large uncertainties in the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most European regions (similar to the Atlas), as well as the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-level run, due to the increase in the number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of ¹³⁷Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good accordance with previous reports and in the same range as for the recent accident in Japan. The high performance of the LMDzORINCA model for ¹³⁷Cs reinforces the importance of atmospheric modeling in emergency cases for gathering information to protect the population from the adverse effects of radiation.

  22. Simulations of the transport and deposition of ¹³⁷Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer ¹³⁷Cs after accidental releases. For that reason, two horizontal resolutions were deployed in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); the third uses the regular grid with 39 vertical levels (RG39L); and the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated with the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of ¹³⁷Cs from most European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories despite the large uncertainties in the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most European regions (similar to De Cort et al., 1998), as well as the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-level run, due to the increase in the number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of ¹³⁷Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good accordance with previous reports and in the same range as for the recent accident in Japan. The high performance of the LMDZORINCA model for ¹³⁷Cs reinforces the importance of atmospheric modelling in emergency cases for gathering information to protect the population from the adverse effects of radiation.

  23. Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.

  24. 3D data processing with advanced computer graphics tools

    NASA Astrophysics Data System (ADS)

    Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott

    2012-09-01

    Often, the 3-D raw data from an optical profilometer contain spiky noise and an irregular grid, which make them difficult to analyze and, because of their enormous size, difficult to store. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data and by rapidly re-sampling the data onto regular grids at any pixel size and any orientation using advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
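
    The re-sampling step can be sketched with scipy: interpolate scattered samples onto a regular grid of chosen pixel size. The example below uses a synthetic surface and scipy.interpolate.griddata as a stand-in for the paper's graphics-based resampler; spike removal is omitted.

    ```python
    # Sketch: re-sampling scattered (irregular-grid) surface samples onto a
    # regular grid, as in the profilometer pipeline described above.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(5)
    xy = rng.random((5000, 2))                        # irregular sample locations
    z = np.sin(4 * xy[:, 0]) * np.cos(3 * xy[:, 1])   # measured heights

    xi = np.linspace(0.05, 0.95, 128)                 # regular target grid
    X, Y = np.meshgrid(xi, xi, indexing="ij")
    Z = griddata(xy, z, (X, Y), method="linear")      # barycentric interpolation
    print(Z.shape, np.nanmax(np.abs(Z - np.sin(4 * X) * np.cos(3 * Y))))
    ```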

  25. Quadtree of TIN: a new algorithm of dynamic LOD

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Real-time visualization of large-scale digital elevation models currently relies mainly on quadtree-based regular grids (GRID) and on triangle-simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a more refined way to represent the terrain surface, but its data structure is complex, making fast view-dependent level-of-detail (LOD) representation difficult. GRID realizes terrain LOD simply but produces a higher triangle count. This paper presents a new algorithm that takes full advantage of the merits of both methods. It combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, preserving detail according to viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieves dynamic, visual multi-resolution rendering of large-scale terrain in real time.

  26. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in a significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids is shown to perform best in terms of the product metric of speed and efficiency.
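
    A toy experiment conveys why proliferation speeds coverage: K parallel walkers reach a target coverage in roughly 1/K the time of a single walker while taking a similar total number of steps. The sketch below uses a fixed number of walkers on a periodic square grid; the paper's temporally modulated proliferation rate is not reproduced.

    ```python
    # Toy experiment behind the coverage claim: on a periodic square grid, K
    # parallel random walkers reach a target coverage in roughly 1/K the time
    # of a single walker (similar total step count, i.e. similar "bandwidth").
    import numpy as np

    def steps_to_cover(n=50, walkers=1, target=0.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = np.zeros((walkers, 2), dtype=int)
        seen = np.zeros((n, n), dtype=bool)
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
        t = 0
        while seen.sum() < target * n * n:
            pos = (pos + moves[rng.integers(0, 4, walkers)]) % n   # periodic grid
            seen[pos[:, 0], pos[:, 1]] = True
            t += 1
        return t

    for k in (1, 4, 16):
        print(f"{k:2d} walkers: {steps_to_cover(walkers=k)} steps to 50% coverage")
    ```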

  27. Considering the Spatial Layout Information of Bag of Features (BoF) Framework for Image Classification.

    PubMed

    Mu, Guangyu; Liu, Ying; Wang, Limin

    2015-01-01

    Spatial pooling methods such as spatial pyramid matching (SPM) are crucial in the bag-of-features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. In practice, however, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extended Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute the grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms traditional spatial pooling methods and is competitive with state-of-the-art classification accuracy on several challenging image datasets.
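
    For reference, the baseline that SDP improves upon can be sketched compactly: SPM-style pooling concatenates visual-word histograms over regular grids at several pyramid levels. The numpy sketch below implements that baseline on synthetic tokens; SDP itself would replace the fixed grids with learned per-word spatial distributions.

    ```python
    # Baseline sketch: SPM-style pooling, i.e. concatenated visual-word
    # histograms over regular grids at several pyramid levels.
    import numpy as np

    def spm_descriptor(xy, words, n_words, levels=(1, 2, 4)):
        """xy: token coordinates in [0,1)^2; words: visual-word id per token."""
        feats = []
        for g in levels:                       # g x g regular grid at this level
            cell = np.floor(xy * g).astype(int).clip(0, g - 1)
            cell_id = cell[:, 0] * g + cell[:, 1]
            hist = np.zeros((g * g, n_words))
            np.add.at(hist, (cell_id, words), 1.0)
            feats.append(hist.ravel())
        desc = np.concatenate(feats)
        return desc / max(desc.sum(), 1.0)     # normalize token mass

    rng = np.random.default_rng(6)
    xy = rng.random((300, 2))
    words = rng.integers(0, 50, 300)
    print(spm_descriptor(xy, words, n_words=50).shape)  # (1+4+16) * 50 = 1050
    ```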

  28. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes, with weighted and unweighted least-squares (LSQ) gradient reconstruction, and six cell-centered schemes, comprising two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted LSQ gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best combination of low complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.

  29. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches to nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
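
    Fixed-parameter Tikhonov regularization itself is compact to state: minimize ||Ax − b||² + λ||x||². The sketch below applies it to a toy ill-conditioned deconvolution (the forward operator is an invented smoothing kernel, not an acoustic propagator) and shows how the choice of λ trades noise amplification against bias.

    ```python
    # Sketch: fixed-parameter Tikhonov regularization for an ill-conditioned
    # linear inversion, the kind of smoothing step the study evaluates.
    import numpy as np

    n = 100
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 4.0) ** 2)  # smoothing operator
    b = A @ x_true + 1e-3 * np.random.default_rng(7).standard_normal(n)

    def tikhonov(A, b, lam):
        """Minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

    for lam in (1e-8, 1e-3, 1e-1):   # too small: noise blows up; too big: bias
        err = np.linalg.norm(tikhonov(A, b, lam) - x_true) / np.linalg.norm(x_true)
        print(f"lambda = {lam:g}: relative error {err:.3f}")
    ```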

  30. Traveling in the dark: the legibility of a regular and predictable structure of the environment extends beyond its borders.

    PubMed

    Yaski, Osnat; Portugali, Juval; Eilam, David

    2012-04-01

    The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout, and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that rats would favor travel in the area with regular order, but found that activity in the area with irregular object layout did not differ from activity in the area with grid layout, even when the irregular half included a salient landmark. Thus, the grid impact in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to the salient landmark. Altogether, three environmental properties: (i) regular and predictable structure; (ii) salience of landmarks; and (iii) accessibility, hierarchically shape the paths of progression in a dark environment.

  31. Effects of high-frequency damping on iterative convergence of implicit viscous solver

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko

    2017-11-01

    This paper discusses the effects of high-frequency damping on the iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. On both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and α = 4/3 are suitable values for robust and efficient computations, with α = 4/3 recommended for the diffusion equation because it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver, with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner, is recommended for practical computations, as it provides robust and efficient convergence for a wide range of α.

  32. Improving ATLAS grid site reliability with functional tests using HammerCloud

    NASA Astrophysics Data System (ADS)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability to successfully execute user and production jobs. We report on the development, optimization, and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user and production jobs from being sent to problematic sites.

  33. A critical remark on the applicability of E-OBS European gridded temperature data set for validating control climate simulations

    NASA Astrophysics Data System (ADS)

    Kyselý, Jan; Plavcová, Eva

    2010-12-01

    The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in the tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are on average almost 2°C too warm in E-OBS. A large bias is also found for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and the limitations on its applicability for evaluating RCMs stem primarily from (1) the insufficient density of station observations used for the interpolation, including the fact that the available stations may not be representative of a wider area, and (2) the inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essential for more reliable validation of climate models against recent climate on a continental scale.

  34. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

    Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called interpolation. Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data and then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation of the best estimate, kriging is the optimal interpolation method in statistical terms. The kriging algorithm produces an unbiased prediction, as well as the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is nevertheless not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation that takes advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of memory required, which makes the technique feasible on almost any processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates, even in less dense data files.
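
    The linear system behind kriging's "optimal in statistical terms" claim fits in a few lines. The sketch below implements plain ordinary kriging with an isotropic Gaussian covariance on scattered points; the paper's directional, grid-structure-aware optimizations are deliberately not reproduced.

    ```python
    # Sketch: ordinary kriging, i.e. the best linear unbiased estimator given a
    # covariance model. Isotropic Gaussian covariance kept for brevity.
    import numpy as np

    def ordinary_krige(pts, vals, query, rng_len=0.3, sill=1.0):
        cov = lambda d: sill * np.exp(-((d / rng_len) ** 2))
        n = len(pts)
        K = np.empty((n + 1, n + 1))
        K[:n, :n] = cov(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
        K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0     # unbiasedness constraint
        rhs = np.append(cov(np.linalg.norm(pts - query, axis=-1)), 1.0)
        w = np.linalg.solve(K, rhs)                   # weights + Lagrange multiplier
        estimate = w[:n] @ vals
        variance = sill - w @ rhs                     # kriging variance at query
        return estimate, variance

    rng = np.random.default_rng(8)
    pts = rng.random((30, 2))
    vals = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])
    print(ordinary_krige(pts, vals, np.array([0.5, 0.5])))
    ```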

  35. Earth Observations taken by the Expedition 35 Crew

    NASA Image and Video Library

    2013-03-16

    ISS035-E-005438 (16 March 2013) --- One of the Expedition 35 crew members on the International Space Station used a still camera with a 400 millimeter lens to record this nocturnal image of the Phoenix, Arizona area. Like many large urban areas of the central and western United States, the Phoenix metropolitan area is laid out along a regular grid of city blocks and streets. While visible during the day, this grid is most evident at night, when the pattern of street lighting is clearly visible from above – in the case of this photograph, from the low Earth orbit vantage point of the International Space Station. The urban grid form encourages growth of a city outwards along its borders, by providing optimal access to new real estate. Fueled by the adoption of widespread personal automobile use during the 20th century, the Phoenix metropolitan area today includes 25 other municipalities (many of them largely suburban and residential in character) linked by a network of surface streets and freeways. The image area includes parts of several cities in the metropolitan area including Phoenix proper (right), Glendale (center), and Peoria (left). While the major street grid is oriented north-south, the northwest-southeast oriented Grand Avenue cuts across it at image center. Grand Avenue is a major transportation corridor through the western metropolitan area; the lighting patterns of large industrial and commercial properties are visible along its length. Other brightly lit properties include large shopping centers, strip centers, and gas stations which tend to be located at the intersections of north-south and east-west trending streets. While much of the land area highlighted in this image is urbanized, there are several noticeably dark areas. The Phoenix Mountains at upper right are largely public park and recreational land. To the west (image lower left), agricultural fields provide a sharp contrast to the lit streets of neighboring residential developments. The Salt River channel appears as a dark ribbon within the urban grid at lower right.

  36. GLOFRIM v1.0 - A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.

    2017-10-01

    We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for coupling an ensemble of models. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP on a synthetic test case. Results show that for sub-critical flow conditions, the discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin, both to test the framework thoroughly and to perform a first-ever large-scale benchmark of flexible versus regular grids. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82, compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area yielded a critical success index of 0.46, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent, as we control for model forcing and boundary conditions. This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed to be open access and easily extendable, and thus we hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, this would capture more locally relevant processes and allow for more robust model inter-comparison, benchmarking, and ensemble simulations of flood hazard at large scales.

  17. CFD analysis of turbopump volutes

    NASA Technical Reports Server (NTRS)

    Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken

    1993-01-01

    An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and the overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined that meets engineering design needs in a timely manner. Thus, a compromise must be established among geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid approximated the volute tongue with zero thickness, final computations should account more accurately for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high-aspect-ratio cells, which can reduce solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies, in an effort to expedite the overall CFD analysis process.

  18. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.

    PubMed

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with a similar degree of homogeneity generate simulated spectra of different quality. This paper evaluates the grids from an EPR perspective, by defining two metrics that depend on the spin system characteristics and the grid's Voronoi tessellation. The first metric determines whether the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies whether the adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method. Copyright © 2014 Elsevier Inc. All rights reserved.
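
    As an illustration of the second metric, the Python sketch below computes the common range of the resonance-field intervals of two adjacent Voronoi cells. The function B_res, standing for the spin system's orientation-to-resonance-field map, is a placeholder; the construction is our reading of the metric as described, not the paper's code.

        import numpy as np

        def field_interval(cell_orientations, B_res):
            """Resonance-field interval [Bmin, Bmax] spanned by the
            orientations sampled inside one Voronoi cell."""
            fields = np.array([B_res(u) for u in cell_orientations])
            return fields.min(), fields.max()

        def epr_overlap(cell_a, cell_b, B_res):
            """Common range of the resonance-field intervals of two adjacent
            cells; a non-positive value means the cells do not overlap in the
            EPR sense."""
            a_lo, a_hi = field_interval(cell_a, B_res)
            b_lo, b_hi = field_interval(cell_b, B_res)
            return min(a_hi, b_hi) - max(a_lo, b_lo)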

  19. Mass production of extensive air showers for the Pierre Auger Collaboration using Grid Technology

    NASA Astrophysics Data System (ADS)

    Lozano Bahilo, Julio; Pierre Auger Collaboration

    2012-06-01

    When ultra-high energy cosmic rays enter the atmosphere they interact, producing extensive air showers (EAS), which are the objects studied by the Pierre Auger Observatory. The number of particles involved in an EAS at these energies is of the order of billions, and the generation of a single simulated EAS requires many hours of computing time with current processors. In addition, the storage space consumed by the output of a single simulated EAS is very large. Therefore, we have to make use of Grid resources to be able to generate sufficient quantities of showers for our physics studies within reasonable time periods. We have developed a set of highly automated scripts, written in common scripting languages, to deal with the large number of jobs that we have to submit regularly to the Grid. In spite of the small number of sites supporting our Virtual Organization (VO), we have reached the top spot in CPU consumption among non-LHC (Large Hadron Collider) VOs within EGI (European Grid Infrastructure).

  20. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

    Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions that fit the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
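
    For concreteness, the prototypical regularization from the theory of ill-posed problems can be written as a penalized least-squares problem. The LaTeX fragment below is a generic sketch in our own notation, not the paper's: H maps a tidal field u to the observations d, L encodes the a priori smoothness assumption, and α sets the strength of the regularization.

        % Tikhonov regularization of an ill-posed data-fitting problem
        \begin{equation}
          u_\alpha = \arg\min_{u}\,
            \Bigl( \lVert H u - d \rVert^{2}
                   + \alpha \, \lVert L u \rVert^{2} \Bigr),
          \qquad \alpha > 0 .
        \end{equation}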

  1. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids

    PubMed Central

    Boschitsch, Alexander H.; Fenley, Marcia O.

    2011-01-01

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers, thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous assessment of the solution accuracy; (ii) a pair of low dielectric charged spheres embedded in an ionic solvent to compute electrostatic interaction free energies as a function of the distance between sphere centers; (iii) surface potentials of proteins, nucleic acids and their larger-scale assemblies such as ribosomes; and (iv) electrostatic solvation free energies and their salt sensitivities – obtained with both linear and nonlinear Poisson-Boltzmann equation – for a large set of proteins. These latter results along with timings can serve as benchmarks for comparing the performance of different PBE solvers. PMID:21984876

  2. Fibonacci Grids

    NASA Technical Reports Server (NTRS)

    Swinbank, Richard; Purser, James

    2006-01-01

    Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, is what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
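
    The generic construction behind such grids is short enough to sketch: points are wound along a golden-angle spiral with uniform spacing in the z-coordinate, giving near-uniform, nearly isotropic coverage with equal area per point. The sketch below is the textbook Fibonacci-spiral construction, which may differ in detail from the generalizations developed in the paper.

        import numpy as np

        def fibonacci_sphere(n: int) -> np.ndarray:
            """Return n unit vectors arranged on a Fibonacci spiral."""
            golden_angle = np.pi * (3.0 - np.sqrt(5.0))    # ~2.39996 rad
            k = np.arange(n)
            z = 1.0 - 2.0 * (k + 0.5) / n                  # uniform in z => equal areas
            r = np.sqrt(1.0 - z * z)                       # radius of each latitude ring
            theta = golden_angle * k                       # longitude advances by the golden angle
            return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

        points = fibonacci_sphere(10000)                   # (10000, 3) array of unit vectors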

  3. If Pythagoras Had a Geoboard...

    ERIC Educational Resources Information Center

    Ewbank, William A.

    1973-01-01

    Finding areas on square grid and on isometric grid geoboards is explained, then the Pythagorean Theorem is investigated when regular n-gons and when similar figures are erected on the sides of a right triangle. (DT)

  4. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
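
    Of the three regularizers compared, TSVD is the simplest to sketch: the interpolator's pseudoinverse is formed with small singular values discarded, which stabilises the inversion when the interpolation kernel grows and the problem becomes ill-conditioned. The numpy sketch below shows the idea under our own truncation convention; it is not the paper's implementation.

        import numpy as np

        def tsvd_pinv(A: np.ndarray, rel_tol: float = 1e-6) -> np.ndarray:
            """Truncated-SVD pseudoinverse: singular values below
            rel_tol * s_max are treated as zero."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            keep = s > rel_tol * s[0]          # discard small singular values
            s_inv = np.zeros_like(s)
            s_inv[keep] = 1.0 / s[keep]
            return (Vt.T * s_inv) @ U.T        # V diag(1/s) U^T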

  5. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.

    1999-01-01

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.

  6. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, J.; Magnera, T.F.; David, D.E.; Harrison, R.M.

    1999-03-02

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures. 9 figs.

  7. An Evaluation of Recently Developed RANS-Based Turbulence Models for Flow Over a Two-Dimensional Block Subjected to Different Mesh Structures and Grid Resolutions

    NASA Astrophysics Data System (ADS)

    Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando

    2016-04-01

    Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents an idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model, in order to analyze grid independence. Three grid resolutions (fine, medium and coarse) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, and nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.

  8. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regularly gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregularly gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which also depends on the distance from the image phase centre. The effect is often referred to as 'decorrelation' in the uv-space, which is equivalent in the source domain to 'smearing'. This work discusses and implements a regularly gridded sampling scheme in uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregularly gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all the baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
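
    The core of baseline-dependent sampling can be sketched in a few lines: the time-averaging interval is scaled down with baseline length so that decorrelation stays roughly constant across baselines. The reference values below are hypothetical placeholders, not MeerKAT or EVN parameters.

        def averaging_interval(baseline_m: float, b_min_m: float = 29.0,
                               t_max_s: float = 8.0) -> float:
            """Time-averaging interval for one baseline: the shortest
            baseline b_min gets the longest interval t_max, and longer
            baselines (which sweep the uv-plane faster) get proportionally
            shorter intervals."""
            return t_max_s * b_min_m / max(baseline_m, b_min_m)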

  9. Economics of electricity

    NASA Astrophysics Data System (ADS)

    Erdmann, G.

    2015-08-01

    The following text is an introduction to the economic theory of electricity supply and demand. The basic approach of economics has to reflect the physical peculiarities of electric power, which is based on the directed movement of electrons from the minus pole to the plus pole of a voltage source. The regular grid supply of electricity is characterized by a largely constant frequency and voltage; thus, from a physical point of view, electricity is a homogeneous product. From an economic point of view, however, electricity is not homogeneous. Wholesale electricity prices show significant fluctuations over time and between regions, because this product is not storable (in relevant quantities) and there may be bottlenecks in the transmission and distribution grids. The associated non-homogeneity is the starting point of the economic analysis of electricity markets.

  10. The ARM Best Estimate 2-dimensional Gridded Surface

    DOE Data Explorer

    Xie, Shaocheng; Qi, Tang

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
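
    A minimal sketch of this kind of station-to-regular-grid interpolation, using scipy's griddata. The coordinates, variable, and grid spacing below are invented for illustration and do not reproduce the actual ARMBE2DGRID procedure.

        import numpy as np
        from scipy.interpolate import griddata

        # Scattered station measurements (lon, lat) and values, e.g. temperature in K
        stations = np.array([[-97.5, 36.6], [-98.0, 36.9], [-97.2, 37.1]])
        values = np.array([301.2, 300.4, 299.8])

        # Target regular 2D grid over the region
        lon = np.linspace(-98.5, -96.5, 81)
        lat = np.linspace(36.0, 37.5, 61)
        LON, LAT = np.meshgrid(lon, lat)

        # Linear interpolation onto the regular grid (NaN outside the station hull)
        gridded = griddata(stations, values, (LON, LAT), method="linear")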

  11. Distributed Wavelet Transform for Irregular Sensor Network Grids

    DTIC Science & Technology

    2005-01-01

    implement it in a multi-hop, wireless sensor network; and illustrate with several simulations. The new transform performs on par with conventional wavelet methods in a head-to-head comparison on a regular grid of sensor nodes.

  12. Fast and accurate 3D tensor calculation of the Fock operator in a general basis

    NASA Astrophysics Data System (ADS)

    Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.

    2012-11-01

    The present paper contributes to the construction of a "black-box" 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and the nuclear potential operators by tensor operations using a generic set of basis functions with low separation rank, discretized on a fine N×N×N Cartesian grid. We prove a Ch² error estimate in terms of the mesh parameter, h=O(1/N), which allows one to gain a guaranteed accuracy of the core Hamiltonian part of the Fock operator as h→0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid size N of several tens of thousands to ensure high resolution. Modern tensor-formatted arithmetics of complexity O(N), or even O(log N), practically relax the limitations on the grid size. Our tensor-based approach allows us to improve significantly on the standard basis sets in quantum chemistry by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N³ ∼ 10¹⁵.

  13. Detector shape in hexagonal sampling grids

    NASA Astrophysics Data System (ADS)

    Baronti, Stefano; Capanni, Annalisa; Romoli, Andrea; Santurri, Leonardo; Vitulli, Raffaele

    2001-12-01

    Recent improvements in CCD technology make hexagonal sampling attractive for practical applications and bring new interest to this topic. In the following, the performance of hexagonal sampling is analyzed under general assumptions and compared with the performance of conventional rectangular sampling. This analysis takes into account both the lattice form (square, rectangular, hexagonal, and regular hexagonal) and the pixel shape. The analyzed hexagonal grid is not based a priori on a regular hexagonal tessellation, i.e., no constraints are placed on the ratio between the sampling frequencies in the two spatial directions. By assuming an elliptic support for the spectrum of the signal being sampled, sampling conditions are expressed for a generic hexagonal sampling grid, and a comparison with the well-known sampling conditions for a comparable rectangular lattice is performed. Further, by considering for the sake of clarity a spectrum with a circular support, the comparison is performed under the assumption of the same number of pixels per unit surface, and the particular case of a regular hexagonal sampling grid is also considered. A regular hexagonal lattice with a regular hexagonal sensitivity shape of the detector elements results as the best trade-off among the proposed sampling requirements. Concerning the detector shape, the hexagonal is more advantageous than the rectangular. To show this, a figure of merit is defined which takes into account that the MTF (modulation transfer function) of a hexagonal detector is not separable, unlike that of a rectangular detector. As a final result, octagonal shape detectors are compared to those with rectangular and hexagonal shape in the two hypotheses of equal and ideal fill factor, respectively.
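
    For reference, a hexagonal sampling lattice is easy to generate: rows spaced dx·√3/2 apart, with alternate rows offset by dx/2. The sketch below is generic; the sample saving quoted in the comment is the classical result for circularly band-limited spectra that the comparison above builds on, not a number from this paper.

        import numpy as np

        def hex_lattice(nx: int, ny: int, dx: float = 1.0) -> np.ndarray:
            """Return (nx*ny, 2) sample positions of a hexagonal lattice.
            For a circularly band-limited signal, such a lattice needs
            roughly 13% fewer samples than a square lattice at the same
            alias-free bandwidth (classical result)."""
            pts = []
            dy = dx * np.sqrt(3.0) / 2.0              # row spacing
            for j in range(ny):
                x0 = 0.5 * dx if j % 2 else 0.0       # offset every other row
                pts.extend((x0 + i * dx, j * dy) for i in range(nx))
            return np.array(pts)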

  14. Assessment of sub-grid scale dispersion closure with regularized deconvolution method in a particle-laden turbulent jet

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Zhao, Xinyu; Ihme, Matthias

    2017-11-01

    Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulations (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was proposed as the closure for the SGS dispersion in a counterflow spray that was studied numerically using a finite difference method on a structured mesh, with a presumed form of the LES filter. In the present study, this technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES are compared against experiments and direct numerical simulations (DNS).

  15. 2D Unstructured Grid Based Constrained Inversion of Magnetic Data Using Fuzzy C Means Clustering and Lithology Classification

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Singh, A.; Sharma, S. P.

    2016-12-01

    Regular grid discretization is often utilized to define complex geological models. However, this subdivision strategy represents the topographic observation surface with lower precision. We have developed a new 2D unstructured-grid-based inversion of magnetic data for models that include topography. It consolidates prior parametric information into a deterministic inversion scheme to enhance the boundaries between different lithologies, based on the magnetic susceptibility distribution recovered by the inversion. The resulting susceptibility model satisfies both the observed magnetic data and the parametric information, and can therefore represent the earth better than geophysical inversion models that honor only the observed magnetic data. Geophysical inversion and lithology classification are generally treated as two autonomous methodologies and connected in a serial way; the presented inversion strategy integrates these two parts into a unified scheme. To reduce storage space and computation time, the conjugate gradient method is used, resulting in feasible and practical imaging inversion of magnetic data with a large number of triangular grid cells. The efficacy of the presented inversion is demonstrated using two synthetic examples and one field data example.

  16. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  17. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  18. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
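
    As a reminder of the underlying model being protected, the sketch below fits a plain L2-regularized logistic regression with scikit-learn on synthetic data; none of the paper's distributed-computing or cryptographic machinery is reproduced here.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))                       # 200 samples, 5 features
        y = (X @ rng.normal(size=5) + 0.3 * rng.normal(size=200) > 0).astype(int)

        # C is the inverse regularization strength: smaller C = stronger L2 penalty
        clf = LogisticRegression(penalty="l2", C=1.0)
        clf.fit(X, y)
        print(clf.coef_, clf.score(X, y))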

  19. Irregular-regular mode oscillations inside plasma bubble and its fractal analysis in glow discharge magnetized plasma

    NASA Astrophysics Data System (ADS)

    Megalingam, Mariammal; Hari Prakash, N.; Solomon, Infant; Sarma, Arun; Sarma, Bornali

    2017-04-01

    Experimental evidence of different kinds of oscillations in the floating potential fluctuations of a glow discharge magnetized plasma is reported. A spherical gridded cage is inserted into the ambient plasma volume to create plasma bubbles. Plasma is produced between a spherical mesh grid and the chamber. The spherical mesh grid, of 80% optical transparency, is connected to the positive terminal of the power supply and acts as the anode. Two Langmuir probes are kept in the ambient plasma to measure the floating potential fluctuations at different positions within the system, viz., inside and outside the spherical mesh grid. At certain values of the discharge voltage (Vd) and magnetic field, a transition from irregular to regular modes appears, which changes systematically with the magnetic field. Furthermore, various nonlinear analyses, such as recurrence plots, the Hurst exponent, and the Lyapunov exponent, have been carried out to investigate the dynamics of the oscillations over a range of discharge voltages and external magnetic fields. Determinism, entropy, and Lmax are important measures of Recurrence Quantification Analysis which indicate an irregular-to-regular transition in the dynamics of the fluctuations. Finally, the behavior of the plasma oscillations is characterized by multifractal detrended fluctuation analysis to explore the nature of the fluctuations, revealing that they have a multifractal nature and behave as a long-range correlated process.

  20. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.

  1. Development and deployment of a Desktop and Mobile application on grid for GPS studies

    NASA Astrophysics Data System (ADS)

    Ntumba, Patient; Lotoy, Vianney; Djungu, Saint Jean; Fleury, Rolland; Petitdidier, Monique; Gemünd, André; Schwichtenberg, Horst

    2013-04-01

    GPS networks for scientific studies are developed all over the world, and large, regularly updated databases, such as IGS, are also available. Many GPS receivers have been installed in West and Central Africa during AMMA (African Monsoon Multidisciplinary Analysis), IHY (International Heliophysical Year) and many other projects since 2005. African scientists have been trained to use those data, especially for meteorological and ionospheric studies. Computing the annual variations of ionospheric parameters for a given station, or maps of a given region, is very computationally intensive, so grid or cloud computing may be a solution to obtain results in a relatively short time. At the University of Kinshasa the chosen solution is a grid of several PCs. It has been deployed using the Globus Toolkit on a Condor pool in order to support the processing of GPS data for ionospheric studies. To be user-friendly, graphical user interfaces (GUIs) have been developed to help the user prepare and submit jobs: one is a Java GUI for desktop clients, the other an Android GUI for mobile clients. The interest of a grid is the possibility to send a bunch of jobs with adequate agent control, in order to monitor job execution and result storage. After this feasibility study, the grid will be extended to a larger number of PCs, and other solutions will be explored in parallel.

  2. Optical side-effects of fs-laser treatment in refractive surgery investigated by means of a model eye

    PubMed Central

    Ackermann, Roland; Kammel, Robert; Merker, Marina; Kamm, Andreas; Tünnermann, Andreas; Nolte, Stefan

    2013-01-01

    Optical side-effects of fs-laser treatment in refractive surgery are investigated by means of a model eye. We show that rainbow glare is the predominant perturbation, which can be avoided by randomly distributing laser spots within the lens. For corneal applications such as fs-LASIK, even a regular grid with spot-to-spot distances of ~3 µm is sufficient to minimize rainbow glare perception. Contrast sensitivity is affected, when the lens is treated with large 3D-patterns. PMID:23413236

  3. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. This is especially true for the forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with the vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  4. Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen

    1997-01-01

    In the second year, we continued to build upon and improve our scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data change but the grid does not, and we are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique is at present limited to regular grids, we are pursuing possible algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation. We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences of the images using the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the auto-correlation function and the Fourier transform of the image and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.

  5. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  6. Statistically-Estimated Tree Composition for the Northeastern United States at Euro-American Settlement.

    PubMed

    Paciorek, Christopher J; Goring, Simon J; Thurman, Andrew L; Cogbill, Charles V; Williams, John W; Mladenoff, David J; Peters, Jody A; Zhu, Jun; McLachlan, Jason S

    2016-01-01

    We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.

  7. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for the construction of highly stretched cells in the boundary layer, and the conventional advancing-front technique for the generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making the method extensible to tetrahedral viscous grid generation.

  8. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts, at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.

  9. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts, at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  10. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    PubMed Central

    Pereira, N F; Sitek, A

    2011-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies yields superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496

  11. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    NASA Astrophysics Data System (ADS)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies yields superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
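
    Both the regular-grid and tetrahedral-mesh reconstructions above use the same MLEM iteration; only the system matrix changes with the choice of grid. A minimal numpy sketch of one multiplicative update, in our own notation (A is the system matrix, y the measured counts, x the current image estimate on voxels or mesh points):

        import numpy as np

        def mlem_step(x: np.ndarray, A: np.ndarray, y: np.ndarray,
                      eps: float = 1e-12) -> np.ndarray:
            """One MLEM update:
            x_j <- x_j / (sum_i A_ij) * sum_i A_ij * y_i / (A x)_i"""
            proj = A @ x                              # forward projection
            ratio = y / np.maximum(proj, eps)         # measured / estimated
            sens = A.sum(axis=0)                      # sensitivity (column sums)
            return x * (A.T @ ratio) / np.maximum(sens, eps)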

  12. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data sets require a lot of time and computational resources to compute, store, analyze, visualize, and evaluate in user studies. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.

  13. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  14. Two variants of minimum discarded fill ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, E.F.; Forsyth, P.A.; Tang, Wei-Pai

    1991-01-01

It is well known that the ordering of the unknowns can have a significant effect on the convergence of Preconditioned Conjugate Gradient (PCG) methods. There has been considerable experimental work on the effects of ordering for regular finite difference problems. In many cases, good results have been obtained with preconditioners based on diagonal, spiral or natural row orderings. However, for finite element problems having unstructured grids or grids generated by a local refinement approach, it is difficult to define many of the orderings used for more regular problems. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding high quality Incomplete LU (ILU) preconditioners, especially for problems arising from unstructured finite element grids. Testing indicates this algorithm can identify a rather complicated physical structure in an anisotropic problem and orders the unknowns in the 'preferred' direction. The MDF technique may be viewed as the numerical analogue of the minimum deficiency algorithm in sparse matrix technology. At any stage of the partial elimination, the MDF technique chooses the next pivot node so as to minimize the amount of discarded fill. In this work, two efficient variants of the MDF technique are explored to produce cost-effective high-order ILU preconditioners. The Threshold MDF orderings combine MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques identify an ordering that encourages fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is closely related to the minimum degree algorithm. The MUM ordering is especially suited to large problems arising from Navier-Stokes problems. Some interesting pictures of the orderings are presented using a visualization tool. 22 refs., 4 figs., 7 tabs.
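
    The greedy selection rule described above can be made concrete. Below is a toy dense-matrix sketch of an MUM-style ordering under stated simplifications (entrywise 1-norm as the fill measure, no drop tolerance, no sparse data structures, O(n^4) cost); the real algorithms operate on sparse patterns.

```python
import numpy as np

def mum_order(A):
    """Toy dense sketch of a Minimum Update Matrix (MUM)-style greedy
    ordering: at every elimination step, pick the pivot whose rank-one
    update matrix is smallest in entrywise 1-norm. This illustrates only
    the greedy selection principle, not the authors' sparse algorithm."""
    A = A.astype(float).copy()
    active = list(range(A.shape[0]))
    order = []
    while active:
        best, best_cost = active[0], np.inf
        for k in active:
            rest = [i for i in active if i != k]
            update = np.outer(A[rest, k], A[k, rest]) / A[k, k]
            cost = np.abs(update).sum()
            if cost < best_cost:
                best, best_cost = k, cost
        order.append(best)
        active.remove(best)
        # apply the elimination update to the remaining submatrix
        A[np.ix_(active, active)] -= (
            np.outer(A[active, best], A[best, active]) / A[best, best])
    return order

# usage on a small diagonally dominant SPD matrix (safe pivots)
rng = np.random.default_rng(0)
M = rng.random((8, 8)); M = M + M.T + 8 * np.eye(8)
print(mum_order(M))
```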

  15. Grid scale drives the scale and long-term stability of place maps

    PubMed Central

    Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M

    2018-01-01

    Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607

  16. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

  17. Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes

    NASA Technical Reports Server (NTRS)

Mavriplis, Dimitri J.

    1998-01-01

Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on speed of convergence are documented through several example test cases.

  18. Grid-cell-based crop water accounting for the famine early warning system

    USGS Publications Warehouse

    Verdin, J.; Klaver, R.

    2002-01-01

    Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996–97 and 1997–98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996–97 season in Zimbabwe. Agreement at a useful level (r = 0·80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline.
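
    The WRSI itself is not defined in the record. The sketch below implements a common simplified form, the ratio of accumulated actual evapotranspiration to accumulated crop water requirement from a one-bucket soil model, per grid cell; the variable names, the bucket model and the default water holding capacity are assumptions here, not the FEWS operational formulation.

```python
import numpy as np

def wrsi(rain, pet, kc, whc=100.0):
    """Grid-cell water requirement satisfaction index (percent).
    rain, pet: (time, ny, nx) grids of rainfall and reference
    evapotranspiration per period; kc: crop coefficient per period;
    whc: soil water holding capacity (mm)."""
    soil = np.zeros(rain.shape[1:])
    aet_sum = np.zeros_like(soil)
    wr_sum = np.zeros_like(soil)
    for t in range(rain.shape[0]):
        wr = kc[t] * pet[t]                  # crop water requirement
        avail = soil + rain[t]               # water available this period
        aet = np.minimum(avail, wr)          # actual evapotranspiration
        soil = np.clip(avail - aet, 0.0, whc)
        aet_sum += aet
        wr_sum += wr
    return 100.0 * aet_sum / np.maximum(wr_sum, 1e-9)

# toy usage: 10 dekads over a 4x4 grid
rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 15.0, (10, 4, 4))
pet = np.full((10, 4, 4), 40.0)
kc = np.linspace(0.3, 1.2, 10)
print(wrsi(rain, pet, kc).round(1))
```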

  19. The Numerical Simulation of Time Dependent Flow Structures Over a Natural Gravel Surface.

    NASA Astrophysics Data System (ADS)

    Hardy, R. J.; Lane, S. N.; Ferguson, R. I.; Parsons, D. R.

    2004-05-01

Research undertaken over the last few years has demonstrated the importance of the structure of gravel river beds for understanding the interaction between fluid flow and sediment transport processes. This includes the observation of periodic high-speed fluid wedges interconnected by low-speed flow regions. Our understanding of these flows has been enhanced significantly through a series of laboratory experiments and supported by field observations. However, the potential of high resolution three dimensional Computational Fluid Dynamics (CFD) modeling has yet to be fully developed. This is largely the result of the difficulty of designing numerically stable meshes for complex bed topographies and of the fact that Reynolds averaged turbulence schemes are commonly applied. This paper develops two novel techniques for dealing with these issues. The first is the development and validation of a method for representing the complex surface topography of gravel-bed rivers in high resolution three-dimensional computational fluid dynamic models. This is based upon a porosity treatment with a regular structured grid and the application of a porosity modification to the mass conservation equation in which: fully blocked cells are assigned a porosity of zero; fully unblocked cells are assigned a porosity of one; and partly blocked cells are assigned a porosity of between 0 and 1, according to the percentage of the cell volume that is blocked. The second is the application of Large Eddy Simulation (LES), which enables time dependent flow structures to be numerically predicted over the complex bed topographies. The regular structured grid with the embedded porosity algorithm maintains a constant grid cell size throughout the domain, implying a constant filter scale for the LES simulation. This enables the prediction of coherent structures, repetitive quasi-cyclic large-scale turbulent motions, over the gravel surface which are of a similar magnitude and frequency to those previously observed in both flume and field studies. These structures are formed by topographic forcing within the domain and scale with the flow depth. Finally, this provides the numerical framework for the prediction of sediment transport within a time dependent framework. The turbulent motions make a significant contribution to the turbulent shear stress and the pressure fluctuations, which significantly affect the forces acting on the bed and potentially control sediment motion.
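
    The porosity treatment quoted above maps directly to code. A minimal sketch for a regular grid with vertical spacing dz, assuming a single-valued bed elevation per column; the function names and the toy bed surface are illustrative, not from the paper.

```python
import numpy as np

def cell_porosity(bed, z0, dz, nz):
    """Porosity per cell for a column with bed elevation `bed`:
    cells fully below the bed get 0, fully above get 1, and partly
    blocked cells get the unblocked volume fraction. `bed` is a 2D
    array (ny, nx); returns a (nz, ny, nx) porosity field."""
    z_bot = z0 + dz * np.arange(nz)[:, None, None]   # cell bottom heights
    return np.clip((z_bot + dz - bed[None]) / dz, 0.0, 1.0)

# usage: a wavy gravel-like bed on a 1 m x 1 m patch, 1 cm cells
ny = nx = 32
x = np.linspace(0, 1, nx)
bed = 0.05 + 0.02 * np.sin(8 * np.pi * x)[None, :] * np.ones((ny, 1))
phi = cell_porosity(bed, z0=0.0, dz=0.01, nz=20)
```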

  20. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs via a confidence interval based on the truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theories of geographic information science.
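
    Both volume rules named above are short computations over a regular-grid DEM. A sketch follows, using a Gauss-type synthetic surface in the spirit of the experiments; the grid size and test surface here are illustrative.

```python
import numpy as np

def volume_trapezoid(z, dx, dy):
    """Composite trapezoidal double rule (TDR) for the volume under a
    regular-grid DEM z of shape (ny, nx), spacings dx, dy."""
    wx = np.ones(z.shape[1]); wx[0] = wx[-1] = 0.5
    wy = np.ones(z.shape[0]); wy[0] = wy[-1] = 0.5
    return dx * dy * (wy[:, None] * wx[None, :] * z).sum()

def volume_simpson(z, dx, dy):
    """Composite Simpson's double rule (SDR); requires an odd number
    of grid points along each axis."""
    def w(n):
        assert n % 2 == 1, "Simpson's rule needs an odd point count"
        ws = np.ones(n); ws[1:-1:2] = 4.0; ws[2:-1:2] = 2.0
        return ws
    wx, wy = w(z.shape[1]), w(z.shape[0])
    return dx * dy / 9.0 * (wy[:, None] * wx[None, :] * z).sum()

# usage: volume under a Gauss synthetic surface
x = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))
h = x[1] - x[0]
print(volume_trapezoid(Z, h, h), volume_simpson(Z, h, h))
```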

  1. Colony mapping: A new technique for monitoring crevice-nesting seabirds

    USGS Publications Warehouse

    Renner, H.M.; Renner, M.; Reynolds, J.H.; Harping, A.M.A.; Jones, I.L.; Irons, D.B.; Byrd, G.V.

    2006-01-01

Monitoring populations of auklets and other crevice-nesting seabirds remains problematic, although numerous methods have been attempted since the mid-1960s. Anecdotal evidence suggests several large auklet colonies have recently decreased in both abundance and extent, concurrently with vegetation encroachment and succession. Quantifying changes in the geographical extent of auklet colonies may be a useful alternative to monitoring population size directly. We propose a standardized method for colony mapping using a randomized systematic grid survey with two components: a simple presence/absence survey and an auklet evidence density survey. A quantitative auklet evidence density index was derived from the frequency of droppings and feathers. This new method was used to map the colony on St. George Island in the southeastern Bering Sea and results were compared to previous colony mapping efforts. Auklet presence was detected in 62 of 201 grid cells (each grid cell = 2500 m²) by sampling a randomly placed 16 m² plot in each cell; estimated colony area = 155 000 m². The auklet evidence density index varied by two orders of magnitude across the colony and was strongly correlated with means of replicated counts of birds socializing on the colony surface. Quantitatively mapping all large auklet colonies is logistically feasible using this method and would provide an important baseline for monitoring colony status. Regularly monitoring select colonies using this method may be the best means of detecting changes in distribution and population size of crevice-nesting seabirds. © The Cooper Ornithological Society 2006.

  2. Measuring the Flatness of Focal Plane for Very Large Mosaic CCD Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jiangang; Estrada, Juan; Cease, Herman

    2010-06-08

A large mosaic multi-CCD camera is the key instrument for a modern digital sky survey. DECam is an extremely red-sensitive 520-megapixel camera designed for the upcoming Dark Energy Survey (DES). It consists of sixty-two 4k x 2k and twelve 2k x 2k 250-micron-thick fully-depleted CCDs, with a focal plane of 44 cm in diameter and a field of view of 2.2 square degrees. It will be attached to the Blanco 4-meter telescope at CTIO. The DES will cover 5000 square degrees of the southern galactic cap in 5 color bands (g, r, i, z, Y) over 5 years starting from 2011. To achieve the science goal of constraining the Dark Energy evolution, stringent requirements are laid down for the design of DECam. Among them, the flatness of the focal plane needs to be controlled within a 60-micron envelope in order to achieve the specified PSF variation limit. It is very challenging to measure the flatness of the focal plane to such precision when it is placed in a high vacuum dewar at 173 K. We developed two image-based techniques to measure the flatness of the focal plane. By imaging a regular grid of dots on the focal plane, the CCD offset along the optical axis is converted to a variation in the grid spacings at different positions on the focal plane. After extracting the patterns and comparing the change in spacings, we can measure the flatness to high precision. In method 1, the regular dots are kept to high sub-micron precision and cover the whole focal plane. In method 2, no high precision for the grid is required. Instead, we use a precise XY stage to move the pattern across the whole focal plane and compare the variations in the spacing when it is imaged by different CCDs. Simulation and real measurements show that the two methods work very well for our purpose, and are in good agreement with the direct optical measurements.

  3. Optimizing "self-wicking" nanowire grids.

    PubMed

Wei, Hui; Dandey, Venkata P; Zhang, Zhening; Raczkowski, Ashleigh; Rice, William J; Carragher, Bridget; Potter, Clinton S

    2018-05-01

    We have developed a self-blotting TEM grid for use with a novel instrument for vitrifying samples for cryo-electron microscopy (cryoEM). Nanowires are grown on the copper surface of the grid using a simple chemical reaction and the opposite smooth side is used to adhere to a holey sample substrate support, for example carbon or gold. When small volumes of sample are applied to the nanowire grids the wires effectively act as blotting paper to rapidly wick away the liquid, leaving behind a thin film. In this technical note, we present a detailed description of how we make these grids using a variety of substrates fenestrated with either lacey or regularly spaced holes. We explain how we characterize the quality of the grids and we describe their behavior under a variety of conditions. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Making sense of sparse rating data in collaborative filtering via topographic organization of user preference patterns.

    PubMed

    Polcicová, Gabriela; Tino, Peter

    2004-01-01

We introduce topographic versions of two latent class models (LCM) for collaborative filtering. Latent classes are topologically organized on a square grid. Topographic organization of latent classes makes orientation in rating/preference patterns captured by the latent classes easier and more systematic. The variation in film rating patterns is modelled by multinomial and binomial distributions with varying independence assumptions. In the first stage of topographic LCM construction, self-organizing maps with a neural field organized according to the LCM topology are employed. We apply our system to a large collection of user ratings for films. The system can provide useful visualization plots unveiling user preference patterns buried in the data, without losing its potential to be a good recommender model. It appears that the multinomial distribution is most adequate if the model is regularized by tight grid topologies. Since we deal with probabilistic models of the data, we can readily use tools from probability and information theories to interpret and visualize the information extracted by our system.

  5. Exploration of exposure conditions with a novel wireless detector for bedside digital radiography

    NASA Astrophysics Data System (ADS)

    Bosmans, Hilde; Nens, Joris; Delzenne, Louis; Marshall, Nicholas; Pauwels, Herman; De Wever, Walter; Oyen, Raymond

    2012-03-01

We propose, apply and validate an optimization scheme for a new wireless CsI-based DR detector in combination with a regular mobile X-ray system for bedside imaging applications. Three different grids were tested in this combination. The signal-difference-to-noise ratio was investigated in two ways: using a 1 mm Cu piece in combination with different thicknesses of PMMA, and by means of the CDRAD phantom using 10 images per condition and an automated evaluation method. A Figure of Merit (FOM), namely SDNR²/imparted energy, was calculated for a large range of exposure conditions, without and with a grid in place. Misalignment of the grids was evaluated via the same FOMs. This optimization study was validated with comparative X-ray acquisitions performed on dead bodies. An experienced radiologist scored the quality of several specific aspects for all these exposures. Signal-difference-to-noise ratios measured with the Cu method correlated well with the threshold contrasts from the CDRAD analysis (R² > 0.9). The analysis showed optimal FOM at detector air kerma rates as typically used in clinical practice. Lower tube voltages provide a higher FOM than higher values, but their practical use depends on the limitations of X-ray tubes, linked to patient motion artefacts. The use of high resolution grids should be encouraged, as the FOM increases by 47% at 75 kV. The scores from the visual grading study confirmed the results obtained with the FOM. The switch to (wireless) DR technology for bedside imaging could benefit from devices to improve grid positioning or any scatter reduction technique.
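
    The figure of merit is simple to compute once the regions of interest are fixed. A hedged sketch follows; the ROI definitions and energy estimate are generic, and the paper's exact estimators may differ.

```python
import numpy as np

def sdnr(signal_roi, background_roi):
    """Signal-difference-to-noise ratio from two regions of interest:
    mean difference over background noise."""
    return (abs(np.mean(signal_roi) - np.mean(background_roi))
            / np.std(background_roi))

def figure_of_merit(signal_roi, background_roi, imparted_energy):
    """FOM = SDNR^2 / imparted energy; `imparted_energy` in any
    consistent unit."""
    return sdnr(signal_roi, background_roi) ** 2 / imparted_energy
```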

  6. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

In this study, we present a new open source code for massive parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, the Voronoi density, the Voronoi cell volume and the density gradient for each particle, as well as densities on a regular grid. Code implementation and a user guide are publicly available at https://github.com/regonzar/paravt.
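
    For a serial flavour of the same computation, Voronoi densities can be obtained with scipy's Qhull bindings (the same library PARAVT wraps). This sketch handles only bounded interior cells and reproduces none of PARAVT's MPI domain decomposition or periodic boundaries.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_density(points):
    """Per-particle Voronoi density (1 / cell volume) for particles
    with bounded Voronoi cells; unbounded boundary cells get NaN."""
    vor = Voronoi(points)
    dens = np.full(len(points), np.nan)
    for i, reg_idx in enumerate(vor.point_region):
        region = vor.regions[reg_idx]
        if region and -1 not in region:          # bounded cell only
            vol = ConvexHull(vor.vertices[region]).volume
            dens[i] = 1.0 / vol
    return dens

# usage: densities of 500 random particles in the unit cube
rng = np.random.default_rng(0)
rho = voronoi_density(rng.random((500, 3)))
```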

7. A Metadata-Based Approach for Analyzing UAV Datasets for Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Dhanda, A.; Remondino, F.; Santana Quintero, M.

    2018-05-01

This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images in order to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
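
    The overlap computation from metadata reduces to footprint geometry. A sketch under a pinhole, nadir-looking assumption; the default camera parameters, function names and the along-track axis convention are placeholders, not values from the paper.

```python
def ground_footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Nadir ground footprint (width, height in metres): sensor size
    scaled by altitude over focal length (pinhole approximation)."""
    scale = altitude_m / focal_mm
    return scale * sensor_w_mm, scale * sensor_h_mm

def overlaps(base_along_m, base_across_m, altitude_m,
             focal_mm=8.8, sensor_w_mm=13.2, sensor_h_mm=8.8):
    """End and side overlap fractions of a grid flight, given the
    distance between consecutive exposures and between flight lines.
    Assumes the along-track direction maps to the sensor height."""
    w, h = ground_footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm)
    end = max(1.0 - base_along_m / h, 0.0)
    side = max(1.0 - base_across_m / w, 0.0)
    return end, side

# e.g. exposures every 20 m along track, lines 60 m apart, flying at 100 m
print(overlaps(20.0, 60.0, 100.0))     # -> (0.8, 0.6)
```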

  8. Operation quality assessment model for video conference system

    NASA Astrophysics Data System (ADS)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and that its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  9. Speed-up of the volumetric method of moments for the approximate RCS of large arbitrary-shaped dielectric targets

    NASA Astrophysics Data System (ADS)

    Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe

    2017-08-01

A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments is then performed efficiently because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
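
    The classification of cell centres rests on the standard ray-casting parity rule. A 2D polygon version of that rule is sketched below purely to illustrate the principle; the paper applies it in 3D against parametric surfaces, with a hierarchical acceleration that is not reproduced here.

```python
def point_in_polygon(px, py, poly):
    """Ray-casting parity test: cast a horizontal ray from (px, py)
    toward +x and count edge crossings; an odd count means inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                      # edge spans the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

# usage: centre of the unit square is inside, a shifted point is not
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(point_in_polygon(0.5, 0.5, square), point_in_polygon(1.5, 0.5, square))
```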

  10. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
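
    The distance-weighting first pass is the classic inverse-distance scheme; a numpy sketch of that pass onto a regular grid follows (the parabolic leapfrog correction described above is not reproduced, and all names are chosen here for illustration).

```python
import numpy as np

def idw_to_grid(xs, ys, vals, gx, gy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered samples
    (xs, ys, vals) onto the regular grid defined by axes gx, gy."""
    X, Y = np.meshgrid(gx, gy)
    d2 = (X[..., None] - xs) ** 2 + (Y[..., None] - ys) ** 2
    w = 1.0 / (d2 ** (power / 2) + eps)          # inverse-distance weights
    return (w * vals).sum(axis=-1) / w.sum(axis=-1)

# usage: 30 unevenly spaced sites onto a 0.5-unit grid
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
vals = np.sin(xs) + np.cos(ys)
grid = idw_to_grid(xs, ys, vals,
                   np.arange(0, 10.5, 0.5), np.arange(0, 10.5, 0.5))
```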

  11. C library for topological study of the electronic charge density.

    PubMed

    Vega, David; Aray, Yosslen; Rodríguez, Jesús

    2012-12-05

The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atom charges in a molecule or crystal. For this study, it is necessary to calculate, at every point of space, the electronic density and its density derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on a multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas were obtained for every space point. More complex functions such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths) were programmed. As in some crystals the unit cell has angles different from 90°, the described library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each one of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) in a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
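
    The core operation, differentiating the interpolating polynomial to get derivatives along a grid strand, can be shown in 1D with numpy's polynomial helpers; this is an illustrative reduction of the library's 3D Lagrange scheme, not its actual C interface.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def lagrange_gradient_1d(xs, fs, x):
    """Derivative of the polynomial interpolating the samples (xs, fs),
    evaluated at x. Keep the point count small (3-5): exact polynomial
    interpolation degrades for long strands."""
    coeffs = P.polyfit(xs, fs, len(xs) - 1)    # exact interpolant
    return P.polyval(x, P.polyder(coeffs))     # differentiate, evaluate

# usage: derivative of exp(-x) from four regular-grid samples
xs = np.array([0.0, 0.1, 0.2, 0.3])
print(lagrange_gradient_1d(xs, np.exp(-xs), 0.15), -np.exp(-0.15))
```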

  12. Interpolation of scattered temperature data measurements onto a worldwide regular grid using radial basis functions with applications to global warming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.

    1994-05-01

Our current research into the response of natural ecosystems to a hypothesized climatic change requires that we have estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations are located at airports, which tend to be concentrated on the coastlines of the world or near populated areas. We can also see that the spatial density of the station locations is extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, the Arabian, Gobi, and Australian deserts; likewise, the density is quite sparse in cold regions such as Antarctica, Northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to the population centers and the degree of industrial development of the country. We address the following problem here. Given values of meteorological variables, such as maximum monthly temperature, measured at the more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
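
    The multiquadric radial basis function approach described here is available off the shelf in scipy; a sketch on synthetic station data follows. The station set, the kernel shape parameter and the flat lon/lat distance treatment are illustrative simplifications, not the paper's setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# synthetic scattered stations (lon, lat) and temperature-like values
rng = np.random.default_rng(1)
stations = rng.uniform([-180, -60], [180, 75], size=(500, 2))
temps = 30 * np.cos(np.radians(stations[:, 1])) + rng.normal(0, 1, 500)

# multiquadric RBF fitted to the stations, evaluated on a 1-degree grid
# (planar lon/lat distances; a real analysis should respect sphericity)
rbf = RBFInterpolator(stations, temps, kernel='multiquadric', epsilon=5.0)
lon, lat = np.meshgrid(np.arange(-180, 181), np.arange(-60, 76))
grid = rbf(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
```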

  13. Dynamic Testing and Automatic Repair of Reconfigurable Wiring Harnesses

    DTIC Science & Technology

    2006-11-27

Switch: An M×N grid of switches configured to provide an M-input, N-output routing network. Permutation Network: A permutation network performs an... wiring reduces the effective advantage of their reduced switch count, particularly when considering that regular grids (crossbar switches being a... are connected to. The outline circuit shown in Fig. 20 shows how a suitable 'discovery probe' might be implemented. The circuit shows a UART

  14. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (current and previous), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H³-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
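
    The extrapolation step can be summarized by the classic two-grid Richardson formula; the sketch below states the standard form, not necessarily the paper's exact operator.

```latex
% Two-grid Richardson extrapolation (standard form): if the FE solution
% expands as u_h = u + C h^2 + O(h^3) at shared grid nodes, the leading
% error term cancels in the combination
\[
  u_h^{\mathrm{extra}} \;=\; \frac{4\,u_h - u_{2h}}{3} \;=\; u + O(h^{3}),
\]
% and quadratic interpolation of this combination to the next finer grid
% provides the third-order initial guess handed to the JCG solver.
```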

  15. Integrating bathymetric and topographic data

    NASA Astrophysics Data System (ADS)

    Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat

    2017-11-01

The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly-spaced grid systems for tsunami simulation. The objective of this research is to analyze the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.

  16. Rapid Decimation for Direct Volume Rendering

    NASA Technical Reports Server (NTRS)

    Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane

    1997-01-01

    An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.

  17. Grid-cell-based crop water accounting for the famine early warning system

    NASA Astrophysics Data System (ADS)

    Verdin, James; Klaver, Robert

    2002-06-01

    Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996-97 and 1997-98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996-97 season in Zimbabwe. Agreement at a useful level (r = 0·80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline. Published in 2002 by John Wiley & Sons, Ltd.

  18. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

Stabilized gradient-based methods have been proved to be efficient for inverse problems. Based on these methods, setting the gradient close to zero can effectively minimize the objective function. Thus the gradient of the objective function determines the inversion results. By analyzing the cause of poor resolution at depth in gradient-based gravity inversion methods, we find that imposing a depth weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient can improve the depth resolution more efficiently; it is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. Besides, a fuzzy c-means clustering method and a smooth operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1(b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
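
    The 'depth weighting functional' mentioned above is conventionally of the Li-Oldenburg form; it is stated here for orientation, with the caveat that the authors' exact choice may differ.

```latex
% Conventional depth weighting (Li & Oldenburg form): z_0 depends on
% cell size and observation height, and beta ~ 2 is typical for gravity
% data, mirroring the kernel's decay with depth.
\[
  w(z) \;=\; \frac{1}{\left(z + z_0\right)^{\beta/2}}
\]
```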

19. Analysis of the Transient Process of Wind Power Resources when there are Voltage Sags in the Distribution Grid

    NASA Astrophysics Data System (ADS)

    Nhu Y, Do

    2018-03-01

Vietnam has many advantages in wind power resources. Over time, there are more and more wind power projects in Vietnam, with ever greater capacity. Corresponding to the increase of wind power fed into the national grid, it is necessary to research and analyze the connection of wind power in order to ensure its safety and reliability. In the national distribution grid, voltage sags occur regularly and can strongly influence the operation of wind power; the most serious consequence is disconnection. This paper presents an analysis of the distribution grid's transient process when the voltage sags. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.

  20. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  1. Modelling effects on grid cells of sensory input during self‐motion

    PubMed Central

    Raudies, Florian; Hinman, James R.

    2016-01-01

The neural coding of spatial location for memory function may involve grid cells in the medial entorhinal cortex, but the mechanism of generating the spatial responses of grid cells remains unclear. This review describes some current theories and experimental data concerning the role of sensory input in generating the regular spatial firing patterns of grid cells, and changes in grid cell firing fields with movement of environmental barriers. As described here, the influence of visual features on spatial firing could involve either computations of self‐motion based on optic flow, or computations of absolute position based on the angle and distance of static visual cues. Due to anatomical selectivity of retinotopic processing, the sensory features on the walls of an environment may have a stronger effect on ventral grid cells that have wider spaced firing fields, whereas the sensory features on the ground plane may influence the firing of dorsal grid cells with narrower spacing between firing fields. These sensory influences could contribute to the potential functional role of grid cells in guiding goal‐directed navigation. PMID:27094096

  2. Hexagonal Pixels and Indexing Scheme for Binary Images

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    2004-01-01

A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, then rotate the rectangular-grid image ±45° before sampling to perform checking for line segments at angular intervals of 15°.
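
    The resampling step, finding the nearest hexagonal cell centre for each rectangular pixel, is a standard construction via fractional axial coordinates and cube rounding; the record does not spell out its own formula, so the sketch below is one common realization. Per the text, the hexagon size would be chosen at one third of the rectangular cell width or less.

```python
import math

def nearest_hex_center(x, y, size):
    """Center of the pointy-top hexagon (circumradius `size`) closest
    to the point (x, y), via fractional axial coordinates and cube
    rounding."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size   # fractional axial coords
    r = (2 / 3 * y) / size
    cx, cz = q, r                               # cube coordinates
    cy = -cx - cz
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:                     # repair the worst-rounded axis
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    # axial back to Cartesian
    return size * math.sqrt(3) * (rx + rz / 2), size * 1.5 * rz

# usage: bin a bright rectangular pixel centre into its hexagonal cell
print(nearest_hex_center(0.40, 0.75, size=0.3))
```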

  3. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
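
    The sparsity mechanism at the heart of such approaches can be illustrated by plain l1-regularized least squares solved with iterative soft-thresholding; this is a deliberately reduced sketch, without the bounds, the dictionary representation and the Tikhonov structure of the paper's method.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding (ISTA) for the sparse inverse problem
    minimize (1/2)||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

# usage: recover a 3-spike source vector from 80 noisy measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200); x_true[[5, 50, 120]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = ista(A, y, lam=0.5)
```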

  4. Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2016-11-01

    We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.

  5. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.

  6. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.

  7. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi

A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  8. CFD Script for Rapid TPS Damage Assessment

    NASA Technical Reports Server (NTRS)

    McCloud, Peter

    2013-01-01

This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual solution is cumbersome, open to errors, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. It speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. This process was successfully utilized during STS-134.

  9. A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Capolino, F; Wilton, D

    2005-02-02

A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
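
    The FFT acceleration that GIFFT shares with AIM-type methods is easy to demonstrate in isolation: for a translation-invariant kernel sampled on a regular grid, the matrix-vector product is a convolution. The sketch below verifies the FFT product against the direct sum; the Green's-function interpolation and near-interaction corrections of GIFFT are omitted, and the random kernel is a stand-in for a real Green's function.

```python
import numpy as np

def grid_matvec_fft(green, x):
    """FFT-accelerated product y[i] = sum_j green[i - j] x[j] on a
    regular grid: `green` is sampled at offsets -(n-1)..(n-1) per axis
    (shape 2n-1), `x` on the n-point grid."""
    n = x.shape
    pad = tuple(2 * k - 1 for k in n)
    G = np.fft.fftn(green, s=pad)
    X = np.fft.fftn(x, s=pad)
    y = np.fft.ifftn(G * X)
    valid = tuple(slice(k - 1, 2 * k - 1) for k in n)   # offsets 0..n-1
    return np.real(y[valid])

# verification against the direct double sum on a small 2D grid
rng = np.random.default_rng(0)
n = (6, 7)
green = rng.normal(size=(2 * n[0] - 1, 2 * n[1] - 1))
x = rng.normal(size=n)
y_ref = np.zeros(n)
for i in np.ndindex(*n):
    for j in np.ndindex(*n):
        y_ref[i] += green[i[0] - j[0] + n[0] - 1, i[1] - j[1] + n[1] - 1] * x[j]
assert np.allclose(grid_matvec_fft(green, x), y_ref)
```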

  10. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, the use of a GIS for implementing simulations of complex, physically-based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open source GIS with published data formats, thus the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every cell to be computed. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensing image. Data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data. Of several existing methods to estimate slope and curvature from elevation, the selected one is based on estimation by a third-order finite difference method, which has been shown to perform better, or with minimal difference, when compared to more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (which uses only 4 neighbor cells) and proved to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original, and using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest neighbor method is used, since interpolation on categorical data has no meaning. The low-fidelity character of the visualization also allows the nearest neighbor method to be used for texture. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers; its interpolation performance is only slightly inferior to that of more computationally expensive methods such as bicubic interpolation and kriging.
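
    The 8-neighbour third-order finite difference mentioned above is the Horn-style weighted scheme; a numpy sketch follows (border handling and the 4-neighbour variant used for comparison are omitted, and names are chosen here for illustration).

```python
import numpy as np

def horn_slope(dem, dx, dy):
    """Slope (degrees) on a regular-grid DEM via the 8-neighbour
    third-order finite difference (weights 1, 2, 1 across each row or
    column of the 3x3 window). Border cells are dropped for brevity;
    output shape is (ny - 2, nx - 2)."""
    z = np.asarray(dem, dtype=float)
    dz_dx = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:])
             - (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8 * dx)
    dz_dy = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:])
             - (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8 * dy)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# usage: a uniform 0.5 m rise per 1 m row gives atan(0.5) ~ 26.6 degrees
dem = np.add.outer(np.arange(50) * 0.5, np.zeros(50))
print(horn_slope(dem, dx=1.0, dy=1.0).mean())
```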

  11. Evaluation of grid generation technologies from an applied perspective

    NASA Technical Reports Server (NTRS)

    Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.

    1995-01-01

    An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation, and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. The conclusion is that no single grid generation methodology is universally suited to all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. This full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.

  12. Methods and apparatus of analyzing electrical power grid data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.

    Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
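
    The claimed processing chain is, in essence, a filter stage followed by an event-detection stage. Below is a minimal sketch of such a two-stage pipeline; all function names, thresholds, and the frequency-data example are hypothetical illustrations, not the patented method:

        import numpy as np

        def remove_erroneous(values, lo=59.0, hi=61.0):
            """Filter stage: mask samples outside a plausible range
            (thresholds are hypothetical, e.g. for 60 Hz frequency data)."""
            values = np.asarray(values, dtype=float)
            return np.where((values >= lo) & (values <= hi), values, np.nan)

        def detect_events(values, jump=0.05):
            """Event-detector stage: flag sample-to-sample jumps larger
            than `jump` as events of interest (again, illustrative only)."""
            diffs = np.abs(np.diff(values))
            return np.flatnonzero(diffs > jump) + 1

        clean = remove_erroneous([60.0, 60.01, 72.3, 59.98, 59.90])
        events = detect_events(np.nan_to_num(clean, nan=60.0))
        print(events)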

  13. Optimizing the Distribution of Tie Points for the Bundle Adjustment of HRSC Image Mosaics

    NASA Astrophysics Data System (ADS)

    Bostelmann, J.; Breitkopf, U.; Heipke, C.

    2017-07-01

    For the systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Thanks to its nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, each combining about 90 strips. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution, and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights from a global terrain model are used as reference information, and for this task a regular distribution of tie points is preferable. In addition, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of input tie points is reduced without affecting the geometric stability of the block, by preserving the connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. We present experiments with MC-30 half-tile blocks which confirm this approach for reaching a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data.
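
    A minimal sketch of the grid-based thinning idea: bucket tie points by object-space grid cell and by the strip pair they connect, and keep only the best point per bucket. The cell size, the scoring, and the one-point-per-cell-per-strip-pair rule are illustrative assumptions, not the authors' exact criteria:

        import numpy as np

        def thin_tie_points(points, cell=1000.0):
            """Keep, per object-space grid cell and per strip pair, only the
            best-scoring tie point; discard the rest as redundant.

            points -- iterable of (x, y, strip_a, strip_b, score) tuples
            cell   -- grid cell size in object-space units (assumption)
            """
            best = {}
            for x, y, sa, sb, score in points:
                # Key = cell index plus the strip connection the point supports,
                # so every inter-strip connection survives the thinning.
                key = (int(x // cell), int(y // cell), min(sa, sb), max(sa, sb))
                if key not in best or score > best[key][-1]:
                    best[key] = (x, y, sa, sb, score)
            return list(best.values())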

  14. Evaluation of global equal-area mass grid solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron

    2015-04-01

    The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. The solutions are validated against multiple models and in-situ data sets.
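
    For context, Tikhonov regularization stabilizes an ill-posed linear inversion Ax ≈ b by penalizing the solution norm; a standard generic form (added here for illustration, not taken from the paper) is

        \hat{x}_{\lambda} = \arg\min_{x} \left\{ \| A x - b \|_2^2 + \lambda^2 \| L x \|_2^2 \right\}
                          = \left( A^{\mathsf{T}} A + \lambda^2 L^{\mathsf{T}} L \right)^{-1} A^{\mathsf{T}} b,

    where the parameter λ trades off data fit against solution regularity and L (often the identity) encodes the penalty.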

  15. Self-Avoiding Walks Over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful for the runtime partitioning and load balancing of adaptive unstructured grids.

  16. The abrupt development of adult-like grid cell firing in the medial entorhinal cortex

    PubMed Central

    Wills, Thomas J.; Barry, Caswell; Cacucci, Francesca

    2012-01-01

    Understanding the development of the neural circuits subserving specific cognitive functions such as navigation remains a central problem in neuroscience. Here, we characterize the development of grid cells in the medial entorhinal cortex, which, by virtue of their regularly spaced firing fields, are thought to provide a distance metric to the hippocampal neural representation of space. Grid cells emerge at the time of weaning in the rat, at around 3 weeks of age. We investigated whether grid cells in young rats are functionally equivalent to those observed in the adult as soon as they appear, or if instead they follow a gradual developmental trajectory. We find that, from the very youngest ages at which reproducible grid firing is observed (postnatal day 19), grid cells display adult-like firing fields that tessellate to form a coherent map of the local environment; that this map is universal, maintaining its internal structure across different environments; and that grid cells in young rats, as in adults, also encode a representation of direction and speed. To further investigate the developmental processes leading up to the appearance of grid cells, we present data from individual medial entorhinal cortex cells recorded across more than one day, spanning the period before and after the grid firing pattern emerged. We find that increasing spatial stability of firing was correlated with increasing gridness. PMID:22557949

  17. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We answer the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  19. A MATLAB-based graphical user interface program for computing functionals of the geopotential up to ultra-high degrees and orders

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj

    2013-07-01

    We present a novel graphical user interface program, GrafLab (GRAvity Field LABoratory), for spherical harmonic synthesis (SHS), created in MATLAB®. The program makes it possible to compute 38 different functionals of the geopotential up to ultra-high degrees and orders of the spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) extended-range arithmetic (up to an arbitrary maximum degree). For maximum degree 2190, SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, and the input coordinates can either be read from a data file or entered manually. For computation on a regular grid we apply the lumped coefficients approach because of its significant time-efficiency. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, the commission errors of the functionals can be computed. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
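
    To illustrate why the lumped coefficients approach pays off on a regular grid, here is a simplified sketch for a scalar expansion f(φ, λ) = Σ_{n,m} P_nm(sin φ)(C_nm cos mλ + S_nm sin mλ): the inner sums over n are precomputed once per latitude (the "lumped coefficients"), so the longitude loop reduces to a cheap trigonometric sum shared by all rows. It uses SciPy's unnormalized Legendre functions for brevity; the fully normalized, extended-range fnALF evaluation of the actual program is not reproduced here:

        import numpy as np
        from scipy.special import lpmn

        def shs_regular_grid(C, S, lats, lons):
            """Evaluate sum_{n,m} P_nm(sin lat) * (C[n,m] cos(m lon)
            + S[n,m] sin(m lon)) on a regular grid via lumped coefficients.
            lats/lons in radians; P_nm unnormalized (simplification)."""
            nmax = C.shape[0] - 1
            m = np.arange(nmax + 1)
            # Trig terms are shared by every latitude row on a regular grid.
            cosml = np.cos(np.outer(lons, m))          # (nlon, nmax+1)
            sinml = np.sin(np.outer(lons, m))
            out = np.empty((len(lats), len(lons)))
            for i, lat in enumerate(lats):
                P, _ = lpmn(nmax, nmax, np.sin(lat))   # P[m, n]
                A = np.einsum('mn,nm->m', P, C)        # lumped coefficients
                B = np.einsum('mn,nm->m', P, S)
                out[i] = cosml @ A + sinml @ B
            return out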

  20. Using deconvolution to improve the metrological performance of the grid method

    NASA Astrophysics Data System (ADS)

    Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis

    2013-06-01

    The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved with the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementation. The results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking noise autocorrelation into account. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the latter technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain field restoration are finally shown, dealing with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high, and that the actual noise in the actual strain maps must be characterized more specifically than in the current study to address higher noise levels with Wiener filtering.
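
    As a concrete, generic illustration of two of the techniques compared above, recent versions of scikit-image ship non-blind Richardson-Lucy and Wiener deconvolution. The snippet below is a minimal usage sketch, not the authors' implementation; the Gaussian-envelope PSF and all parameter values are assumptions:

        import numpy as np
        from scipy.signal import fftconvolve
        from skimage import restoration

        rng = np.random.default_rng(0)

        # Synthetic "strain map" blurred by a Gaussian kernel (a stand-in
        # for the envelope of the phase-extraction kernel), plus noise.
        truth = np.zeros((128, 128))
        truth[40:88, 40:88] = 1.0
        x, y = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(x**2 + y**2) / 8.0)
        psf /= psf.sum()
        blurred = (fftconvolve(truth, psf, mode='same')
                   + 0.01 * rng.standard_normal((128, 128)))

        deconv_rl = restoration.richardson_lucy(blurred, psf, num_iter=30,
                                                clip=False)
        deconv_wiener = restoration.wiener(blurred, psf, balance=0.1)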

  1. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or a coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high-performance-computing watershed simulator.

  2. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving the sub-grid scale accuracy of flow to conduits, wells, rivers, or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by selecting only contributing nodes that are part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  3. Nine martian years of dust optical depth observations: A reference dataset

    NASA Astrophysics Data System (ADS)

    Montabone, Luca; Forget, Francois; Kleinboehl, Armin; Kass, David; Wilson, R. John; Millour, Ehouarn; Smith, Michael; Lewis, Stephen; Cantor, Bruce; Lemmon, Mark; Wolff, Michael

    2016-07-01

    We present a multi-annual reference dataset of the horizontal distribution of airborne dust from martian year 24 to 32 using observations of the martian atmosphere from April 1999 to June 2015 made by the Thermal Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS) aboard Mars Reconnaissance Orbiter (MRO). Our methodology to build the dataset works by gridding the available retrievals of column dust optical depth (CDOD) from TES and THEMIS nadir observations, as well as the estimates of this quantity from MCS limb observations. The resulting (irregularly) gridded maps (one per sol) were validated with independent observations of CDOD by PanCam cameras and Mini-TES spectrometers aboard the Mars Exploration Rovers "Spirit" and "Opportunity", by the Surface Stereo Imager aboard the Phoenix lander, and by the Compact Reconnaissance Imaging Spectrometer for Mars aboard MRO. Finally, regular maps of CDOD are produced by spatially interpolating the irregularly gridded maps using a kriging method. These latter maps are used as dust scenarios in the Mars Climate Database (MCD) version 5, and are useful in many modelling applications. The two datasets (daily irregularly gridded maps and regularly kriged maps) for the nine available martian years are publicly available as NetCDF files and can be downloaded from the MCD website at the URL: http://www-mars.lmd.jussieu.fr/mars/dust_climatology/index.html
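
    The final interpolation step (irregularly gridded daily maps to regular maps by kriging) can be illustrated generically with the PyKrige package; the synthetic data, variogram model, and grid spacing below are arbitrary choices for the sketch, not those used for the dataset:

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        rng = np.random.default_rng(1)
        # Scattered "observations" of column dust optical depth (synthetic).
        lon = rng.uniform(-180, 180, 200)
        lat = rng.uniform(-90, 90, 200)
        cdod = 0.3 + 0.2 * np.sin(np.radians(lat)) + 0.05 * rng.standard_normal(200)

        # Krige the scattered values onto a regular lon/lat grid.
        grid_lon = np.arange(-180, 181, 5.0)
        grid_lat = np.arange(-90, 91, 5.0)
        ok = OrdinaryKriging(lon, lat, cdod, variogram_model="spherical")
        cdod_grid, variance = ok.execute("grid", grid_lon, grid_lat)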

  4. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, and problems, as well as the methods, tools, and constraints, that defined NASA's development of a 111-grid, 16-million-point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four-inch resolution along the surface.

  5. Transmission Technologies and Operational Characteristic Analysis of Hybrid UHV AC/DC Power Grids in China

    NASA Astrophysics Data System (ADS)

    Tian, Zhang; Yanfeng, Gong

    2017-05-01

    In order to resolve the mismatch between the demand for primary energy and its geographic distribution, Ultra High Voltage (UHV) power grids should be developed rapidly to serve large energy bases and the integration of large-scale renewable energy. This paper reviews the latest research on AC/DC transmission technologies and summarizes the characteristics of AC/DC power grids, concluding that China's power grid is entering a new period of large-scale hybrid UHV AC/DC operation in which the characteristic of "strong DC and weak AC" becomes increasingly prominent. Possible problems in the operation of hybrid AC/DC power grids are discussed, and the interactions between the AC and DC grids are studied in detail. To address these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, promoting protection measures for the grid integration of clean energy, and taking actions to solve stability problems of voltage and frequency. These measures help hybrid UHV AC/DC power grids adapt to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.

  6. Using Computing and Data Grids for Large-Scale Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2001-01-01

    We use the term "Grid" to refer to a software system that provides uniform and location-independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.

  7. The FORBIO Climate data set for climate analyses

    NASA Astrophysics Data System (ADS)

    Delvaux, C.; Journée, M.; Bertrand, C.

    2015-06-01

    In the framework of the interdisciplinary FORBIO Climate research project, the Royal Meteorological Institute of Belgium is in charge of providing high-resolution gridded past climate data (i.e. temperature and precipitation). This climate data set will be linked to measurements on seedlings, saplings and mature trees to assess the effects of climate variation on tree performance. This paper explains how the gridded daily temperature (minimum and maximum) data set was generated from a consistent station network between 1980 and 2013. After station selection, data quality control procedures were developed and applied to the station records to ensure that only valid measurements are involved in the gridding process. Thereafter, the set of unevenly distributed validated temperature data was interpolated onto a 4 km × 4 km regular grid over Belgium. The performance of different interpolation methods was assessed; kriging with external drift, using the correlation between temperature and altitude, gave the most relevant results.

  8. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, a distributed and load-balanced intrusion detection system can be realized. This paper first discusses the security mechanisms in grid computing and the function of PKI/CA in the grid security system, then describes how the characteristics of grid computing are applied in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security system that can reduce processing delay and maintain detection rates.

  9. GeoPAT: A toolbox for pattern-based information retrieval from large geospatial databases

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Jarosław; Netzel, Paweł; Stepinski, Tomasz

    2015-07-01

    Geospatial Pattern Analysis Toolbox (GeoPAT) is a collection of GRASS GIS modules for carrying out pattern-based geospatial analysis of images and other spatial datasets. The need for pattern-based analysis arises when images/rasters contain rich spatial information either because of their very high resolution or their very large spatial extent. Elementary units of pattern-based analysis are scenes - patches of surface consisting of a complex arrangement of individual pixels (patterns). GeoPAT modules implement popular GIS algorithms, such as query, overlay, and segmentation, to operate on the grid of scenes. To achieve these capabilities GeoPAT includes a library of scene signatures - compact numerical descriptors of patterns, and a library of distance functions - providing numerical means of assessing dissimilarity between scenes. Ancillary GeoPAT modules use these functions to construct a grid of scenes or to assign signatures to individual scenes having regular or irregular geometries. Thus GeoPAT combines knowledge retrieval from patterns with mapping tasks within a single integrated GIS environment. GeoPAT is designed to identify and analyze complex, highly generalized classes in spatial datasets. Examples include distinguishing between different styles of urban settlements using VHR images, delineating different landscape types in land cover maps, and mapping physiographic units from DEM. The concept of pattern-based spatial analysis is explained and the roles of all modules and functions are described. A case study example pertaining to delineation of landscape types in a subregion of NLCD is given. Performance evaluation is included to highlight GeoPAT's applicability to very large datasets. The GeoPAT toolbox is available for download from

  10. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project onto an irregular grid, while the edges of B project onto the edges of a regular grid. More general splines may be formed as linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines with s = 1, whereas splines over a regular triangular grid are box splines with s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact leads, by a geometric approach, to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.
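
    In the univariate case (s = 1), the recursion referred to above takes the familiar Cox-de Boor form (quoted here as a generic illustration over a knot sequence t_i, not as the paper's multivariate formula):

        B_{i,1}(x) = \begin{cases} 1, & t_i \le x < t_{i+1}, \\ 0, & \text{otherwise}, \end{cases}
        \qquad
        B_{i,k}(x) = \frac{x - t_i}{t_{i+k-1} - t_i} \, B_{i,k-1}(x)
                   + \frac{t_{i+k} - x}{t_{i+k} - t_{i+1}} \, B_{i+1,k-1}(x),

    so that repeated recursion expresses B_{i,k} in terms of the piecewise-constant B_{j,1}, with x-dependent coefficients as stated above.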

  11. Sound-field measurement with moving microphones

    PubMed Central

    Katzberg, Fabrice; Mazur, Radoslaw; Maass, Marco; Koch, Philipp; Mertins, Alfred

    2017-01-01

    Closed-room scenarios are characterized by reverberation, which decreases the performance of applications such as hands-free teleconferencing and multichannel sound reproduction. However, exact knowledge of the sound field inside a volume of interest enables the compensation of room effects and allows for a performance improvement within a wide range of applications. The sampling of sound fields involves the measurement of spatially dependent room impulse responses, where the Nyquist-Shannon sampling theorem applies in the temporal and spatial domains. The spatial measurement often requires a huge number of sampling points and entails other difficulties, such as the need for exact calibration of a large number of microphones. In this paper, a method for measuring sound fields using moving microphones is presented. The number of microphones is customizable, allowing for a tradeoff between hardware effort and measurement time. The goal is to reconstruct room impulse responses on a regular grid from data acquired with microphones between grid positions, in general. For this, the sound field at equidistant positions is related to the measurements taken along the microphone trajectories via spatial interpolation. The benefits of using perfect sequences for excitation, a multigrid recovery, and the prospects for reconstruction by compressed sensing are presented. PMID:28599533

  12. Evaluation and application of the ROMS 1-way embedding procedure to the central california upwelling system

    NASA Astrophysics Data System (ADS)

    Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.

    What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O(1-30) km near-shore and O(30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve well both types of currents. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a 1-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure has been applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and to solutions for only the inner region at high resolution with mean-seasonal boundary conditions.

  13. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    PubMed

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resulting spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show, on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points, that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the timing and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
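
    A generic sketch of the spatial-ICA step (decomposing reconstructed CSD into components with fixed spatial maps and associated time courses) using scikit-learn; the array shapes, placeholder data, and component count are assumptions for illustration:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(2)
        # csd: reconstructed current source density, shape (n_times, n_sites),
        # e.g. n_sites = 4 * 5 * 7 = 140 for the grid mentioned above.
        csd = rng.standard_normal((1000, 140))   # placeholder for iCSD output

        # Spatial ICA: spatial sites are the samples, time points the features,
        # so the extracted sources are spatially independent maps.
        ica = FastICA(n_components=5, random_state=0)
        spatial_maps = ica.fit_transform(csd.T)  # (n_sites, 5): component maps
        time_courses = ica.mixing_               # (n_times, 5): their dynamics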

  14. A Damping Grid Strapdown Inertial Navigation System Based on a Kalman Filter for Ships in Polar Regions.

    PubMed

    Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu

    2017-07-03

    The grid strapdown inertial navigation system (SINS) used in polar navigation exhibits the same three kinds of periodic oscillation errors as common SINS based on a geographic coordinate system. For ships that can use external information to reset the system regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. A Kalman filter based on the grid SINS error model for ships is established in this paper. The errors of the grid-level attitude angles can be accurately estimated even when the external velocity contains a constant error, and correcting these errors through feedback then effectively damps the Schuler periodic oscillation. The simulation results show that, with the aid of an external reference velocity, the proposed external level damping algorithm based on the Kalman filter suppresses the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on a damping network, the proposed algorithm reduces the overshoot errors when the grid SINS is switched from the non-damping state to the damping state, and this effectively improves the navigation accuracy of the system.
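
    For reference, the predict/update cycle of the standard textbook Kalman filter underlying such an estimator is, for state estimate x̂, covariance P, dynamics F, process noise Q, and measurement z = Hx + v with cov(v) = R:

        \hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1}, \qquad
        P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\mathsf{T}} + Q_k,

        K_k = P_{k|k-1} H_k^{\mathsf{T}} \left( H_k P_{k|k-1} H_k^{\mathsf{T}} + R_k \right)^{-1},

        \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - H_k \hat{x}_{k|k-1} \right), \qquad
        P_{k|k} = (I - K_k H_k) P_{k|k-1}.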

  15. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems involving gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two bioinformatics benchmark applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  16. Can fractal objects operate as efficient inline mixers?

    NASA Astrophysics Data System (ADS)

    Laizet, Sylvain; Vassilicos, John; Turbulence, Mixing; Flow Control Group Team

    2011-11-01

    Recently, Hurst & Vassilicos (PoF 2007), Seoud & Vassilicos (PoF 2007), and Mazellier & Vassilicos (PoF 2010) used different multiscale grids to generate turbulence in a wind tunnel and showed that complex multiscale boundary/initial conditions can drastically influence the behaviour of a turbulent flow, and that the detailed specific nature of the multiscale geometry matters too. Multiscale (fractal) objects can be designed to be immersed in any fluid flow where there is a need to control and design the turbulence generated by the object. Different types of multiscale objects can be designed as different types of energy-efficient mixers with varying degrees of high turbulent intensity, small pressure drop, and downstream distance from the grid where the turbulence is most vigorous. Here, we present a 3D DNS study of the stirring and mixing of a passive scalar by turbulence generated with either a fractal square grid or a regular grid in the presence of a mean scalar gradient. The results show that: (1) the passive scalar variance increases linearly for both grids; (2) the passive scalar variance is ten times larger for the fractal grid; (3) the passive scalar flux is constant after the production region for both grids; (4) the passive scalar flux is enhanced by an order of magnitude for the fractal grid. We acknowledge support from EPSRC, UK.

  17. A class of renormalised meshless Laplacians for boundary value problems

    NASA Astrophysics Data System (ADS)

    Basic, Josip; Degiuli, Nastia; Ban, Dario

    2018-02-01

    A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.

  18. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second important issue is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results and related analysis are provided to demonstrate the effectiveness of the proposed algorithms. PMID:24675758

  19. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, on the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
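
    A toy sketch of the decomposition idea using SciPy: median splits along alternating axes yield balanced blocks even for clustered input, after which each block gets a local Delaunay tessellation. The real algorithm's inter-block consistency handling is not reproduced here, and the median rule is just one of the split strategies mentioned above:

        import numpy as np
        from scipy.spatial import Delaunay

        def kd_partition(points, depth=3, axis=0):
            """Recursively split points at the median into 2**depth
            balanced blocks (one simple split-point strategy)."""
            if depth == 0:
                return [points]
            order = np.argsort(points[:, axis])
            half = len(points) // 2
            lo, hi = points[order[:half]], points[order[half:]]
            nxt = (axis + 1) % points.shape[1]
            return (kd_partition(lo, depth - 1, nxt)
                    + kd_partition(hi, depth - 1, nxt))

        rng = np.random.default_rng(3)
        # Deliberately unbalanced input: a dense cluster plus sparse background.
        pts = np.vstack([rng.normal(0, 0.05, (5000, 3)),
                         rng.uniform(-1, 1, (500, 3))])
        blocks = kd_partition(pts, depth=3)
        local_tess = [Delaunay(b) for b in blocks]  # balanced work per block
        print([len(b) for b in blocks])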

  20. Pulsed laser-induced formation of silica nanogrids

    PubMed Central

    2014-01-01

    Silica grids with micron to sub-micron mesh sizes and wire diameters of 50 nm are fabricated on fused silica substrates. They are formed by single-pulse structured excimer laser irradiation of a UV-absorbing silicon suboxide (SiO x ) coating through the transparent substrate. A polydimethylsiloxane (PDMS) superstrate (cover layer) coated on top of the SiO x film prior to laser exposure serves as confinement for controlled laser-induced structure formation. At sufficiently high laser fluence, this process leads to grids consisting of a periodic loop network connected to the substrate at regular positions. By an additional high-temperature annealing, the residual SiO x is oxidized, and a pure SiO2 grid is obtained. PACS 81.07.-b; 81.07.Gf; 81.65.Cf PMID:24581305

  1. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  2. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical boundary conditions. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected vertical lines, plotted for each reconstructed image and the ground truth (GT = ground truth, TV = total variation, TGV = total generalized variation); reconstructions from the GREIT algorithm are also shown.
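
    For context, the second-order TGV functional widely used in imaging on regular grids (a generic definition, not this paper's FEM adaptation) is

        \mathrm{TGV}_{\alpha}^{2}(u) = \min_{w} \; \alpha_1 \int_{\Omega} | \nabla u - w | \, dx
                                     + \alpha_0 \int_{\Omega} | \mathcal{E}(w) | \, dx,

    where \mathcal{E}(w) is the symmetrized gradient of the auxiliary vector field w: choosing w = 0 recovers (weighted) TV, while allowing w ≈ ∇u leaves affine regions unpenalized, which is what suppresses the staircase effect.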

  3. Interactive (statistical) visualisation and exploration of a billion objects with vaex

    NASA Astrophysics Data System (ADS)

    Breddels, M. A.

    2017-06-01

    With new catalogues arriving, such as Gaia DR1 containing more than a billion objects, new methods of handling and visualizing these data volumes are needed. We show that by calculating statistics on a regular (N-dimensional) grid, visualizations of a billion objects can be done within a second on a modern desktop computer. This is achieved using memory mapping of hdf5 files together with a simple binning algorithm, which are part of a Python library called vaex. This enables efficient interactive exploration of large datasets, making science exploration of large catalogues feasible. Vaex is a Python library and an application which allows for interactive exploration and visualization. The motivation for developing vaex is the catalogue of the Gaia satellite; however, vaex can also be used on SPH or N-body simulations, other (future) catalogues such as SDSS, Pan-STARRS, LSST, etc., or other tabular data. The homepage for vaex is http://vaex.astro.rug.nl.
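
    The core trick, computing statistics on a regular grid rather than plotting per object, can be illustrated with plain NumPy; this is a sketch of the idea, not vaex's implementation:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000_000           # scales to ~1e9 with memory-mapped columns
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)

        # Count statistic on a 256 x 256 regular grid: one pass over the data,
        # after which any plot costs O(grid size), independent of object count.
        # (np.histogramdd generalizes the same idea to N dimensions.)
        counts, xedges, yedges = np.histogram2d(x, y, bins=256,
                                                range=[[-5, 5], [-5, 5]])
        density = np.log1p(counts)   # log scaling for display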

  4. Processing large remote sensing image data sets on Beowulf clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.

  5. Research on the comparison of extension mechanism of cellular automaton based on hexagon grid and rectangular grid

    NASA Astrophysics Data System (ADS)

    Zhai, Xiaofang; Zhu, Xinyan; Xiao, Zhifeng; Weng, Jie

    2009-10-01

    Historically, a cellular automaton (CA) is a discrete dynamical mathematical structure defined on a spatial grid. Research on cellular automata systems (CAS) has focused on rule sets and initial conditions and has not discussed adjacency. Thus, the main focus of our study is the effect of adjacency on CA behavior. This paper compares rectangular grids with hexagonal grids in terms of their characteristics, strengths, and weaknesses, which strongly influence modeling results and other applications, including the role of the nearest neighborhood in experimental design. Our research shows that rectangular and hexagonal grids have different characteristics and are suited to distinct applications; the regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have not been widely discussed. The rectangular grid is generally preferred because of its symmetry, especially in orthogonal coordinate systems, and because of the frequent use of rasters from Geographic Information Systems (GIS). However, for complex terrain and uncertain, multidirectional regions, we prefer hexagonal grids and methods, which facilitate and simplify the problem. Hexagonal grids can overcome directional warp and have some unique characteristics. For example, a hexagonal grid has a simpler and more symmetric nearest neighborhood, which avoids the ambiguities of the rectangular grid. Movement paths or connectivity and the most compact arrangement of pixels give hexagonal grids a clear advantage in modeling and analysis. The selection of an appropriate grid should be based on the requirements and objectives of the application. We used rectangular and hexagonal grids, respectively, to develop a city model, making use of remote sensing images to acquire the 2002 and 2005 land state of Wuhan. Starting from the 2002 city land state, we used the CA to simulate a plausible form of the city in 2005. These results provide a proof of concept for the advantages of the hexagonal grid.
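
    The neighborhood difference is easy to make concrete: in axial coordinates a hexagonal cell has six unambiguous equidistant neighbors, whereas a square cell must choose between the 4-neighbor von Neumann and 8-neighbor Moore sets. The coordinate convention below is one common choice, not taken from the paper:

        def hex_neighbors(q, r):
            """Six neighbors of a hexagonal cell in axial coordinates (q, r)."""
            return [(q + dq, r + dr)
                    for dq, dr in [(1, 0), (1, -1), (0, -1),
                                   (-1, 0), (-1, 1), (0, 1)]]

        def square_neighbors(i, j, moore=False):
            """4-neighborhood (von Neumann) or 8-neighborhood (Moore)."""
            steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if moore:
                steps += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
            return [(i + di, j + dj) for di, dj in steps]

        print(hex_neighbors(0, 0))                 # always 6 equidistant neighbors
        print(square_neighbors(0, 0, moore=True))  # 8, with longer diagonals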

  6. Forecasting Epidemics Through Nonparametric Estimation of Time-Dependent Transmission Rates Using the SEIR Model.

    PubMed

    Smirnova, Alexandra; deCamp, Linda; Chowell, Gerardo

    2017-05-02

    Deterministic and stochastic methods relying on early case incidence data for forecasting epidemic outbreaks have received increasing attention during the last few years. In mathematical terms, epidemic forecasting is an ill-posed problem due to instability of parameter identification and limited available data. While previous studies have largely estimated the time-dependent transmission rate by assuming specific functional forms (e.g., exponential decay) that depend on a few parameters, here we introduce a novel approach for the reconstruction of nonparametric time-dependent transmission rates by projecting onto a finite subspace spanned by Legendre polynomials. This approach enables us to effectively forecast future incidence cases, a clear advantage over recovering the transmission rate only at finitely many grid points within the interval where the data are currently available. In our approach, we compare three regularization algorithms: variational (Tikhonov's) regularization, truncated singular value decomposition (TSVD), and modified TSVD, in order to determine the stabilizing strategy that is most effective in terms of reliability of forecasting from limited data. We illustrate our methodology using simulated data as well as case incidence data for various epidemics, including the 1918 influenza pandemic in San Francisco and the 2014-2015 Ebola epidemic in West Africa.
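
    A minimal sketch of the forward model in this setting: an SEIR system whose time-dependent transmission rate β(t) is expanded in Legendre polynomials. The coefficient values, window length, and epidemiological parameters below are arbitrary illustrations; in the paper such coefficients would be recovered by the regularized inversion:

        import numpy as np
        from numpy.polynomial import legendre
        from scipy.integrate import solve_ivp

        T = 60.0                     # observation window in days (assumption)
        coef = [0.40, -0.10, 0.05]   # Legendre coefficients of beta(t) (arbitrary)

        def beta(t):
            # Map t in [0, T] onto the Legendre domain [-1, 1].
            return legendre.legval(2.0 * t / T - 1.0, coef)

        def seir(t, y, sigma=1/5.1, gamma=1/7.0, N=1e6):
            S, E, I, R = y
            new_inf = beta(t) * S * I / N
            return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

        sol = solve_ivp(seir, (0.0, T), [1e6 - 10, 0.0, 10.0, 0.0],
                        dense_output=True)
        infectious = sol.y[2]        # infectious compartment over time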

  7. Crew Earth Observations (CEO) taken during Expedition 9

    NASA Image and Video Library

    2004-06-07

    ISS009-E-10382 (7 June 2004) --- Tucson, Arizona is featured in this image photographed by an Expedition 9 crewmember on the International Space Station (ISS). Tucson lies between the forested Catalina Mountains and the Tucson Mountains (dark reddish brown at left). The typical western North American cityscape is a pattern of regular north-south aligned rectangles outlined by major streets set one mile apart. Tucson’s Randolph golf course is the large rectangular dark zone in the image center. The striking contrast between the golf course and its surroundings is due to dense grass cover maintained by frequent watering. The rectangular grid pattern disappears in the small streets of the original city center, situated along the Santa Cruz River (enters the view lower left and exits in the top left corner). Newer and less densely built-up neighborhoods in the foothills of the Catalina Mountains are designed to incorporate natural landscape features, and retain major washes with natural vegetation. This portion of the cityscape seen from space is consequently quite different from the main city grid. The foothills afford views of the city to the south and the mountains to the north and are major areas of development. Large white dots within the urban grid are the reflective rooftops of shopping malls. Tucson enjoys an important position along several major crossroads. Interstate highway I-10, which connects southern California to Florida, appears as a straight line running parallel with the Santa Cruz River northwest from Tucson in the direction of Phoenix. The I-10 traverses a well-marked alluvial fan that extends from the Santa Rita Mountains to the southeast (fine drainage pattern lower center) and exits the view lower right. Highway I-19 is the straight line (lower left) leading south from the city center, between the Santa Cruz River and rectangular spoil heaps of nearby copper mines. The I-19 connects Tucson with Nogales on the Mexican border.

  8. Duke Energy | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Smart Inverters for Grid Stability: NREL and Duke Energy are exploring ways that smart inverters can increase grid stability.

  9. Optimal Grid Size for Inter-Comparability of MODIS And VIIRS Vegetation Indices at Level 2G or Higher

    NASA Astrophysics Data System (ADS)

    Campagnolo, M.; Schaaf, C.

    2016-12-01

    Due to the necessity of time compositing and other user requirements, vegetation indices, as well as many other EOS-derived products, are distributed in a gridded format (level L2G or higher) using an equal-area sinusoidal grid, at grid sizes of 232 m, 463 m or 926 m. In this process, the actual surface signal suffers some degradation, caused by both the sensor's point spread function and the resampling from swath to the regular grid. The magnitude of that degradation depends on a number of factors, such as surface heterogeneity, band nominal resolution, observation geometry and grid size. In this research, the effect of grid size is quantified for MODIS and VIIRS (at five EOS validation sites with distinct land covers), for the full range of view zenith angles, and at grid sizes of 232 m, 253 m, 309 m, 371 m, 397 m and 463 m. This allows us to compare MODIS and VIIRS gridded products for the same scenes, and to determine the grid size at which these products are most similar. Towards that end, simulated MODIS and VIIRS bands are generated from Landsat 8 surface reflectance images at each site, and gridded products are then derived using maximum obscov resampling. Then, for every grid size, the original Landsat 8 NDVI and the derived MODIS and VIIRS NDVI products are compared. This methodology can be applied to other bands and products to determine which spatial aggregation is best suited overall for EOS to S-NPP product continuity. Results for the MODIS (250 m bands) and VIIRS (375 m bands) NDVI products show that finer grid sizes tend to be better at preserving the original signal. Significant degradation of gridded NDVI occurs when the grid size is larger than 253 m (MODIS) and 371 m (VIIRS). Our results suggest that the current MODIS "500 m" (actually 463 m) grid size is best for product continuity. Note, however, that up to that grid size value, MODIS gridded products are somewhat better at preserving the surface signal than VIIRS, except at very high VZA.

  10. Comparison of measuring strategies for the 3-D electrical resistivity imaging of tumuli

    NASA Astrophysics Data System (ADS)

    Tsourlos, Panagiotis; Papadopoulos, Nikos; Yi, Myeong-Jong; Kim, Jung-Ho; Tsokas, Gregory

    2014-02-01

    Artificially erected hills like tumuli, mounds, barrows and kurgans are monuments of past human activity and offer opportunities to reconstruct habitation models regarding life and customs during their building period. These structures also host features of archaeological significance like architectural relics, graves or chamber tombs. Tumulus exploration is a challenging geophysical problem due to the complex distribution of the subsurface physical properties, the size and burial depth of potential relics, and the uneven topographic terrain. Geoelectrical methods by means of three-dimensional (3-D) inversion are increasingly popular for tumulus investigation. Typically, data are obtained by establishing a regular rectangular grid and assembling the data collected by parallel two-dimensional (2-D) tomographies. In this work the application of a radial 3-D mode is studied, which can be considered the assembly of data collected by radially positioned Electrical Resistivity Tomography (ERT) lines. The relative advantages and disadvantages of this measuring mode over regular grid measurements were investigated, and optimum ways to perform 3-D ERT surveys for tumuli investigations are proposed. Comparative tests were performed by means of synthetic examples as well as tests with field data. Overall, all tested models verified the superiority of the radial mode in delineating bodies positioned at the central part of the tumulus, while the regular measuring mode proved superior in recovering bodies positioned away from the center of the tumulus. The combined use of radial and regular modes seems to produce superior results at the expense of the time required for data acquisition and processing.

  11. Automatic optical inspection of regular grid patterns with an inspection camera used below the Shannon-Nyquist criterion for optical resolution

    NASA Astrophysics Data System (ADS)

    Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.

    2017-02-01

    An Automatic Optical Inspection (AOI) system is described for imaging devices used in the automotive industry, using inspection optics with lower spatial resolution than the device under inspection. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is that it is capable of detecting and quantifying defects in regular patterns while working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially suited to regular patterns, like the ones that can be produced in image displays and Head Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are the ones impairing the function of the display, and in AOI they are preferred to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since this is what allows quantitative inspection below Shannon-Nyquist. For HUDs, the functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
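
    The essence of such a notch multi-band Fourier filter can be sketched as follows: the harmonics of the regular pattern are suppressed in the Fourier domain, and whatever survives the inverse transform is dominated by defects. This is a minimal sketch under assumed parameters (the pattern pitch in pixels and the notch radius are illustrative, not values from the paper).

        import numpy as np

        def notch_filter_defects(img, pitch, notch_radius=2):
            # Zero out the Fourier harmonics of a regular pattern of the given
            # pitch (in pixels); the residual image highlights pattern defects.
            F = np.fft.fftshift(np.fft.fft2(img))
            h, w = img.shape
            cy, cx = h // 2, w // 2
            fy, fx = h / pitch, w / pitch  # harmonic spacing in FFT bins
            Y, X = np.ogrid[:h, :w]
            mask = np.ones((h, w))
            for ky in range(-int(cy // fy), int(cy // fy) + 1):
                for kx in range(-int(cx // fx), int(cx // fx) + 1):
                    if ky == 0 and kx == 0:
                        continue  # keep the DC term (mean intensity)
                    py, px = cy + ky * fy, cx + kx * fx
                    mask[(Y - py) ** 2 + (X - px) ** 2 <= notch_radius ** 2] = 0.0
            return np.fft.ifft2(np.fft.ifftshift(F * mask)).real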

  12. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.

  13. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    NASA Astrophysics Data System (ADS)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using the reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of the solution in the region of interest.
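
    To illustrate what a reduced latitude-longitude grid looks like, the sketch below scales the number of longitude points on each latitude circle roughly with the cosine of latitude, subject to a minimum. The parameters are assumed for illustration; this is not the actual SL-AV20 grid construction.

        import numpy as np

        def reduced_grid(nlat=90, nlon_eq=180, nlon_min=16):
            # Fewer longitude points toward the poles keeps the zonal grid
            # spacing (in km) roughly uniform over the sphere.
            lats = np.linspace(-90.0, 90.0, nlat)
            nlon = np.maximum(nlon_min,
                              (nlon_eq * np.cos(np.radians(lats))).astype(int))
            return lats, nlon

        lats, nlon = reduced_grid()
        saving = 1.0 - nlon.sum() / (len(lats) * 180)
        print("grid points saved vs. regular grid: %.1f%%" % (100 * saving))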

  14. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also

  15. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high-performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
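
    The paper's three strategies are not reproduced here, but as a baseline illustration of the underlying problem, the sketch below applies the classic largest-processing-time greedy heuristic: blocks (weighted, e.g., by grid point count) are sorted by cost and each is assigned to the currently least loaded processor. All names are assumed for illustration.

        import heapq

        def lpt_balance(block_costs, nprocs):
            # Largest-processing-time heuristic: sort blocks by cost, then
            # repeatedly give the next block to the least loaded processor.
            heap = [(0.0, p, []) for p in range(nprocs)]
            heapq.heapify(heap)
            for cost in sorted(block_costs, reverse=True):
                load, p, blocks = heapq.heappop(heap)
                blocks.append(cost)
                heapq.heappush(heap, (load + cost, p, blocks))
            return sorted(heap)  # (load, processor, assigned block costs)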

  16. Nonlinear refraction and reflection travel time tomography

    USGS Publications Warehouse

    Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.

    1998-01-01

    We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first-arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
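
    A minimal sketch of the kind of Laplacian-regularized inversion described above (normal equations solved by conjugate gradients) is given below. The travel time forward operator G is assumed given, and the setup is simplified to slownesses on a 2-D grid without the reflector-geometry terms.

        import numpy as np
        from scipy.sparse import diags, eye, kron
        from scipy.sparse.linalg import LinearOperator, cg

        def laplacian_2d(ny, nx):
            # Five-point Laplacian on an ny x nx cell grid (roughness penalty).
            def lap1d(n):
                return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
            return kron(eye(ny), lap1d(nx)) + kron(lap1d(ny), eye(nx))

        def regularized_solve(G, d, ny, nx, lam=1.0):
            # Minimize ||G m - d||^2 + lam ||L m||^2 via CG on (G'G + lam L'L) m = G'd.
            L = laplacian_2d(ny, nx)
            n = G.shape[1]
            A = LinearOperator((n, n),
                               matvec=lambda m: G.T @ (G @ m) + lam * (L.T @ (L @ m)))
            m, info = cg(A, G.T @ d)
            return m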

  17. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    NASA Astrophysics Data System (ADS)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate the adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. The results show the effectiveness of the battery storage system and of the proposed output control methods in ensuring stable operation of power grids with a large-scale PV system. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.

  18. Spiking Neurons in a Hierarchical Self-Organizing Map Model Can Learn to Develop Spatial and Temporal Properties of Entorhinal Grid Cells and Hippocampal Place Cells

    PubMed Central

    Pilly, Praveen K.; Grossberg, Stephen

    2013-01-01

    Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enables them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta-band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips composed of networks of noisy spiking neurons with multiple-level adaptive weights, for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130

  19. SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, M; Tobias, R; Pankuch, M

    Purpose: The objective was to develop a method for dose distribution calculation of spatially-fractionated-GRID-radiotherapy (SFGRT) in the Eclipse treatment-planning-system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the iso-center level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment plan displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to the measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence onto the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially-fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID fields can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create a virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.

  20. Changing from computing grid to knowledge grid in life-science grid.

    PubMed

    Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy

    2009-09-01

    Grid computing has a great potential to become a standard cyber infrastructure for the life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to large amounts of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in a library and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations, for example grid computing. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for forming communities that share tacit knowledge. By extending the concept of grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  1. From grid cells to place cells with realistic field sizes

    PubMed Central

    2017-01-01

    While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005

  2. Large Area Coverage of a TPC Endcap with GridPix Detectors

    NASA Astrophysics Data System (ADS)

    Kaminski, Jochen

    2018-02-01

    The Large Prototype TPC at DESY, Hamburg, was built by the LCTPC collaboration as a testbed for new readout technologies for Time Projection Chambers. Up to seven modules of about 400 cm² each can be placed in the endcap. Three of these modules were equipped with a total of 160 GridPix detectors, a combination of a highly pixelated readout ASIC with a Micromegas built on top. GridPix detectors have a very high efficiency for detecting primary electrons, which leads to excellent spatial and energy resolutions. For the first time a large number of GridPix detectors has been operated, and long segments of tracks have been recorded with excellent precision.

  3. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.

  4. A distributed air index based on maximum boundary rectangle over grid-cells for wireless non-flat spatial data broadcast.

    PubMed

    Im, Seokjin; Choi, JinTak

    2014-06-17

    In the pervasive computing environment using smart devices equipped with various sensors, a wireless data broadcasting system for spatial data items is a natural way to efficiently provide a location-dependent information service, regardless of the number of clients. A non-flat wireless broadcast system can support the clients in accessing their preferred data items quickly, by disseminating the preferred data items more frequently than regular data on the wireless channel. To efficiently support the processing of spatial window queries in a non-flat wireless data broadcasting system, we propose a distributed air index based on a maximum boundary rectangle (MaxBR) over grid-cells (abbreviated DAIM), which uses MaxBRs for filtering out hot data items on the wireless channel. Unlike existing indexes that repeat regular data items in close proximity to hot items at the same frequency as the hot data items in a broadcast cycle, DAIM makes it possible to repeat only hot data items in a cycle and reduces the length of the broadcast cycle. Consequently, DAIM helps the clients access the desired items quickly, improving access time and reducing energy consumption. In addition, a MaxBR helps the clients decide whether or not they have to access regular data items. Simulation studies show the proposed DAIM outperforms existing schemes with respect to access time and energy consumption.
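
    The filtering step enabled by a MaxBR reduces, at its core, to an axis-aligned rectangle intersection test: a client processing a window query needs the hot items of a grid cell only if its query window overlaps that cell's MaxBR. A minimal sketch with an assumed rectangle representation:

        def intersects(query, maxbr):
            # Rectangles are (xmin, ymin, xmax, ymax); True if they overlap,
            # i.e., the client must read the cell's hot items for this query.
            qx1, qy1, qx2, qy2 = query
            mx1, my1, mx2, my2 = maxbr
            return qx1 <= mx2 and mx1 <= qx2 and qy1 <= my2 and my1 <= qy2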

  5. On a turbulent wall model to predict hemolysis numerically in medical devices

    NASA Astrophysics Data System (ADS)

    Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung

    2017-11-01

    Analyzing the degradation of red blood cells is very important for medical devices that carry blood flows. Blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall, so predicting hemolysis numerically can require a very fine mesh and large computational resources. To resolve this issue, the purpose of this study is to develop a turbulent wall model that predicts hemolysis more efficiently. To decrease the numerical error of hemolysis prediction at coarse grid resolution, we divide the computational domain into two regions and apply a different approach to each. In the near-wall region with a steep velocity gradient, an analytic approach using a modeled velocity profile is applied, reducing the numerical error and thereby allowing a coarse grid resolution; we adopt the Van Driest law as the model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for several turbulent flows inside a cannula and centrifugal pumps. The results show that the proposed turbulent wall model for hemolysis improves the computational efficiency significantly for engineering applications.
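
    The Van Driest mean velocity profile used in the near-wall region can be tabulated by integrating the standard mixing-length relation du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)) with l+ = k y+ (1 - exp(-y+/A+)). The sketch below does this with the trapezoidal rule and the customary constants; it illustrates the modeled profile only, not the full wall model of the paper.

        import numpy as np

        def van_driest_profile(y_plus_max=300.0, n=3000, kappa=0.41, a_plus=26.0):
            # Integrate du+/dy+ for the Van Driest damped mixing length.
            y = np.linspace(0.0, y_plus_max, n)
            l = kappa * y * (1.0 - np.exp(-y / a_plus))
            dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * l ** 2))
            du = 0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y)  # trapezoidal increments
            u = np.concatenate(([0.0], np.cumsum(du)))
            return y, u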

  6. Efficient Modeling of Gravity Fields Caused by Sources with Arbitrary Geometry and Arbitrary Density Distribution

    NASA Astrophysics Data System (ADS)

    Wu, Leyuan

    2018-01-01

    We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution, defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
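
    The Toeplitz-by-FFT trick at the heart of this approach can be sketched in one dimension: a Toeplitz matrix-vector product is embedded in a circulant one, which the FFT diagonalizes, giving O(n log n) cost instead of O(n^2). A minimal sketch (1-D only; the paper works with 2-D/3-D convolutions):

        import numpy as np

        def toeplitz_matvec(c, r, x):
            # Multiply the Toeplitz matrix with first column c and first row r
            # (r[0] == c[0]) by x, via circulant embedding of size 2n - 1.
            n = len(x)
            col = np.concatenate([c, r[:0:-1]])       # circulant first column
            y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, len(col)))
            return y[:n].real

        # Quick check against the dense product:
        # from scipy.linalg import toeplitz
        # np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(c, r, x))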

  7. Preconditioned MoM Solutions for Complex Planar Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Jackson, D; Champagne, N

    2004-01-23

    The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach that takes into account the reusability properties of matrices arising from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered-material Green's functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported elsewhere; this contribution is limited to presenting new results.

  8. Quantitative characterization of the small-scale fracture patterns on the plains of Venus

    NASA Technical Reports Server (NTRS)

    Sammis, Charles G.; Bowman, David D.

    1995-01-01

    The objectives of this research project were to (1) compile a comprehensive database of the occurrence of regularly spaced kilometer-scale lineations on the volcanic plains of Venus, in an effort to verify the effectiveness of the shear-lag model developed by Banerdt and Sammis (1992), and (2) develop a model for the formation of irregular kilometer-scale lineations such as those typified in the gridded plains region of Guinevere Planitia. Attached to this report is the paper 'A Tectonic Model for the Formation of the Gridded Plains on Guinevere Planitia, Venus, and Implications for the Elastic Thickness of the Lithosphere'.

  9. GLOBAL GRIDS FROM RECURSIVE DIAMOND SUBDIVISIONS OF THE SURFACE OF AN OCTAHEDRON OR ICOSAHEDRON

    EPA Science Inventory

    In recent years a number of methods have been developed for subdividing the surface of the earth to meet the needs of applications in dynamic modeling, survey sampling, and information storage and display. One set of methods uses the surfaces of Platonic solids, or regular polyhe...

  10. Real-Time Rotational Activity Detection in Atrial Fibrillation

    PubMed Central

    Ríos-Muñoz, Gonzalo R.; Arenal, Ángel; Artés-Rodríguez, Antonio

    2018-01-01

    Rotational activations, or spiral waves, are one of the proposed mechanisms for atrial fibrillation (AF) maintenance. We present a system for assessing the presence of rotational activity from intracardiac electrograms (EGMs). Our system is able to operate in real time with multi-electrode catheters of different topologies in contact with the atrial wall, and it is based on new local activation time (LAT) estimation and rotational activity detection methods. The EGM LAT estimation method is based on the identification of the highest sustained negative slope of unipolar signals. The method is implemented as a linear filter whose output is interpolated on a regular grid to match any catheter topology. Its operation is illustrated on selected signals and compared to classical Hilbert-transform-based phase analysis. After the estimation of the LAT on the regular grid, the detection of rotational activity in the atrium is done by a novel method based on the optical flow of the wavefront dynamics and a rotation pattern match. The methods have been validated using in silico and real AF signals. PMID:29593566
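
    The resampling step that makes the detector catheter-agnostic amounts to interpolating scattered electrode measurements onto a regular grid. A minimal sketch with assumed inputs (electrode positions and their estimated LATs); the detection itself is not shown:

        import numpy as np
        from scipy.interpolate import griddata

        def lat_to_grid(xy, lat, nx=32, ny=32):
            # Interpolate LATs at k electrode positions xy (k x 2) onto a
            # regular nx x ny grid spanning the electrode cloud.
            gx, gy = np.meshgrid(np.linspace(xy[:, 0].min(), xy[:, 0].max(), nx),
                                 np.linspace(xy[:, 1].min(), xy[:, 1].max(), ny))
            return griddata(xy, lat, (gx, gy), method='cubic')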

  11. Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids

    NASA Astrophysics Data System (ADS)

    Abbà, Antonella; Campaniello, Dario; Nini, Michele

    2017-06-01

    The definition of the characteristic filter size to be used for subgrid-scale models in large eddy simulation on irregular grids is still an open problem. We investigate several different approaches to the definition of the filter length for anisotropic subgrid-scale models, and we propose a tensorial formulation based on the inertial ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account with the tensorial filter size.

  12. Modelling Pesticide Leaching At Column, Field and Catchment Scales I. Analysis of Soil Variability At Field and Catchment Scales

    NASA Astrophysics Data System (ADS)

    Gärdenäs, A.; Jarvis, N.; Alavi, G.

    The spatial variability of soil characteristics was studied in a small agricultural catchment (Vemmenhög, 9 km²) at the field and catchment scales. This analysis serves as a basis for assumptions concerning upscaling approaches used to model pesticide leaching from the catchment with the MACRO model (Jarvis et al., this meeting). The work focused on the spatial variability of two key soil properties for pesticide fate in soil, organic carbon and clay content. The Vemmenhög catchment is formed in a glacial till deposit in southernmost Sweden. The landscape is undulating (30-65 m a.s.l.) and 95 % of the area is used for crop production (winter rape, winter wheat, sugar beet and spring barley). The climate is warm temperate. Soil samples for organic C and texture were taken on a small regular grid at Näsby Farm (144 m x 144 m, sampling distance: 6-24 m, 77 points) and on an irregular large grid covering the whole catchment (sampling distance: 333 m, 46 points). At the field scale, it could be shown that the organic C content was strongly related to landscape position and height (R² = 73 %, p < 0.001, n = 50). The organic C content of hollows in the landscape is so high that they contribute little to the total loss of pesticides (Jarvis et al., this meeting). Clay content is also related to landscape position, being larger at the hilltop locations, resulting in lower near-saturated hydraulic conductivity. Hence, macropore flow can be expected to be more pronounced (see also Roulier & Jarvis, this meeting). The variability in organic C was similar for the field and catchment grids, which made it possible to krige the organic C content of the whole catchment using data from both grids and an uneven lag distance.

  13. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the consumer level. The large number of users and high frequency of job requests in the consumer market make this challenging. Clearly, current Client/Server (C/S)-based architectures will become unfeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  14. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    PubMed

    Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo

    2011-01-18

    The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities", and a combined identification and visualization method should reveal these structures and present them in a way that aids analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between the clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used in the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics.
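
    The memetic algorithm itself is not reproduced here, but the QAP objective and the kind of local search such metaheuristics embed can be sketched with a simple pairwise-swap hill climber; dist holds distances between grid positions and flow the similarities between objects, both assumed precomputed.

        import random

        def qap_swap_search(dist, flow, iters=10000, seed=0):
            # perm[i] = object placed at grid position i; minimizing
            # sum(dist[i][j] * flow[perm[i]][perm[j]]) draws similar objects together.
            rng = random.Random(seed)
            n = len(dist)
            perm = list(range(n))
            def cost(p):
                return sum(dist[i][j] * flow[p[i]][p[j]]
                           for i in range(n) for j in range(n))
            best = cost(perm)
            for _ in range(iters):
                i, j = rng.sample(range(n), 2)
                perm[i], perm[j] = perm[j], perm[i]
                c = cost(perm)  # full recomputation: fine for a toy, not at scale
                if c < best:
                    best = c
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # revert the swap
            return perm, best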

  15. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.

  16. Strategies to Enhance the Model Update in Regions of Weak Sensitivities for Use in Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Nuber, André; Manukyan, Edgar; Maurer, Hansruedi

    2014-05-01

    Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say, 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need to separate events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method equilibrates the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. We normalise the columns of the Jacobian based on their absolute column sums, while defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second proposed method adjusts the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are on the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
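
    The first balancing strategy can be sketched in a few lines: normalize each Jacobian column by its absolute column sum, relative to the best-resolved parameter, and cap the factors so that insignificant sensitivities are not over-boosted. The cap value is an assumed tuning parameter, not one from the abstract.

        import numpy as np

        def scale_columns(J, max_boost=100.0):
            # Column equilibration of the Jacobian with an upper threshold on
            # the scaling factors; returns the scaled Jacobian and the factors
            # (which must also be applied to the resulting model updates).
            s = np.abs(J).sum(axis=0)
            ref = s.max()
            factors = np.minimum(ref / np.maximum(s, 1e-300), max_boost)
            return J * factors, factors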

  17. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2-continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories, resulting in more accurate identification of Lagrangian coherent structures.
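
    The construction is easiest to see in its 2-D analogue, sketched below: fit a smooth spline to a streamfunction psi on a regular grid and take u = dpsi/dy, v = -dpsi/dx from the spline's analytic derivatives, so that du/dx + dv/dy = 0 holds exactly. (The paper's method is the 3-D version, with a B-spline vector potential and its curl; this 2-D sketch is an illustration, not the authors' code.)

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        def divergence_free_2d(x, y, psi):
            # psi has shape (len(x), len(y)); cubic spline gives C2 continuity.
            spl = RectBivariateSpline(x, y, psi, kx=3, ky=3)
            def velocity(px, py):
                u = spl(px, py, dx=0, dy=1, grid=False)   # u =  dpsi/dy
                v = -spl(px, py, dx=1, dy=0, grid=False)  # v = -dpsi/dx
                return u, v
            return velocity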

  18. glideinWMS—a generic pilot-based workload management system

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.

    2008-07-01

    Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) if they are to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share and just-in-time resource matching. glideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool in which the Condor daemons (startds) are started by pilot jobs, and real jobs are vanilla, standard or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.

  19. glideinWMS - A generic pilot-based Workload Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sfiligoi, Igor; /Fermilab

    Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) if they are to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share and just-in-time resource matching. GlideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool in which the Condor daemons (startds) are started by pilot jobs, and real jobs are vanilla, standard or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.

  20. Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.

    PubMed

    Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee

    2017-01-01

    At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments, and coordinated replay between place cells and grid cells may therefore be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the number of spurious findings. These results provide insight into the aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. They further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.

  1. A multi-resolution approach to electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-07-01

    We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  2. Wind-Tunnel Experiments for Gas Dispersion in an Atmospheric Boundary Layer with Large-Scale Turbulent Motion

    NASA Astrophysics Data System (ADS)

    Michioka, Takenobu; Sato, Ayumu; Sada, Koichi

    2011-10-01

    Large-scale turbulent motions enhancing horizontal gas spread in an atmospheric boundary layer are simulated in a wind-tunnel experiment. The large-scale turbulent motions can be generated using an active grid installed at the front of the test section in the wind tunnel, when appropriate parameters for the angular deflection and the rotation speed are chosen. The power spectra of vertical velocity fluctuations are unchanged with and without the active grid because they are strongly affected by the surface. The power spectra of both streamwise and lateral velocity fluctuations with the active grid increase in the low frequency region, and are closer to the empirical relations inferred from field observations. The large-scale turbulent motions do not affect the Reynolds shear stress, but change the balance of the processes involved. The relative contributions of ejections to sweeps are suppressed by large-scale turbulent motions, indicating that the motions behave as sweep events. The lateral gas spread is enhanced by the lateral large-scale turbulent motions generated by the active grid. The large-scale motions, however, do not affect the vertical velocity fluctuations near the surface, resulting in their having a minimal effect on the vertical gas spread. The peak concentration normalized using the root-mean-squared value of concentration fluctuation is remarkably constant over most regions of the plume irrespective of the operation of the active grid.

  3. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.

  4. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only to explain real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even when we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.

  5. OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Chan, William M.

    2012-01-01

    Structured grid solvers such as NASA's OVERFLOW compressible Navier-Stokes flow solver can generate large data files that contain convergence histories for flow equation residuals, turbulence model equation residuals, component forces and moments, and component relative motion dynamics variables. Most of today's large-scale problems can extend to hundreds of grids and over 100 million grid points. However, due to the lack of efficient tools, only a small fraction of the information contained in these files is analyzed. OVERSMART (OVERFLOW Solution Monitoring And Reporting Tool) provides a comprehensive report of solution convergence of flow computations over large, complex grid systems. It produces a one-page executive summary of the behavior of flow equation residuals, turbulence model equation residuals, and component forces and moments. Under the automatic option, a matrix of commonly viewed plots such as residual histograms, composite residuals, sub-iteration bar graphs, and component forces and moments is automatically generated. Specific plots required by the user can also be prescribed via a command file or a graphical user interface. Output is directed to the user's computer screen and/or to an html file for archival purposes. The current implementation has been targeted at the OVERFLOW flow solver, which is used to obtain a flow solution on structured overset grids. The OVERSMART framework allows easy extension to other flow solvers.

  6. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.

  7. Exploring Hypersonic, Unstructured-Grid Issues through Structured Grids

    NASA Technical Reports Server (NTRS)

    Mazaheri, Ali R.; Kleb, Bill

    2007-01-01

    Pure-tetrahedral unstructured grids have been shown to produce asymmetric heat transfer rates for symmetric problems. Meanwhile, two-dimensional structured grids produce symmetric solutions and, as documented here, introducing a spanwise degree of freedom to these structured grids also yields symmetric solutions. The effects of grid skewness and other perturbations of structured grids are investigated to uncover possible mechanisms behind the unstructured-grid solution asymmetries. By using controlled experiments around a known, good solution, the effects of particular grid pathologies are uncovered. These structured-grid experiments reveal that solution degradation similar to that of unstructured grids occurs, especially for heat transfer rates. Non-smooth grids within the boundary layer are also shown to produce large local errors in heat flux but do not affect surface pressures.

  8. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
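
    As a rough illustration of the kernel such a scheduler optimizes, the sketch below evolves a task-to-node mapping with a plain serial genetic algorithm. The task costs, node speeds, GA parameters, and makespan fitness are our assumptions for illustration only; the paper's hierarchical grid model, scheduler tree, and distributed evaluation of the mapping overhead are not reproduced here.

        import random

        TASK_COST = [4, 7, 2, 9, 5, 3, 8, 6]   # assumed work units per task
        NODE_SPEED = [1.0, 2.5, 1.5]           # assumed relative node speeds

        def makespan(mapping):
            """Fitness: time until the slowest node finishes its tasks."""
            load = [0.0] * len(NODE_SPEED)
            for task, node in enumerate(mapping):
                load[node] += TASK_COST[task] / NODE_SPEED[node]
            return max(load)

        def evolve(pop_size=50, generations=200, mutation_rate=0.1):
            n_tasks, n_nodes = len(TASK_COST), len(NODE_SPEED)
            pop = [[random.randrange(n_nodes) for _ in range(n_tasks)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)               # lower makespan is fitter
                survivors = pop[:pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, n_tasks)   # one-point crossover
                    child = a[:cut] + b[cut:]
                    for i in range(n_tasks):             # point mutation
                        if random.random() < mutation_rate:
                            child[i] = random.randrange(n_nodes)
                    children.append(child)
                pop = survivors + children
            return min(pop, key=makespan)

        best = evolve()
        print("mapping:", best, "makespan:", makespan(best))

    In the distributed setting described above, each node of the scheduler tree would run such a search over only its subtree's tasks and resources, which is what spreads the mapping overhead.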

  9. On the Surprising Salience of Curvature in Grouping by Proximity

    ERIC Educational Resources Information Center

    Strother, Lars; Kubovy, Michael

    2006-01-01

    The authors conducted 3 experiments to explore the roles of curvature, density, and relative proximity in the perceptual organization of ambiguous dot patterns. To this end, they developed a new family of regular dot patterns that tend to be perceptually grouped into parallel contours, dot-sampled structured grids (DSGs). DSGs are similar to the…

  10. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
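
    The "top down" half of such a method can be illustrated with a small serial sketch: recursively split any cell that intersects the geometry of interest until a target edge length is reached. The sphere intersection test and the sizes below are our assumptions; the paper's parallel decomposition and its "bottom up" pass are omitted.

        def intersects_body(lo, size):
            """Toy refinement criterion: cell touches a unit sphere at the origin."""
            d2 = 0.0
            for c in lo:
                nearest = min(max(0.0, c), c + size)  # nearest point in [c, c+size] to 0
                d2 += nearest * nearest
            return d2 <= 1.0

        def refine(lo=(-2.0, -2.0, -2.0), size=4.0, target=0.25):
            """Return leaf cells as (lower_corner, edge_length) tuples."""
            stack, leaves = [(lo, size)], []
            while stack:
                cell_lo, cell_size = stack.pop()
                if cell_size > target and intersects_body(cell_lo, cell_size):
                    half = cell_size / 2.0
                    for dx in (0.0, half):            # split into 8 children
                        for dy in (0.0, half):
                            for dz in (0.0, half):
                                child = (cell_lo[0] + dx, cell_lo[1] + dy,
                                         cell_lo[2] + dz)
                                stack.append((child, half))
                else:
                    leaves.append((cell_lo, cell_size))
            return leaves

        print(len(refine()), "leaf cells")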

  11. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  12. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  13. Folding Proteins at 500 ns/hour with Work Queue.

    PubMed

    Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A

    2012-10-01

    Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.

  14. Folding Proteins at 500 ns/hour with Work Queue

    PubMed Central

    Abdul-Wahid, Badi’; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A.

    2014-01-01

    Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour. PMID:25540799

  15. Analysis, tuning and comparison of two general sparse solvers for distributed memory computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amestoy, P.R.; Duff, I.S.; L'Excellent, J.-Y.

    2000-06-30

    We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL, located in Berkeley (USA), and CERFACS-ENSEEIHT, located in Toulouse (France). We discuss both the tuning and performance analysis of two distributed memory sparse solvers (SuperLU from Berkeley and MUMPS from Toulouse) on the 512-processor Cray T3E from NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then quite extensively analyze and compare the two approaches on a set of large problems from real applications. We further explain the main differences in the behavior of the approaches on artificial regular grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers to which this type of study should be extended.

  16. Preliminary result on the enhancement of Ufer electrodes using recycle additives materials

    NASA Astrophysics Data System (ADS)

    Zulkifli, Muhammad Haziq Aniq Bin; Ahmad, Hussein Bin

    2016-11-01

    Ground building pillars are to be used as ground rods. The pillars are designed, fabricated, and formulated with new ground fillers. The additives are derived from recycled waste materials, mainly from palm oil plant processing. Micro-scale building pillars will be fabricated and installed in the test ground at all of the locations. An earth tester meter is used to measure and collect soil resistivity data while the research is conducted. In collecting these data, the 3-terminal method is used to carry out the measurements. The experiment will be conducted for 30 weeks, and regular measurements at the test ground copper grids will be taken to determine the ground electrode resistance. The study is mainly based on IEC 62503-3. The use of reinforcing rods and a mixture of recycled additives could produce a better grounding system that is suitable for all kinds of soil conditions and for large industries.

  17. Asynchronous discrete event schemes for PDEs

    NASA Astrophysics Data System (ADS)

    Stone, D.; Geiger, S.; Lord, G. J.

    2017-08-01

    A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection-diffusion equation and advection-diffusion-reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate first-order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
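
    A deliberately small 1D analogue can make the event-driven idea concrete: quanta of mass cross faces of a regular grid, and each face fires at a time set by its instantaneous flux. The quantum size, diffusivity, and lazy heap invalidation below are our assumptions for illustration, not the paper's scheme.

        import heapq

        D, DX, Q = 1.0, 1.0, 0.05          # diffusivity, cell size, mass quantum
        mass = [0.0, 0.0, 1.0, 0.0, 0.0]   # initial condition: a spike
        version = [0] * (len(mass) - 1)    # face versions for stale-event checks
        events = []                        # heap of (fire_time, face, version)

        def schedule(face, now):
            flux = D * (mass[face] - mass[face + 1]) / DX
            if abs(flux) > 1e-12:          # event timescale ~ quantum / flux
                heapq.heappush(events, (now + Q / abs(flux), face, version[face]))

        now = 0.0
        for f in range(len(version)):
            schedule(f, now)

        while events and now < 5.0:
            now, face, ver = heapq.heappop(events)
            if ver != version[face]:
                continue                   # stale event: face was rescheduled
            if mass[face] > mass[face + 1]:
                donor, receiver = face, face + 1
            else:
                donor, receiver = face + 1, face
            moved = min(Q, mass[donor])    # pass one quantum through the face
            mass[donor] -= moved
            mass[receiver] += moved
            for f in (face - 1, face, face + 1):   # reschedule affected faces
                if 0 <= f < len(version):
                    version[f] += 1
                    schedule(f, now)

        print([round(m, 3) for m in mass])

    The scheme is local in time and space in the sense the abstract describes: only faces whose fluxes actually changed are re-examined, and each face advances on its own event clock.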

  18. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
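
    As a schematic of the PDE-constrained formulation described above, the problem can be written as a misfit minimization with Maxwell's equations as equality constraints and bounds on the conductivity; the notation below is ours, not the paper's:

        \min_{m,\,u_j}\ \sum_j \Bigl\| \tfrac{1}{\sigma_j}\,(d_j - Q_j u_j) \Bigr\|^2 + \lambda\,\| R\, m \|^2
        \quad\text{subject to}\quad A_j(m)\, u_j = b_j, \qquad m_{\min} \le m \le m_{\max},

    where m is the conductivity model, u_j the electromagnetic field at frequency j, A_j(m) the discretized Maxwell operator with boundary conditions, Q_j the operator sampling the field at the surface receivers, sigma_j the data uncertainties, and R an optional roughness operator standing in for whichever regularization is applied.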

  19. NREL Partners With General Electric, Duke Energy on Grid Voltage Regulation Study

    Science.gov Websites

    When a large solar photovoltaic (PV) system is connected to the electric grid, a utility's …

  20. The GENIUS grid portal and the robot certificates to perform phylogenetic analysis on large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

    This paper depicts the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven to be extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work strongly simplifies grid access for occasional users and represents a valuable step forward in widening the community of users.

  1. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  2. Trends in life science grid: from computing grid to knowledge grid.

    PubMed

    Konagaya, Akihiko

    2006-12-18

    Grid computing has great potential to become a standard cyberinfrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation that shares tacit knowledge within a community. Extending the concept of grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  3. Trends in life science grid: from computing grid to knowledge grid

    PubMed Central

    Konagaya, Akihiko

    2006-01-01

    Background Grid computing has great potential to become a standard cyberinfrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation that shares tacit knowledge within a community. Conclusion Extending the concept of grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community. PMID:17254294

  4. Networks of channels for self-healing composite materials

    NASA Astrophysics Data System (ADS)

    Bejan, A.; Lorente, S.; Wang, K.-M.

    2006-08-01

    This is a fundamental study of how to vascularize a self-healing composite material so that healing fluid reaches all the crack sites that may occur randomly through the material. The network of channels is built into the material and is filled with pressurized healing fluid. When a crack forms, the pressure drops at the crack site and fluid flows from the network into the crack. The objective is to discover the network configuration that is capable of delivering fluid to all the cracks the fastest. The crack site dimension and the total volume of the channels are fixed. It is argued that the network must be configured as a grid and not as a tree. Two classes of grids are considered and optimized: (i) grids with one channel diameter and regular polygonal loops (square, triangle, hexagon) and (ii) grids with two channel sizes. The best architecture of type (i) is the grid with triangular loops. The best architecture of type (ii) has a particular (optimal) ratio of diameters that departs from 1 as the crack length scale becomes smaller than the global scale of the vascularized structure from which the crack draws its healing fluid. The optimization of the ratio of channel diameters cuts in half the time of fluid delivery to the crack.

  5. Balancing Power Absorption and Structural Loading for a Novel Fixed-Bottom Wave Energy Converter with Nonideal Power Take-Off in Regular Waves: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D

    In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of the pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.

  6. A far-field non-reflecting boundary condition for two-dimensional wake flows

    NASA Technical Reports Server (NTRS)

    Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli

    1995-01-01

    Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far field solution. The boundary improves convergence to steady state in single-grid temporal integration schemes using both regular-time-stepping and local-time-stepping. The far-field boundary may be near the trailing edge of the body which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition the solution produced is smoother in the far-field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.

  7. High-resolution CSR GRACE RL05 mascons

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2016-10-01

    The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
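
    The mascon estimation described above is an instance of Tikhonov regularization; a generic sketch of the corresponding normal equations, in notation that is ours rather than CSR's, is

        \hat m = \arg\min_m\ \| W^{1/2}(d - G m) \|^2 + \alpha^2 \| L\, m \|^2
               = \bigl( G^{\mathsf T} W G + \alpha^2 L^{\mathsf T} L \bigr)^{-1} G^{\mathsf T} W d,

    where d are the GRACE observations, G maps mascon values to the data, W is the data weight matrix, and the operator L with weight alpha stands in for the time-variable, geodesic-grid constraint matrix that the CSR solutions derive from GRACE information alone.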

  8. geoknife: Reproducible web-processing of large gridded datasets

    USGS Publications Warehouse

    Read, Jordan S.; Walker, Jordan I.; Appling, Alison P.; Blodgett, David L.; Read, Emily K.; Winslow, Luke A.

    2016-01-01

    Geoprocessing of large gridded data according to overlap with irregular landscape features is common to many large-scale ecological analyses. The geoknife R package was created to facilitate reproducible analyses of gridded datasets found on the U.S. Geological Survey Geo Data Portal web application or elsewhere, using a web-enabled workflow that eliminates the need to download and store large datasets that are reliably hosted on the Internet. The package provides access to several data subset and summarization algorithms that are available on remote web processing servers. Outputs from geoknife include spatial and temporal data subsets, spatially-averaged time series values filtered by user-specified areas of interest, and categorical coverage fractions for various land-use types.

  9. Uncertainty in gridded CO2 emissions estimates

    DOE PAGES

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...

    2016-05-19

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
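
    Purely as an illustration of how per-cell components like those named above might be combined, the sketch below adds assumed independent relative uncertainties in quadrature. The numbers and the independence assumption are ours, not the paper's.

        import math

        components = {            # assumed relative (fractional) uncertainties
            "national_total": 0.05,
            "point_source_magnitude": 0.40,
            "point_source_location": 0.30,
            "proxy_distribution": 1.40,
        }

        # Root-sum-of-squares combination, valid only if components are independent.
        total = math.sqrt(sum(u * u for u in components.values()))
        print(f"combined relative uncertainty: +/-{100 * total:.0f}%")  # ~ +/-150%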

  10. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, huge IT requirements are imminent due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters including very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands for this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for, e.g., the life-science and health-care sectors as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  11. Efficient radiative transfer methods for continuum and line transfer in large three-dimensional models

    NASA Astrophysics Data System (ADS)

    Juvela, Mika J.

    The relationship between physical conditions of an interstellar cloud and the observed radiation is defined by the radiative transfer problem. Radiative transfer calculations are needed if, e.g., one wants to disentangle abundance variations from excitation effects or wants to model variations of dust properties inside an interstellar cloud. New observational facilities (e.g., ALMA and Herschel) will bring improved accuracy both in terms of intensity and spatial resolution. This will enable detailed studies of the densest sub-structures of interstellar clouds and star forming regions. Such observations must be interpreted with accurate radiative transfer methods and realistic source models. In many cases this will mean modelling in three dimensions. High optical depths and the observed wide range of linear scales are, however, challenging for radiative transfer modelling. A large range of linear scales can be accessed only with hierarchical models. Figure 1 shows an example of the use of a hierarchical grid for radiative transfer calculations when the original model cloud (L = 10 pc, ⟨n⟩ = 500 cm-3) was based on an MHD simulation carried out on a regular grid (Juvela & Padoan, 2005). For computed line intensities an accuracy of 10% was still reached when the number of individual cells (and the run time) was reduced by a factor of ten. This illustrates how, as long as the cloud is not extremely optically thick, most of the emission comes from a small sub-volume. It is also worth noting that while errors are ~10% for any given point, they are much smaller when compared with intensity variations. In particular, calculations on the hierarchical grid recovered the spatial power spectrum of line emission with very good accuracy. Monte Carlo codes are used widely in both continuum and line transfer calculations. Like any lambda iteration schemes, these suffer from slow convergence when models are optically thick. In line transfer, Accelerated Monte Carlo methods (AMC) present a partial solution to this problem (Juvela & Padoan, 2000; Hogerheijde & van der Tak, 2000). AMC methods can be used similarly in continuum calculations to speed up the computation of dust temperatures (Juvela, 2005). The sampling problems associated with high optical depths can be solved with weighted sampling, and the handling of models with τV ~ 1000 is perfectly feasible. Transiently heated small dust grains pose another problem because the calculation of their temperature distribution is very time consuming. However, a 3D model will contain thousands of cells at very similar conditions. If dust temperature distributions are calculated only once for such a set, an approximate solution can be found in a much shorter time (Juvela & Padoan, 2003; see Figure 2a). MHD simulations with Adaptive Mesh Refinement (AMR) techniques present an exciting development for the modelling of interstellar clouds. Cloud models consist of a hierarchy of grids with different grid steps, and the ratio between the cloud size and the smallest resolution elements can be 10^6 or even larger. We are currently working on radiative transfer codes (line and continuum) that could be used efficiently on such grids (see Figure 2b). The radiative transfer problem can be solved relatively independently on each of the sub-grids. This means that the use of convergence acceleration methods can be limited to those sub-grids where they are needed and, on the other hand, parallelization of the code is straightforward.

  12. New planetary boundary layer parametrization in ECHAM5-HAM: Dynamical refinement of the vertical resolution

    NASA Astrophysics Data System (ADS)

    Siegenthaler-Le Drian, C.; Spichtinger, P.; Lohmann, U.

    2010-09-01

    Marine stratocumulus-capped boundary layers exhibit a strong net cooling impact on the Earth-atmosphere system. Moreover, they are highly persistent over subtropical oceans. Therefore climate models need to represent them well in order to make reliable projections of future climate. One of the reasons for the absence of stratocumuli in the general circulation model ECHAM5-HAM (Roeckner et al., 2003; Stier et al., 2005) is the limited vertical resolution. In the current model version, no vertical sub-grid scale variability of clouds is taken into account, so that clouds occupy the full vertical layer. Around the inversion at the top of the planetary boundary layer (PBL), conserved variables often have a steep gradient, which in a GCM may produce large discretization errors (Bretherton and Park, 2009). This inversion has a large diurnal cycle and varies with location around the globe, which is difficult to represent in a classical, coarse Eulerian approach. Furthermore, Lenderink and Holtslag (2000) and Lock (2001) showed that an inconsistent numerical representation between the entrainment parametrization and the other schemes, particularly the vertical advection, can lead to the occurrence of 'numerical entrainment'. The problem can be resolved by introducing a dynamical inversion as introduced by Grenier and Bretherton (2001) and Lock (2001). As these features can be seen in our version of ECHAM5-HAM, our implementation aims to reduce the numerical entrainment and to better represent stratocumuli in ECHAM5-HAM. To better resolve stratocumulus clouds, their inversion, and the interaction between turbulent diffusion and vertical advection, the vertical grid is dynamically refined. The new grid is based on the reconstruction of the profiles of variables experiencing a sharp gradient (temperature, mixing ratio), applying the method presented in Grenier and Bretherton (2001). In typical stratocumulus regions, an additional grid level is thus associated with the PBL top. In case a cloud can be formed, a new level is associated with the lifting condensation level as well. The regular grid plus the two additional levels define the new dynamical grid, which varies geographically and temporally. The physical processes are computed on this new dynamical grid. Consequently, the sharp gradients and the interaction between the different processes can be better resolved. Some results of this new parametrization will be presented. In a single-column model set-up, the reconstruction method accurately finds the inversion at the PBL top for the EPIC stratocumulus case. On a global scale, successful reconstructions, which are restricted to typical stratocumulus regions, occur with high frequency. The impact of the new dynamical grid on clouds and the radiation balance will be presented in the talk. References: [Bretherton and Park, 2009] Bretherton, C. S. and Park, S. (2009). A new moist turbulence parametrization in the community atmosphere model. J. Climate, 22:3422-3448. [Grenier and Bretherton, 2001] Grenier, H. and Bretherton, C. S. (2001). A moist parametrization for large-scale models and its application to subtropical cloud-topped marine boundary layers. Mon. Wea. Rev., 129:357-377. [Lenderink and Holtslag, 2000] Lenderink, G. and Holtslag, A. M. (2000). Evaluation of the kinetic energy approach for modeling turbulent fluxes in stratocumulus. Mon. Wea. Rev., 128:244-258. [Lock, 2001] Lock, A. P. (2001). The numerical representation of entrainment in parametrizations of boundary layer turbulent mixing. Mon. Wea. Rev., 129:1148-1163. [Roeckner et al., 2003] Roeckner, E., Bäuml, G., Bonaventura, L. et al. (2003). The atmospheric general circulation model ECHAM5, part I: Model description. Technical Report 349, Max-Planck-Institute for Meteorology, Hamburg, Germany. [Stier et al., 2005] Stier, P., Feichter, J., Kinne, S. et al. (2005). The aerosol-climate model ECHAM5-HAM. Atmos. Chem. Phys., 5:1125-1156.

  13. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
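
    A minimal sketch of the Data Cube operator the benchmark builds on computes all 2^d group-by "views" of a set of d-tuples. The toy tuples and the SUM aggregate below are our assumptions; the ADC's parametric tuple generator is not reproduced here.

        from itertools import combinations

        ATTRS = ("a", "b", "c")                       # d = 3 -> 2**3 = 8 views
        rows = [(1, 2, 9), (1, 3, 9), (2, 2, 7), (1, 2, 7)]
        measure = [10, 20, 30, 40]

        for r in range(len(ATTRS) + 1):
            for view in combinations(range(len(ATTRS)), r):
                agg = {}
                for row, m in zip(rows, measure):
                    key = tuple(row[i] for i in view)
                    agg[key] = agg.get(key, 0) + m    # SUM aggregate per group
                names = [ATTRS[i] for i in view] or ["(grand total)"]
                print(names, agg)

    The data intensity is visible in miniature here: the number of views, and hence the volume of intermediate data, doubles with every added tuple attribute.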

  14. Far infrared through millimeter backshort-under-grid arrays

    NASA Astrophysics Data System (ADS)

    Allen, Christine A.; Abrahams, John; Benford, Dominic J.; Chervenak, James A.; Chuss, David T.; Staguhn, Johannes G.; Miller, Timothy M.; Moseley, S. Harvey; Wollack, Edward J.

    2006-06-01

    We are developing a large-format, versatile, bolometer array for a wide range of infrared through millimeter astronomical applications. The array design consists of three key components - superconducting transition edge sensor bolometer arrays, quarter-wave reflective backshort grids, and Superconducting Quantum Interference Device (SQUID) multiplexer readouts. The detector array is a filled, square grid of bolometers with superconducting sensors. The backshort arrays are fabricated separately and are positioned in the etch cavities behind the detector grid. The grids have unique three-dimensional interlocking features micromachined into the walls for positioning and mechanical stability. The ultimate goal of the program is to produce large-format arrays with background-limited sensitivity, suitable for a wide range of wavelengths and applications. Large-format (kilopixel) arrays will be directly indium bump bonded to a SQUID multiplexer circuit. We have produced and tested 8×8 arrays of 1 mm detectors to demonstrate proof of concept. 8×16 arrays of 2 mm detectors are being produced for a new Goddard Space Flight Center instrument. We have also produced models of a kilopixel detector grid and dummy multiplexer chip for bump bonding development. We present detector design overview, several unique fabrication highlights, and assembly technologies.

  15. Explosive force of primacord grid forms large sheet metal parts

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Primacord which is woven through fish netting in a grid pattern is used for explosive forming of large sheet metal parts. The explosive force generated by the primacord detonation is uniformly distributed over the entire surface of the sheet metal workpiece.

  16. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models related to weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  17. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
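
    Sparse-grid scenario generation of the kind studied above is typically built on the classic Smolyak construction, which combines low-order tensor products of univariate quadrature rules; in standard notation (ours, not the paper's),

        A(q, d) = \sum_{q-d+1 \le |\mathbf{i}| \le q} (-1)^{q - |\mathbf{i}|} \binom{d-1}{q - |\mathbf{i}|} \bigl( U^{i_1} \otimes \cdots \otimes U^{i_d} \bigr), \qquad |\mathbf{i}| = i_1 + \cdots + i_d,

    where U^{i} is a univariate quadrature rule of level i. The scenarios are the surviving tensor-product nodes together with their combined (possibly negative) weights, which is why far fewer scenarios are needed than on a full tensor grid of the same level.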

  18. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  19. Sparse spikes super-resolution on thin grids II: the continuous basis pursuit

    NASA Astrophysics Data System (ADS)

    Duval, Vincent; Peyré, Gabriel

    2017-09-01

    This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP has been recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse problems regularization. One of the most well-known discretization schemes, the basis pursuit (BP, also known as …

  20. Sensor network based solar forecasting using a local vector autoregressive ridge framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Yoo, S.; Heiser, J.

    2016-04-04

    The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud-induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy, but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models on a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions, and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
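
    In the spirit of the framework described above, the sketch below fits a ridge-regularized vector autoregression over a small sensor network and issues a one-step forecast. The synthetic data, lag count, and penalty value are our assumptions, not the paper's configuration.

        import numpy as np

        rng = np.random.default_rng(0)
        T, S, LAG, ALPHA = 500, 5, 3, 10.0   # samples, sensors, lags, ridge weight

        X = rng.standard_normal((T, S)).cumsum(axis=0)   # toy irradiance series

        # Build the lagged design matrix: row t holds all sensors at t-1 .. t-LAG.
        rows, targets = [], []
        for t in range(LAG, T):
            rows.append(np.concatenate([X[t - k] for k in range(1, LAG + 1)]))
            targets.append(X[t])
        A = np.asarray(rows)       # shape (T-LAG, S*LAG)
        Y = np.asarray(targets)    # shape (T-LAG, S)

        # Closed-form ridge solution: B = (A^T A + alpha I)^(-1) A^T Y
        B = np.linalg.solve(A.T @ A + ALPHA * np.eye(A.shape[1]), A.T @ Y)

        latest = np.concatenate([X[T - k] for k in range(1, LAG + 1)])
        print("1-step forecast per sensor:", latest @ B)

    The "local" aspect of the framework would amount to refitting B on a short sliding window of recent data so the coefficients track changing cloud conditions; the ridge penalty is what keeps that small-sample refit from overfitting.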

  1. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach, where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Besides, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability, and potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
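
    The Occam variant of regularized Gauss-Newton mentioned above is commonly written as the following iteration; this is a schematic following the general Occam literature, not MARE3DEM's documentation, and the notation is ours:

        m_{k+1} = \bigl[ \mu\, R^{\mathsf T} R + (W J_k)^{\mathsf T} W J_k \bigr]^{-1} (W J_k)^{\mathsf T} W \hat d_k,
        \qquad \hat d_k = d - F(m_k) + J_k m_k,

    where J_k is the sensitivity matrix at the current model m_k, W the data-weighting matrix, R the roughness operator, and the Lagrange multiplier mu is swept at each iteration to select the smoothest model that meets the target misfit.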

  2. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
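
    The two-stage scheme the abstract describes, a cross approximation followed by SVD recompression, can be sketched compactly. The full-pivoting variant of ACA below (practical ACA pivots on partial information to avoid forming the whole residual), the test kernel, and the tolerances are our assumptions.

        import numpy as np

        def aca(M, tol=1e-8, max_rank=50):
            """Return U (m x r), V (r x n) with M approximately U @ V."""
            R = M.astype(float).copy()
            U_cols, V_rows = [], []
            for _ in range(max_rank):
                i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
                pivot = R[i, j]
                if abs(pivot) < tol:
                    break
                u = R[:, j] / pivot          # cross built from one row and column
                v = R[i, :].copy()
                R -= np.outer(u, v)          # peel off the rank-one update
                U_cols.append(u)
                V_rows.append(v)
            return np.array(U_cols).T, np.array(V_rows)

        def recompress(U, V, tol=1e-10):
            """Truncated SVD of the product U @ V without forming it densely."""
            qu, ru = np.linalg.qr(U)
            qv, rv = np.linalg.qr(V.T)
            w, s, zt = np.linalg.svd(ru @ rv.T)
            r = int(np.sum(s > tol * s[0]))
            return qu @ w[:, :r] * s[:r], (qv @ zt.T[:, :r]).T

        x = np.linspace(0.0, 1.0, 200)
        M = 1.0 / (1.0 + (x[:, None] - x[None, :] + 2.0) ** 2)  # smooth test kernel
        U, V = aca(M)
        U2, V2 = recompress(U, V)
        print(U.shape[1], "ACA terms ->", U2.shape[1], "after recompression;",
              "max error", np.max(np.abs(M - U2 @ V2)))

    This mirrors the remedy the abstract reports: run ACA with a tighter tolerance than ultimately needed, then let the truncated SVD of the small coupling matrix discard the excess rank.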

  3. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  4. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350

  5. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
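
    The grid-scale/subgrid-scale split can be illustrated with a toy block-average decomposition; this Python/NumPy sketch is only an illustration of the idea (names and sizes are invented), not the derivation of dynamic forcing components used in the study.

        import numpy as np

        def split_scales(field, block):
            # Block-average a fine-resolution 2D field to a coarse "grid scale"
            # and return that mean plus the subgrid-scale deviation from it.
            ny, nx = field.shape
            assert ny % block == 0 and nx % block == 0
            coarse = field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
            grid_scale = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
            return grid_scale, field - grid_scale

        # e.g. 2 km analysis cells aggregated to a 200 km forcing scale (100x100 blocks)
        fine = np.random.randn(400, 400)
        mean_part, subgrid_part = split_scales(fine, 100)
        print(abs(subgrid_part.reshape(4, 100, 4, 100).mean(axis=(1, 3))).max())  # ~0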

  6. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6%, 3.1% with SRT, and 8.3%, 7.5%, 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  7. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevalent summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during KORUS-AQ 2016, including aircraft measurements. In general, the horizontal grid resolutions of Eulerian photochemical models have profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed within the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  8. Response Grids: Practical Ways to Display Large Data Sets with High Visual Impact

    ERIC Educational Resources Information Center

    Gates, Simon

    2013-01-01

    Spreadsheets are useful for large data sets but they may be too wide or too long to print as conventional tables. Response grids offer solutions to the challenges posed by any large data set. They have wide application throughout science and for every subject and context where visual data displays are designed, within education and elsewhere.…

  9. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  10. Grid Computing in K-12 Schools. Soapbox Digest. Volume 3, Number 2, Fall 2004

    ERIC Educational Resources Information Center

    AEL, 2004

    2004-01-01

    Grid computing allows large groups of computers (either in a lab, or remote and connected only by the Internet) to extend extra processing power to each individual computer to work on components of a complex request. Grid middleware, recognizing priorities set by systems administrators, allows the grid to identify and use this power without…

  11. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will be more decentralized, with, for instance, wind parks connected to the grid. Furthermore, future large-scale production units are expected to be installed near coastal regions. This creates potential grid issues, such as large amounts of power to be transmitted from west to east and grid stability concerns. High temperature superconductors (HTS) can help solve these grid problems. The advantages of integrating HTS components at Extra High Voltage (EHV) and High Voltage (HV) levels are numerous: more power with lower losses and fewer emissions, intrinsic fault current limiting capability, better control of power flow, reduced footprint, etc. Today's main obstacle is the relatively high price of HTS. Nevertheless, as the price goes down, initial market penetration for several HTS components is expected by the year 2015 (e.g., cables, fault current limiters). In this paper we present a design of an intrinsically compensated EHV HTS cable for future grid integration. The parameters of such a cable, providing optimal power transmission in the future network, are discussed.

  12. Decentral Smart Grid Control

    NASA Astrophysics Data System (ADS)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest to collect consumer demand data, centrally evaluate them given current supply and send price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals.
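
    As a toy illustration of the control idea, not the paper's oscillator-network model, the following sketch couples a single swing equation to a demand that tracks the time-averaged local frequency deviation; all parameter values are arbitrary assumptions.

        import numpy as np

        def simulate(gamma=0.5, t_avg=1.0, dt=0.01, steps=20000):
            M, D = 1.0, 0.2            # inertia and damping (arbitrary units)
            omega, demand = 0.0, 1.0   # frequency deviation and consumer demand
            window = []
            for k in range(steps):
                supply = 1.0 + 0.1 * np.sin(0.5 * k * dt)   # fluctuating generation
                # swing equation: M * domega/dt = supply - demand - D * omega
                omega += dt * (supply - demand - D * omega) / M
                window = (window + [omega])[-int(t_avg / dt):]
                # decentral control: excess supply raises the frequency, which
                # lowers the implied price, which raises demand (and vice versa)
                demand = 1.0 + gamma * np.mean(window)
            return omega

        print(f"frequency deviation with decentral control: {simulate():+.4f}")

    The averaging window here plays the role of the measurement interval in the abstract: the control responds to a frequency averaged over t_avg rather than to instantaneous fluctuations.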

  13. A high throughput geocomputing system for remote sensing quantitative retrieval and a case study

    NASA Astrophysics Data System (ADS)

    Xue, Yong; Chen, Ziqiang; Xu, Hui; Ai, Jianwen; Jiang, Shuzheng; Li, Yingjie; Wang, Ying; Guang, Jie; Mei, Linlu; Jiao, Xijuan; He, Xingwei; Hou, Tingting

    2011-12-01

    The quality and accuracy of remote sensing instruments have improved significantly; however, rapid processing of large-scale remote sensing data has become the bottleneck for remote sensing quantitative retrieval applications. Remote sensing quantitative retrieval is a data-intensive computational application and is one of the research issues of high-throughput computation. The remote sensing quantitative retrieval Grid workflow is a high-level core component of the remote sensing Grid, used to support the modeling, reconstruction and implementation of large-scale complex applications of remote sensing science. In this paper, we study a middleware component of the remote sensing Grid: the dynamic Grid workflow, based on the remote sensing quantitative retrieval application on a Grid platform. We designed a novel architecture for the remote sensing Grid workflow. According to this architecture, we constructed the Remote Sensing Information Service Grid Node (RSSN) with Condor. We developed graphical user interface (GUI) tools to compose remote sensing processing Grid workflows, taking aerosol optical depth (AOD) retrieval as an example. The case study showed that significant improvement in system performance could be achieved with this implementation. The results also give a perspective on the potential of applying Grid workflow practices to remote sensing quantitative retrieval problems using commodity-class PCs.

  14. Grid cells form a global representation of connected environments.

    PubMed

    Carpenter, Francis; Manson, Daniel; Jeffery, Kate; Burgess, Neil; Barry, Caswell

    2015-05-04

    The firing patterns of grid cells in medial entorhinal cortex (mEC) and associated brain areas form triangular arrays that tessellate the environment [1, 2] and maintain constant spatial offsets to each other between environments [3, 4]. These cells are thought to provide an efficient metric for navigation in large-scale space [5-8]. However, an accurate and universal metric requires grid cell firing patterns to uniformly cover the space to be navigated, in contrast to recent demonstrations that environmental features such as boundaries can distort [9-11] and fragment [12] grid patterns. To establish whether grid firing is determined by local environmental cues, or provides a coherent global representation, we recorded mEC grid cells in rats foraging in an environment containing two perceptually identical compartments connected via a corridor. During initial exposures to the multicompartment environment, grid firing patterns were dominated by local environmental cues, replicating between the two compartments. However, with prolonged experience, grid cell firing patterns formed a single, continuous representation that spanned both compartments. Thus, we provide the first evidence that in a complex environment, grid cell firing can form the coherent global pattern necessary for them to act as a metric capable of supporting large-scale spatial navigation. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Grid Cells Form a Global Representation of Connected Environments

    PubMed Central

    Carpenter, Francis; Manson, Daniel; Jeffery, Kate; Burgess, Neil; Barry, Caswell

    2015-01-01

    The firing patterns of grid cells in medial entorhinal cortex (mEC) and associated brain areas form triangular arrays that tessellate the environment [1, 2] and maintain constant spatial offsets to each other between environments [3, 4]. These cells are thought to provide an efficient metric for navigation in large-scale space [5–8]. However, an accurate and universal metric requires grid cell firing patterns to uniformly cover the space to be navigated, in contrast to recent demonstrations that environmental features such as boundaries can distort [9–11] and fragment [12] grid patterns. To establish whether grid firing is determined by local environmental cues, or provides a coherent global representation, we recorded mEC grid cells in rats foraging in an environment containing two perceptually identical compartments connected via a corridor. During initial exposures to the multicompartment environment, grid firing patterns were dominated by local environmental cues, replicating between the two compartments. However, with prolonged experience, grid cell firing patterns formed a single, continuous representation that spanned both compartments. Thus, we provide the first evidence that in a complex environment, grid cell firing can form the coherent global pattern necessary for them to act as a metric capable of supporting large-scale spatial navigation. PMID:25913404

  16. Large area projection liquid-crystal video display system with inherent grid pattern optically removed

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1992-01-01

    A relatively small and low-cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.
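
    A digital analogue of this optical spatial filter can be sketched with an FFT: a periodic line pattern concentrates its energy at harmonics of the grid frequency, so masking those spectral rows and columns removes the lines. Everything below is an illustrative assumption (function name, notch width, and the requirement that the image size be a multiple of the grid period), not the patented optical design.

        import numpy as np

        def remove_grid_lines(img, period, notch=1):
            # Zero the spectral rows/columns at harmonics of the grid
            # frequency, leaving DC (overall brightness) untouched.
            F = np.fft.fft2(img)
            for axis, n in enumerate(img.shape):
                f0 = n // period                      # fundamental frequency index
                for h in range(1, period // 2 + 1):   # harmonics up to Nyquist
                    for k in ((h * f0) % n, (-h * f0) % n):
                        sl = [slice(None), slice(None)]
                        sl[axis] = slice(max(k - notch, 0), k + notch + 1)
                        F[tuple(sl)] = 0.0
            return np.fft.ifft2(F).real

        img = np.random.rand(128, 128)     # scene with grid lines every 8 pixels
        img[::8, :] += 2.0
        img[:, ::8] += 2.0
        clean = remove_grid_lines(img, period=8)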

  17. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; then the 2D non-equispaced fast Fourier transform (NFFT) is introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data reveal the effectiveness of the proposed approach in noise attenuation for non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.

  18. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient is travel time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to account for differentiating local and regional structures. Whenever possible, the travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of a Python module, pySeismicFMM, is presented: a simple and very efficient tool for calculating travel time from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
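
    The abstract does not show pySeismicFMM's interface, so as a stand-in this sketch uses the independent scikit-fmm package, which implements the same Fast Marching idea on a regular Cartesian grid; the grid size, spacing, and velocity model are invented for the example.

        import numpy as np
        import skfmm   # pip install scikit-fmm

        nz, nx, dx = 101, 201, 0.1                   # regular grid, 0.1 km spacing
        z = np.arange(nz) * dx
        velocity = 3.0 + 0.5 * z[:, None] * np.ones((1, nx))   # v(z) in km/s
        phi = np.ones((nz, nx))
        phi[0, 100] = -1.0                           # sign change marks the source cell
        t = skfmm.travel_time(phi, velocity, dx=dx)  # travel time (s) to every node
        print(t[50, 150])                            # time at one buried receiver

    As noted above, a single marching pass from one source yields times at all grid nodes, hence all receivers, which is where the method beats per-receiver ray tracing.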

  19. How to deal with petabytes of data: the LHC Grid project

    NASA Astrophysics Data System (ADS)

    Britton, D.; Lloyd, S. L.

    2014-06-01

    We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the ‘Big Data’ problem and how the Grid has been a forerunner of today's cloud computing.

  20. ION ACCELERATION SYSTEM

    DOEpatents

    Luce, J.S.; Martin, J.A.

    1960-02-23

    Well focused, intense ion beams are obtained by providing a multi-apertured source grid in front of an ion source chamber and an accelerating multi-apertured grid closely spaced from and in alignment with the source grid. The longest dimensions of the elongated apertures in the grids are normal to the direction of the magnetic field used with the device. Large ion currents may be withdrawn from the source, since they do not pass through any small focal region between the grids.

  1. A Framework for Testing Automated Detection, Diagnosis, and Remediation Systems on the Smart Grid

    NASA Technical Reports Server (NTRS)

    Lau, Shing-hon

    2011-01-01

    America's electrical grid is currently undergoing a multi-billion dollar modernization effort aimed at producing a highly reliable critical national infrastructure for power - a Smart Grid. While the goals for the Smart Grid include upgrades to accommodate large quantities of clean, but transient, renewable energy and upgrades to provide customers with real-time pricing information, perhaps the most important objective is to create an electrical grid with greatly increased robustness.

  2. Geometry modeling and multi-block grid generation for turbomachinery configurations

    NASA Technical Reports Server (NTRS)

    Shih, Ming H.; Soni, Bharat K.

    1992-01-01

    An interactive 3D grid generation code, Turbomachinery Interactive Grid genERation (TIGER), was developed for general turbomachinery configurations. TIGER features the automatic generation of multi-block structured grids around multiple blade rows for either internal, external, or internal-external turbomachinery flow fields. Utilization of Bézier curves achieves a smooth grid and better orthogonality. TIGER generates the algebraic grid automatically, based on geometric information provided by its built-in pseudo-AI algorithm. However, due to the large variation of turbomachinery configurations, this initial grid may not always be as good as desired. TIGER therefore provides graphical user interaction during the process, which allows the user to design, modify, and manipulate the grid, including the capability of elliptic surface grid generation.
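
    The smoothing role of the Bézier curves mentioned above comes down to repeated linear interpolation of control points (de Casteljau's algorithm); this minimal sketch is generic and unrelated to TIGER's actual implementation.

        def bezier_point(control, t):
            # de Casteljau: repeatedly interpolate neighbouring control points
            # until one point remains; varying t traces a smooth curve.
            pts = list(control)
            while len(pts) > 1:
                pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                       for p, q in zip(pts[:-1], pts[1:])]
            return pts[0]

        # a smooth grid edge from (0,0) to (1,1), pulled by one interior point
        edge = [bezier_point([(0.0, 0.0), (0.2, 0.9), (1.0, 1.0)], i / 10)
                for i in range(11)]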

  3. PDF added value of a high resolution climate simulation for precipitation

    NASA Astrophysics Data System (ADS)

    Soares, Pedro M. M.; Cardoso, Rita M.

    2015-04-01

    General Circulation Models (GCMs) are suitable for studying the global atmospheric system, its evolution, and its response to changes in external forcing, namely increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer-scale features of the atmospheric flow related to complex topography, coastal processes and boundary-layer processes, and higher-resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs); they are widely used to downscale GCM results for many regions of the globe and are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on the comparison of their results with observations, either from weather-station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties, and assuming most of the time their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs when compared with the results of the driving models is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies for different applications and processes to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, highlighting that RCMs may add value to GCM results, but improvements depend basically on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations. Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible manner of weighting the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high-resolution simulation with the WRF model. Results are from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km. This simulation is forced by ERA-Interim. The observational data used cover rain-gauge precipitation records and observational regular grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese gridded precipitation dataset developed at 0.2° × 0.2° from observed rain-gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher-resolution simulation, regarding the full PDF and the extremes. This method shows high potential to be applied to other simulation exercises and to evaluate other variables.
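
    The study's exact skill score is not reproduced in the abstract; one common building block is a histogram-overlap (Perkins-type) score, sketched here with a simple linear tail weighting as one possible way to weight the PDF tails differently. Both the weighting and the names are assumptions for illustration.

        import numpy as np

        def pdf_skill(model, obs, bins=50, tail_weight=1.0):
            # Weighted overlap of two normalised histograms: 1 for identical
            # PDFs, 0 for disjoint ones; tail_weight > 1 emphasises high bins.
            edges = np.histogram_bin_edges(np.concatenate([model, obs]), bins=bins)
            pm = np.histogram(model, bins=edges)[0].astype(float)
            po = np.histogram(obs, bins=edges)[0].astype(float)
            pm /= pm.sum()
            po /= po.sum()
            w = np.linspace(1.0, tail_weight, bins)
            return (w * np.minimum(pm, po)).sum() / (w * np.maximum(pm, po)).sum()

        rain_obs = np.random.gamma(0.5, 8.0, 10000)   # stand-ins for daily series
        rain_rcm = np.random.gamma(0.5, 7.5, 10000)
        print(pdf_skill(rain_rcm, rain_obs, tail_weight=3.0))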

  4. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion, to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect the reliability and economics of electric grid operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate methods of expanding transmission line capacity that ensure optimal electric grid operation. The expansion of transmission line capacity must therefore enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as building new transmission lines is now open to investors whose main interest is to generate revenue. Adding new transmission capacity helps relieve transmission system congestion, creates profit for investors renting out their transmission capacity, and yields cheaper electricity for end users. We propose a hybrid method, based on a heuristic and a deterministic method, to determine new transmission line additions and increase transmission capacity. Renewable energy resources (RES) have zero operating cost, which makes them very attractive for generation companies and market participants. In addition, RES have zero carbon emissions, which helps relieve concerns about the environmental impact of electric generation resources' carbon emissions. RES include wind, solar, hydro, biomass, and geothermal. By 2030, the expectation is that more than 30% of electricity in the U.S. will come from RES. One major contributor to RES generation will be wind energy resources (WES). Furthermore, WES will be an important component of the future generation portfolio. However, the nature of WES is that they experience high intermittency and volatility. Because of the great expectation of high WES penetration and the nature of such resources, researchers are focusing on the effects of such resources on electric grid operation and adequacy from different aspects. Additionally, current market operations of electric grids add another complication to consider while integrating RES (specifically WES). Mandates by market rules and long-term analysis of renewable penetration in large-scale electric grids have also been the focus of researchers in recent years.
We advocate a method for studying the penetration of high wind resources in large-scale electric grid operations. A PMU is a global positioning system (GPS)-based device that provides immediate and precise measurements of the voltage angle in a high-voltage transmission system. PMUs can update the status of a transmission line and related measurements (e.g., voltage magnitude and voltage phase angle) more frequently: every second, a PMU can provide 30 samples of measurements, compared to traditional systems (e.g., the supervisory control and data acquisition [SCADA] system), which provide one sample every 2 to 5 seconds. Because PMUs provide more measurement samples, they can improve electric grid reliability and observability. (Abstract shortened by UMI.)

  5. Towards data warehousing and mining of protein unfolding simulation data.

    PubMed

    Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner

    2005-10-01

    The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

  6. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP [Investigating the scale dependence of SCM simulated precipitation and cloud by using gridded forcing data at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  7. Entropy-conservative spatial discretization of the multidimensional quasi-gasdynamic system of equations

    NASA Astrophysics Data System (ADS)

    Zlotnik, A. A.

    2017-04-01

    The multidimensional quasi-gasdynamic system written in the form of mass, momentum, and total energy balance equations for a perfect polytropic gas with allowance for a body force and a heat source is considered. A new conservative symmetric spatial discretization of these equations on a nonuniform rectangular grid is constructed (with the basic unknown functions—density, velocity, and temperature—defined on a common grid and with fluxes and viscous stresses defined on staggered grids). Primary attention is given to the analysis of entropy behavior: the discretization is specially constructed so that the total entropy does not decrease. This is achieved via a substantial revision of the standard discretization and applying numerous original features. A simplification of the constructed discretization serves as a conservative discretization with nondecreasing total entropy for the simpler quasi-hydrodynamic system of equations. In the absence of regularizing terms, the results also hold for the Navier-Stokes equations of a viscous compressible heat-conducting gas.

  8. Ion accelerator systems for high power 30 cm thruster operation

    NASA Technical Reports Server (NTRS)

    Aston, G.

    1982-01-01

    Two- and three-grid accelerator systems for high-power ion thruster operation were investigated. Two-grid translation tests show that the 30 cm thruster SHAG grid set spacing overcompensates for the 30 cm thruster radial plasma density variation, and that, by incorporating grid compensation only sufficient to maintain grid hole axial alignment, beam current gains as large as 50% can be realized. Three-grid translation tests performed with a simulated 30 cm thruster discharge chamber show that substantial beamlet steering can be reliably effected by decelerator grid translation only, at net-to-total voltage ratios as low as 0.05.

  9. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations. Part 1; Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2009-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Results from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to the complexity of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or modifying the scheme stencil to reflect the direction of strong coupling.
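
    As a concrete instance of one ingredient compared above, an unweighted least-squares gradient reconstruction fits a linear model to neighbour differences; in this 2D sketch (names and the toy stencil are invented) the gradient of a linear field is recovered exactly on an arbitrarily irregular stencil.

        import numpy as np

        def ls_gradient(x0, u0, nbr_x, nbr_u):
            # Minimise sum_n (u_n - u_0 - g . (x_n - x_0))^2 over the gradient g.
            dX = np.asarray(nbr_x, dtype=float) - np.asarray(x0, dtype=float)
            du = np.asarray(nbr_u, dtype=float) - u0
            g, *_ = np.linalg.lstsq(dX, du, rcond=None)
            return g

        u = lambda x, y: 3.0 * x - 2.0 * y + 1.0                      # linear test field
        stencil = [(1.0, 0.1), (-0.4, 0.9), (0.2, -1.1), (0.8, 0.7)]  # irregular
        print(ls_gradient((0.0, 0.0), u(0.0, 0.0), stencil,
                          [u(*p) for p in stencil]))                  # -> [ 3. -2.]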

  10. 3D Voronoi grid dedicated software for modeling gas migration in deep layered sedimentary formations with TOUGH2-TMGAS

    NASA Astrophysics Data System (ADS)

    Bonduà, Stefano; Battistelli, Alfredo; Berry, Paolo; Bortolotti, Villiam; Consonni, Alberto; Cormio, Carlo; Geloni, Claudio; Vasini, Ester Maria

    2017-11-01

    As is known, a full three-dimensional (3D) unstructured grid permits a great degree of flexibility when performing accurate numerical reservoir simulations. However, when the Integral Finite Difference Method (IFDM) is used for spatial discretization, constraints (arising from the required orthogonality between the segment connecting the block nodes and the interface area between blocks) pose difficulties in the creation of grids with irregularly shaped blocks. The full 3D Voronoi approach guarantees respect of the IFDM constraints and allows generation of grids conforming to geological formations and structural objects, while at the same time allowing higher grid resolution in volumes of interest. In this work, we present dedicated pre- and post-processing gridding software tools for the TOUGH family of numerical reservoir simulators, developed by the Geothermal Research Group of the DICAM Department, University of Bologna. VORO2MESH is a new software coded in C++, based on the voro++ library, allowing computation of the 3D Voronoi tessellation for a given domain and the creation of a ready-to-use TOUGH2 MESH file. If a set of geological surfaces is available, the software can directly generate the set of Voronoi seed points used for tessellation. In order to reduce the number of connections and so decrease computation time, VORO2MESH can produce a mixed grid with regular blocks (orthogonal prisms) and irregular blocks (polyhedral Voronoi blocks) at the point of contact between different geological formations. In order to visualize 3D Voronoi grids together with the results of numerical simulations, the functionality of the TOUGH2Viewer post-processor has been extended. We describe an application of VORO2MESH and TOUGH2Viewer to validate the two tools. The case study deals with the simulation of the migration of gases in deep layered sedimentary formations at basin scale using TOUGH2-TMGAS. A comparison between the simulation performances of unstructured and structured grids is presented.
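
    The IFDM orthogonality constraint mentioned above holds for Voronoi grids by construction, since every face lies on the perpendicular bisector of its two seed nodes. The following 2D check uses SciPy's Voronoi wrapper purely as an independent illustration; VORO2MESH itself is C++ code built on the voro++ library.

        import numpy as np
        from scipy.spatial import Voronoi

        rng = np.random.default_rng(0)
        vor = Voronoi(rng.random((200, 2)))        # 200 random seed nodes
        for (p, q), rv in zip(vor.ridge_points, vor.ridge_vertices):
            if -1 in rv:                           # skip unbounded hull ridges
                continue
            node_segment = vor.points[q] - vor.points[p]
            face_direction = vor.vertices[rv[1]] - vor.vertices[rv[0]]
            assert abs(node_segment @ face_direction) < 1e-9   # orthogonal
        print("all interior faces are orthogonal to their node segments")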

  11. Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells

    PubMed Central

    Trimper, John B.; Trettel, Sean G.; Hwaun, Ernie; Colgin, Laura Lee

    2017-01-01

    At rest, hippocampal “place cells,” neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These “replay” events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay. PMID:28824388

  12. GridMan: A grid manipulation system

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.; Wang, Zhu

    1992-01-01

    GridMan is an interactive grid manipulation system. It operates on grids to produce new grids which conform to user demands. The input grids are not constrained to come from any particular source. They may be generated by algebraic methods, elliptic methods, hyperbolic methods, parabolic methods, or some combination of methods. The methods are included in the various available structured grid generation codes. These codes perform the basic assembly function for the various elements of the initial grid. For block structured grids, the assembly can be quite complex due to a large number of block corners, edges, and faces for which various connections and orientations must be properly identified. The grid generation codes are distinguished among themselves by their balance between interactive and automatic actions and by their modest variations in control. The basic form of GridMan provides a much more substantial level of grid control and will take its input from any of the structured grid generation codes. The communication link to the outside codes is a data file which contains the grid or section of grid.

  13. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
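
    Two of the memory-saving devices mentioned above can be shown in miniature with SciPy; the matrix here is a toy tridiagonal stand-in, not an actual EFM mass matrix. CSR keeps only nonzeros plus index arrays, and mass lumping replaces the consistent mass matrix by its row sums so that solving M a = f reduces to an elementwise division.

        import numpy as np
        import scipy.sparse as sp

        n = 100_000
        M = sp.diags([np.full(n - 1, 1.0), np.full(n, 4.0), np.full(n - 1, 1.0)],
                     offsets=[-1, 0, 1], format="csr") / 6.0   # toy consistent mass
        csr_bytes = M.data.nbytes + M.indices.nbytes + M.indptr.nbytes
        print(f"CSR stores {M.nnz} nonzeros in {csr_bytes} bytes")  # dense: 8*n*n

        m_lumped = np.asarray(M.sum(axis=1)).ravel()   # row sums -> diagonal mass
        f = np.random.rand(n)
        a = f / m_lumped          # "solving" M a = f without any factorisation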

  14. Grid-based mapping: A method for rapidly determining the spatial distributions of small features over very large areas

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas

    2017-06-01

    The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, simplicity of approach, and divisibility of tasks open up the possibility for citizen science, in which crowdsourcing large grid-based mapping areas could be applied.
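
    A minimal version of the "tick box" bookkeeping can be sketched as one boolean presence/absence raster per landform type; the cell size, extent, and sample coordinates below are invented for illustration.

        import numpy as np

        CELL = 20_000.0               # 20 km grid squares, coordinates in metres
        X0, Y0, NX, NY = 0.0, 0.0, 50, 40

        def presence_grid(occurrences):
            # Tick the square containing each mapped occurrence of one landform.
            grid = np.zeros((NY, NX), dtype=bool)
            for x, y in occurrences:
                j, i = int((x - X0) // CELL), int((y - Y0) // CELL)
                if 0 <= i < NY and 0 <= j < NX:
                    grid[i, j] = True
            return grid

        landforms = {
            "viscous_flow_features": [(35e3, 12e3), (36e3, 14e3)],
            "polygonised_ground": [(500e3, 700e3), (510e3, 705e3)],
        }
        grids = {name: presence_grid(pts) for name, pts in landforms.items()}
        print({name: int(g.sum()) for name, g in grids.items()})  # squares ticked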

  15. The design of the second German national forest inventory

    Treesearch

    Gerald Kandler

    2009-01-01

    In Germany, a sample-based national forest inventory (NFI) took place for the first time from 1986 to 1990 (in West Germany only); the second one took place from 2001 to 2002. The inventory design is based on a systematic distribution of tracts on regular grids of regionally differing width. The primary sampling unit is a quadrangular tract with sides of 150 m. The...

  16. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  17. Air-core grid for scattered x-ray rejection

    DOEpatents

    Logan, C.M.; Lane, S.M.

    1995-10-03

    The invention is directed to a grid used in x-ray imaging applications to block scattered radiation while allowing the desired imaging radiation to pass through, and to a process for making the grid. The grid is composed of glass containing lead oxide, and eliminates the spacer material used in prior known grids; it is, therefore, an air-core grid. The glass is arranged in a pattern so that a large fraction of the area is open, allowing the imaging radiation to pass through. A small pore size is used, and the grid has a thickness chosen to provide high scatter rejection. For example, the grid may be produced with a 200 µm pore size, 80% open area, and 4 mm thickness. 2 figs.

  18. PNNL Future Power Grid Initiative-developed GridOPTICS Software System (GOSS)

    ScienceCinema

    None

    2018-01-16

    The power grid is changing and evolving. One aspect of this change is the growing use of smart meters and other devices, which are producing large volumes of useful data. However, in many cases, the data can’t be translated quickly into actionable guidance to improve grid performance. There's a need for innovative tools. The GridOPTICS(TM) Software System, or GOSS, developed through PNNL's Future Power Grid Initiative, is open source and became publicly available in spring 2014. The value of this middleware is that it easily integrates grid applications with sources of data and facilitates communication between them. Such a capability provides a foundation for developing a range of applications to improve grid management.

  19. Air-core grid for scattered x-ray rejection

    DOEpatents

    Logan, Clinton M.; Lane, Stephen M.

    1995-01-01

    The invention is directed to a grid used in x-ray imaging applications to block scattered radiation while allowing the desired imaging radiation to pass through, and to a process for making the grid. The grid is composed of glass containing lead oxide, and eliminates the spacer material used in prior known grids; it is, therefore, an air-core grid. The glass is arranged in a pattern so that a large fraction of the area is open, allowing the imaging radiation to pass through. A small pore size is used, and the grid has a thickness chosen to provide high scatter rejection. For example, the grid may be produced with a 200 µm pore size, 80% open area, and 4 mm thickness.

  20. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper developed a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines a variable grid size with an adaptive time step. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  1. Voltage collapse in complex power grids

    PubMed Central

    Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco

    2016-01-01

    A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are dominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284

  2. Collective dynamics of 'small-world' networks.

    PubMed

    Watts, D J; Strogatz, S H

    1998-06-04

    Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
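
    The rewiring experiment is easy to reproduce with networkx (parameter values here are arbitrary): as the rewiring probability p grows, the average path length collapses long before the clustering coefficient does, which is the small-world regime.

        import networkx as nx

        for p in (0.0, 0.01, 0.1, 1.0):
            # ring lattice of 1000 nodes, each joined to its 10 nearest
            # neighbours, with edges rewired independently with probability p
            G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=42)
            print(f"p={p:<5} clustering={nx.average_clustering(G):.3f} "
                  f"path length={nx.average_shortest_path_length(G):.2f}")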

  3. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris, through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates the real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st-century science: a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  4. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  6. CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.

    2013-01-01

    Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop provided grids and drag adapted grids. The NSU3D results have been updated to reflect an improvement to skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag adjoint adapted grid via tetrahedral adaptation to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted side-of-body separation extent. Various modeling and discretization approaches had a strong impact on predicted side-of-body separation. This large wing root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing root juncture flows to predict experiments.

  7. The Sensitivity of Numerical Simulations of Cloud-Topped Boundary Layers to Cross-Grid Flow

    NASA Astrophysics Data System (ADS)

    Wyant, Matthew C.; Bretherton, Christopher S.; Blossey, Peter N.

    2018-02-01

    In mesoscale and global atmospheric simulations with large horizontal domains, strong horizontal flow across the grid is often unavoidable, but its effects on cloud-topped boundary layers have received comparatively little study. Here the effects of cross-grid flow on large-eddy simulations of stratocumulus and trade-cumulus marine boundary layers are studied across a range of grid resolutions (horizontal × vertical) between 500 m × 20 m and 35 m × 5 m. Three cases are simulated: DYCOMS nocturnal stratocumulus, BOMEX trade cumulus, and a GCSS stratocumulus-to-trade cumulus case. Simulations are performed with a stationary grid (with 4-8 m s-1 horizontal winds blowing through the cyclic domain) and a moving grid (equivalent to subtracting off a fixed vertically uniform horizontal wind) approximately matching the mean boundary-layer wind speed. Cross-grid flow produces two primary effects on stratocumulus clouds: a filtering of fine-scale resolved turbulent eddies, which reduces stratocumulus cloud-top entrainment, and a vertical broadening of the stratocumulus-top inversion, which enhances cloud-top entrainment. With a coarse (20 m) vertical grid, the former effect dominates and leads to strong increases in cloud cover and LWP, especially as horizontal resolution is coarsened. With a finer (5 m) vertical grid, the latter effect is stronger and leads to small reductions in cloud cover and LWP. For the BOMEX trade cumulus case, cross-grid flow tends to produce fewer and larger clouds with higher LWP, especially for coarser vertical grid spacing. The results presented are robust to the choice of scalar advection scheme and Courant number.

  8. Research on the impacts of large-scale electric vehicles integration into power grid

    NASA Astrophysics Data System (ADS)

    Su, Chuankun; Zhang, Jian

    2018-06-01

    Because of their electric drivetrains, electric vehicles can improve the efficiency of energy utilization and reduce environmental pollution, and they are therefore receiving more and more attention. However, the charging behavior of electric vehicles is random and intermittent. If electric vehicles charge in an uncoordinated way at large scale, they place great pressure on the structure and operation of the power grid and affect its safe and economic operation. With the development of V2G technology for electric vehicles, the study of the charging and discharging characteristics of electric vehicles is of great significance for improving the safe operation of the power grid and the efficiency of energy utilization.

  9. Power grid operation risk management: V2G deployment for sustainable development

    NASA Astrophysics Data System (ADS)

    Haddadian, Ghazale J.

    The production, transmission, and delivery of cost-efficient energy to supply ever-increasing peak loads, along with a quest for developing a low-carbon economy, require significant evolutions in power grid operations. Lower prices of vast natural gas resources in the United States, the Fukushima nuclear disaster, higher and more intense energy consumption in China and India, issues related to energy security, and recent Middle East conflicts have urged decision makers throughout the world to look into other means of generating electricity locally. As the world looks to combat climate change, a shift from carbon-based fuels to non-carbon-based fuels is inevitable. However, the variability of distributed generation assets in the electricity grid has introduced major reliability challenges for power grid operators. While spearheading sustainable and reliable power grid operations, this dissertation develops a multi-stakeholder approach to power grid operation design, aiming to address the economic, security, and environmental challenges of constrained electricity generation. It investigates the role of Electric Vehicle (EV) fleet integration, as distributed and mobile storage assets that support high penetrations of renewable energy sources, in the power grid. The vehicle-to-grid (V2G) concept is considered to demonstrate the bidirectional role of EV fleets, both as providers and consumers of energy, in securing a sustainable power grid operation. The proposed optimization modeling applies Mixed-Integer Linear Programming (MILP) to large-scale systems to solve the hourly security-constrained unit commitment (SCUC) -- an optimal scheduling concept in the economic operation of electric power systems. A Monte Carlo scenario-based approach is utilized to evaluate different scenarios concerning the uncertainties in power grid operation. Further, in order to expedite the real-time solution of the proposed approach for large-scale power systems, a two-stage model using Benders Decomposition (BD) is considered. The numerical simulations demonstrate that the utilization of smart EV fleets in power grid systems would ensure a sustainable grid operation with lower carbon footprints, smoother integration of renewable sources, higher security, and lower power grid operation costs. The results additionally illustrate the effectiveness of the proposed MILP approach and its potential as an optimization tool for sustainable operation of large-scale electric power systems.
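
    To make the commitment idea concrete, here is a deliberately tiny brute-force sketch of a single-hour unit commitment: choose which generators to switch on so that committed capacity covers demand at minimum cost, then dispatch the committed units in merit order. All unit data are invented; the dissertation's actual model is an hourly, security-constrained MILP solved with Benders decomposition, which this sketch does not attempt to reproduce.

        from itertools import product

        units = [  # (fixed cost if on, marginal cost per MWh, capacity in MW)
            (100.0, 20.0, 50.0),
            (150.0, 15.0, 80.0),
            (80.0, 25.0, 40.0),
        ]
        demand = 120.0  # MW, single hour

        best_cost, best_status = float("inf"), None
        for status in product([0, 1], repeat=len(units)):
            committed = [u for u, s in zip(units, status) if s]
            if sum(u[2] for u in committed) < demand:
                continue  # committed capacity must cover demand
            cost, remaining = sum(u[0] for u in committed), demand
            for fixed, marginal, cap in sorted(committed, key=lambda u: u[1]):
                gen = min(cap, remaining)  # cheapest energy first
                cost += marginal * gen
                remaining -= gen
            if cost < best_cost:
                best_cost, best_status = cost, status

        print(best_status, best_cost)  # e.g. (1, 1, 0) with cost 2250.0

    A real SCUC replaces this enumeration with binary MILP variables, adds network, reserve, ramping, and minimum up/down-time constraints, and spans many hours, which is what makes decomposition techniques such as BD attractive.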

  10. HIGH VOLTAGE ION SOURCE

    DOEpatents

    Luce, J.S.

    1960-04-19

    A device is described for providing a source of molecular ions having a large output current and with an accelerated energy of the order of 600 kV. Ions are produced in an ion source which is provided with a water-cooled source grid of metal to effect maximum recombination of atomic ions to molecular ions. A very high accelerating voltage is applied to withdraw and accelerate the molecular ions from the source, and means are provided for dumping the excess electrons at the lowest possible potentials. An accelerating grid is placed adjacent to the source grid and a slotted, grounded accelerating electrode is placed adjacent to the accelerating grid. A potential of about 35 kV is maintained between the source grid and accelerating grid, and a potential of about 600 kV is maintained between the accelerating grid and accelerating electrode. In order to keep at a minimum the large number of oscillating electrons which are created when such high voltages are employed in the vicinity of a strong magnetic field, a plurality of high-voltage cascaded shields are employed, with a conventional electron dumping system between each shield so as to dump the electrons at the lowest possible potential rather than at 600 kV.

  11. The neglected nonlocal effects of deforestation

    NASA Astrophysics Data System (ADS)

    Winckler, Johannes; Reick, Christian; Pongratz, Julia

    2017-04-01

    Deforestation changes surface temperature locally via biogeophysical effects by changing the water, energy and momentum balance. Adding to these locally induced changes (local effects), deforestation at a given location can cause changes in temperature elsewhere (nonlocal effects). Most previous studies have not considered local and nonlocal effects separately, but investigated the total (local plus nonlocal) effects, for which global deforestation was found to cause a global mean cooling. Recent modeling and observational studies focused on the isolated local effects: the local effects are relevant for local living conditions, and they can be obtained from in-situ and satellite observations. Observational studies suggest that the local effects of potential deforestation cause a warming when averaged globally. This contrast between local warming and total cooling indicates that the nonlocal effects of deforestation cause a cooling and thus counteract the local effects. It is still unclear how the nonlocal effects depend on the spatial scale of deforestation, and whether they still compensate the local warming in a more realistic spatial distribution of deforestation. To investigate this, we use a fully coupled climate model and separate local and nonlocal effects of deforestation in three steps: starting from a forest world, we simulate deforestation in one out of four grid boxes using a regular spatial pattern and increase the number of deforested grid boxes step-wise up to three out of four boxes in subsequent simulations. To compare these idealized spatial distributions of deforestation to a more realistic case, we separate local and nonlocal effects in a simulation where deforestation is applied in regions where it occurred historically. We find that the nonlocal effects scale nearly linearly with the number of deforested grid boxes, and the spatial distribution of the nonlocal effects is similar for the regular spatial distribution of deforestation and the more realistic pattern. Globally averaged, the deforestation-induced warming of the local effects is counteracted by the nonlocal effects, which are about three times as strong (up to 0.1 K local warming versus -0.3 K nonlocal cooling). Thus the nonlocal cooling outweighs the local warming, not only in idealized simulations of large-scale deforestation but also in a more realistic deforestation scenario. We conclude that the local effects of deforestation yield only an incomplete picture of the total climate effects of biogeophysical pathways. While the local effects capture the direct climatic response at the site of deforestation, the nonlocal effects have to be included if the biogeophysical effects of deforestation are to be considered for implementation in climate policies.

  12. Modeling Hydrodynamics on the Wave Group Scale in Topographically Complex Reef Environments

    NASA Astrophysics Data System (ADS)

    Reyns, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    The knowledge of the characteristics of waves and the associated wave-driven currents is important for sediment transport and morphodynamics, nutrient dynamics and larval dispersion within coral reef ecosystems. Reef-lined coasts differ from sandy beaches in that they have a steep offshore slope, the non-sandy bottom topography is very rough, and the distance between the point of maximum short wave dissipation and the actual coastline is usually large. At this short wave breakpoint, long waves are released, and these infragravity (IG) scale motions account for the bulk of the water level variance on the reef flat, in the lagoon and eventually, through run-up, on the sandy beaches fronting the coast. These IG-dominated water level motions are reinforced during extreme events such as cyclones or swells through larger incident-band wave heights and low frequency wave resonance on the reef. Recently, a number of hydro(-morpho)dynamic models that have the capability to model these IG waves have successfully been applied to morphologically differing reef environments. One of these models is the XBeach model, which is curvilinear in nature. This poses serious problems when trying to model an entire atoll, for example, as it is extremely difficult to build curvilinear grids that are optimal for the simulation of hydrodynamic processes while maintaining the topology of the grid. One solution to this problem of grid connectivity is the use of unstructured grids. We present an implementation of the wave action balance on the wave group scale with feedback to the flow momentum balance, which is the foundation of XBeach, within the framework of the unstructured Delft3D Flexible Mesh model. The model can be run in stationary as well as instationary mode, and it can be forced by regular waves, time series or wave spectra. We show how the code is capable of modeling the wave-generated flow at a number of topographically complex reef sites and for a number of different forcing conditions, by comparison with field data.

  13. An interpolation-free ALE scheme for unsteady inviscid flows computations with large boundary displacements over three-dimensional adaptive grids

    NASA Astrophysics Data System (ADS)

    Re, B.; Dobrzynski, C.; Guardone, A.

    2017-07-01

    A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes, and they are taken into account by adding fictitious numerical fluxes to the governing equation. This interpretation makes it possible to avoid any explicit interpolation of the solution between different grids and to compute grid velocities such that the Geometric Conservation Law is automatically fulfilled even for connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and point relocation, and it is exploited both to enhance grid quality after the boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around translating infinite- and finite-span NACA 0012 wings moving through the domain at the flight speed. The proposed adaptive scheme is also applied to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.

  14. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2010-01-01

    Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Tests from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended, either by introducing a local mapping based on a distance function commonly available in practical schemes or by modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that the accuracies of the node-centered and the best cell-centered schemes are comparable at an equivalent number of degrees of freedom.

  15. Third International Workshop on Grid Simulator Testing of Wind Turbine Drivetrains

    Science.gov Websites

    NREL Grid Modernization workshop page for the Third International Workshop on Grid Simulator Testing of Wind Turbine Drivetrains; the page provides the workshop agenda. For technical questions about the workshop, contact Vahan Gevorgian.

  16. Qualitative Life-Grids: A Proposed Method for Comparative European Educational Research

    ERIC Educational Resources Information Center

    Abbas, Andrea; Ashwin, Paul; McLean, Monica

    2013-01-01

    Drawing upon their large three-year mixed-method study comparing four English university sociology departments, the authors demonstrate the benefits to be gained from concisely recording biographical stories on life-grids. They argue that life-grids have key benefits which are important for comparative European educational research. Some of these…

  17. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  18. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, can deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve upon scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work) all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple virtual node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

  19. San Antonio, Texas, USA

    NASA Image and Video Library

    1992-07-09

    This sharp, cloud free view of San Antonio, Texas (29.5N, 98.5W) illustrates the classic pattern of western cities. The city has a late nineteenth century Anglo grid pattern overlaid onto an earlier, less regular Hispanic settlement. A well marked central business district having streets laid out north/south and east/west is surrounded by blocks of suburban homes and small businesses set between the older colonial radial transportation routes.

  20. Grid-texture mechanisms in human vision: Contrast detection of regular sparse micro-patterns requires specialist templates.

    PubMed

    Baker, Daniel H; Meese, Tim S

    2016-07-27

    Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
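
    The model components named here, squared-and-summed contrast energy with late internal noise and a MAX across density-selective templates, can be sketched compactly. The fragment below is our illustration only, not the authors' fitted model; the stimulus, template densities, and noise level are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def energy_response(image, template, noise_sd=1.0):
            # weight by the template, square, sum over area, add late noise
            return np.sum((template * image) ** 2) + rng.normal(0.0, noise_sd)

        def max_over_templates(image, templates, noise_sd=1.0):
            # MAX across noisy template mechanisms
            return max(energy_response(image, t, noise_sd) for t in templates)

        field = np.zeros((64, 64))
        field[::8, ::8] = 0.5                  # sparse grid of target elements
        templates = [np.zeros((64, 64)) for _ in range(3)]
        templates[0][::4, ::4] = 1.0           # denser grid template
        templates[1][::8, ::8] = 1.0           # template matched to the stimulus
        templates[2][:, :] = 1.0               # full-field energy integrator
        print(max_over_templates(field, templates))

    Qualitatively, because each mechanism carries its own noise draw, taking a MAX across more candidate templates steepens the simulated psychometric function, which is the behavior the MAX operation is invoked to explain.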

  2. Surface topography of the Greenland Ice Sheet from satellite radar altimetry

    NASA Technical Reports Server (NTRS)

    Bindschadler, Robert A.; Zwally, H. Jay; Major, Judith A.; Brenner, Anita C.

    1989-01-01

    Surface elevation maps of the southern half of the Greenland subcontinent are produced from radar altimeter data acquired by the Seasat satellite. A summary of the processing procedure and examples of return waveform data are given. The elevation data are used to generate a regular grid which is then computer contoured to provide an elevation contour map. Ancillary maps show the statistical quality of the elevation data and various characteristics of the surface. The elevation map is used to define ice flow directions and delineate the major drainage basins. Regular maps of the Jakobshavns Glacier drainage basin and the ice divide in the vicinity of Crete Station are presented. Altimeter derived elevations are compared with elevations measured both by satellite geoceivers and optical surveying.

  3. Self-Avoiding Walks over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n x log(n)), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
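
    For intuition, a self-avoiding walk over a small mesh can be found by brute-force backtracking on the element-adjacency graph, as in the Python sketch below. We restrict steps to neighbouring triangles, and the search is exponential in the worst case; the paper's combinatorial construction achieves O(n x log(n)) by exploiting mesh structure and refinement regularity, and is not reproduced here.

        def self_avoiding_walk(adj, start=0):
            """Backtracking search for a walk visiting every triangle exactly
            once, moving only between adjacent triangles."""
            n = len(adj)
            walk, seen = [start], {start}

            def extend():
                if len(walk) == n:
                    return True
                for nxt in adj[walk[-1]]:
                    if nxt not in seen:
                        walk.append(nxt)
                        seen.add(nxt)
                        if extend():
                            return True
                        seen.discard(walk.pop())
                return False

            return walk if extend() else None

        # four triangles in a strip: 0-1, 1-2, 2-3 share edges
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(self_avoiding_walk(adj))  # [0, 1, 2, 3]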

  4. Change Detection of Mobile LIDAR Data Using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Liu, Kun; Boehm, Jan; Alis, Christian

    2016-06-01

    Change detection has long been a challenging problem, although a lot of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend voxel grids and Apache Spark together to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which provides fault tolerance and in-memory caching. These features significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
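
    A minimal sketch of the voxel-grid idea: bucket each epoch's points into integer voxel indices and flag voxels occupied in one epoch but not the other. The per-voxel aggregation is exactly the kind of keyed reduction that maps onto Spark's reduceByKey; here it is plain Python, and the names and thresholds are our own.

        from collections import Counter

        def voxel_key(p, size):
            # map a 3-D point (x, y, z) to its integer voxel index
            return (int(p[0] // size), int(p[1] // size), int(p[2] // size))

        def voxel_changes(cloud_a, cloud_b, size=0.5, min_pts=3):
            """Voxels occupied (>= min_pts points) in one epoch only."""
            occ_a = Counter(voxel_key(p, size) for p in cloud_a)
            occ_b = Counter(voxel_key(p, size) for p in cloud_b)
            removed = {v for v, c in occ_a.items()
                       if c >= min_pts and occ_b[v] < min_pts}
            added = {v for v, c in occ_b.items()
                     if c >= min_pts and occ_a[v] < min_pts}
            return added, removed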

  5. Large Eddy Simulation in a Channel with Exit Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Cziesla, T.; Braun, H.; Biswas, G.; Mitra, N. K.

    1996-01-01

    The influence of the exit boundary conditions (vanishing first derivative of the velocity components and constant pressure) on the large eddy simulation of fully developed turbulent channel flow has been investigated for equidistant and stretched grids at the channel exit. Results show that the chosen exit boundary conditions introduce a small disturbance which is mostly damped by the grid stretching. The difference between the fully developed turbulent channel flow obtained with LES using periodic conditions at the inlet and exit and LES using fully developed flow at the inlet with the chosen exit boundary condition is less than 10% for equidistant grids and less than 5% with grid stretching. The chosen boundary condition is of interest because it may be used in complex flows with backflow at the exit.
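
    The exit condition described, a vanishing first derivative of the velocity components plus a constant pressure, amounts to a one-line update per variable on the outflow plane. A minimal finite-difference sketch, with array shapes and names assumed for illustration:

        import numpy as np

        def apply_exit_bc(u, v, w, p, p_exit=0.0):
            """Zero-gradient outflow for velocity, fixed exit pressure.
            u, v, w, p are (nx, ny) arrays; row -1 is the exit plane."""
            u[-1, :] = u[-2, :]   # du/dx = 0, first-order extrapolation
            v[-1, :] = v[-2, :]
            w[-1, :] = w[-2, :]
            p[-1, :] = p_exit     # constant pressure at the outflow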

  6. Grid-Sphere Electrodes for Contact with Ionospheric Plasma

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.; Poe, Garrett D.

    2010-01-01

    Grid-sphere electrodes have been proposed for use on the positively biased end of electrodynamic space tethers. A grid-sphere electrode is fabricated by embedding a wire mesh in a thin film from which a spherical balloon is formed. The grid-sphere electrode would be deployed from compact stowage by inflating the balloon in space. The thin-film material used to inflate the balloon is formulated to vaporize when exposed to the space environment. This would leave the bare metallic spherical grid electrode attached to the tether, which would present a small cross-sectional area (essentially, the geometric wire shadow area only) to incident neutral atoms and molecules. Most of the neutral particles, which produce dynamic drag when they impact a surface, would pass unimpeded through the open grid spaces. However, partly as a result of the buildup of a space charge inside the grid-sphere and partly as a result of magnetic-field effects, the electrode would act almost like a solid surface with respect to the flux of electrons. The net result would be that grid-sphere electrodes would introduce minimal aerodynamic drag, yet have effective electrical-contact surface areas large enough to collect the multiampere currents from the ionospheric plasma that are needed for operation of electrodynamic tethers. The vaporizable-balloon concept could also be applied to the deployment of large radio antennas in outer space.

  7. Transfer and alignment of random single-walled carbon nanotube films by contact printing.

    PubMed

    Liu, Huaping; Takagi, Daisuke; Chiashi, Shohei; Homma, Yoshikazu

    2010-02-23

    We present a simple method to transfer large-area random single-walled carbon nanotube (SWCNT) films grown on SiO(2) substrates onto another surface through a simple contact printing process. The transferred random SWCNT films can be assembled into highly ordered, dense regular arrays with high uniformity and reproducibility by sliding the growth substrate during the transfer process. The position of the transferred SWCNT film can be controlled by predefined patterns on the receiver substrates. The process is compatible with a variety of substrates, and even metal meshes for transmission electron microscopy (TEM) can be used as receiver substrates. Thus, suspended web-like SWCNT networks and aligned SWCNT arrays can be formed over the grids of TEM meshes, so that the structures of the transferred SWCNTs can be directly observed by TEM. This simple technique can be used to controllably transfer SWCNTs for property studies, for the fabrication of devices, or even as support films for TEM meshes.

  8. Addressing individual metal ion centers in supramolecules by STS

    NASA Astrophysics Data System (ADS)

    Alam, M. S.; Ako, A. M.; Ruben, M.; Thompson, L. K.; Lehn, J.-M.

    2005-03-01

    As the information in STM measurements arises from the electronic structure, separating out information on the topography is not straightforward for complex molecules. Scanning tunneling spectroscopy (STS) measurements give information about the molecular energy levels that are next to the molecule's Fermi level. Using a home-built STM working under ambient conditions, we succeeded in combining high-resolution topography mapping with simultaneous current-voltage (STS) measurements on single molecules deposited on highly oriented pyrolytic graphite surfaces. We present our recent results on grid-type molecules [Co4L4] (L=4,6-bis(2',2''-bipyridyl-6-yl)pyrimidine) and [Mn9L6] (L=2POAP-2H) as well as on ring-shaped Fe ion chains [Fe6Cl6L6] (L=1-Ecosyliminodiethanol). Small, regular molecule clusters as well as separated single molecules were observed. We found a rather large contrast at the expected location of the metal centers in our molecules, i.e. the location of the individual metal ions in their organic matrix is directly addressable by STS.

  9. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  10. Extension of a Kolmogorov Atmospheric Turbulence Model for Time-Based Simulation Implementation

    NASA Technical Reports Server (NTRS)

    McMinn, John D.

    1997-01-01

    The development of any super/hypersonic aircraft requires the interaction of a wide variety of technical disciplines to maximize vehicle performance. For flight and engine control system design and development on this class of vehicle, realistic mathematical simulation models of atmospheric turbulence, including winds and the varying thermodynamic properties of the atmosphere, are needed. A model which has been tentatively selected by a government/industry group of flight and engine/inlet controls representatives working on the High Speed Civil Transport is one based on the Kolmogorov spectrum function. This report compares the Dryden and Kolmogorov turbulence forms, and describes enhancements that add functionality to the selected Kolmogorov model. These added features are: an altitude variation of the eddy dissipation rate based on Dryden data, the mapping of the eddy dissipation rate database onto a regular latitude and longitude grid, a method to account for flight at large vehicle attitude angles, and a procedure for transitioning smoothly across turbulence segments.
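
    For reference, both spectral forms being compared are compact enough to state directly. The sketch below uses the textbook inertial-range Kolmogorov spectrum and the standard Dryden longitudinal power spectral density; the constants and argument conventions are the usual published ones, not values taken from this report.

        import numpy as np

        def kolmogorov_spectrum(k, epsilon, C=1.5):
            # inertial range: E(k) = C * eps^(2/3) * k^(-5/3)
            return C * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

        def dryden_longitudinal_psd(omega, sigma_u, L_u):
            # Phi_u(Omega) = sigma_u^2 * (2 L_u / pi) / (1 + (L_u * Omega)^2)
            return sigma_u ** 2 * (2.0 * L_u / np.pi) / (1.0 + (L_u * omega) ** 2)

    The practical difference is that the Dryden form rolls off as Omega^-2 at high frequency while Kolmogorov's inertial range decays as k^-5/3, one reason the Kolmogorov form is attractive for high-speed flight simulation.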

  11. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    PubMed Central

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy are directly related to the optimum allocation of limited resources in wireless sensor networks, such as node energy, communication bandwidth, and computing power, and quality improvement for such networks is largely determined by them. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. Firstly, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed, and the relationship between coverage and the sensor nodes' radius is deduced; the minimum number of sensor nodes needed to maintain full coverage of the network area is also calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is reduced effectively while the coverage rate of the monitored area is ensured using our coverage pattern and deterministic deployment scheme. PMID:25045747
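
    The minimum node count for full coverage follows from simple geometry once each sensing sphere is credited only with an inscribed box placed on a regular 3-D grid. Below is a sketch of the cube-based variant (inscribed cube side 2r/sqrt(3) for sensing radius r); the function name and inputs are invented for illustration.

        import math

        def min_nodes_full_coverage(x, y, z, r):
            """Spheres of radius r covering an x*y*z box when each sphere
            is credited only with its inscribed cube (side 2r/sqrt(3))."""
            side = 2.0 * r / math.sqrt(3.0)
            return (math.ceil(x / side) * math.ceil(y / side)
                    * math.ceil(z / side))

        print(min_nodes_full_coverage(100.0, 100.0, 50.0, 10.0))  # 405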

  12. Orientation domains: A mobile grid clustering algorithm with spherical corrections

    NASA Astrophysics Data System (ADS)

    Mencos, Joana; Gratacós, Oscar; Farré, Mercè; Escalante, Joan; Arbués, Pau; Muñoz, Josep Anton

    2012-12-01

    An algorithm has been designed and tested which was devised as a tool assisting the analysis of geological structures solely from orientation data. More specifically, the algorithm was intended for the analysis of geological structures that can be approached as planar and piecewise features, like many folded strata. Input orientation data are expressed as pairs of angles (azimuth and dip). The algorithm starts by considering the data in Cartesian coordinates. This is followed by a search for an initial clustering solution, which is achieved by comparing the results output from the systematic shift of a regular rigid grid over the data. This initial solution is optimal (achieves minimum square error) once the grid size and the shift increment are fixed. Finally, the algorithm corrects for the variable spread that is generally expected from the data type using a reshaped non-rigid grid. The algorithm is size-oriented, which implies the application of conditions on cluster size throughout the process, in contrast to density-oriented algorithms, which are also widely used when dealing with spatial data. Results are derived in a few seconds and, when tested on synthetic examples, were found to be consistent and reliable. This makes the algorithm a valuable alternative to the time-consuming traditional approaches available to geologists.
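
    The initial-solution step, systematically shifting a rigid regular grid and keeping the shift with minimum square error, can be sketched in a few lines. The 2-D Python version below is ours for illustration; the algorithm itself operates on orientation data converted to Cartesian coordinates and then applies the spherical corrections with a reshaped non-rigid grid.

        import numpy as np

        def best_grid_shift(points, cell, n_shifts=10):
            """Try n_shifts x n_shifts translations of a rigid square grid;
            keep the shift whose cell assignment minimises the within-cell
            squared error. points is an (n, 2) array."""
            best_sse, best_shift = np.inf, None
            steps = np.linspace(0.0, cell, n_shifts, endpoint=False)
            for dx in steps:
                for dy in steps:
                    keys = np.floor((points - [dx, dy]) / cell).astype(int)
                    sse = 0.0
                    for key in {tuple(k) for k in keys}:
                        members = points[(keys == key).all(axis=1)]
                        sse += ((members - members.mean(axis=0)) ** 2).sum()
                    if sse < best_sse:
                        best_sse, best_shift = sse, (dx, dy)
            return best_shift, best_sse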

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  14. Climate change scenarios of heat waves in Central Europe and their uncertainties

    NASA Astrophysics Data System (ADS)

    Lhotka, Ondřej; Kyselý, Jan; Farda, Aleš

    2018-02-01

    The study examines climate change scenarios of Central European heat waves with a focus on related uncertainties in a large ensemble of regional climate model (RCM) simulations from the EURO-CORDEX and ENSEMBLES projects. Historical runs (1970-1999) driven by global climate models (GCMs) are evaluated against the E-OBS gridded data set in the first step. Although the RCMs are found to reproduce the frequency of heat waves quite well, those RCMs with the coarser grid (25 and 50 km) considerably overestimate the frequency of severe heat waves. This deficiency is improved in higher-resolution (12.5 km) EURO-CORDEX RCMs. In the near future (2020-2049), heat waves are projected to be nearly twice as frequent in comparison to the modelled historical period, and the increase is even larger for severe heat waves. Uncertainty originates mainly from the selection of RCMs and GCMs because the increase is similar for all concentration scenarios. For the late twenty-first century (2070-2099), a substantial increase in heat wave frequencies is projected, the magnitude of which depends mainly upon concentration scenario. Three to four heat waves per summer are projected in this period (compared to less than one in the recent climate), and severe heat waves are likely to become a regular phenomenon. This increment is primarily driven by a positive shift of temperature distribution, but changes in its scale and enhanced temporal autocorrelation of temperature also contribute to the projected increase in heat wave frequencies.

  15. Automatic Integration Testbeds validation on Open Science Grid

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high-quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests, in particular tests that resemble as closely as possible the actual job workflows used by the experiments, exercising job scheduling at the compute element (CE), the worker-node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics, including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports on performance and reliability.

  16. Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hansen, S. E.; Papadopoulos, G. A.

    2017-12-01

    The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.

  17. Generation of a composite grid for turbine flows and consideration of a numerical scheme

    NASA Technical Reports Server (NTRS)

    Choo, Y.; Yoon, S.; Reno, C.

    1986-01-01

    A composite grid was generated for flows in turbines. It consisted of the C-grid (or O-grid) in the immediate vicinity of the blade and the H-grid in the middle of the blade passage between the C-grids and in the upstream region. This new composite grid provides better smoothness, resolution, and orthogonality than any single grid for a typical turbine blade with a large camber and rounded leading and trailing edges. The C-H (or O-H) composite grid has an unusual grid point that is connected to more than four neighboring nodes in two dimensions (more than six neighboring nodes in three dimensions). A finite-volume lower-upper (LU) implicit scheme to be used on this grid poses no problem and requires no special treatment because each interior cell of this composite grid has only four neighboring cells in two dimensions (six cells in three dimensions). The LU implicit scheme was demonstrated to be efficient and robust for external flows in a broad flow regime and can be easily applied to internal flows and extended from two to three dimensions.

  18. Processing LHC data in the UK

    PubMed Central

    Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.

    2013-01-01

    The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163

  19. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
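
    The central idea, marching on the coarse grid and visiting the fine grid only occasionally to refresh correction terms that restore fine-grid accuracy, can be caricatured for 1-D linear advection. The sketch below is a loose, multigrid-style defect correction under our own simplifications, not the paper's LDS construction; with the same CFL number on both grids, two fine steps span one coarse step.

        import numpy as np

        def upwind_step(u, cfl):
            # one explicit first-order upwind step of u_t + u_x = 0, periodic
            return u - cfl * (u - np.roll(u, 1))

        def lds_like_march(u0_fine, n_coarse_steps, m=8, cfl=0.5):
            """Coarse-grid marching with a fine-grid visit every m steps to
            refresh a defect correction added to each coarse update."""
            u_c = u0_fine[::2].copy()           # restriction by injection
            tau = np.zeros_like(u_c)
            for n in range(n_coarse_steps):
                if n % m == 0:                  # occasional fine-grid visit
                    u_f = np.repeat(u_c, 2)     # crude prolongation
                    for _ in range(2):          # two fine steps = one coarse step
                        u_f = upwind_step(u_f, cfl)
                    tau = u_f[::2] - upwind_step(u_c, cfl)
                u_c = upwind_step(u_c, cfl) + tau
            return u_c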

  20. Ion beamlet steering for two-grid electrostatic thrusters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Homa, J. M.

    1984-01-01

    An experimental study of ion beamlet steering, in which the direction of beamlets emitted from a two-grid aperture system is controlled by relative translation of the grids, is described. The results can be used to design electrostatic accelerating devices for which the direction and focus of the emerging beamlets are important. Deflection and divergence angle data are presented for two-grid systems as a function of the relative lateral displacement of the holes in these grids. At large displacements, accelerator grid impingements become excessive; this determines the maximum allowable displacement and, as a result, the useful range of beamlet deflection. Beamlet deflection is shown to vary linearly with grid offset angle over this range. The divergence of the beamlets is found to be unaffected by deflection over the useful range. The grids of a typical dished-grid ion thruster are examined to determine the effects of thermally induced grid distortion and prescribed offsets of grid hole centerlines on the characteristics of the emerging beamlets. The results are used to determine the region on the grid surface where ion beamlet deflections exceed the useful range. Over this region, high accelerator grid impingement currents and rapid grid erosion are predicted.

  1. Research of the application of the Low Power Wide Area Network in power grid

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Sui, Hong; Li, Jia; Yao, Jian

    2018-03-01

    Low Power Wide Area Network (LPWAN) technologies have developed rapidly in recent years, but they have not yet seen large-scale application across the different application scenarios of the power grid. LoRa is a mainstream LPWAN technology. This paper presents a comparative test of the signal coverage of LoRa and of traditional wireless communication technologies in typical power grid signal environments. Based on the test results, the paper gives suggestions for applying LoRa to power grid services, which can guide the planning and construction of LPWANs in the power grid.

  2. Grid computing in large pharmaceutical molecular modeling.

    PubMed

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  3. Big Geo Data Services: From More Bytes to More Barrels

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2016-04-01

    The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), which is currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The envisioned Holy Grail is that these approaches can eventually be merged into a single service interface. We present both the grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.
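
    As a hedged illustration of the datacube analytics interface, the sketch below submits a WCPS query over HTTP. The endpoint URL and coverage name are placeholders, and the key-value parameters follow the WCS Processing extension as commonly exposed by rasdaman; treat the exact parameter names as an assumption.

```python
import requests  # third-party: pip install requests

# Hypothetical rasdaman/WCPS endpoint -- a placeholder, not a real service.
ENDPOINT = "https://example.org/rasdaman/ows"

# A WCPS query computing the mean over a subset of a hypothetical coverage.
query = """
for $c in (MyCoverage)
return avg($c[Lat(50:55), Long(8:12)])
"""

# Parameter names assumed from the WCS Processing extension convention.
response = requests.post(ENDPOINT, data={"service": "WCS",
                                         "version": "2.0.1",
                                         "request": "ProcessCoverages",
                                         "query": query})
print(response.text)
```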

  4. Numerical simulation of three dimensional transonic flows

    NASA Technical Reports Server (NTRS)

    Sahu, Jubaraj; Steger, Joseph L.

    1987-01-01

    The three-dimensional flow over a projectile has been computed using an implicit, approximately factored, partially flux-split algorithm. A simple composite grid scheme has been developed in which a single grid is partitioned into a series of smaller grids for applications which require an external large memory device such as the SSD of the CRAY X-MP/48, or multitasking. The accuracy and stability of the composite grid scheme have been tested by numerically simulating the flow over an ellipsoid at angle of attack and comparing the solution with a single-grid solution. The flowfield over a projectile at M = 0.96 and 4 deg angle of attack has been computed using a fine grid and compared with experiment.

  5. Uncertainties and implications of applying aggregated data for spatial modelling of atmospheric ammonia emissions.

    PubMed

    Hellsten, S; Dragosits, U; Place, C J; Dore, A J; Tang, Y S; Sutton, M A

    2018-05-09

    Ammonia emissions vary greatly at a local scale, and effects (eutrophication, acidification) occur primarily close to sources. It is therefore important that spatially distributed emission estimates are located as accurately as possible. The main source of ammonia emissions is agriculture, and therefore agricultural survey statistics are the most important input data to an ammonia emission inventory, alongside per-activity estimates of emission potential. In the UK, agricultural statistics are collected at farm level, but are aggregated to parish level, NUTS-3 level or regular grid resolution for distribution to users. In this study, the Modifiable Areal Unit Problem (MAUP), associated with such amalgamation, is investigated in the context of assessing the spatial distribution of ammonia sources for emission inventories. England was used as a test area to study the effects of the MAUP. Agricultural survey data at farm level (point data) were obtained under license and amalgamated to different areal units or zones: regular 1-km, 5-km, 10-km grids and parish level, before they were imported into the emission model. The results of using the survey data at different levels of amalgamation were assessed to estimate the effects of the MAUP on the spatial inventory. The analysis showed that the size and shape of aggregation zones applied to the farm-level agricultural statistics strongly affect the location of the emissions estimated by the model. If the zones are too small, this may result in false emission "hot spots", i.e., artificially high emission values that are in reality not confined to the zone to which they are allocated. Conversely, if the zones are too large, detail may be lost and emissions smoothed out, which may give a false impression of the spatial patterns and magnitude of emissions in those zones. The results of the study indicate that the MAUP has a significant effect on the location and local magnitude of emissions in spatial inventories where amalgamated, zonal data are used.
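
    The scale effect is easy to reproduce numerically. The sketch below (synthetic point sources, not the study's licensed farm records) aggregates point-level emissions onto 1-km, 5-km and 10-km grids and shows how the apparent peak emission density falls as the zones grow.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "farm-level" emissions: 2000 point sources in a 100 km x 100 km area.
x = rng.uniform(0, 100_000, 2000)          # metres
y = rng.uniform(0, 100_000, 2000)
emission = rng.lognormal(mean=0.0, sigma=1.0, size=2000)  # arbitrary units

def grid_emissions(x, y, values, cell_size):
    """Sum point emissions into square cells; return per-km^2 densities."""
    n = int(100_000 // cell_size)
    ix = np.minimum((x // cell_size).astype(int), n - 1)
    iy = np.minimum((y // cell_size).astype(int), n - 1)
    grid = np.zeros((n, n))
    np.add.at(grid, (iy, ix), values)
    return grid / (cell_size / 1000.0) ** 2  # density per km^2

for cell in (1_000, 5_000, 10_000):
    density = grid_emissions(x, y, emission, cell)
    print(f"{cell / 1000:>4.0f} km cells: max density = {density.max():.1f}")
# Smaller cells produce sharper apparent "hot spots"; larger cells smooth them out.
```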

  6. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters on the order of n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.

  7. Redistribution population data across a regular spatial grid according to buildings characteristics

    NASA Astrophysics Data System (ADS)

    Calka, Beata; Bielecka, Elzbieta; Zdunkiewicz, Katarzyna

    2016-12-01

    Population data are generally provided by state census organisations at predefined census enumeration units. However, these datasets are very often required at user-defined spatial units that differ from the census output levels. A number of population estimation techniques have been developed to address these problems. This article is one such attempt, aimed at improving county-level population estimates by using spatial disaggregation models supported by building characteristics derived from the national topographic database and the average floor area of a dwelling. The experimental gridded population surface was created for Opatów county, a sparsely populated rural region in central Poland. The method relies on geolocating population counts to buildings, taking into account building volume and structural type, and then aggregating the population totals on a 1-km quadrilateral grid. The overall quality of the population distribution surface, expressed as RMSE, equals 9 persons; the MAE equals 0.01. We also discovered that nearly 20% of the total county area is unpopulated and that 80% of the people live on 33% of the county territory.
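
    A minimal sketch of the volume-weighted disaggregation step, with illustrative numbers (the weighting by building volume and structural type in the paper is more elaborate):

```python
import numpy as np

# Minimal sketch of volume-weighted disaggregation (assumed weighting scheme):
# distribute a census unit's population over its buildings in proportion to
# building volume, then sum the building populations into 1-km grid cells.

def disaggregate(total_population, building_volume):
    """Split a population total across buildings proportionally to volume."""
    weights = building_volume / building_volume.sum()
    return total_population * weights

# Example: one enumeration unit with 4 residential buildings.
volumes = np.array([1200.0, 800.0, 3000.0, 500.0])   # m^3, illustrative
pop_per_building = disaggregate(total_population=110, building_volume=volumes)

# Assign each building to a 1-km cell and accumulate.
cell_index = np.array([0, 0, 1, 1])                  # illustrative cell ids
grid_pop = np.zeros(2)
np.add.at(grid_pop, cell_index, pop_per_building)
print(grid_pop)   # population per 1-km cell
```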

  8. Multiple-grid convergence acceleration of viscous and inviscid flow computations

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1983-01-01

    A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of a simple, explicit fine-grid solution procedure is accelerated on a sequence of successively coarser grids by a coarse-grid information propagation method which rapidly eliminates transients from the computational domain. This use of multiple-gridding to increase the convergence rate results in substantially reduced work requirements for the numerical solution of a wide range of flow problems. Computational results are presented for subsonic and transonic inviscid flows and for laminar and turbulent, attached and separated, subsonic viscous flows. Work reduction factors as large as eight, in comparison to the basic fine-grid algorithm, were obtained. Possibilities for further performance improvement are discussed.
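
    The coarse-grid acceleration idea can be illustrated on a model problem. The sketch below is a standard two-grid correction cycle for the 1-D Poisson equation, standing in for the paper's Euler/Navier-Stokes setting:

```python
import numpy as np

# Two-grid correction sketch for -u'' = f on [0, 1], u(0) = u(1) = 0.

def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
    """Damped relaxation sweeps (weighted Jacobi)."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] - 2 * u[1:-1] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    u = smooth(u, f, h, sweeps=3)                                # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2    # residual
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])      # full weighting
    m, H = rc.size - 1, 2 * h
    A = (np.diag(np.full(m - 1, 2.0)) - np.diag(np.ones(m - 2), 1)
         - np.diag(np.ones(m - 2), -1)) / H**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                      # coarse solve
    e = np.zeros_like(u)                                         # prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                                                       # correction
    return smooth(u, f, h, sweeps=3)                             # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small after a few cycles
```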

  9. A new solution-adaptive grid generation method for transonic airfoil flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.; Holst, T. L.

    1981-01-01

    The clustering algorithm is controlled by a second-order, ordinary differential equation which uses the airfoil surface density gradient as a forcing function. The solution to this differential equation produces a surface grid distribution which is automatically clustered in regions with large gradients. The interior grid points are established from this surface distribution by using an interpolation scheme which is fast and retains the desirable properties of the original grid generated from the standard elliptic equation approach.
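
    A minimal sketch of gradient-driven point clustering, using weight-function equidistribution as a stand-in for the paper's second-order ODE (the forcing idea is the same: more points where the gradient is large):

```python
import numpy as np

# Equidistribution sketch (an assumption standing in for the paper's ODE-based
# clustering): place grid points so each cell carries equal weight, with the
# weight driven by the magnitude of the solution gradient.

def cluster_points(x, solution, n_points, alpha=1.0):
    """Return n_points locations clustered where |d(solution)/dx| is large."""
    grad = np.abs(np.gradient(solution, x))
    weight = 1.0 + alpha * grad / grad.max()          # always positive
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (weight[1:] + weight[:-1])
                                           * np.diff(x))))
    targets = np.linspace(0.0, cum[-1], n_points)
    return np.interp(targets, cum, x)                 # invert the cumulative map

x_fine = np.linspace(0.0, 1.0, 1001)
density = np.tanh(50.0 * (x_fine - 0.5))              # sharp gradient at x = 0.5
x_grid = cluster_points(x_fine, density, n_points=41)
print(np.diff(x_grid).min(), np.diff(x_grid).max())
# Cells are much smaller near x = 0.5, where the gradient is largest.
```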

  10. Small vulnerable sets determine large network cascades in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation, and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  11. Small vulnerable sets determine large network cascades in power grids

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-11-17

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation, and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  12. Applying multi-resolution numerical methods to geodynamics

    NASA Astrophysics Data System (ADS)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU-time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle.
The simulations presented are therefore at higher vigor than any attained to date in 3-D spherical geometry, achieving Rayleigh numbers of order 10⁹. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher vigors. However, such configurations have recently been observed. With hot-spots widely-regarded as the surface expression of deep mantle plumes, it is of great importance to ascertain whether or not these conclusions are valid at the dynamical regime of Earth's mantle. Results demonstrate that at these high vigors, steady plumes do arise. However, they do not dominate the planform as in lower vigor cases: they coexist with mobile and ephemeral plumes and display ranging characteristics, which are consistent with hot-spot observations on Earth. Those plumes that do remain steady alter in intensity throughout the simulation, strengthening and weakening over time. Such behavior is caused by an irregular supply of cold material to the core-mantle boundary region, suggesting that subducting slabs are partially responsible for episodic plume magmatism on Earth. With this in mind, the influence of the upper boundary condition upon the planform of mantle convection is further examined. With the modified code, the CPU-time needed to solve a given problem is reduced and, hence, several simulations can be run efficiently, allowing a relatively rapid parameter space mapping of various upper boundary conditions. Results, in accordance with the investigations on upwelling structures, demonstrate that the surface exerts a profound control upon internal dynamics, manifesting itself not only in convective structures, but also in thermal profiles, Nusselt numbers and velocity patterns. Since the majority of geodynamical simulations incorporate a surface condition that is not at all representative of Earth, this is a worrying, yet important conclusion. By failing to address the surface appropriately, geodynamical models, regardless of their sophistication, cannot be truly applicable to Earth. In summary, the techniques developed herein, in both 2- and 3-D, are extremely practical and highly efficient, yielding significant advantages for geodynamical simulations. Indeed, they allow one to solve problems that would otherwise be unfeasible.

  13. An integral conservative gridding algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
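
    The central trick, interpolating the integrated data with a shape-controlled Hermitian curve and differencing at the new bin edges, can be sketched as follows. PCHIP is used here as a readily available monotone Hermite interpolant standing in for the paper's parametrized curve; monotonicity of the cumulative curve guarantees non-negative re-binned values and exact conservation of the total.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Integral-conserving re-binning sketch: interpolate the cumulative integral
# of the histogrammed data with a monotone piecewise-cubic Hermite curve
# (PCHIP here; the paper uses a parametrized Hermitian curve with a
# user-controlled overshoot parameter), then difference at the new edges.

def rebin_conservative(old_edges, old_values, new_edges):
    """Re-bin histogram `old_values` (integral per bin) onto `new_edges`."""
    cum = np.concatenate(([0.0], np.cumsum(old_values)))  # cumulative integral
    curve = PchipInterpolator(old_edges, cum)             # monotone: no undershoot
    return np.diff(curve(new_edges))

old_edges = np.linspace(0.0, 10.0, 11)
old_values = np.array([0, 0, 1, 8, 3, 0, 0, 2, 2, 0], dtype=float)
new_edges = np.linspace(0.0, 10.0, 26)                    # finer grid
new_values = rebin_conservative(old_edges, old_values, new_edges)
print(old_values.sum(), new_values.sum())                 # total is conserved
```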

  14. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  15. Fine-Scale Survey of Right and Humpback Whale Prey Abundance and Distribution

    DTIC Science & Technology

    2011-09-30

    information, we accomplished: (1) Identification of the prey type (e.g. copepod, krill, fish) and numerical abundance of zooplankton and nekton in...primarily copepods in this area) and nekton (small fish such as sand lance or herring). The general approach is to conduct a regular grid-like...correlated right whale location in the water column with the distribution of copepods measured acoustically which has resulted in a high-profile, peer

  16. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step, and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices, which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration, which greatly reduces the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions satisfactorily recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
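
    The memory-saving structure described, in which the normal equations are solved iteratively from sensitivity matrix-vector products without ever forming the Jacobian, can be sketched on a toy problem (the forward operator below is an arbitrary stand-in for the finite-element MT solves):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Matrix-free Gauss-Newton sketch: CG sees only J @ v and J.T @ w products
# (finite differences stand in for the pseudo-forward problems of the paper).

def forward(m):
    """Toy nonlinear forward operator (placeholder for an FE MT solve)."""
    return np.array([m[0] ** 2 + m[1], np.sin(m[0]) + m[1] ** 2, m[0] * m[1]])

def jvp(m, v, eps=1e-7):
    """J @ v by central finite differences."""
    return (forward(m + eps * v) - forward(m - eps * v)) / (2 * eps)

def vjp(m, w, eps=1e-7):
    """J.T @ w as the finite-difference gradient of w . forward(m)."""
    g = np.zeros_like(m)
    for i in range(m.size):
        e = np.zeros_like(m)
        e[i] = eps
        g[i] = (w @ forward(m + e) - w @ forward(m - e)) / (2 * eps)
    return g

def gauss_newton(m, d_obs, iters=15, beta=1e-8):
    for _ in range(iters):
        r = d_obs - forward(m)
        A = LinearOperator((m.size, m.size), dtype=float,
                           matvec=lambda v: vjp(m, jvp(m, v)) + beta * v)
        dm, _ = cg(A, vjp(m, r))      # damped normal equations, matrix-free
        m = m + dm
    return m

m_true = np.array([1.3, -0.7])
m_est = gauss_newton(np.array([0.5, 0.5]), forward(m_true))
print(np.round(m_est, 4))             # should approach [1.3, -0.7]
```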

  17. Towards a consistent framework to oversample multi-sensors, multi-species satellite data into a common grid

    NASA Astrophysics Data System (ADS)

    Sun, K.; Zhu, L.; Gonzalez Abad, G.; Nowlan, C. R.; Miller, C. E.; Huang, G.; Liu, X.; Chance, K.; Yang, K.

    2017-12-01

    It has been well demonstrated that regridding Level 2 products (satellite observations from individual footprints, or pixels) from multiple sensors/species onto regular spatial and temporal grids makes the data more accessible for scientific studies and can even lead to additional discoveries. However, synergizing multiple species retrieved from multiple satellite sensors faces many challenges, including differences in spatial coverage, viewing geometry, and data filtering criteria. These differences will lead to errors and biases if not treated carefully. Operational gridded products are often at 0.25°×0.25° resolution with a global scale, which is too coarse for local heterogeneous emission sources (e.g., urban areas), and at fixed temporal intervals (e.g., daily or monthly). We propose a consistent framework to fully use and properly weight the information of all possible individual satellite observations. A key aspect of this work is accurate knowledge of the spatial response function (SRF) of the satellite Level 2 pixels. We found that the conventional overlap-area-weighting method (tessellation) is accurate only when the SRF is homogeneous within the parameterized pixel boundary and zero outside the boundary. There will be a tessellation error if the SRF is a smooth distribution, and if this distribution is not properly considered. On the other hand, discretizing the SRF at the destination grid will also induce errors. By balancing these error sources, we found that the SRF should be used when gridding OMI data at fine resolutions of 0.2° and finer. Case studies merging multiple species and wind data onto a 0.01° grid will be shown in the presentation.
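
    The difference between boundary-based tessellation weighting and SRF weighting can be sketched for a single pixel; the Gaussian SRF below is an illustrative assumption, not the instrument's measured response:

```python
import numpy as np

# Sketch: deposit one satellite pixel's value onto a fine destination grid
# using (a) uniform weight inside the parameterized pixel boundary
# (tessellation) and (b) a smooth Gaussian spatial response function (SRF).

grid = np.linspace(-1.0, 1.0, 201)                   # destination grid (degrees)
gx, gy = np.meshgrid(grid, grid)

half_width = 0.15                                    # pixel half-size, degrees
inside = (np.abs(gx) <= half_width) & (np.abs(gy) <= half_width)
w_tess = inside / inside.sum()                       # overlap-area weighting

sigma = half_width / np.sqrt(2.0 * np.log(2.0))      # FWHM matches pixel size
w_srf = np.exp(-(gx**2 + gy**2) / (2 * sigma**2))
w_srf /= w_srf.sum()                                 # normalized SRF weighting

value = 3.7                                          # the pixel's retrieved value
field_tess = value * w_tess
field_srf = value * w_srf
print(field_tess.max(), field_srf.max())             # SRF spreads weight smoothly
```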

  18. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on the smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
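
    For reference, the theoretical prediction cited (Scotti et al., 1993) amounts to an anisotropy correction of the equivalent filter width; the sketch below reproduces the correction factor from memory, so it should be checked against the paper before reuse:

```python
import numpy as np

# Equivalent filter width for an anisotropic grid, following the anisotropy
# correction of Scotti et al. (1993) as cited in the abstract (formula
# reproduced from memory -- verify against the paper before relying on it).

def delta_equivalent(dx, dy, dz):
    """Geometric-mean width times the Scotti et al. anisotropy factor."""
    d = np.sort([dx, dy, dz])          # d[2] is the largest spacing
    a1, a2 = d[0] / d[2], d[1] / d[2]  # aspect ratios <= 1
    f = np.cosh(np.sqrt(4.0 / 27.0 * (np.log(a1) ** 2
                                      - np.log(a1) * np.log(a2)
                                      + np.log(a2) ** 2)))
    return (dx * dy * dz) ** (1.0 / 3.0) * f

print(delta_equivalent(1.0, 1.0, 1.0))    # isotropic: factor is 1
print(delta_equivalent(1.0, 1.0, 8.0))    # pencil-like grid
print(delta_equivalent(1.0, 8.0, 8.0))    # pancake-like grid
```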

  19. DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, Marty, A

    The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. There were broadly two problems being addressed by this project. First, there was a lack of an Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in the Open Services Grid Architecture. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies,more » resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, thereby providing a Grid user to centralize (where appropriate) and manage his/her policies. Organizationally, the corresponding service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: General security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.« less

  20. The impact of model detail on power grid resilience measures

    NASA Astrophysics Data System (ADS)

    Auer, S.; Kleis, K.; Schultz, P.; Kurths, J.; Hellmann, F.

    2016-05-01

    Extreme events are a challenge to natural as well as man-made systems. For critical infrastructure like power grids, we need to understand their resilience against large disturbances. Recently, new measures of the resilience of dynamical systems have been developed in the complex systems literature. Basin stability and survivability respectively assess the asymptotic and transient behavior of a system when subjected to arbitrary, localized but large perturbations in frequency and phase. To employ these methods to assess power grid resilience, we need to choose a certain level of model detail for the power grid. For the grid topology we considered the Scandinavian grid and an ensemble of power grids generated with a random growth model. So far the most popular model that has been studied is the classical swing equation model for the frequency response of generators and motors. In this paper we study a more sophisticated model of synchronous machines that also takes voltage dynamics into account, and compare it to the previously studied model. In the engineering literature, this model has been found to give an accurate picture of the long-term evolution of synchronous machines in post-fault studies. We find evidence that some stable fixed points of the swing equation become unstable when we add voltage dynamics. If this occurs, the asymptotic behavior of the system can be dramatically altered, and basin stability estimates obtained with the swing equation can be dramatically wrong. We also find that the survivability does not change significantly when taking the voltage dynamics into account. Further, the limit-cycle-type asymptotic behavior is strongly correlated with transient voltages that violate typical operational voltage bounds. Thus, transient voltage bounds are dominated by transient frequency bounds and play no large role for realistic parameters.
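
    A single-node sketch of the basin stability estimate for the swing equation (without voltage dynamics) is given below; the machine parameters and perturbation ranges are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-node basin stability sketch (swing equation, no voltage dynamics):
# draw random large perturbations in phase and frequency and count the
# fraction of trajectories that return to the synchronous state.

P, ALPHA, K = 1.0, 0.1, 8.0        # power, damping, coupling (illustrative)

def swing(t, y):
    phi, omega = y
    return [omega, -ALPHA * omega + P - K * np.sin(phi)]

rng = np.random.default_rng(0)
n_samples, converged = 200, 0
for _ in range(n_samples):
    y0 = [rng.uniform(-np.pi, np.pi), rng.uniform(-100.0, 100.0)]
    sol = solve_ivp(swing, (0.0, 200.0), y0, rtol=1e-6)
    phi_end, omega_end = sol.y[:, -1]
    # Converged if frequency deviation dies out and the phase sits at the
    # stable fixed point, where sin(phi) = P / K.
    if abs(omega_end) < 0.05 and abs(np.sin(phi_end) - P / K) < 0.05:
        converged += 1
print("basin stability estimate:", converged / n_samples)
```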

  1. Crowdsourcing Vector Surveillance: Using Community Knowledge and Experiences to Predict Densities and Distribution of Outdoor-Biting Mosquitoes in Rural Tanzania.

    PubMed

    Mwangungulu, Stephen Peter; Sumaye, Robert David; Limwagu, Alex Julius; Siria, Doreen Josen; Kaindoa, Emmanuel Wilson; Okumu, Fredros Oketch

    2016-01-01

    Lack of reliable techniques for large-scale monitoring of disease-transmitting mosquitoes is a major public health challenge, especially where advanced geo-information systems are not regularly applicable. We tested an innovative crowd-sourcing approach, which relies simply on knowledge and experiences of residents to rapidly predict areas where disease-transmitting mosquitoes are most abundant. Guided by community-based resource persons, we mapped boundaries and major physical features in three rural Tanzanian villages. We then selected 60 community members, taught them basic map-reading skills, and offered them gridded maps of their own villages (grid size: 200m×200m) so they could identify locations where they believed mosquitoes were most abundant, by ranking the grids from one (highest density) to five (lowest density). The ranks were interpolated in ArcGIS-10 (ESRI-USA) using the inverse distance weighting (IDW) method, and re-classified to depict areas people believed had high, medium and low mosquito densities. Finally, we used odor-baited mosquito traps to compare and verify actual outdoor mosquito densities in the same areas. We repeated this process for 12 months, each time with a different group of 60 residents. All entomological surveys depicted similar geographical stratification of mosquito densities in areas classified by community members as having high, medium and low vector abundance. These similarities were observed when all mosquito species were combined, and also when only malaria vectors were considered. Of the 12,412 mosquitoes caught, 60.9% (7,555) were from areas considered by community members as having high mosquito densities, 28% (3,470) from medium density areas, and 11.2% (1,387) from low density areas. This study provides evidence that we can rely on community knowledge and experiences to identify areas where mosquitoes are most abundant or least abundant, even without entomological surveys. This crowd-sourcing method could be further refined and validated to improve community-based planning of mosquito control operations at low cost.
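
    The interpolation step is plain inverse distance weighting; a minimal stand-in for the ArcGIS-10 IDW tool used in the study might look like this:

```python
import numpy as np

# Minimal inverse-distance-weighting (IDW) sketch of the interpolation step
# (the study used the IDW tool in ArcGIS-10; this is an illustrative stand-in).

def idw(xy_known, ranks, xy_query, power=2.0, eps=1e-12):
    """Interpolate community-assigned ranks at query points by IDW."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ ranks) / w.sum(axis=1)

# Grid-cell centres (200 m cells) ranked 1 (highest density) to 5 (lowest).
known = np.array([[100.0, 100.0], [300.0, 100.0], [100.0, 300.0], [500.0, 500.0]])
rank = np.array([1.0, 2.0, 3.0, 5.0])
query = np.array([[200.0, 200.0], [450.0, 450.0]])
print(idw(known, rank, query))      # interpolated rank surface values
```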

  2. Crowdsourcing Vector Surveillance: Using Community Knowledge and Experiences to Predict Densities and Distribution of Outdoor-Biting Mosquitoes in Rural Tanzania

    PubMed Central

    Limwagu, Alex Julius; Siria, Doreen Josen; Kaindoa, Emmanuel Wilson; Okumu, Fredros Oketch

    2016-01-01

    Lack of reliable techniques for large-scale monitoring of disease-transmitting mosquitoes is a major public health challenge, especially where advanced geo-information systems are not regularly applicable. We tested an innovative crowd-sourcing approach, which relies simply on knowledge and experiences of residents to rapidly predict areas where disease-transmitting mosquitoes are most abundant. Guided by community-based resource persons, we mapped boundaries and major physical features in three rural Tanzanian villages. We then selected 60 community members, taught them basic map-reading skills, and offered them gridded maps of their own villages (grid size: 200m×200m) so they could identify locations where they believed mosquitoes were most abundant, by ranking the grids from one (highest density) to five (lowest density). The ranks were interpolated in ArcGIS-10 (ESRI-USA) using the inverse distance weighting (IDW) method, and re-classified to depict areas people believed had high, medium and low mosquito densities. Finally, we used odor-baited mosquito traps to compare and verify actual outdoor mosquito densities in the same areas. We repeated this process for 12 months, each time with a different group of 60 residents. All entomological surveys depicted similar geographical stratification of mosquito densities in areas classified by community members as having high, medium and low vector abundance. These similarities were observed when all mosquito species were combined, and also when only malaria vectors were considered. Of the 12,412 mosquitoes caught, 60.9% (7,555) were from areas considered by community members as having high mosquito densities, 28% (3,470) from medium density areas, and 11.2% (1,387) from low density areas. This study provides evidence that we can rely on community knowledge and experiences to identify areas where mosquitoes are most abundant or least abundant, even without entomological surveys. This crowd-sourcing method could be further refined and validated to improve community-based planning of mosquito control operations at low cost. PMID:27253869

  3. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) output natively 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy, spatial resolution and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated to topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimate directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use 2 recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2) now implemented in the open source software CloudCompare.

  4. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between local orbital and gradiometer reference frame. Error covariances are computed by Montecarlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies from external gravity information were removed thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  5. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
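
    A minimal sketch of the binned-pdf integration, with an illustrative beta-distributed wetness field and an assumed piecewise-linear stress function, shows how the area-integrated rate differs from the rate evaluated at the grid-mean wetness:

```python
import numpy as np

# Sketch of the binned-pdf upscaling idea: integrate a nonlinear stress
# function over a binned soil-wetness pdf instead of evaluating it at the
# grid-mean wetness. The beta pdf and stress function are illustrative.

def stress(w, w_wilt=0.15, w_crit=0.45):
    """Illustrative nonlinear wetness-stress factor in [0, 1]."""
    return np.clip((w - w_wilt) / (w_crit - w_wilt), 0.0, 1.0)

rng = np.random.default_rng(1)
wetness = rng.beta(2.0, 3.0, size=100_000)       # sub-grid wetness variability

bins = np.linspace(0.0, 1.0, 11)                 # 10 bins, as in the method
counts, edges = np.histogram(wetness, bins=bins)
pdf = counts / counts.sum()
centres = 0.5 * (edges[:-1] + edges[1:])

area_avg = np.sum(pdf * stress(centres))         # integrate the binned pdf
naive = stress(wetness.mean())                   # single grid-mean value
print(f"binned-pdf average: {area_avg:.3f}, f(mean wetness): {naive:.3f}")
```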

  6. National Offshore Wind Energy Grid Interconnection Study Full Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, John P.; Liu, Shu; Ibanez, Eduardo

    2014-07-30

    The National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) considers the availability and potential impacts of interconnecting large amounts of offshore wind energy into the transmission system of the lower 48 contiguous United States.

  7. A graphene oxide-carbon nanotube grid for high-resolution transmission electron microscopy of nanomaterials.

    PubMed

    Zhang, Lina; Zhang, Haoxu; Zhou, Ruifeng; Chen, Zhuo; Li, Qunqing; Fan, Shoushan; Ge, Guanglu; Liu, Renxiao; Jiang, Kaili

    2011-09-23

    A novel grid for use in transmission electron microscopy is developed. The supporting film of the grid is composed of thin graphene oxide films overlying a super-aligned carbon nanotube network. The composite film combines the advantages of graphene oxide and carbon nanotube networks and has the following properties: it is ultra-thin, it has a large flat and smooth effective supporting area with a homogeneous amorphous appearance, high stability, and good conductivity. The graphene oxide-carbon nanotube grid has a distinct advantage over conventional amorphous carbon grids when characterizing the fine structure of a wide range of nanomaterials. Clear high-resolution transmission electron microscopy images of various nanomaterials are obtained easily using the new grids.

  8. The construction of power grid operation index system considering the risk of maintenance

    NASA Astrophysics Data System (ADS)

    Tang, Jihong; Wang, Canlin; Jiang, Xinfan; Ye, Jianhui; Pan, Feilai

    2018-02-01

    In recent years, large-scale blackouts at home and abroad have caused widespread concern about power grid operation, and maintenance risk is an important indicator of grid safety. During power grid overhaul, circuit breakers are operated to form barriers, and operating different barriers significantly changes the power flow, thereby affecting the safe operation of the system. Most existing evaluation index systems for power grid operating status do not consider maintenance risk. To this end, this paper builds a power grid operation index system that considers maintenance risk from four angles: security, economy, quality and cleanliness.

  9. Grid oscillators

    NASA Technical Reports Server (NTRS)

    Popovic, Zorana B.; Kim, Moonil; Rutledge, David B.

    1988-01-01

    Loading a two-dimensional grid with active devices offers a means of combining the power of solid-state oscillators in the microwave and millimeter-wave range. The grid structure allows a large number of negative resistance devices to be combined. This approach is attractive because the active devices do not require an external locking signal, and the combining is done in free space. In addition, the loaded grid is a planar structure amenable to monolithic integration. Measurements on a 25-MESFET grid at 9.7 GHz show power-combining and frequency-locking without an external locking signal, with an ERP of 37 W. Experimental far-field patterns agree with theoretical results obtained using reciprocity.

  10. Exploiting the Potential of Data Centers in the Smart Grid

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei

    As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they represent a large and noticeable electric load. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research in the area of letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, offering researchers further directions to explore in this field.

  11. Mounting structure

    NASA Technical Reports Server (NTRS)

    Ganssle, Eugene Robert (Inventor); Scott, Ralph Richard (Inventor); Williams, Richard Jean (Inventor)

    1978-01-01

    A mounting platform for heat producing instruments operated in a narrow equilibrium temperature range comprises a grid-like structure with relatively large openings therein. The instruments are secured to and thermally coupled with the grid surface facing the instruments. Excess heat from the instruments is selectively radiated to the ambient through openings in the grid, the grid surfaces at these openings exhibiting low thermal emissivity and absorptivity. The remainder of the grid is maintained at the equilibrium temperature and is covered with a thermal insulating blanket. Thus, the entire system including the platform and instruments is maintained substantially isothermal, whereby the instruments remain in fixed physical relationship to one another.

  12. An optimized top contact design for solar cell concentrators

    NASA Technical Reports Server (NTRS)

    Desalvo, Gregory C.; Barnett, Allen M.

    1985-01-01

    A new grid optimization scheme is developed for point focus solar cell concentrators which employs a separated grid and busbar concept. Ideally, grid lines act as the primary current collectors and receive all of the current from the semiconductor region. Busbars are the secondary collectors which pick up current from the grids and carry it out of the active region of the solar cell. This separation of functions leads to a multithickness metallization design, where the busbars are made larger in cross section than the grids. This enables the busbars to carry more current per unit area of shading, which is advantageous under high solar concentration where large current densities are generated. Optimized grid patterns using this multilayer concept can provide a 1.6 to 20 percent increase in output power efficiency over optimized single thickness grids.

  13. Spacecraft hazard avoidance utilizing structured light

    NASA Technical Reports Server (NTRS)

    Liebe, Carl Christian; Padgett, Curtis; Chapsky, Jacob; Wilson, Daniel; Brown, Kenneth; Jerebets, Sergei; Goldberg, Hannah; Schroeder, Jeffrey

    2006-01-01

    At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the 3-dimensional structure of nearby objects using a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that are projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle. The distances to all beam intersections with the host are calculated by triangulation.
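
    The range recovery is classical two-ray triangulation; a minimal planar sketch with illustrative angles and baseline:

```python
import numpy as np

# Triangulation sketch: with the laser source and camera separated by a
# baseline b, a spot seen at camera angle `cam_angle` from a beam launched
# at `laser_angle` fixes the range by intersecting the two rays. The angles
# and baseline below are illustrative values, not the sensor's geometry.

def range_from_triangulation(baseline, cam_angle, laser_angle):
    """Distance from the camera to the laser spot (planar two-ray case)."""
    # Angles measured from the baseline; law of sines in the triangle.
    third = np.pi - cam_angle - laser_angle
    return baseline * np.sin(laser_angle) / np.sin(third)

b = 0.20                                  # 20 cm baseline, illustrative
print(range_from_triangulation(b, np.radians(80.0), np.radians(85.0)))
```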

  14. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.

  15. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet based multiresolution scheme was introduced in image processing, for the purposes of data compression and feature extraction. Unlike photographic image data which has rather simple settings, computational field simulation data needs more careful treatment in applying the multiresolution technique. While the image data sits on a regular spaced grid, the simulation data usually resides on a structured curvilinear grid or unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, the photographic images have very little inherent smoothness with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shock, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of approximation. The advantages of the multiresolution algorithm are that no special treatment is required at the boundaries of the interval, and that the application to functions which are only piecewise continuous (internal boundaries) can be efficiently implemented. In this presentation, Beam's supercompact wavelets are generalized to higher dimensions using multidimensional scaling and wavelet functions rather than alternating the directions as in the 1D version. As a demonstration of actual 3D data compression, supercompact wavelet transforms are applied to a 3D data set for wing tip vortex flow solutions (2.5 million grid points). It is shown that high data compression ratio can be achieved (around 50:1 ratio) in both vector and scalar data set.
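
    The flavor of the multiresolution compression can be conveyed with plain Haar wavelets, the simplest member of the family that the supercompact multi-wavelets generalize to higher polynomial order:

```python
import numpy as np

# Multiresolution compression sketch using plain Haar wavelets (the paper's
# supercompact multi-wavelets generalize these to higher polynomial order).

def haar_forward(u):
    """Full Haar decomposition of a length-2^k signal."""
    coeffs, approx = [], u.copy()
    while approx.size > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        approx = (even + odd) / np.sqrt(2.0)         # coarser approximation
    return approx, coeffs

def haar_inverse(approx, coeffs):
    for detail in reversed(coeffs):
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + detail) / np.sqrt(2.0)
        out[1::2] = (approx - detail) / np.sqrt(2.0)
        approx = out
    return approx

x = np.linspace(0.0, 1.0, 1024)
u = np.tanh(40.0 * (x - 0.5))                        # smooth with a sharp layer
approx, coeffs = haar_forward(u)
kept = sum(np.count_nonzero(np.abs(c) > 1e-3) for c in coeffs)
coeffs = [np.where(np.abs(c) > 1e-3, c, 0.0) for c in coeffs]   # threshold
u_rec = haar_inverse(approx, coeffs)
print(f"kept {kept}/1023 detail coefficients, "
      f"max error {np.max(np.abs(u - u_rec)):.2e}")
```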

  16. Evolution of passive scalar statistics in a spatially developing turbulence

    NASA Astrophysics Data System (ADS)

    Paul, I.; Papadakis, G.; Vassilicos, J. C.

    2018-02-01

    We investigate the evolution of passive scalar statistics in a spatially developing turbulence using direct numerical simulation. Turbulence is generated by a square grid element, which is heated continuously, and the passive scalar is temperature. The square element is the fundamental building block for both regular and fractal grids. We trace the dominant mechanisms responsible for the dynamical evolution of scalar-variance and its dissipation along the bar and grid-element centerlines. The scalar-variance is generated predominantly by the action of the mean scalar gradient behind the bar and is transported laterally by turbulent fluctuations to the grid-element centerline. The scalar-variance dissipation (proportional to the scalar-gradient variance) is produced primarily by the compression of the fluctuating scalar-gradient vector by the turbulent strain rate, while the contribution of mean velocity and scalar fields is negligible. Close to the grid element the scalar spectrum exhibits a well-defined -5 /3 power-law, even though the basic premises of the Kolmogorov-Obukhov-Corrsin theory are not satisfied (the fluctuating scalar field is highly intermittent, inhomogeneous, and anisotropic, and the local Corrsin-microscale-Péclet number is small). At this location, the PDF of scalar gradient production is only slightly skewed towards positive, and the fluctuating scalar-gradient vector aligns only with the compressive strain-rate eigenvector. The scalar-gradient vector is stretched or compressed stronger than the vorticity vector by turbulent strain rate throughout the grid-element centerline. However, the alignment of the former changes much earlier in space than that of the latter, resulting in scalar-variance dissipation to decay earlier along the grid-element centerline compared to the turbulent kinetic energy dissipation. The universal alignment behavior of the scalar-gradient vector is found far downstream, although the local Reynolds and Péclet numbers (based on the Taylor and Corrsin length scales, respectively) are low.

  17. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically changing nonuniform grid. This imbalance requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  18. Benchmarking Memory Performance with the Data Cube Operator

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of the memory performance of a number of computer architectures and of a small computational grid are presented.
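    As a rough illustration of the "smallest parent" idea, the sketch below (Python, hypothetical names, count measure only) computes all 2^d group-by views of a tiny tuple set, deriving each view by aggregating the smallest already-computed view that contains its attributes rather than rescanning the base data.

      from itertools import combinations

      def data_cube(tuples, d):
          """Compute all 2^d group-by views of d-attribute tuples with a count
          measure, deriving each view from its smallest already-computed parent."""
          base = {}
          for t in tuples:                  # the full view groups on all d attributes
              base[t] = base.get(t, 0) + 1
          views = {tuple(range(d)): base}
          # visit views from most to fewest attributes so a parent always exists
          for k in range(d - 1, -1, -1):
              for attrs in combinations(range(d), k):
                  parents = [p for p in views
                             if set(attrs) <= set(p) and len(p) == k + 1]
                  parent = min(parents, key=lambda p: len(views[p]))  # smallest parent
                  view = {}
                  for key, cnt in views[parent].items():
                      sub = tuple(key[parent.index(a)] for a in attrs)
                      view[sub] = view.get(sub, 0) + cnt
                  views[attrs] = view
          return views

      cube = data_cube([(1, 'a', 9), (1, 'b', 9), (2, 'a', 9)], d=3)
      print(cube[(0,)])   # counts grouped on attribute 0 -> {(1,): 2, (2,): 1}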

  19. Power Hardware-in-the-Loop Testing of Multiple Photovoltaic Inverters' Volt-Var Control with Real-Time Grid Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Sudipta; Nelson, Austin; Hoke, Anderson

    2016-12-12

    Traditional testing methods fall short in evaluating interactions between multiple smart inverters providing advanced grid support functions, because such interactions largely depend on the inverters' placement on the electric distribution system and the impedances between them. Even though utilities have raised significant concerns about the effects of such interactions, little effort has been made to evaluate them. In this paper, power hardware-in-the-loop (PHIL) testing was used to evaluate autonomous volt-var operation of multiple smart photovoltaic (PV) inverters connected to a simple distribution feeder model. The results provided in this paper show that, depending on volt-var control (VVC) parameters and grid parameters, interaction between inverters, and between an inverter and the grid, is possible in some extreme cases with very high VVC slopes, fast response times, and large VVC response delays.

  20. Outcomes of Spatially Fractionated Radiotherapy (GRID) for Bulky Soft Tissue Sarcomas in a Large Animal Model

    PubMed Central

    Gieger, Tracy L.; Karakashian, Alexander A.; Nikolova-Karakashian, Mariana N.; Posner, Lysa P.; Roback, Donald M.; Rivera, Judith N.; Chang, Sha

    2017-01-01

    GRID directs alternating regions of high- and low-dose radiation at tumors. A large animal model mimicking the geometries of human treatments is needed to complement existing rodent systems (eg, microbeam) and clarify the physical and biological attributes of GRID. A pilot study was undertaken in pet dogs with spontaneous soft tissue sarcomas to characterize responses to GRID. Subjects were treated with either 20 Gy (3 dogs) or 25 Gy (3 dogs), delivered using 6 MV X-rays and a commercial GRID collimator. Acute toxicity and tumor responses were assessed 2, 4, and 6 weeks later. Acute Radiation Therapy Oncology Group grade I skin toxicity was observed in 3 of the 6 dogs; none experienced a measurable response, per Response Evaluation Criteria in Solid Tumors. Serum vascular endothelial growth factor, tumor necrosis factor α, and secretory sphingomyelinase were assayed at baseline and 1, 4, 24, and 48 hours after treatment. There was a trend toward platelet-corrected serum vascular endothelial growth factor concentration being lower 1 and 48 hours after GRID than at baseline. There was a significant decrease in secretory sphingomyelinase activity 48 hours after 25 Gy GRID (P = .03). Serum tumor necrosis factor α was measurable at baseline in 4 of the 6 dogs and decreased in each of those subjects at all post-GRID time points. The new information generated by this study includes the observation that high-dose, single-fraction application of GRID does not induce measurable reduction in the volume of canine soft tissue sarcomas. In contrast to previously published data, these data suggest that GRID may be associated with at least a short-term reduction in the serum concentration of vascular endothelial growth factor and the serum activity of secretory sphingomyelinase. Because GRID can be applied safely, and these tumors can subsequently be surgically resected as part of routine veterinary care, pet dogs with sarcomas are an appealing model for studying the radiobiologic responses to spatially fractionated radiotherapy. PMID:28168937

  1. 15 MW HArdware-in-the-loop Grid Simulation Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rigas, Nikolaos; Fox, John Curtiss; Collins, Randy

    2014-10-31

    The goal of the 15 MW Hardware-in-the-loop (HIL) Grid Simulator project was to (1) design, (2) construct, and (3) commission a state-of-the-art grid integration testing facility for testing multi-megawatt devices through a 'shared facility' model open to all innovators, promoting the rapid introduction of new technology in the energy market to lower the cost of delivered energy. The 15 MW HIL Grid Simulator project now serves as the cornerstone of the Duke Energy Electric Grid Research, Innovation and Development (eGRID) Center. The project leveraged the 24 kV utility interconnection and electrical infrastructure of the US DOE EERE funded WTDTF project at the Clemson University Restoration Institute in North Charleston, SC. Additionally, the project has spurred interest from other technology sectors, including large PV inverter and energy storage testing and several leading-edge research proposals dealing with smart grid technologies, grid modernization, and grid cyber security. The key components of the project are the power amplifier units, capable of providing up to 20 MW of defined power to the research grid. The project has also developed a one-of-a-kind solution for fault ride-through testing by combining a reactive divider network and a large power converter into a hybrid method. This unique hybrid method of performing fault ride-through analysis will allow the research team at the eGRID Center to investigate the complex differences between the alternative methods of performing fault ride-through evaluations and will ultimately further the science behind this testing. With the final goal of being able to perform HIL experiments and demonstration projects, the eGRID team undertook a significant challenge in developing a control system capable of communicating in real time with several different pieces of equipment using different communication protocols. The eGRID team developed a custom fiber-optic network based on FPGA hardware that allows communication between the key real-time interfaces and reduces the latency between these interfaces to levels acceptable for HIL experiments.

  2. Information Power Grid (IPG) Tutorial 2003

    NASA Technical Reports Server (NTRS)

    Meyers, George

    2003-01-01

    For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing (unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. The grid provides tools that enable the building of frameworks for applications, and it provides value-added services that let the NASA user base utilize resources on the grid in new and more efficient ways.

  3. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-12-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate the configuration and status information about the resources, services, and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  4. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, and high spatial resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, faces the same problem: the computational cost limits applications to very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU in a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
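    OpenMP itself is a compiler-directive API for C/C++ and Fortran; as a language-neutral sketch of the same shared-memory pattern - an independent loop over HRUs split across workers - consider the following Python fragment. The per-HRU computation and all names are placeholders, not SWATG code.

      from concurrent.futures import ProcessPoolExecutor

      def simulate_hru(hru):
          # stand-in for the per-HRU hydrologic computation of one time step
          soil_moisture, rainfall = hru
          runoff = max(0.0, rainfall - (1.0 - soil_moisture))
          return runoff

      # hypothetical HRU states: (soil moisture, rainfall)
      hrus = [(0.3, 1.2), (0.8, 0.4), (0.5, 0.9)] * 1000

      if __name__ == "__main__":
          # analogue of OpenMP's "parallel for" with a 15-thread configuration
          with ProcessPoolExecutor(max_workers=15) as pool:
              runoffs = list(pool.map(simulate_hru, hrus, chunksize=256))
          print(sum(runoffs))

    The pattern works because HRUs within a time step are independent; only the aggregation of their outputs needs synchronization.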

  5. A 149 min periodicity underlies the X-ray flaring of Sgr A*

    NASA Astrophysics Data System (ADS)

    Leibowitz, Elia

    2018-03-01

    In a 2017 paper, I showed that 39 large X-ray flares of Sgr A* recorded by the Chandra observatory in 2012 are preferentially concentrated around the tick marks of an equidistant grid on the time axis. The period of this grid, as found in that paper, is 0.1033 d. In this work I show that the effect can be found among all the large X-ray flares recorded by Chandra and XMM-Newton over 15 yr. The mid-points of all 71 large flares recorded between the years 2000 and 2014 are also tightly grouped around the tick marks of a grid with this period or, more likely, 0.1032 d. This result is obtained with a confidence level of at least 3.27σ and very likely 4.62σ. I also find a possible hint that a similar grid underlies the IR flares of the object. I suggest that the pacemaker of the occurrences of the large X-ray flares of Sgr A* is a mass of the order of a low-mass star or a small planet, in a slightly eccentric Keplerian orbit around the SMBH at the centre of the Galaxy. The radius of this orbit is about 6.6 Schwarzschild radii of the BH.
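    A standard way to quantify such clustering around the tick marks of a trial period grid is the Rayleigh test on the event phases. A minimal sketch follows (Python, with synthetic flare times, not the Chandra/XMM data):

      import numpy as np

      def rayleigh_power(times, period):
          """Rayleigh statistic: large when events cluster at one phase."""
          phase = 2.0 * np.pi * (times / period)
          return np.hypot(np.cos(phase).sum(), np.sin(phase).sum())**2 / len(times)

      rng = np.random.default_rng(2)
      P_true = 0.1032                                   # days (value from the abstract)
      t = np.round(rng.uniform(0, 500, 71) / P_true) * P_true   # events on grid ticks
      t += rng.normal(0.0, 0.1 * P_true, 71)            # scatter around the ticks

      periods = np.linspace(0.10, 0.11, 2001)
      power = [rayleigh_power(t, P) for P in periods]
      print("best trial period:", periods[int(np.argmax(power))])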

  6. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  7. caGrid 1.0 : an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components, and the caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: .

  8. Modelling tidal current energy extraction in large area using a three-dimensional estuary model

    NASA Astrophysics Data System (ADS)

    Chen, Yaling; Lin, Binliang; Lin, Jie

    2014-11-01

    This paper presents a three-dimensional modelling study for simulating tidal current energy extraction over large areas, with a momentum sink term added to the momentum equations. Due to limits on computational capacity, the grid size of the numerical model is generally much larger than the turbine rotor diameter. Two models, i.e. a local grid refinement model and a coarse grid model, are employed, and an idealized estuary is set up. The local grid refinement model is constructed to simulate the power generation of an isolated turbine and its impacts on hydrodynamics. The model is then used to determine the deployment of the turbine farm and to quantify a combined thrust coefficient for multiple turbines located in one grid element of the coarse grid model. The model results indicate that the performance of power extraction is affected by array deployment, with more power generated by outer rows than inner rows due to the velocity deficit caused by upstream turbines. Model results also demonstrate that a large-scale turbine farm has significant effects on the hydrodynamics. The tidal currents are attenuated within the turbine swept area, and both upstream and downstream of the array, while the currents are accelerated above and below the turbines, which speeds up the wake mixing process behind the arrays. Water levels are raised at both low and high water when the turbine array spans the full width of the estuary. The magnitude of the water level change is found to increase as the array expands, especially at low water.
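    A momentum sink of this kind is commonly implemented as an actuator-disc retarding force applied in the grid cells occupied by turbines. The sketch below (Python) shows the basic shape of such an update; all coefficients and the explicit time stepping are illustrative assumptions, not the paper's discretization.

      import numpy as np

      # Hypothetical parameters: C_T is a combined thrust coefficient for the
      # n_turb turbines assigned to one coarse-grid cell (water density cancels
      # out of the per-unit-mass update below).
      dt = 2.0                           # time step (s)
      C_T, A_rotor, n_turb = 0.8, 314.0, 4
      dx = dy = dz = 50.0                # coarse-model cell dimensions (m)

      def apply_momentum_sink(u, v, mask):
          """Subtract the actuator-disc retarding acceleration
          0.5 * C_T * n * A / V * |u| * u in cells flagged by `mask`."""
          V_cell = dx * dy * dz
          speed = np.sqrt(u**2 + v**2)
          coef = 0.5 * C_T * n_turb * A_rotor / V_cell   # units 1/m
          u = u - dt * coef * speed * u * mask
          v = v - dt * coef * speed * v * mask
          return u, v

      u = np.full((10, 10), 1.5)
      v = np.zeros((10, 10))
      mask = np.zeros((10, 10)); mask[4:6, 4:6] = 1.0    # cells hosting turbines
      u, v = apply_momentum_sink(u, v, mask)
      print(u[5, 5], u[0, 0])            # flow retarded only inside farm cells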

  9. MAGNA (Materially and Geometrically Nonlinear Analysis). Part II. Preprocessor Manual.

    DTIC Science & Technology

    1982-12-01

    AGRID can accept a virtually arbitrary collection of point coordinates which lie on a surface of interest, and generate a regular grid of mesh points...in the form of a collection of such patches to be translated into an assemblage of biquadratic surface elements (see Subsection 2.1, Figure 2.2...using IMPRESS can be converted for use with the present preprocessor by means of the IMPRINT translator. IMPRINT is a collection of conversion routines

  10. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    DTIC Science & Technology

    2016-01-01

    reconstruction. The array topology samples the scene on a regular grid of phase centers, using a tiling of Boundary Arrays (BAs). Following a simple correction...hardware. Fig. 1 depicts the multistatic array topology. As seen, the topology is a tiled arrangement of Boundary Arrays (BAs). The BA is a well-known...sparse array layout comprised of two linear transmit arrays, and two linear receive arrays [6]. A slightly different tiled arrangement of BAs was used

  11. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  12. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
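    As a minimal sketch of one building block mentioned above - an FFT-based fast Poisson solver on a structured grid - the following Python fragment solves a periodic model problem spectrally; the KFBI machinery for irregular boundaries is not shown.

      import numpy as np

      def fft_poisson(f, L=1.0):
          """Solve laplace(u) = f with periodic BCs on an n x n grid by FFT.
          f must have zero mean (solvability); u is returned with zero mean."""
          n = f.shape[0]
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
          kx, ky = np.meshgrid(k, k, indexing="ij")
          k2 = kx**2 + ky**2
          k2[0, 0] = 1.0                  # avoid division by zero for the mean mode
          u_hat = np.fft.fft2(f) / (-k2)
          u_hat[0, 0] = 0.0               # fix the free constant: zero-mean solution
          return np.real(np.fft.ifft2(u_hat))

      # manufactured solution: u = sin(2*pi*x)*sin(2*pi*y)  =>  f = -8*pi^2*u
      n = 64
      x = np.linspace(0.0, 1.0, n, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")
      u_exact = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
      u = fft_poisson(-8.0 * np.pi**2 * u_exact)
      print("max error:", np.abs(u - u_exact).max())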

  13. mantisGRID: a grid platform for DICOM medical images management in Colombia and Latin America.

    PubMed

    Garcia Ruiz, Manuel; Garcia Chaves, Alvin; Ruiz Ibañez, Carlos; Gutierrez Mazo, Jorge Mario; Ramirez Giraldo, Juan Carlos; Pelaez Echavarria, Alejandro; Valencia Diaz, Edison; Pelaez Restrepo, Gustavo; Montoya Munera, Edwin Nelson; Garcia Loaiza, Bernardo; Gomez Gonzalez, Sebastian

    2011-04-01

    This paper presents the mantisGRID project, an interinstitutional initiative of Colombian medical and academic centers aiming to provide medical grid services for Colombia and Latin America. mantisGRID is a grid platform, based on open source grid infrastructure, that provides the necessary services to access and exchange medical images and associated information following the digital imaging and communications in medicine (DICOM) and health level 7 standards. The paper focuses first on the data abstraction architecture, which is achieved via Open Grid Services Architecture Data Access and Integration (OGSA-DAI) services and supported by the Globus Toolkit. The grid currently uses a 30-Mb bandwidth of the Colombian High Technology Academic Network, RENATA, connected to Internet2. The paper also includes a discussion of the relational database created to handle the DICOM objects, which were represented using Extensible Markup Language Schema documents, as well as other implemented features such as data security, user authentication, and patient confidentiality. Grid performance was tested using the three currently operative nodes, and the results demonstrated comparable query times between the mantisGRID (OGSA-DAI) and distributed MySQL databases, especially for a large number of records.

  14. Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking

    NASA Technical Reports Server (NTRS)

    Zolano, Paul Z.

    2004-01-01

    Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power, resource utilization, and resource accessibility. In order to effectively utilize grids, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid; thus they must be provided with automatic resource brokering for selecting and ranking resources that meet the constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.

  15. Transonic cascade flow calculations using non-periodic C-type grids

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1991-01-01

    A new kind of C-type grid is proposed for turbomachinery flow calculations. This grid is nonperiodic on the wake and results in minimum skewness for cascades with high turning and large camber. Euler and Reynolds averaged Navier-Stokes equations are discretized on this type of grid using a finite volume approach. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. Jameson's explicit Runge-Kutta scheme is adopted for the integration in time, and computational efficiency is achieved through accelerating strategies such as multigriding and residual smoothing. A detailed numerical study was performed for a turbine rotor and for a vane. A grid dependence analysis is presented and the effect of artificial dissipation is also investigated. Comparison of calculations with experiments clearly demonstrates the advantage of the proposed grid.

  16. Smart grid technologies in local electric grids

    NASA Astrophysics Data System (ADS)

    Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.

    2017-08-01

    The research is devoted to creating favorable conditions for the integration of renewable energy sources into electric grids that were designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids influences the conditions of their operation, and a conflict of interests arises. The possibility of optimal joint operation of electric grids and renewable energy sources is considered, where the composite optimality criterion is the balance reliability of electric energy in the local electric system together with minimum losses of electric energy in it. A multilevel automated system for controlling power flows in electric grids by changing the distributed generation of power is developed. Optimization of power flows is performed by local systems for the automatic control of small hydropower stations and, where possible, solar power plants.

  17. Smoothing effect for spatially distributed renewable resources and its impact on power grid robustness.

    PubMed

    Nagata, Motoki; Hirata, Yoshito; Fujiwara, Naoya; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki

    2017-03-01

    In this paper, we show that the spatial correlation of renewable energy outputs greatly influences the robustness of power grids against large fluctuations of the effective power. First, we evaluate the spatial correlation among renewable energy outputs. We find that this correlation depends on the locations of the resources, while its influence on power grids is not well understood. Thus, second, employing the topology of the power grid in eastern Japan, we analyze the robustness of the power grid under spatially correlated renewable energy outputs. The analysis is performed using a realistic differential-algebraic equations model. The results show that spatial correlation of the energy resources strongly degrades the robustness of the power grid. Our results suggest that the spatial correlation of renewable energy outputs should be considered when estimating the stability of power grids.

  18. Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    The overset grid methodology has significantly reduced the time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high-performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.

  19. Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1999-01-01

    The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.
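    A minimal sketch of graph-based agglomeration, the coarsening step described above, might look as follows (Python; the greedy seed-and-absorb rule is a simplification of production heuristics, which also control aggregate shape and size):

      def agglomerate(adjacency):
          """adjacency: dict vertex -> iterable of neighbour vertices.
          Returns dict vertex -> aggregate id (one coarse vertex per aggregate)."""
          agg = {}
          next_id = 0
          for v in adjacency:
              if v in agg:
                  continue
              agg[v] = next_id
              for w in adjacency[v]:       # absorb unassigned neighbours of the seed
                  if w not in agg:
                      agg[w] = next_id
              next_id += 1
          return agg

      # a 4 x 4 structured grid expressed as a general graph
      N = 4
      adj = {(i, j): [(i + di, j + dj)
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < N and 0 <= j + dj < N]
             for i in range(N) for j in range(N)}
      agg = agglomerate(adj)
      print(len(set(agg.values())), "coarse aggregates from", len(adj), "fine vertices")

    Repeating the procedure on the aggregate graph yields the hierarchy of coarse levels on which the FAS cycle operates.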

  20. Decay of grid turbulence in superfluid helium-4: Mesh dependence

    NASA Astrophysics Data System (ADS)

    Yang, J.; Ihas, G. G.

    2018-03-01

    Temporal decay of grid turbulence is experimentally studied in superfluid 4He in a large square channel. The second sound attenuation method is used to measure the turbulent vortex line density (L) with a phase-locked tracking technique to minimize frequency shift effects induced by temperature fluctuations. Two different grids (0.8 mm and 3.0 mm mesh) are pulled to generate turbulence. Different power laws for the decaying behavior are predicted by theory. According to this theory, L should decay as t^(-11/10) while the length scale of the energy-containing eddies grows from the grid mesh size to the size of the channel. At later times, after the energy-containing eddy size becomes comparable to the channel, L should follow t^(-3/2). Our recent experimental data exhibit evidence for t^(-11/10) at early times but t^(-2) instead of t^(-3/2) at later times. Moreover, a consistent bump/plateau feature is prominent between the two decay regimes for the smaller (0.8 mm) grid mesh holes but absent with a grid mesh hole of 3.0 mm. This implies that in the large channel different types of turbulence are generated, depending on the mesh hole size (mesh Reynolds number) compared to the channel Reynolds number.

  1. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.

  2. Navier-Stokes solution of transonic cascade flows using nonperiodic C-type grids

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1992-01-01

    A new kind of C-type grid is proposed; this grid is non-periodic on the wake and allows minimum skewness for cascades with high turning and large camber. Reynolds-averaged Navier-Stokes equations are solved on this type of grid using a finite volume discretization and a full multigrid method with Runge-Kutta stepping as the driving scheme. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. A detailed numerical study is presented for a highly loaded transonic blade. A grid independence analysis is presented in terms of pressure distribution, exit flow angles, and loss coefficient. Comparison with experiments clearly demonstrates the capability of the proposed procedure.

  3. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.

  4. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates of spatial slip distributions and their uncertainties. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could deliver solutions in less than one hour after the origin time.
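    A minimal sketch of the regularization strategy described above - Tikhonov damping with the parameter chosen by grid search under Morozov's discrepancy principle - might look as follows (Python with a synthetic linear system, not the authors' W-phase code):

      import numpy as np

      def tikhonov_discrepancy(G, d, noise_norm, alphas):
          """Solve min ||G m - d||^2 + alpha^2 ||m||^2 over a grid of alphas and
          return the first model (scanning from strong to weak damping) whose
          residual reaches the noise level (Morozov's discrepancy principle)."""
          GtG, Gtd = G.T @ G, G.T @ d
          I = np.eye(G.shape[1])
          best = None
          for alpha in sorted(alphas, reverse=True):
              m = np.linalg.solve(GtG + alpha**2 * I, Gtd)
              best = (alpha, m)
              if np.linalg.norm(G @ m - d) <= noise_norm:
                  break                    # residual has reached the noise level
          return best

      rng = np.random.default_rng(0)
      G = rng.normal(size=(200, 50))       # stand-in for the W-phase design matrix
      m_true = np.zeros(50); m_true[10:20] = 1.0
      noise = 0.5 * rng.normal(size=200)
      d = G @ m_true + noise
      alpha, m = tikhonov_discrepancy(G, d, np.linalg.norm(noise),
                                      alphas=np.logspace(-3, 2, 30))
      print("chosen alpha:", alpha)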

  5. An interactive grid generation procedure for axial and radial flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Beach, Timothy A.

    1989-01-01

    A combination algebraic/elliptic technique is presented for the generation of three-dimensional grids about turbomachinery blade rows for both axial and radial flow machinery. The technique is built around the use of an advanced engineering workstation to construct several two-dimensional grids interactively on predetermined blade-to-blade surfaces. A three-dimensional grid is generated by interpolating these surface grids onto an axisymmetric grid. On each blade-to-blade surface, a grid is created using algebraic techniques near the blade, to control orthogonality within the boundary layer region, and elliptic techniques in the mid-passage, to achieve smoothness. The interactive definition of Bezier curves as internal boundaries is the key to simple construction. This procedure lends itself well to zonal grid construction, an important example being the tip clearance region. Calculations done to date include a space shuttle main engine turbopump blade, a radial inflow turbine blade, and the first stator of the United Technologies Research Center large-scale rotating rig. A finite Navier-Stokes solver was used in each case.
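    The algebraic part of such a scheme is typically a transfinite interpolation (TFI) of the four boundary curves of a surface patch. A minimal 2D sketch follows (Python; the simple analytic boundaries stand in for blade-to-blade geometry, and elliptic smoothing is not shown):

      import numpy as np

      def tfi(bottom, top, left, right):
          """bottom/top: (n, 2) boundary curves; left/right: (m, 2) boundary
          curves sharing corner points. Returns an (n, m, 2) interior grid."""
          n, m = bottom.shape[0], left.shape[0]
          s = np.linspace(0.0, 1.0, n)[:, None, None]   # xi
          t = np.linspace(0.0, 1.0, m)[None, :, None]   # eta
          B, T = bottom[:, None, :], top[:, None, :]
          Lc, R = left[None, :, :], right[None, :, :]
          corners = ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
                     + (1 - s) * t * top[0] + s * t * top[-1])
          return (1 - t) * B + t * T + (1 - s) * Lc + s * R - corners

      n, m = 21, 11
      xi = np.linspace(0, 1, n)
      bottom = np.stack([xi, 0.1 * np.sin(np.pi * xi)], axis=1)  # curved lower wall
      top = np.stack([xi, np.ones(n)], axis=1)
      eta = np.linspace(0, 1, m)
      left = np.stack([np.zeros(m), eta], axis=1)
      right = np.stack([np.ones(m), eta], axis=1)
      grid = tfi(bottom, top, left, right)
      print(grid.shape)   # (21, 11, 2) mesh of physical (x, y) points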

  6. Occurrence and countermeasures of urban power grid accident

    NASA Astrophysics Data System (ADS)

    Wei, Wang; Tao, Zhang

    2018-03-01

    With the advance of technology, the development of network communication, and the extensive use of power grids, people can learn about power grid accidents around the world through the network in a timely manner. Power grid accidents occur frequently: large-scale power system blackouts and casualty accidents caused by electric shock are fairly commonplace. Such accidents seriously endanger the property and personal safety of the country and its people, and the development of society and the economy is severely affected by them. Through research on several typical cases of power grid accidents at home and abroad in recent years, and taking these accident cases as the research object, this paper analyzes the three major factors that currently cause power grid accidents. Combining the various factors and impacts involved, the paper then puts forward corresponding solutions and suggestions to prevent the occurrence of accidents and to lower their impact.

  7. The impact of the topology on cascading failures in a power grid model

    NASA Astrophysics Data System (ADS)

    Koç, Yakup; Warnier, Martijn; Mieghem, Piet Van; Kooij, Robert E.; Brazier, Frances M. T.

    2014-05-01

    Cascading failures are one of the main reasons for large-scale blackouts in power transmission grids. A secure electrical power supply requires, together with careful operation, a robust design of the electrical power grid topology. Currently, the impact of the topology on grid robustness is mainly assessed by purely topological approaches, which fail to capture the essence of electric power flow. This paper proposes a metric, the effective graph resistance, to relate the topology of a power grid to its robustness against cascading failures triggered by deliberate attacks, while also taking fundamental characteristics of the electric power grid into account, such as power flow allocation according to Kirchhoff's laws. Experimental verification on synthetic power systems shows that the proposed metric reflects grid robustness accurately. The proposed metric is used to optimize a grid topology for a higher level of robustness. To demonstrate its applicability, the metric is applied to the IEEE 118-bus power system to improve its robustness against cascading failures.
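    For a connected graph, the effective graph resistance can be computed from the nonzero eigenvalues of the Laplacian as R = N * sum(1/λ_i). A minimal sketch follows (Python; toy 4-node topologies, not the IEEE 118-bus system):

      import numpy as np

      def effective_graph_resistance(adj):
          """Sum of pairwise effective resistances of a connected graph:
          R = N * sum over nonzero Laplacian eigenvalues of 1/lambda."""
          A = np.asarray(adj, dtype=float)
          L = np.diag(A.sum(axis=1)) - A
          lam = np.linalg.eigvalsh(L)     # ascending; first eigenvalue is 0
          return A.shape[0] * np.sum(1.0 / lam[1:])

      # toy 4-bus ring vs. ring plus one chord: the chord lowers R (more robust)
      ring = np.array([[0, 1, 0, 1],
                       [1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [1, 0, 1, 0]])
      chord = ring.copy(); chord[0, 2] = chord[2, 0] = 1
      print(effective_graph_resistance(ring), effective_graph_resistance(chord))

    A lower value indicates that current has more parallel paths between node pairs, which is the intuition behind using the metric as a robustness proxy.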

  8. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence

    NASA Astrophysics Data System (ADS)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem

    2017-04-01

    Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid compared to regular grids with the same blockage ratio and the same mass inflow rate [1]. This has excited interest in the turbulence community over possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] in which each generation of length scales of the turbulence grid elements is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart - typically the blockage ratio could be reduced from, say, 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work on the 3DSGT has begun experimentally, using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, and at the Technical University of Catalonia (UPC) in Spain, and numerically, using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method to compare the experimental results with, and we are studying different types of codes, such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions: for example, the number of scale generations, the spacing between frames, the size ratio of grid elements, and the inflow conditions. We will report upon the first set of findings from the 3DSGT by the time of the conference. Acknowledgements: This work has been supported partly by the EuHIT grant, 'Turbulence Generated by Sparse 3D Multi-Scale Grid (M3SG)', 2017. References: [1] S. Laizet, J. C. Vassilicos. DNS of Fractal-Generated Turbulence. Flow Turbulence Combust 87:673-705 (2011). [2] N. A. Malik. Sparse 3D Multi-Scale Grid Turbulence Generator. USPTO Application no. 14/710,531, Patent Pending (2015). [3] J. Tellez, M. Gomez, B. Russo, J. M. Redondo. Surface Flow Image Velocimetry (SFIV) for hydraulics applications. 18th Int. Symposium on the Application of Laser Imaging Techniques in Fluid Mechanics, Lisbon, Portugal (2016).

  9. Application scenario analysis of Power Grid Marketing Large Data

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Yuan; Zhang, Qianyu

    2018-01-01

    In recent years, large data has become an important strategic asset in the commercial economy, and its efficient management and application have become a focus of government, enterprise, and academia. Power grid marketing data covers real data on electricity and other energy consumption, consumption costs, and so on, which is closely related to each customer and to overall economic operation. Fully tapping the inherent value of marketing data is of great significance for a power grid company seeking to respond rapidly and efficiently to market demand and to improve its service level. The development of large data technology provides a new technical scheme for the development of the marketing business under the new situation. Based on a study of the current situation of the marketing business, the marketing information system, and marketing data, this paper puts forward application directions for marketing data and designs typical scenarios for internal and external applications.

  10. GRAMS: A Grid of RSG and AGB Models

    NASA Astrophysics Data System (ADS)

    Srinivasan, S.; Sargent, B. A.; Meixner, M.

    2011-09-01

    We present a grid of oxygen- and carbon-rich circumstellar dust radiative transfer models for asymptotic giant branch (AGB) and red supergiant (RSG) stars. The grid samples a large region of the relevant parameter space, and it allows for a quick calculation of bolometric fluxes and dust mass-loss rates from multi-wavelength photometry. This method of fitting observed spectral energy distributions (SEDs) is preferred over detailed radiative transfer calculations, especially for large data sets such as the SAGE (Surveying the Agents of a Galaxy's Evolution) survey of the Magellanic Clouds. The mass-loss rates calculated for SAGE data will allow us to quantify the dust returned to the interstellar medium (ISM) by the entire AGB population. The total injection rate provides an important constraint for models of galactic chemical evolution. Here, we discuss our carbon star models and compare the results to SAGE observations in the Large Magellanic Cloud (LMC).

  11. Utilizing data grid architecture for the backup and recovery of clinical image data.

    PubMed

    Liu, Brent J; Zhou, M Z; Documet, J

    2005-01-01

    Grid Computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models. However, there has been limited investigation into the impact of this emerging technology in medical imaging and informatics. In particular, PACS technology, an established clinical image repository system, while having matured significantly during the past ten years, still remains weak in the area of clinical image data backup. Current solutions are expensive or time consuming and the technology is far from foolproof. Many large-scale PACS archive systems still encounter downtime for hours or days, which has the critical effect of crippling daily clinical operations. In this paper, a review of current backup solutions will be presented along with a brief introduction to grid technology. Finally, research and development utilizing the grid architecture for the recovery of clinical image data, in particular, PACS image data, will be presented. The focus of this paper is centered on applying a grid computing architecture to a DICOM environment since DICOM has become the standard for clinical image data and PACS utilizes this standard. A federation of PACS can be created allowing a failed PACS archive to recover its image data from others in the federation in a seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid is composed of one research laboratory and two clinical sites. The Globus 3.0 Toolkit (Co-developed by the Argonne National Laboratory and Information Sciences Institute, USC) for developing the core and user level middleware is utilized to achieve grid connectivity. The successful implementation and evaluation of utilizing data grid architecture for clinical PACS data backup and recovery will provide an understanding of the methodology for using Data Grid in clinical image data backup for PACS, as well as establishment of benchmarks for performance from future grid technology improvements. In addition, the testbed can serve as a road map for expanded research into large enterprise and federation level data grids to guarantee CA (Continuous Availability, 99.999% up time) in a variety of medical data archiving, retrieval, and distribution scenarios.

  12. Conceptual Design of the Everglades Depth Estimation Network (EDEN) Grid

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    INTRODUCTION The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). Ground elevation data for the greater Everglades, and the digital ground elevation models derived from them, form the foundation for all EDEN water depth and associated ecologic/hydrologic modeling (Jones, 2004; Jones and Price, 2007). To use EDEN water depth and duration information most effectively, it is important to be able to view and manipulate information on elevation data quality and other land cover and habitat characteristics across the Everglades region. These requirements led to the development of the geographic data layer described in this techniques and methods report. Drawing on extensive experience in GIS data development, distribution, and analysis, a great deal of forethought went into the design of the geographic data layer used to index elevation and other surface characteristics for the greater Everglades region. For simplicity of design and use, the EDEN area was broken into a large number of equal-sized rectangles ('cells') that in total are referred to here as the 'grid'. Some characteristics of this grid, such as the size of its cells, its origin, the area of Florida it is designed to represent, and the individual grid cell identifiers, could not be changed once the grid database was developed. Therefore, these characteristics were selected to make the grid as robust as possible and to ensure its long-term utility. It is desirable to include all pertinent information known about elevation and elevation data collection as grid attributes, and it is very important to allow for efficient grid post-processing, sub-setting, analysis, and distribution. This document details the conceptual design of the EDEN grid spatial parameters and the cell attribute-table content.
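    The essential bookkeeping behind such a fixed grid is a permanent mapping from map coordinates to cell identifiers. A minimal sketch follows (Python); the origin, cell size, and column count here are hypothetical placeholders, not EDEN's actual parameters.

      CELL = 400.0                       # cell size in map units (e.g., metres)
      X0, Y0 = 500_000.0, 2_780_000.0    # grid origin (lower-left corner)
      NCOLS = 730                        # fixed number of columns

      def cell_id(x, y):
          """Permanent integer ID of the cell containing map point (x, y)."""
          col = int((x - X0) // CELL)
          row = int((y - Y0) // CELL)
          if col < 0 or row < 0 or col >= NCOLS:
              raise ValueError("point lies outside the grid extent")
          return row * NCOLS + col       # row-major numbering never changes

      def cell_center(cid):
          row, col = divmod(cid, NCOLS)
          return X0 + (col + 0.5) * CELL, Y0 + (row + 0.5) * CELL

      cid = cell_id(501_234.0, 2_780_950.0)
      print(cid, cell_center(cid))

    Because the origin, cell size, and numbering are frozen, attributes recorded against a cell ID remain valid across all later post-processing and sub-setting, which is the design constraint the report emphasizes.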

  13. The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.

    PubMed

    Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J

    2017-11-01

    Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered the gold standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. We participated in the de-identification regular track of the shared task challenge organized by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID) to gain experience developing our own automatic de-identification tool. We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques, such as disambiguation rules and term ambiguity measurement, and used a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With those encouraging results, we gained the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute.
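    A minimal sketch of the rule-based and dictionary-matching layers mentioned above follows (Python; the patterns and lexicon are illustrative only, not the authors' rules, and a real system would add machine-learned models and disambiguation):

      import re

      PATTERNS = {
          "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
          "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
          "MRN":   re.compile(r"\bMRN[:#]?\s*\d{6,10}\b"),
      }
      NAME_DICT = {"smith", "garcia"}    # stand-in clinician/patient lexicon

      def deidentify(text):
          # rule-based pass: replace pattern matches with category placeholders
          for label, pat in PATTERNS.items():
              text = pat.sub(f"[{label}]", text)
          # dictionary-matching pass: replace known names token by token
          tokens = ["[NAME]" if t.lower().strip(",.") in NAME_DICT else t
                    for t in text.split()]
          return " ".join(tokens)

      note = "Pt Smith seen 3/14/2016, MRN: 00123456, call 205-555-0147."
      print(deidentify(note))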

  14. Earth Observation taken by the Expedition 33 crew

    NASA Image and Video Library

    2012-11-15

    ISS033-E-021663 (15 Nov. 2012) --- A nighttime view of the Dallas, Texas metropolitan area is featured in this image photographed by an Expedition 33 crew member on the International Space Station. Most metropolitan areas of the western United States are configured over large areas with a regular street grid pattern that is highly recognizable from space, particularly at night. The northern Dallas metropolitan area exhibits this pattern. The north-south and east-west grid pattern of major surface streets is highlighted by orange street lighting, lending a fishnet-like appearance to the urban area. Within this framework, smaller residential and commercial buildings and streets give green-gray stipple patterns to some blocks. The airplane terminals of Dallas-Fort Worth International Airport are lit with golden-yellow lights, and surrounded by relatively dark runways and fields at left. Likewise, the runways of Dallas Love Field at lower center are recognizable by their darkness. Other dark areas within the metro region are open space, parks, and water bodies. Dallas is part of the larger Dallas-Fort Worth-Arlington Metropolitan Statistical Area (population 6,526,548 in 2011) as recognized by the U.S. Census Bureau; to most Texans, it is simply "DFW" or "the Metroplex." Several of the larger urban areas within DFW are visible in this image; Dallas, Garland and Richardson all have populations above 100,000 according to the North Central Texas Council of Governments. While historically the economy of Dallas was based in the oil and cotton industries, today commerce, technology, healthcare, and transportation are the major economic drivers.

  15. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system, which makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  16. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
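
    The modular structure described above (forward modeling, data functionals, sensitivities, regularization) can be sketched generically. The toy below uses a linear forward operator in place of the 3D MT physics and a regularized Gauss-Newton update; all names, sizes, and the regularization weight are illustrative assumptions, not the AP3DMT implementation.

```python
# Minimal sketch of a modular regularized inversion loop on a linear toy problem.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 20))            # toy sensitivity (Jacobian) matrix
m_true = np.zeros(20); m_true[5:8] = 1.0
d_obs = G @ m_true + 0.01 * rng.normal(size=40)

def forward(m):                           # forward-modeling module
    return G @ m

def sensitivity(m):                       # sensitivity module (constant here: linear problem)
    return G

lam = 0.1                                 # Tikhonov regularization weight (assumed)
m = np.zeros(20)
for _ in range(10):                       # regularized Gauss-Newton iterations
    J = sensitivity(m)
    r = d_obs - forward(m)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(20), J.T @ r - lam * m)
    m = m + dm

print("model misfit:", np.linalg.norm(m - m_true))
```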

  17. NPP-VIIRS DNB-based reallocating subpopulations to mercury in Urumqi city cluster, central Asia

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Feng, X. B.; Dai, W.; Li, P.; Ju, C. Y.; Bao, Z. D.; Han, Y. L.

    2017-02-01

    Accurate and up-to-date assignment of population-related environmental matters onto fine grid cells in oasis cities of arid areas remains challenging. We present an approach based on the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) to reallocate population onto a regular, finer surface. The population potentially exposed to mercury was reallocated onto a 0.1x0.1 km reference grid in the Urumqi city cluster of China’s Xinjiang, central Asia. Monte Carlo modelling indicated that the range of 0.5 to 2.4 million people was reliable. The study highlights that the NPP-VIIRS DNB-based multi-layered, dasymetric, spatial method enhances our ability to remotely estimate the distribution and size of a target population at the street-level scale and has the potential to transform control strategies for epidemiology, public policy and other socioeconomic fields.
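
    The core of a dasymetric reallocation of this kind is simple to sketch: a zone's census population is spread over its grid cells in proportion to nighttime-light radiance. The radiance values and zone population below are illustrative assumptions, not the study's data.

```python
# Minimal sketch of dasymetric population reallocation weighted by DNB radiance.
import numpy as np

radiance = np.array([[0.0, 2.0, 4.0],
                     [1.0, 8.0, 5.0]])        # nightlight radiance per 0.1 x 0.1 km cell
zone_population = 10_000                       # census population of the zone (assumed)

weights = radiance / radiance.sum()            # proportional-allocation weights
population_grid = zone_population * weights    # people assigned to each grid cell

print(population_grid.round(1))
print("check total:", population_grid.sum())   # mass is preserved
```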

  18. A deep learning-based reconstruction of cosmic ray-induced air showers

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Glombitza, J.; Walz, D.

    2018-01-01

    We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detectors' responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with that of an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
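
    The image-like encoding step described above can be sketched directly: per-detector features (e.g., first-particle arrival time and a few time-trace summaries) are placed on the regular detector grid as channels of an image tensor. The grid size, feature choices, and readout values are illustrative assumptions.

```python
# Minimal sketch: map sparse detector features onto an image-like grid for a CNN.
import numpy as np

n_rows, n_cols, n_features = 9, 9, 3
image = np.zeros((n_rows, n_cols, n_features))

# hypothetical sparse detector readout: (row, col) -> feature vector
readout = {
    (4, 4): np.array([0.0, 12.5, 3.1]),   # arrival time, total signal, trace width
    (4, 5): np.array([25.0, 8.2, 2.7]),
    (5, 4): np.array([40.0, 5.9, 2.2]),
}

for (i, j), features in readout.items():
    image[i, j, :] = features

# `image` can now be fed to a standard convolutional network expecting
# (height, width, channels) input.
print(image.shape, image[4, 4])
```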

  19. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
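
    A generic sparse on-grid recovery can be sketched with ISTA (iterative soft-thresholding) for the ℓ1-regularized least-squares problem. This is a standard compressed-sensing solver standing in for the paper's recovery algorithms, not its ℓ1-squared nonnegative method; the matrix sizes and the regularization weight are illustrative assumptions.

```python
# Minimal ISTA sketch for  min_x  0.5*||Ax - y||^2 + lam*||x||_1  on a discretized grid.
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 200                      # measurements vs. azimuth-delay-Doppler grid points
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[20, 77, 150]] = [1.0, -0.8, 0.5]   # few targets
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):
    g = A.T @ (A @ x - y)                  # gradient of the smooth data-fit term
    x = x - step * g
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
```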

  20. A dynamic analysis of rotary combustion engine seals

    NASA Technical Reports Server (NTRS)

    Knoll, J.; Vilmann, C. R.; Schock, H. J.; Stumpf, R. P.

    1984-01-01

    Real-time work-cell pressures are incorporated into a dynamic analysis of the gas sealing grid in Rotary Combustion Engines. The analysis, which utilizes only first-principles concepts, accounts for apex seal separation from the trochoidal bore, apex seal shifting between the sides of its restraining channel, and apex seal rotation within the restraining channel. The results predict that apex seals do separate from the trochoidal bore and shift between the sides of their channels. The results also show that these two motions are regularly initiated by a seal rotation. The predicted motion of the apex seals compares favorably with experimental results. Frictional losses associated with the sealing grid are also calculated and compare well with measurements obtained in a similar engine. A comparison of frictional losses when using steel and carbon apex seals has also been made, as well as of friction losses for single- and dual-side sealing.

  1. Lambda Data Grid: Communications Architecture in Support of Grid Computing

    DTIC Science & Technology

    2006-12-21

    number of paradigm shifts in the 20th century, including the growth of large geographically dispersed teams and the use of simulations and computational...get results. The work in this thesis automates the orchestration of networks with other resources, better utilizing all resources in a time efficient...domains, over transatlantic links in around a minute. The main goal of this thesis is to build a new grid-computing paradigm that fully harnesses the

  2. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article proposes an original strategy to solve hydrodynamic flows. The motivations for this strategy are first developed: it aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The proposed method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.

  3. Grid adaptation using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  4. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.

  6. An Analysis of Performance Enhancement Techniques for Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.

  7. Dynamics of flows, fluctuations, and global instability under electrode biasing in a linear plasma device

    NASA Astrophysics Data System (ADS)

    Desjardins, T. R.; Gilmore, M.

    2016-05-01

    Grid biasing is utilized in a large-scale helicon plasma to modify an existing instability, which is shown both experimentally and with a linear stability analysis to be a hybrid drift-Kelvin-Helmholtz mode. At low magnetic field strengths, coherent fluctuations are present, while at high magnetic field strengths, the plasma is broad-band turbulent. Grid biasing is used to drive the once-coherent fluctuations to a broad-band turbulent state, as well as to suppress them. There is a corresponding change in the flow shear. When a high positive bias (10 Te) is applied to the grid electrode, a large-scale mode (ñ/n ≈ 50%) is excited. This mode has been identified as the potential relaxation instability.

  8. An efficient predictor-corrector-based dynamic mesh method for multi-block structured grid with extremely large deformation and its applications

    NASA Astrophysics Data System (ADS)

    Guo, Tongqing; Chen, Hao; Lu, Zhiliang

    2018-05-01

    Aiming at extremely large deformation, a novel predictor-corrector-based dynamic mesh method for multi-block structured grids is proposed. In this work, the dynamic mesh generation is completed in three steps. First, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, the Lagrange interpolation method is adopted to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deforming technique is used to correct the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with the traditional methods, the results demonstrate that the present method shows stronger deformation ability and higher dynamic mesh quality.
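
    The predictor step lends itself to a compact sketch: node coordinates of pre-generated grids at a few typical positions are Lagrange-interpolated to an arbitrary dynamic position, after which a corrector (elastic smoothing) would follow. The shapes, sample positions, and coordinates below are illustrative assumptions.

```python
# Minimal sketch of the Lagrange-interpolation mesh predictor.
import numpy as np

alphas = np.array([0.0, 0.5, 1.0])              # typical dynamic positions (assumed)
# node coordinates (n_nodes, 2) of high-quality grids generated at each position
grids = np.array([
    [[0.0, 0.0], [1.0, 0.0]],                   # grid at alpha = 0.0
    [[0.0, 0.1], [1.0, 0.2]],                   # grid at alpha = 0.5
    [[0.0, 0.4], [1.0, 0.8]],                   # grid at alpha = 1.0
])

def lagrange_predict(alpha):
    """Predict node coordinates at `alpha` by Lagrange interpolation."""
    coords = np.zeros_like(grids[0])
    for k, ak in enumerate(alphas):
        # Lagrange basis weight L_k(alpha) = prod_{j != k} (alpha - a_j)/(a_k - a_j)
        w = np.prod([(alpha - aj) / (ak - aj) for j, aj in enumerate(alphas) if j != k])
        coords += w * grids[k]
    return coords

print(lagrange_predict(0.25))   # predicted grid between the first two snapshots
```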

  9. Testing & Validating: 3D Seismic Travel Time Tomography (Detailed Shallow Subsurface Imaging)

    NASA Astrophysics Data System (ADS)

    Marti, David; Marzan, Ignacio; Alvarez-Marron, Joaquina; Carbonell, Ramon

    2016-04-01

    A detailed, fully 3-dimensional P-wave seismic velocity model was constrained by a high-resolution seismic tomography experiment. A regular and dense grid of shots and receivers was used to image a 500x500x200 m volume of the shallow subsurface. Ten GEODE recorders, providing a 240-channel recording system, and a 250 kg weight drop were used for the acquisition. The recording geometry consisted of a 10x20 m geophone grid spacing and a 20x20 m staggered source spacing, for a total of 1200 receivers and 676 source points. The study area is located within the Iberian Meseta, in Villar de Cañas (Cuenca, Spain). The lithological/geological target was a Neogene sedimentary sequence formed, from bottom to top, by a transition from gypsum to siltstones. The main objectives were to resolve the underground structure (contacts/discontinuities) and to constrain the 3D geometry of the lithology (possible cavities, faults/fractures). These targets were achieved by mapping the 3D distribution of the physical properties (P-wave velocity). The regularly spaced, dense acquisition grid forced the survey to be acquired in different stages and under a variety of weather conditions; therefore, careful quality control was required. More than half a million first arrivals were inverted to provide a 3D Vp velocity model that reached depths of 120 m in the areas with the highest ray coverage. An extended borehole campaign, which included borehole geophysical measurements in some wells, provided unique tight constraints on the lithology and a validation scheme for the tomographic results. The final image reveals a laterally variable structure consisting of four different lithological units. In this methodological validation test, travel-time tomography demonstrates a high capacity for imaging in detail the lithological contrasts of complex structures located at very shallow depths.

  10. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.
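
    The general idea of a resolution-dependent coefficient can be sketched as follows: each element's hyperviscosity coefficient is tied to a local length scale (here standing in for an eigenvalue of the element metric), so that uniform regions recover a constant coefficient. The reference coefficient, length scale, and scaling exponent below are assumptions for illustration only and are not the CAM-SE values.

```python
# Minimal sketch of a resolution-dependent (tensor-style) hyperviscosity coefficient.
import numpy as np

nu_ref = 1.0e15        # coefficient at the reference resolution (assumed)
dx_ref = 100.0e3       # reference element length scale in meters (assumed)
p = 3.2                # assumed resolution-scaling exponent

def tensor_coefficient(local_dx):
    """Per-direction coefficient from a local metric length scale."""
    return nu_ref * (local_dx / dx_ref) ** p

# eigenvalue-derived length scales for one distorted element (meters)
dx_eigen = np.array([50.0e3, 80.0e3])
print(tensor_coefficient(dx_eigen))   # smaller elements get weaker damping
```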

  11. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.

  12. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the farfield noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  13. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids was generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.

  14. Network gateway security method for enterprise Grid: a literature review

    NASA Astrophysics Data System (ADS)

    Sujarwo, A.; Tan, J.

    2017-03-01

    The computational Grid has brought big computational resources closer to scientists. It enables people to run large computational jobs anytime and anywhere, without physical borders. However, the massive and dispersed set of computer participants, acting either as users or as computational providers, raises security problems. The challenge is how the security system, especially the part that filters data at the gateway, can work flexibly depending on the registered Grid participants. This paper surveys what has been done to approach this challenge, in order to find a better, new method for the enterprise Grid. The finding of this paper is a dynamically controlled enterprise firewall that secures the Grid resources from unwanted connections, with a new firewall-controlling method and components.

  15. Contribution of concentrator photovoltaic installations to grid stability and power quality

    NASA Astrophysics Data System (ADS)

    del Toro García, Xavier; Roncero-Sánchez, Pedro; Torres, Alfonso Parreño; Vázquez, Javier

    2012-10-01

    Large-scale integration of Photovoltaic (PV) generation systems, including Concentrator Photovoltaic (CPV) technologies, will require the contribution and support of these technologies to the management and stability of the grid. New regulations and grid codes for PV installations in countries such as Spain have recently included dynamic voltage control support during faults. The PV installation must stay connected to the grid during voltage dips and inject reactive power in order to enhance the stability of the system. The existing PV inverter technologies based on the Voltage-Source Converter (VSC) are in general well suited to provide advanced grid-support characteristics. Nevertheless, new advanced control schemes and monitoring techniques will be necessary to meet the most demanding requirements.

  16. [Analysis on difference of richness of traditional Chinese medicine resources in Chongqing based on grid technology].

    PubMed

    Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    After the national and local censuses of traditional Chinese medicine resources are completed, a large amount of data on Chinese medicine resources and their distribution will be summarized. Species richness between regions is a valid indicator for objectively reflecting inter-regional traditional Chinese medicine resources. Because county areas differ greatly in size, assessing the richness of traditional Chinese medicine resources with the county as the statistical unit leads to deviations in regional abundance statistics. Regular-grid statistical methods can reduce the differences in apparent richness caused by statistical units of different sizes. Taking Chongqing as an example and based on the existing survey data, the differences in richness of traditional Chinese medicine resources at different grid scales were compared and analyzed. The results showed that a 30 km grid could be selected, at which the richness of Chinese medicine resources in Chongqing better reflects the objective regional situation of resource richness.
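
    The grid-based richness statistic itself is straightforward to sketch: occurrence points are binned into regular cells of the chosen size and the number of distinct species per cell is counted. The coordinates and sample records below are illustrative; only the 30 km cell size follows the setup described above.

```python
# Minimal sketch of species richness per regular grid cell.
from collections import defaultdict

cell = 30_000.0                                   # 30 km grid cells (meters)
# (x, y, species) occurrence records in a projected coordinate system (assumed)
records = [(12_000, 5_000, "A"), (14_500, 8_000, "B"),
           (14_800, 8_100, "B"), (61_000, 2_000, "C")]

species_per_cell = defaultdict(set)
for x, y, sp in records:
    key = (int(x // cell), int(y // cell))        # integer cell indices
    species_per_cell[key].add(sp)

richness = {k: len(v) for k, v in species_per_cell.items()}
print(richness)    # {(0, 0): 2, (2, 0): 1} -- distinct species per cell
```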

  17. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow shields users from the low-level implementation details of the Grid and hence enables them to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflows is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  18. Wide-angle display-type retarding field analyzer with high energy and angular resolutions

    NASA Astrophysics Data System (ADS)

    Muro, Takayuki; Ohkochi, Takuo; Kato, Yukako; Izumi, Yudai; Fukami, Shun; Fujiwara, Hidenori; Matsushita, Tomohiro

    2017-12-01

    Deployments of spherical grids to obtain high energy and angular resolutions for retarding field analyzers (RFAs) having acceptance angles as large as or larger than ±45° were explored under the condition of using commercially available microchannel plates with effective diameters of approximately 100 mm. As a result of electron trajectory simulations, a deployment of three spherical grids with significantly different grid separations instead of conventional equidistant separations showed an energy resolving power (E/ΔE) of 3200 and an angular resolution of 0.6°. The mesh number of the wire mesh retarding grid used for the simulation was 250. An RFA constructed with the simulated design experimentally showed an E/ΔE of 1100 and an angular resolution of 1°. Using the RFA and synchrotron radiation of 900 eV, photoelectron diffraction (PED) measurements were performed for single-crystal graphite. A clear C 1s PED pattern was observed even when the differential energy of the RFA was set at 0.5 eV. Further improvement of the energy resolution was theoretically examined under the assumption of utilizing a retarding grid fabricated by making a large number of radially directed cylindrical holes through a partial spherical shell instead of using a wire mesh retarding grid. An E/ΔE of 14 500 was predicted for a hole design with a diameter of 60 μm and a depth of 100 μm. A retarding grid with this hole design and a holed area corresponding to an acceptance angle of ±7° was fabricated. An RFA constructed with this retarding grid experimentally showed an E/ΔE of 1800. Possible reasons for the experimental E/ΔE lower than the theoretical values are discussed.

  19. The Mass-loss Return from Evolved Stars to the Large Magellanic Cloud. IV. Construction and Validation of a Grid of Models for Oxygen-rich AGB Stars, Red Supergiants, and Extreme AGB Stars

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin A.; Srinivasan, S.; Meixner, M.

    2011-02-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the "Grid of Red supergiant and Asymptotic giant branch star ModelS" (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10³ to 10⁶ L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10⁻⁴ to 26. From an initial grid of ~1200 2Dust models, we create a larger grid of ~69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.

  20. Achieving a 100% Renewable Grid: Operating Electric Power Systems with Extremely High Levels of Variable Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin; Johnson, Brian; Zhang, Yingchen

    What does it mean to achieve a 100% renewable grid? Several countries already meet or come close to achieving this goal. Iceland, for example, supplies 100% of its electricity needs with either geothermal or hydropower. Other countries that have electric grids with high fractions of renewables based on hydropower include Norway (97%), Costa Rica (93%), Brazil (76%), and Canada (62%). Hydropower plants have been used for decades to create a relatively inexpensive, renewable form of energy, but these systems are limited by natural rainfall and geographic topology. Around the world, most good sites for large hydropower resources have already been developed. So how do other areas achieve 100% renewable grids? Variable renewable energy (VRE), such as wind and solar photovoltaic (PV) systems, will be a major contributor, and with the reduction in costs for these technologies during the last five years, large-scale deployments are happening around the world.

  1. Solar-blind deep-UV band-pass filter (250 - 350 nm) consisting of a metal nano-grid fabricated by nanoimprint lithography.

    PubMed

    Li, Wen-Di; Chou, Stephen Y

    2010-01-18

    We designed, fabricated and demonstrated a solar-blind deep-UV band-pass filter that has a measured optical performance of a 27% transmission peak at 290 nm, a pass-band width of 100 nm (from 250 to 350 nm), and a 20 dB rejection ratio between deep-UV and visible wavelengths. The filter consists of an aluminum nano-grid, which was made by coating 20 nm of Al on a SiO2 square grid with 190 nm pitch, 30 nm linewidth and 250 nm depth. The measured performance agrees with a rigorous coupled-wave analysis. The wavelength of the peak transmission and the pass-bandwidth can be tuned by adjusting the metal nano-grid dimensions. The filter was fabricated by nanoimprint lithography and hence is large-area and low-cost. Combined with Si photodetectors, the filter offers simple yet effective and low-cost solar-blind deep-UV detection at either a single-device or large-area complex integrated imaging array level.

  2. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag correlation based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computation parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but with a large increase in computational cost.
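
    Bubble detection by connected-component labeling can be sketched in a few lines: cells whose void fraction exceeds a threshold are flagged and grouped into connected regions, each region counting as one bubble. The void-fraction field and the threshold below are illustrative assumptions; scipy.ndimage.label performs the labeling.

```python
# Minimal sketch of bubble detection via connected-component labeling.
import numpy as np
from scipy import ndimage

void_fraction = np.array([
    [0.4, 0.5, 0.9, 0.9],
    [0.4, 0.9, 0.9, 0.5],
    [0.5, 0.4, 0.4, 0.9],
])
bubble_mask = void_fraction > 0.8          # assumed void-fraction threshold

labels, n_bubbles = ndimage.label(bubble_mask)          # group connected cells
sizes = ndimage.sum(bubble_mask, labels, index=range(1, n_bubbles + 1))
print(n_bubbles, sizes)                     # bubble count and cells per bubble
```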

  3. Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains

    NASA Astrophysics Data System (ADS)

    Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.

    2017-12-01

    In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented on a low-resolution grid. This issue can be overcome by using high-resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest. It is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source code of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressure and velocity fields through its APIs (Application Programming Interfaces) between the parent and the child components. The developed two-way nesting framework has been validated with an idealized test case where the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.

  4. Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids

    NASA Astrophysics Data System (ADS)

    Sezer-Uzol, Nilay

    In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied by using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, where helicopter operations become even more challenging, by using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields by using moving unstructured grids which are rotating with the whole three-dimensional rigid rotor geometry. The three dimensional, unsteady, parallel, unstructured, finite volume flow solver, PUMA2, is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which are aiming to provide new tools and insights to the aerospace and wind energy scientific communities, are done during this research by focusing on the coupling of ship airwake CFD simulations with the helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with the aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.

  5. Large temporal scale and capacity subsurface bulk energy storage with CO2

    NASA Astrophysics Data System (ADS)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  6. Probabilistic Learning by Rodent Grid Cells

    PubMed Central

    Cheung, Allen

    2016-01-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition, but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, despite this being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed, such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single-cell level with attractor dynamics at the cell-ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723

  7. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries, where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable-resolution grids. Here, a series of idealized moist simulations is presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding the resolution sensitivity of global atmospheric models.

  8. Radiative feedback and cosmic molecular gas: the role of different radiative sources

    NASA Astrophysics Data System (ADS)

    Maio, Umberto; Petkova, Margarita; De Lucia, Gabriella; Borgani, Stefano

    2016-08-01

    We present results from multifrequency radiative hydrodynamical chemistry simulations addressing primordial star formation and related stellar feedback from various populations of stars, stellar spectral energy distributions (SEDs) and initial mass functions. Spectra for massive stars, intermediate-mass stars and regular solar-like stars are adopted over a grid of 150 frequency bins and consistently coupled with hydrodynamics, heavy-element pollution and non-equilibrium species calculations. Powerful massive Population III stars are found to be able to largely ionize H and, subsequently, He and He+, causing an inversion of the equation of state and a boost of the Jeans masses in the early intergalactic medium. Radiative effects on star formation rates are between a factor of a few and 1 dex, depending on the SED. Radiative processes are responsible for gas heating and photoevaporation, although emission from soft SEDs has minor impacts. These findings have implications for cosmic gas preheating, primordial direct-collapse black holes, the build-up of `cosmic fossils' such as low-mass dwarf galaxies, the role of active galactic nuclei during reionization, the early formation of extended discs and angular-momentum catastrophe.

  9. Bas-relief generation using adaptive histogram equalization.

    PubMed

    Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C

    2009-01-01

    An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to also use gradient weights to enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great an effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
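
    The pipeline above admits a compact sketch: normalize the height field, apply contrast-limited adaptive histogram equalization to redistribute height variation locally, and rescale to a user-chosen height range. Scikit-image's CLAHE is used here as a stand-in for the paper's modified, gradient-weighted AHE; the toy height field, kernel size, and clip limit are illustrative assumptions.

```python
# Minimal sketch of AHE-based bas-relief height compression.
import numpy as np
from skimage import exposure

rng = np.random.default_rng(2)
height = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # toy height field

h = (height - height.min()) / (height.max() - height.min())   # normalize to [0, 1]
relief = exposure.equalize_adapthist(h, kernel_size=32, clip_limit=0.02)  # CLAHE

target_range = 5.0                     # user-chosen bas-relief height range (assumed, mm)
bas_relief = relief * target_range
print(bas_relief.min(), bas_relief.max())
```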

  10. Fast Fourier transform discrete dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Graham, J. T.; Rollett, A. D.; LeSar, R.

    2016-12-01

    Discrete dislocation dynamics simulations have been generally limited to modeling systems described by isotropic elasticity. Effects of anisotropy on dislocation interactions, which can be quite large, have generally been ignored because of the computational expense involved when including anisotropic elasticity. We present a different formalism of dislocation dynamics in which the dislocations are represented by the deformation tensor, which is a direct measure of the slip in the lattice caused by the dislocations and can be considered as an eigenstrain. The stresses arising from the dislocations are calculated with a fast Fourier transform (FFT) method, from which the forces are determined and the equations of motion are solved. Use of the FFTs means that the stress field is only available at the grid points, which requires some adjustments/regularizations to be made to the representation of the dislocations and the calculation of the force on individual segments, as is discussed hereinafter. A notable advantage of this approach is that there is no computational penalty for including anisotropic elasticity. We review the method and apply it in a simple dislocation dynamics calculation.
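
    The FFT ingredient can be sketched in isolation: a field defined on a regular, periodic grid (here a scalar stand-in for the eigenstrain) is convolved with a kernel (a stand-in for the elastic Green's function) entirely in Fourier space, at O(N log N) cost. The kernel below is a toy Laplacian-inverse-like choice, not the anisotropic elastic Green's function.

```python
# Minimal sketch of FFT-based periodic convolution of a gridded eigenstrain field.
import numpy as np

n = 64
eigenstrain = np.zeros((n, n)); eigenstrain[30:34, 20:44] = 1.0   # slipped patch

kx = np.fft.fftfreq(n) * 2 * np.pi
ky = np.fft.fftfreq(n) * 2 * np.pi
KX, KY = np.meshgrid(kx, ky, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                        # avoid division by zero at the mean mode
kernel_hat = 1.0 / k2                 # toy kernel in Fourier space (assumed form)
kernel_hat[0, 0] = 0.0                # drop the zero mode

stress_like = np.fft.ifft2(np.fft.fft2(eigenstrain) * kernel_hat).real
print(stress_like.shape, stress_like.max())
```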

  11. A Survey of Spatial and Seasonal Water Isotope Variability on the Juneau Icefield, Alaska

    NASA Astrophysics Data System (ADS)

    Dennis, D.; Carter, A.; Clinger, A. E.; Eads, O. L.; Gotwals, S.; Gunderson, J.; Hollyday, A. E.; Klein, E. S.; Markle, B. R.; Timms, J. R.

    2015-12-01

    The depletion of stable oxygen and hydrogen isotopes (δ18O and δD) is well correlated with temperature change, which is driven by variation in topography, climate, and atmospheric circulation. This study presents a survey of the spatial and seasonal variability of isotopic signatures on the Juneau Icefield (JI), Alaska, USA, which spans over 3,000 square kilometers. To examine small-scale variability in the previous year's accumulation, samples were taken at regular intervals from snow pits and a one-square-kilometer surficial grid. Surface snow samples were collected across the icefield to evaluate large-scale variability, spanning approximately 1,000 meters in elevation and 100 kilometers in distance. Individual precipitation events were also sampled to track percolation throughout the snowpack and correlations with temperature. A survey of this extent has never been undertaken on the JI. Samples were analyzed in the field using a Los Gatos laser isotope analyzer. This survey helps us better understand isotope fractionation on temperate glaciers in coastal environments and provides preliminary information on the suitability of the JI for a future ice core drilling project.

  12. A Large, Free-Standing Wire Grid for Microwave Variable-delay Polarization Modulation

    NASA Technical Reports Server (NTRS)

    Voellmer, George

    2008-01-01

    One technique for mapping the polarization signature of the cosmic microwave background uses large polarizing grids in reflection. We present the system requirements, the fabrication, assembly, and alignment procedures, and the test results for the polarizing grid component of a 50 cm clear aperture Variable-delay Polarization Modulator (VPM). This grid is being built and tested at the Goddard Space Flight Center as part of the Polarimeter for Observing Inflationary Cosmology at the Reionization Epoch (POINCARE). VPMs modulate the polarized component of a radiation source by splitting the incoming beam into two orthogonal polarization components using a free-standing wire grid. The path length difference between these components is varied with a translating mirror, and then they are recombined. This precision instrumentation technique can be used to encode and demodulate the cosmic microwave background's polarization signature. For the demonstration instrument, 64 micrometer diameter tungsten wires are being assembled into a 200 micrometer pitch, free-standing wire grid with a 50 cm clear aperture and an expected overall flatness better than 30 micrometers. A rectangular aluminum stretching frame holds the wires with sufficient tension to achieve a minimum resonant frequency of 185 Hz, allowing VPM mirror translation frequencies of several Hz. A lightly loaded flattening ring with a 50 cm inside diameter rests against the wires and brings them into accurate planarity.

  13. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  14. Widespread extreme drought events in Iberia and their relationship with North Atlantic moisture flux deficit

    NASA Astrophysics Data System (ADS)

    Liberato, Margarida L. R.; Montero, Irene; Russo, Ana; Gouveia, Célia; Ramos, Alexandre M.; Trigo, Ricardo M.

    2015-04-01

Droughts represent one of the most frequent climatic extreme events on the Iberian Peninsula, often with widespread negative ecological and environmental impacts, resulting in major socio-economic damage such as large decreases in hydroelectricity and agricultural production or increased forest fire risk. Unlike other weather-driven extreme events, droughts may last from a few months to several years. Here we employ a recently developed climatic drought index, the Standardized Precipitation Evapotranspiration Index (SPEI; Vicente-Serrano et al., 2010a), based on the simultaneous use of precipitation and temperature fields. This index holds the advantage of combining a multi-scalar character with the capacity to include the effects of temperature variability on drought assessment (Vicente-Serrano et al., 2010a). In this study the SPEI was computed using the Climatic Research Unit (CRU) TS3.21 high-resolution gridded dataset (0.5°) for the period 1901-2012; at this resolution the Iberian Peninsula study region corresponds to a square of 30x30 grid pixels. CRU potential evapotranspiration (PET), obtained with the Penman-Monteith equation from monthly averages of daily maximum and minimum temperature, was used together with monthly precipitation records, and the log-logistic probability distribution was fitted to the series of differences between precipitation and PET, a formulation that fits these series very well (Vicente-Serrano et al., 2010b). The distribution parameters were estimated by the L-moment method. Applying multi-scalar indices to the high-resolution dataset makes it possible to identify whether the Iberian Peninsula is under hydric stress and whether a drought has set in. Based on the gridded SPEI datasets spanning 1901 to 2012, obtained for timescales of 6, 12, 18 and 24 months, an objective method is applied for ranking the most extensive extreme drought events that occurred on the Iberian Peninsula. This method evaluates each drought's magnitude, obtained by considering the affected area, defined by SPEI values below a certain threshold (here SPEI < -1.28), as well as the intensity at each grid point. Different rankings are presented for the different timescales, considering both the entire Iberian Peninsula and Portugal. Furthermore, we used the NCEP/NCAR reanalysis for the 1948-2012 period, namely the geopotential height (HGT), temperature, wind and specific humidity fields at all pressure levels, together with mean sea level pressure (MSLP) and total column water vapour (TCWV), for the Euro-Atlantic sector (60° W to 40° E, 20° N to 70° N) at the full temporal (six-hourly) and spatial (2.5° regular horizontal grid) resolutions available, as well as the globally gridded monthly precipitation products of the Global Precipitation Climatology Centre (GPCC), to analyse the large-scale conditions associated with the most extreme droughts in Iberia. Results show that during these drought periods there is a clear moisture deficit over the region, with persistent negative anomalies of TCWV. On these occasions, the zonal moisture transport is more intense over the northern Atlantic and less intense over the subtropics, while the meridional moisture transport is intensified, in accordance with the barotropic structure of the HGT anomalies. Vicente-Serrano, S.M., Beguería, S., and López-Moreno, J.I. (2010a). A multi-scalar drought index sensitive to global warming: The Standardized Precipitation Evapotranspiration Index - SPEI. Journal of Climate, 23, 1696-1718. Vicente-Serrano, S.M., Beguería, S., López-Moreno, J.I., Angulo, M., and El Kenawy, A. (2010b). A new global 0.5° gridded dataset (1901-2006) of a multiscalar drought index: comparison with current drought index datasets based on the Palmer Drought Severity Index. Journal of Hydrometeorology, 11, 1033-1043. Acknowledgements: This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project QSECA (PTDC/AAGGLO/4155/2012).
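As a rough illustration of the ranking step described in the abstract above, the following NumPy sketch scores gridded SPEI fields by a simple drought magnitude: the sum of deficits below the SPEI < -1.28 threshold over the affected area. The aggregation formula and all names are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def event_magnitude(spei_field, threshold=-1.28):
    """Score one gridded SPEI field (e.g. one month at one timescale).

    Magnitude aggregates the affected area (cells below the threshold)
    with the intensity (how far each cell falls below the threshold).
    """
    mask = spei_field < threshold            # cells in extreme drought
    deficit = threshold - spei_field[mask]   # per-cell intensity (positive)
    return deficit.sum()                     # area and intensity combined

# Toy example: rank 1344 monthly 30x30 SPEI fields (1901-2012, 112 years).
rng = np.random.default_rng(0)
fields = rng.normal(size=(1344, 30, 30))
magnitudes = np.array([event_magnitude(f) for f in fields])
ranking = np.argsort(magnitudes)[::-1]       # most extreme months first
print("top 5 months by drought magnitude:", ranking[:5])
```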

  15. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater-related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement (TMR). It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable-sized grid focused on the area of interest. MODFLOW-LGR (local grid refinement) is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (the Lower West Coast Surficial Aquifer System Model). Various performance metrics are considered, including computation time, water balance (as compared to the variable-sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool for improving the local resolution of regional-scale models. While performance metrics such as computation time are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger-scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  16. A Quadtree-gridding LBM with Immersed Boundary for Two-dimension Viscous Flows

    NASA Astrophysics Data System (ADS)

    Yao, Jieke; Feng, Wenliang; Chen, Bin; Zhou, Wei; Cao, Shikun

    2017-07-01

A non-uniform quadtree-grid lattice Boltzmann method (LBM) with an immersed boundary is presented in this paper. In the overlap regions between grids of different levels, temporal and spatial interpolation is necessary to ensure the continuity of physical quantities. To exploit the fixed relation between the temporal and spatial steps on grids of the same level, equal-interval interpolation, which is simple to apply to any refined boundary grid in the LBM, is adopted in both time and space to obtain second-order accuracy. A velocity correction, which enforces the no-slip boundary condition more accurately than the direct-forcing and momentum-exchange methods in the traditional immersed-boundary LBM, is used at solid boundaries to make the best use of the Cartesian grid. In the present quadtree-gridding immersed-boundary LBM, large eddy simulation (LES) is adopted to simulate flows over obstacles at higher Reynolds numbers (Re). Incompressible viscous flows over a circular cylinder are computed, and good agreement with reference results is obtained.

  17. Fast and accurate grid representations for atom-based docking with partner flexibility.

    PubMed

    de Vries, Sjoerd J; Zacharias, Martin

    2017-06-30

Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
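The speed-up reported above comes from replacing pairwise energy sums with lookups in precalculated grid potentials. The sketch below shows only the core idea, trilinear interpolation of a potential grid at atom positions; the grid contents and names are toy assumptions, and force and neighbor-list handling from ATTRACT's actual implementation are omitted.

```python
import numpy as np

def grid_energy(grid, origin, spacing, coords):
    """Sum a precalculated potential over atoms by trilinear interpolation.

    grid    : 3D array of potential values on a regular grid
    origin  : (3,) lower corner of the grid
    spacing : scalar grid spacing
    coords  : (N, 3) atom coordinates inside the grid
    """
    f = (coords - origin) / spacing          # fractional grid coordinates
    i = np.floor(f).astype(int)              # lower-corner indices
    t = f - i                                # weights in [0, 1)
    energy = np.zeros(len(coords))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, t[:, 0], 1 - t[:, 0]) *
                     np.where(dy, t[:, 1], 1 - t[:, 1]) *
                     np.where(dz, t[:, 2], 1 - t[:, 2]))
                energy += w * grid[i[:, 0] + dx, i[:, 1] + dy, i[:, 2] + dz]
    return energy.sum()

# Toy usage: a 20^3 potential grid and 100 random "atoms" inside it.
rng = np.random.default_rng(1)
potential = rng.normal(size=(20, 20, 20))
atoms = rng.uniform(2.0, 15.0, size=(100, 3))
print(grid_energy(potential, origin=np.zeros(3), spacing=1.0, coords=atoms))
```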

  18. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this connectivity, a smart grid system is exposed to potential security threats in its network. To address this problem, we develop and apply a novel scheme to measure vulnerability in the smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for the smart grid because they do not consider network vulnerabilities. We propose a novel attack-route-based vulnerability quantification scheme that uses a network vulnerability score and an end-to-end security score, adapted to the specific smart grid network environment, to calculate the vulnerability score of a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results for the proposed approach and the existing common vulnerability scoring system clearly show that network connectivity must be considered for more accurate vulnerability quantification.
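A minimal sketch of the scoring idea, under assumed score definitions: the paper combines a network vulnerability score with an end-to-end security score along an attack route, but the concrete formula below (mean device vulnerability scaled by the weakest link's security) is a hypothetical stand-in, not the AVQS formula.

```python
# Hypothetical per-hop scores along an attack route: each hop carries a
# device vulnerability score (CVSS-like, 0-10) and a link security score
# (0-10, higher means better protected). Both score names are assumptions.
def route_vulnerability(hops):
    """Aggregate a route-level vulnerability score from per-hop scores.

    The score grows with device vulnerability and shrinks with link
    security, so network connectivity enters the metric explicitly.
    """
    device = sum(v for v, _ in hops) / len(hops)   # mean device vulnerability
    network = min(s for _, s in hops)              # weakest protected link
    return device * (1.0 - network / 10.0)         # 0 (safe) .. 10 (worst)

# Example: a three-hop route through an AMI network.
route = [(7.5, 4.0), (5.0, 6.5), (9.0, 3.0)]
print(f"route vulnerability: {route_vulnerability(route):.2f}")
```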

  19. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE PAGES

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...

    2017-08-24

Today, the stability of the electric power grid is maintained through real-time balancing of generation and demand. Grid-scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large-scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.

  1. Application of a lower-upper implicit scheme and an interactive grid generation for turbomachinery flow field simulations

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Soh, Woo-Yung; Yoon, Seokkwan

    1989-01-01

A finite-volume lower-upper (LU) implicit scheme is used to simulate inviscid flow in a turbine cascade. This approximate factorization scheme requires only the inversion of sparse lower and upper triangular matrices, which can be done efficiently without extensive storage. As an implicit scheme it allows large time steps in reaching the steady state. An interactive grid generation program (TURBO), which is under development, is used to generate grids. This program uses the control-point form of algebraic grid generation, in which a sparse collection of control points allows the shape and position of coordinate curves to be adjusted. A distinct advantage of TURBO compared with other grid generation programs is that it allows local mesh structure to be changed easily without affecting the grid outside the domain of independence. Sample grids are generated by TURBO for a compressor rotor blade and a turbine cascade. The turbine cascade flow is simulated by using the LU implicit scheme on the grid generated by TURBO.
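The efficiency argument above, that an LU-type implicit scheme only needs triangular solves per step, can be illustrated with a generic dense example in SciPy. The actual LU-SGS scheme factors the implicit operator approximately and without storing full matrices, so this is only a sketch of why triangular sweeps are cheap once a factorization exists.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Once the implicit operator is factored into lower/upper triangular
# parts, every subsequent solve costs only two O(n^2) triangular sweeps.
rng = np.random.default_rng(2)
n = 200
A = 4 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))  # stand-in operator
b = rng.normal(size=n)

P, L, U = lu(A)                                 # factor once: A = P L U
y = solve_triangular(L, P.T @ b, lower=True)    # forward sweep
x = solve_triangular(U, y)                      # backward sweep
assert np.allclose(A @ x, b)
```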

  2. GreenView and GreenLand Applications Development on SEE-GRID Infrastructure

    NASA Astrophysics Data System (ADS)

    Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor

    2010-05-01

The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environmental applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure using the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth science, such as meteorology, ground and atmospheric pollution, ground metal detection, and weather prediction. These applications operate on satellite images (e.g. Landsat, MERIS, MODIS) and the accuracy of the output depends mostly on the quality of these images. The main drawback of such environmental applications is the need for computational power and storage capacity (some images are almost 1 GB in size) to process such large data volumes. Most applications requiring high computational resources have therefore migrated onto Grid infrastructure, which offers the necessary computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, and also for monitoring the execution status of these processes. This presentation highlights two case studies of Grid-based environmental applications, GreenView and GreenLand [5]. GreenView is used in combination with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets to produce pseudo-colored temperature and vegetation maps for different CEE (Central Eastern Europe) geographical regions. GreenLand, in turn, is used for generating maps of different vegetation indexes (e.g. NDVI, EVI, SAVI, GEMI) based on Landsat satellite images. Both applications use interpolation and random-value-generation algorithms, as well as specific formulas for computing vegetation index values. The GreenView and GreenLand applications have been tested over the SEE-GRID infrastructure and the performance evaluation is reported in [6]. Improving execution time (through better parallelization of jobs), extending the geographical coverage to other parts of the Earth, and new user interaction techniques for spatial data and large sets of satellite images are the goals of future work. References [1] GreenView application on Wiki, http://wiki.egee-see.org/index.php/GreenView [2] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [3] Gorgan D., Stefanut T., Bâcu V., Mihon D., Grid based Environment Application Development Methodology, SCICOM, 7th International Conference on "Large-Scale Scientific Computations", 4-8 June, 2009, Sozopol, Bulgaria, (To be published by Springer), (2009). [4] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009). [5] Mihon D., Bacu V., Stefanut T., Gorgan D., "Grid Based Environment Application Development - GreenView Application". ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27 Aug, 2009 Cluj-Napoca. Published by IEEE Computer Press, pp. 275-282 (2009). [6] Danut Mihon, Victor Bacu, Dorian Gorgan, Róbert Mészáros, Györgyi Gelybó, Teodor Stefanut, Practical Considerations on the GreenView Application Development and Execution over SEE-GRID. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 167-175 (2009).
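The vegetation indexes mentioned in the abstract above are simple band arithmetic. The sketch below shows two of them, NDVI and SAVI, using their standard textbook definitions on toy reflectance arrays; the GreenLand implementation operating on full Landsat scenes is of course more involved.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from two reflectance bands."""
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard divide-by-zero

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L is the soil-brightness correction."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Toy 2x2 tiles of red and near-infrared reflectance.
red = np.array([[0.10, 0.12], [0.30, 0.05]])
nir = np.array([[0.50, 0.55], [0.32, 0.40]])
print(ndvi(nir, red))   # dense vegetation gives values close to 1
print(savi(nir, red))
```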

  3. Demonstration of Essential Reliability Services by a 300-MW Solar Photovoltaic Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loutan, Clyde; Klauer, Peter; Chowdhury, Sirajul

The California Independent System Operator (CAISO), First Solar, and the National Renewable Energy Laboratory (NREL) conducted a demonstration project on a large utility-scale photovoltaic (PV) power plant in California to test its ability to provide essential ancillary services to the electric grid. With increasing shares of solar- and wind-generated energy on the electric grid, traditional generation resources equipped with automatic generation control (AGC) and automatic voltage regulation controls, specifically fossil thermal, are being displaced. The deployment of utility-scale, grid-friendly PV power plants that incorporate advanced capabilities to support grid stability and reliability is essential for the large-scale integration of PV generation into the electric power grid, among other technical requirements. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, PV power plants can be used to mitigate the impact of variability on the grid, a role typically reserved for conventional generators. In August 2016, testing was completed on First Solar's 300-MW PV power plant, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to use grid-friendly controls to provide essential reliability services. These data showed how the development of advanced power controls can enable PV to become a provider of a wide range of grid services, including spinning reserves, load following, voltage support, ramping, frequency response, variability smoothing, and frequency regulation to power quality. Specifically, the tests conducted included various forms of active power control such as AGC and frequency regulation; droop response; and reactive power, voltage, and power factor controls. This project demonstrated that advanced power electronics and solar generation can be controlled to contribute to system-wide reliability. It was shown that the First Solar plant can provide essential reliability services related to different forms of active and reactive power controls, including plant participation in AGC, primary frequency control, ramp rate control, and voltage regulation. For AGC participation in particular, by comparing the PV plant testing results to the typical performance of individual conventional technologies, we showed that the regulation accuracy of the PV plant is 24-30 points better than that of fast gas turbine technologies. The plant's ability to provide volt-ampere reactive control during periods of extremely low power generation was demonstrated as well. The project team developed a pioneering demonstration concept and test plan to show how various types of active and reactive power controls can raise PV generation's value from that of a simple variable energy resource to one that provides a wide range of ancillary services. With this project's approach, a holistic demonstration on an actual, large, utility-scale, operational PV power plant and dissemination of the obtained results, the team sought to close gaps in perspective among various stakeholders in California and nationwide by providing real test data.

  4. ARC SDK: A toolbox for distributed computing and data applications

    NASA Astrophysics Data System (ADS)

    Skou Andersen, M.; Cameron, D.; Lindemann, J.

    2014-06-01

Grid middleware suites provide tools to perform the basic tasks of job submission, retrieval, and data access, but these tools tend to be low-level, operating on individual jobs or files and lacking higher-level concepts. User communities therefore generally develop their own application-layer software, catering to their specific communities' needs, on top of the Grid middleware. It is thus important for the Grid middleware to provide a friendly, well-documented and simple-to-use interface for applications to build upon. The Advanced Resource Connector (ARC), developed by NorduGrid, provides a Software Development Kit (SDK) which enables applications to use the middleware for job and data management. This paper presents the architecture and functionality of the ARC SDK along with an example graphical application developed with it. The SDK consists of a set of libraries accessible through Application Programming Interfaces (APIs) in several languages. It contains extensive documentation and example code and is available on multiple platforms. The libraries provide generic interfaces and rely on plugins to support a given technology or protocol; this modular design makes it easy to add a new plugin if the application requires support for additional technologies. The ARC Graphical Clients package is a graphical user interface built on top of the ARC SDK and the Qt toolkit, and it is presented here as a fully functional example application. It provides a graphical interface that enables job submission and management at the click of a button, and allows data on any Grid storage system to be manipulated using a visual file system hierarchy, as if it were a regular file system.

  5. Reduction of peak energy demand based on smart appliances energy consumption adjustment

    NASA Astrophysics Data System (ADS)

    Powroźnik, P.; Szulim, R.

    2017-08-01

This paper presents the concept of an elastic model of energy management for smart grids and micro smart grids. For the proposed model, a method for reducing peak demand in a micro smart grid is defined. The idea of peak demand reduction in the elastic model is to balance the current power demand and supply of a given micro smart grid at every moment. Simulation studies, carried out on real household data available in the UCI Machine Learning Repository, are presented. The results may have practical application in smart grid networks where smart-appliance energy consumption must be adjusted. The article also proposes implementing the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction may be applicable particularly in large smart grids.
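As a toy illustration of peak-demand reduction by adjusting smart-appliance consumption, the greedy sketch below defers a shiftable share of the hourly load past a power cap and releases it into off-peak hours. The cap, the shiftable share, and the greedy policy are illustrative assumptions, not the paper's elastic model.

```python
import numpy as np

def shave_peaks(load, shiftable, cap):
    """Defer shiftable appliance load from peak hours into later hours.

    load      : baseline hourly demand (kW)
    shiftable : portion of each hour's demand smart appliances can defer
    cap       : target power limit for the micro smart grid (kW)
    """
    adjusted = np.array(load, dtype=float)
    deferred = 0.0
    for h in range(len(adjusted)):
        # Release previously deferred energy into spare capacity first.
        room = max(0.0, cap - adjusted[h])
        released = min(room, deferred)
        adjusted[h] += released
        deferred -= released
        # Then defer whatever still exceeds the cap, up to the shiftable share.
        cut = min(max(0.0, adjusted[h] - cap), shiftable[h])
        adjusted[h] -= cut
        deferred += cut
    return adjusted

hours = np.arange(24)
load = 2.0 + 1.5 * np.exp(-((hours - 19) ** 2) / 8.0)  # evening peak
shiftable = 0.4 * load                                  # 40% is smart-appliance load
print(shave_peaks(load, shiftable, cap=2.8).max())      # peak held at the cap
```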

  6. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid spacing and time step for seismic modeling in porous media. FD operators with small grid spacing and time step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid spacing and time step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes are derived from plane-wave theory, and the FD coefficients are then obtained using Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method achieves higher accuracy at lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
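The Taylor-expansion route to staggered-grid FD coefficients mentioned above can be reproduced in a few lines: matching Taylor series leads to a small Vandermonde-type linear system. A sketch for uniform spacing follows; the paper's variable-grid transition logic is not included.

```python
import numpy as np

def staggered_fd_coeffs(M):
    """Taylor-derived coefficients c_1..c_M of the staggered first derivative

        f'(x) ~ (1/h) * sum_m c_m [f(x + (2m-1)h/2) - f(x - (2m-1)h/2)].

    Matching Taylor series cancels f''', f^(5), ... and leaves the
    Vandermonde-type system  sum_m c_m (2m-1)^(2k-1) = delta_{k,1}.
    """
    m = np.arange(1, M + 1)
    k = np.arange(1, M + 1)[:, None]
    A = (2.0 * m - 1.0) ** (2 * k - 1)   # row k, column m
    rhs = np.zeros(M)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

print(staggered_fd_coeffs(1))   # [1.]               2nd-order stencil
print(staggered_fd_coeffs(2))   # [1.125, -0.04166]  4th-order (9/8, -1/24)
```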

  7. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

The energy storage system is the core component that maintains the stable operation of a smart micro-grid. To address existing problems of energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field-programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with an intelligent micro-grid control strategy. Finally, to address the problem that inaccurate initialization of weights and thresholds during neural-network estimation of the battery state of charge leads to large prediction errors, a genetic algorithm is proposed to optimize the neural network, and experimental simulations are carried out. The experimental results show that the algorithm has high precision and helps guarantee the stable operation of the micro-grid.
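A compact sketch of the GA-plus-neural-network idea: a genetic algorithm searches over the weight vector of a tiny state-of-charge regression network, so that training need not start from an arbitrary random initialization. The network size, data, and GA operators below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: battery state of charge (SoC) as a function of terminal voltage.
x = np.linspace(3.0, 4.2, 64)[:, None]   # voltage
y = (x - 3.0) / 1.2                      # "true" SoC in [0, 1]

def loss(w):
    """MSE of a 1-8-1 tanh network whose 25 parameters are packed in w."""
    W1, b1 = w[:8].reshape(1, 8), w[8:16]
    W2, b2 = w[16:24].reshape(8, 1), w[24]
    h = np.tanh(x @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

# Genetic algorithm over candidate weight vectors: elitist selection,
# blend crossover, Gaussian mutation; fitness is the network loss.
pop = rng.normal(scale=0.5, size=(40, 25))
for generation in range(60):
    order = np.argsort([loss(w) for w in pop])
    elite = pop[order[:10]]                       # keep the 10 best candidates
    children = [0.5 * (a + b) + rng.normal(scale=0.05, size=25)
                for a, b in (elite[rng.integers(10, size=2)] for _ in range(30))]
    pop = np.vstack([elite, children])

best = min(pop, key=loss)
print("loss of GA-selected weights:", loss(best))
```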

  8. Smart Grid Status and Metrics Report Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.

A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  9. Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2015-01-01

An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high-quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
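For quick prototyping, a spherical Delaunay triangulation can be obtained from the convex hull of points on the unit sphere, since the two coincide there; the sketch below uses SciPy. The paper's Frontal-Delaunay refinement additionally controls element sizing and quality, which this shortcut does not.

```python
import numpy as np
from scipy.spatial import ConvexHull

# For points on the unit sphere, the triangular facets of the convex hull
# coincide with the spherical Delaunay triangulation.
rng = np.random.default_rng(4)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto the sphere

hull = ConvexHull(pts)
print("Delaunay triangles on the sphere:", len(hull.simplices))
```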

  10. Research on wind power grid-connected operation and dispatching strategies of Liaoning power grid

    NASA Astrophysics Data System (ADS)

    Han, Qiu; Qu, Zhi; Zhou, Zhi; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Li, Jinze; Ling, Zhaowei

    2018-02-01

As a kind of clean energy, wind power has undergone rapid development in recent years. Liaoning Province has abundant wind resources, and its total installed wind power capacity ranks among the highest in China. With large-scale grid-connected wind power operation, the conflict between wind power utilization and peak-load regulation of the power grid has become more prominent. To address this, starting from the generation mix and installed capacity of the Liaoning power grid, the distribution and spatio-temporal output characteristics of its wind farms are analyzed, together with the prediction accuracy, curtailment, and off-grid behaviour of wind power. Based on a detailed analysis of the seasonal characteristics of the power network load, the composition and distribution of the main loads are presented. To reconcile wind power acceptance with power grid adjustment, scheduling strategies are given, covering unit maintenance scheduling, spinning reserve, and energy storage settings, based on an analysis of the operating characteristics and response times of thermal and hydroelectric units. These strategies can meet the demand for wind power acceptance and provide a solution for improving the level of power grid dispatching.

  11. Autonomous Energy Grids | Grid Modernization | NREL

    Science.gov Websites

Autonomous energy grids control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. Current frameworks to monitor, control, and optimize large-scale energy systems are being rethought through a combination of optimization theory, control theory, big data analytics, and complex system theory and modeling.

  12. Power Grid Data Analysis with R and Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin

    This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.

  13. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated in short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both obstacles. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  14. Evolution of user analysis on the grid in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  15. Large-scale 3D inversion of marine controlled source electromagnetic data using the integral equation method

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.

    2009-12-01

    The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell’s equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of the subsurface targets.

  16. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

The Grid scheduler assigns user jobs to the best available resource in terms of resource characteristics, optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore essential that these long-running applications can tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user’s Quality of Service (QoS) requirement. A fault-tolerant job scheduling scheme for Grid computing based on Ant Colony Optimization (ACO) is proposed to ensure that jobs are executed successfully even when resource failures occur. The technique employed in this paper combines the resource failure rate with a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work lost upon failure by saving the state of the system at intervals as the job progresses. A comparison of the proposed approach with an existing ACO algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user’s QoS requirement over the existing ACO algorithm, which has no integrated fault tolerance. The performance of the two algorithms was measured in terms of three main scheduling performance metrics: makespan, throughput, and average turnaround time. PMID:28545075
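The benefit of checkpoint-based rollback recovery is easy to quantify with a small Monte Carlo sketch: under random resource failures, compare restarting a job from scratch with rolling back to the last checkpoint. The failure model and parameters below are illustrative, and checkpoint overhead is ignored.

```python
import numpy as np

def run_job(total_h, mtbf_h, checkpoint_h=None, rng=None):
    """Wall-clock hours to finish a job under random resource failures.

    Without checkpointing, a failure loses all progress; with it, only
    the work since the last checkpoint is lost (rollback recovery).
    """
    if rng is None:
        rng = np.random.default_rng()
    done = clock = 0.0
    while done < total_h:
        fail_in = rng.exponential(mtbf_h)       # time until the next failure
        step = total_h - done
        if checkpoint_h is not None:
            step = min(step, checkpoint_h)
        if fail_in >= step:
            clock += step
            done += step                        # segment finished (and saved)
        else:
            clock += fail_in                    # failure: open segment is lost
            if checkpoint_h is None:
                done = 0.0                      # no checkpoints: start over
    return clock

rng = np.random.default_rng(5)
plain = [run_job(100, mtbf_h=40, rng=rng) for _ in range(200)]
ckpt = [run_job(100, mtbf_h=40, checkpoint_h=5, rng=rng) for _ in range(200)]
print(f"no checkpoints : {np.mean(plain):6.1f} h on average")
print(f"5 h checkpoints: {np.mean(ckpt):6.1f} h on average")
```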

  17. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to make the maps easy to understand and trace. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often incur a high computational cost because they must consider complicated positional constraints throughout the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, the numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by the fast non-grid algorithm onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
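A greedy stand-in conveys the redistribution step: take coordinates from a fast non-grid layout and assign each node to the nearest unoccupied square grid point. The paper's approximate pattern matching is more sophisticated; this sketch only illustrates snapping nodes onto grid points while roughly preserving their relative positions.

```python
import numpy as np

def snap_to_grid(positions, grid_n):
    """Assign laid-out nodes to free square grid points, nearest first.

    positions : (N, 2) coordinates from a fast non-grid layout
    grid_n    : the square grid has grid_n x grid_n candidate points
    """
    gx, gy = np.meshgrid(np.arange(grid_n), np.arange(grid_n))
    grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    free = np.ones(len(grid), dtype=bool)
    # Scale the layout coordinates into grid units.
    p = (positions - positions.min(0)) / np.ptp(positions, axis=0) * (grid_n - 1)
    placed = np.empty_like(p)
    for idx, node in enumerate(p):
        d = np.linalg.norm(grid - node, axis=1)
        d[~free] = np.inf                  # occupied grid points are unavailable
        j = int(np.argmin(d))
        placed[idx] = grid[j]
        free[j] = False
    return placed

rng = np.random.default_rng(6)
layout = rng.normal(size=(30, 2))          # e.g. force-directed coordinates
print(snap_to_grid(layout, grid_n=8)[:5])
```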

  18. Solar Energy Grid Integration Systems (SEGIS): adding functionality while maintaining reliability and economics

    NASA Astrophysics Data System (ADS)

    Bower, Ward

    2011-09-01

An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation toward adding functionality while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to the intelligent utility grids and micro-grids of the future. The new capabilities are complemented by "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation rather than just unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features, and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and controls developed for today meet existing standards and codes AND provide for future connection to a "smart grid" mode that enables utility control and optimized performance.

  19. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    PubMed

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to make the maps easy to understand and trace. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often incur a high computational cost because they must consider complicated positional constraints throughout the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, the numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by the fast non-grid algorithm onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.

  20. MIB Galerkin method for elliptic interface problems.

    PubMed

    Xia, Kelin; Zhan, Meng; Wei, Guo-Wei

    2014-12-15

Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and has attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian-grid-based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there were no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest-order interface jump conditions is enforced on the interface, which in turn determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low-regularity solutions and grid resolutions. Extensive numerical studies confirm the designed second-order convergence of the MIB Galerkin method in the L∞ and L2 errors. Some of the best results are obtained in the present work when the interface is C1 or Lipschitz continuous and the solution is C2 continuous.

  1. Nonvacuum, maskless fabrication of a flexible metal grid transparent conductor by low-temperature selective laser sintering of nanoparticle ink.

    PubMed

    Hong, Sukjoon; Yeo, Junyeob; Kim, Gunho; Kim, Dongkyu; Lee, Habeom; Kwon, Jinhyeong; Lee, Hyungman; Lee, Phillip; Ko, Seung Hwan

    2013-06-25

We introduce a facile approach to fabricating a metallic grid transparent conductor on a flexible substrate using selective laser sintering of metal nanoparticle ink. Metallic grid transparent conductors with high transmittance (>85%) and low sheet resistance (30 Ω/sq) are readily produced on glass and polymer substrates at large scale without any vacuum or high-temperature processing. Because this is a maskless direct-writing method, the shape and parameters of the grid can be easily changed from CAD data. The resultant metallic grid also shows superior stability in terms of adhesion and bending. The transparent conductor is further applied to a touch screen panel, and the final device is confirmed to operate reliably under continuous mechanical stress.

  2. Economic performance and sustainability of HealthGrids: evidence from two case studies.

    PubMed

    Dobrev, Alexander; Scholz, Stefan; Zegners, Dainis; Stroetmann, Karl A; Semler, Sebastian C

    2009-01-01

Financial sustainability is not a driving force of HealthGrids today, as a previous desk-research survey of 22 international HealthGrid projects has shown. The majority of applications are project-based, which places a time limit not only on funding but also on goals and objectives. Given this situation, we analysed two initiatives, WISDOM and MammoGrid, from an economic, cost-benefit perspective, and evaluated the potential for these initiatives to be brought to market as self-financing, sustainable services. We conclude that the topic of HealthGrids should be pursued further because of the substantial potential for net gains to society at large. The most significant hurdle to sustainability - the discrepancy between social benefits and private incentives - can be overcome by sound business models.

  3. Modal Analysis for Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

The MANGO software provides a solution for improving the small-signal stability of power systems by adjusting operator-controllable variables using PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The MANGO software aims to solve this problem by automatically generating recommended operating procedures, termed Modal Analysis for Grid Operation (MANGO), to improve the damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small-signal stability problems, implementing operating-point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.

  4. Large-Eddy Simulation of Turbulent Wall-Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.

    1996-01-01

    Large-eddy simulations of a turbulent boundary layer with Reynolds number based on displacement thickness equal to 3500 were performed with two grid resolutions. The computations were continued for sufficient time to obtain frequency spectra with resolved frequencies that correspond to the most important structural frequencies on an aircraft fuselage. The turbulent stresses were adequately resolved with both resolutions. Detailed quantitative analysis of a variety of statistical quantities associated with the wall-pressure fluctuations revealed similar behavior for both simulations. The primary differences were associated with the lack of resolution of the high-frequency data in the coarse-grid calculation and the increased jitter (due to the lack of multiple realizations for averaging purposes) in the fine-grid calculation. A new curve fit was introduced to represent the spanwise coherence of the cross-spectral density.

  5. Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1997-01-01

Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.

  6. The cosmic web in CosmoGrid void regions

    NASA Astrophysics Data System (ADS)

    Rieder, Steven; van de Weygaert, Rien; Cautun, Marius; Beygu, Burcu; Portegies Zwart, Simon

    2016-10-01

We study the formation and evolution of the cosmic web, using the high-resolution CosmoGrid ΛCDM simulation. In particular, we investigate the evolution of the large-scale structure around void halo groups, and compare this to observations of the VGS-31 galaxy group, which consists of three interacting galaxies inside a large void. The regions around such haloes show a great deal of tenuous structure, with most such systems being embedded in intra-void filaments and walls. We use the NEXUS+ algorithm to detect walls and filaments in CosmoGrid, and find them to be present and detectable at every scale. The void regions embed tenuous walls, which in turn embed tenuous filaments. We hypothesize that the void galaxy group of VGS-31 formed in such an environment.

  7. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances, which yields nonlinear "2½-dimensional" limit resonant equations in the limit of large f. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from the global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. The uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 then follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, with estimates uniform in alpha.

  8. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2000-01-01

    The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.

  9. Energy Storage for the Power Grid

    ScienceCinema

    Imhoff, Carl; Vaishnav, Dave; Wang, Wei

    2018-05-30

    The iron vanadium redox flow battery was developed by researchers at Pacific Northwest National Laboratory as a solution to large-scale energy storage for the power grid. This technology provides the energy industry and the nation with a reliable, stable, safe, and low-cost storage alternative for a cleaner, efficient energy future.

  10. The data storage grid: the next generation of fault-tolerant storage for backup and disaster recovery of clinical images

    NASA Astrophysics Data System (ADS)

    King, Nelson E.; Liu, Brent; Zhou, Zheng; Documet, Jorge; Huang, H. K.

    2005-04-01

Grid computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models, and it can address the problem of fault-tolerant storage for backup and recovery of clinical images. We have researched and developed a novel Data Grid testbed involving several federated PAC systems based on a grid architecture. By integrating a grid computing architecture into the DICOM environment, a failed PACS archive can recover its image data from others in the federation in a timely and seamless fashion. The design reflects the five-layer architecture of grid computing: the Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid architecture representing three federated PAC systems, the fault-tolerant PACS archive server at the Image Processing and Informatics Laboratory, Marina del Rey, the clinical PACS at Saint John's Health Center, Santa Monica, and the clinical PACS at the Healthcare Consultation Center II, USC Health Science Campus, will be presented. The successful demonstration of the Data Grid in the testbed will provide an understanding of the Data Grid concept in clinical image data backup, establish benchmarks for performance against future grid technology improvements, and serve as a road map for expanded research into large enterprise- and federation-level data grids to guarantee 99.999% uptime.

  11. Lipschitz regularity results for nonlinear strictly elliptic equations and applications

    NASA Astrophysics Data System (ADS)

    Ley, Olivier; Nguyen, Vinh Duc

    2017-10-01

Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using ideas from the Ishii-Lions method. We use these bounds to solve an ergodic problem and to study the regularity and the large-time behavior of the solution of the evolution equation.

  12. Mascons, GRACE, and Time-variable Gravity

    NASA Technical Reports Server (NTRS)

    Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.

    2006-01-01

The GRACE mission has now been in orbit for three years and regularly produces snapshots of the Earth's gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. In particular, we describe mascon solutions developed on regular 4 degree x 4 degree grids, and solutions tailored specifically to drainage basins over these regions.

  13. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
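The extrapolation step can be demonstrated on a toy regulated integral whose limit is known: evaluate I(eps) for a geometric sequence of regulators and extrapolate to eps -> 0 with a Neville table (repeated linear extrapolation). The integrand below is a simple stand-in, not a Feynman integrand, and the setup is an assumption for illustration only.

```python
import numpy as np
from scipy.integrate import quad

def regulated_integral(eps):
    """Toy regulated integral with a known limit: I(eps) -> 2 as eps -> 0."""
    value, _ = quad(lambda x: x ** (eps - 0.5), 0.0, 1.0, limit=200)
    return value

# Evaluate on a geometric regulator sequence and extrapolate to eps = 0
# by building a Neville (polynomial extrapolation) table column by column.
eps = 0.5 * 2.0 ** -np.arange(6)
T = [regulated_integral(e) for e in eps]
for k in range(1, len(eps)):
    T = [(eps[i] * T[i + 1] - eps[i + k] * T[i]) / (eps[i] - eps[i + k])
         for i in range(len(T) - 1)]
print("extrapolated value:", T[0])   # close to the exact limit 2.0
```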

  14. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
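The regularizing effect of a truncated eigenfunction basis can be seen in one dimension: projecting a noisy parameter profile onto a few Laplacian eigenfunctions suppresses the noise. The sketch below uses a fixed sine basis, whereas the paper's AE basis is adapted during the optimization; all data here are synthetic.

```python
import numpy as np

# Project a noisy 1D "wave speed" profile onto the first k eigenfunctions
# of the 1D Dirichlet Laplacian (sine modes); truncation acts as the
# regularizer.
n, k = 200, 8
x = np.linspace(0.0, 1.0, n)
true = 1.0 + 0.5 * ((x > 0.4) & (x < 0.7))          # blocky profile
noisy = true + np.random.default_rng(7).normal(scale=0.2, size=n)

modes = np.array([np.sqrt(2.0) * np.sin((j + 1) * np.pi * x) for j in range(k)])
mean = noisy.mean()
coeffs = modes @ (noisy - mean) / n                  # discrete L2 projection
recon = mean + coeffs @ modes
print("noise level:", np.std(noisy - true))
print("recon error:", np.std(recon - true))          # noticeably smaller
```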

  15. High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities

    NASA Astrophysics Data System (ADS)

    Britt, Darrell Steven, Jr.

    Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy, owing to their efficiency and simplicity, as well as their capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid, without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means of restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
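
    As a simple illustration of the compact-FD ingredient (a 1-D Numerov-type analogue in Python; the dissertation's 2-D nine-point scheme and difference-potentials machinery are not reproduced here), fourth-order convergence can be verified with a manufactured solution:

        import numpy as np

        # 1-D Numerov-type compact scheme for u'' + k^2 u = f (4th order):
        # (u_{i-1} - 2u_i + u_{i+1})/h^2 + (k^2/12)(u_{i-1} + 10u_i + u_{i+1})
        #   = (f_{i-1} + 10 f_i + f_{i+1}) / 12
        def max_error(n, k=10.0):
            h = 1.0 / n
            x = np.linspace(0.0, 1.0, n + 1)
            u_exact = np.sin(3 * np.pi * x)             # zero at both ends
            f = (k ** 2 - (3 * np.pi) ** 2) * u_exact   # manufactured source
            m = n - 1                                   # interior unknowns
            main = -2.0 / h ** 2 + 10.0 * k ** 2 / 12.0
            off = 1.0 / h ** 2 + k ** 2 / 12.0
            A = (np.diag(main * np.ones(m)) + np.diag(off * np.ones(m - 1), 1)
                 + np.diag(off * np.ones(m - 1), -1))
            rhs = (f[:-2] + 10.0 * f[1:-1] + f[2:]) / 12.0
            u = np.linalg.solve(A, rhs)
            return np.max(np.abs(u - u_exact[1:-1]))

        print(f"error ratio {max_error(64) / max_error(128):.1f} (~16 for 4th order)")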

  16. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focussing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology for lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, different strategies for result analysis, on-the-fly storage of the results into MySQL databases, and application of molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro assays are underway for all the targets against which screening was performed. The current paper describes this rational drug discovery activity at large scale, in particular molecular docking using the FlexX software on computational grids, in finding hits against three different targets (PfGST, PfDHFR, and PvDHFR in wild-type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.

  17. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper extends research on facilitating large-scale scientific computing on grid and desktop-grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on a large-scale platform easier to write, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of the platform when it processes big-data-based scientific applications. PMID:24574931
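
    A serial sketch of the block-based Gauss-Jordan benchmark (illustrative Python; the paper distributes these block operations over grid workers through the high-level interface):

        import numpy as np

        def block_gauss_jordan_inverse(M, b):
            """Invert M via Gauss-Jordan elimination on b x b blocks.
            No pivoting: assumes the diagonal blocks stay well conditioned."""
            n = M.shape[0]
            assert n % b == 0
            A = np.hstack([M.astype(float), np.eye(n)])   # augmented [M | I]
            for k in range(0, n, b):
                piv = np.linalg.inv(A[k:k+b, k:k+b])      # invert pivot block
                A[k:k+b, :] = piv @ A[k:k+b, :]           # normalise row-block
                for i in range(0, n, b):                  # eliminate the rest
                    if i != k:
                        A[i:i+b, :] -= A[i:i+b, k:k+b] @ A[k:k+b, :]
            return A[:, n:]

        M = np.random.rand(8, 8) + 8 * np.eye(8)          # diagonally dominant
        print(np.allclose(block_gauss_jordan_inverse(M, 2) @ M, np.eye(8)))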

  18. The Herschel view of GAS in Protoplanetary Systems (GASPS). First comparisons with a large grid of models

    NASA Astrophysics Data System (ADS)

    Pinte, C.; Woitke, P.; Ménard, F.; Duchêne, G.; Kamp, I.; Meeus, G.; Mathews, G.; Howard, C. D.; Grady, C. A.; Thi, W.-F.; Tilling, I.; Augereau, J.-C.; Dent, W. R. F.; Alacid, J. M.; Andrews, S.; Ardila, D. R.; Aresu, G.; Barrado, D.; Brittain, S.; Ciardi, D. R.; Danchi, W.; Eiroa, C.; Fedele, D.; de Gregorio-Monsalvo, I.; Heras, A.; Huelamo, N.; Krivov, A.; Lebreton, J.; Liseau, R.; Martin-Zaïdi, C.; Mendigutía, I.; Montesinos, B.; Mora, A.; Morales-Calderon, M.; Nomura, H.; Pantin, E.; Pascucci, I.; Phillips, N.; Podio, L.; Poelman, D. R.; Ramsay, S.; Riaz, B.; Rice, K.; Riviere-Marichalar, P.; Roberge, A.; Sandell, G.; Solano, E.; Vandenbussche, B.; Walker, H.; Williams, J. P.; White, G. J.; Wright, G.

    2010-07-01

    The Herschel GASPS key program is a survey of the gas phase of protoplanetary discs, targeting 240 objects that cover a large range of ages, spectral types, and disc properties. To interpret this large quantity of data and initiate self-consistent analyses of the gas and dust properties of protoplanetary discs, we have combined the capabilities of the radiative transfer code MCFOST with the gas thermal balance and chemistry code ProDiMo to compute a grid of ≈300 000 disc models (DENT). We present a comparison of the first Herschel/GASPS line and continuum data with the predictions from the DENT grid of models. Our objective is to test some of the main trends already identified in the DENT grid, as well as to define better empirical diagnostics to estimate the total gas mass of protoplanetary discs. Photospheric UV radiation appears to be the dominant gas-heating mechanism for Herbig stars, whereas UV excess and/or X-ray emission dominates for T Tauri stars. The DENT grid reveals the complexity in the analysis of far-IR lines and the difficulty of inverting these observations into physical quantities. The combination of Herschel line observations with continuum data and/or with rotational lines in the (sub-)millimetre regime, in particular CO lines, is required for a detailed characterisation of the physical and chemical properties of circumstellar discs. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  19. Battery-powered, electrocuting trap for stable flies (Diptera: Muscidae).

    PubMed

    Pickens, L G

    1991-11-01

    A solar-charged, battery-powered, electrocuting grid was combined with a white plywood base to make a portable, pulsed-current, pest-electrocuting device that attracted and killed stable flies, Stomoxys calcitrans (L.), outdoors. The grid was powered once every 1-2 s by a 0.016-s pulse of 60-Hz alternating current at 4 mA and 9,500 V. Power was turned off at night by a photoresistor. The trap functioned continuously for 14 d with an unrecharged 12-V, 18-Ah lawn-tractor battery and killed as many as 4,000 flies per day. Solar cells were used to continuously charge a single 12-V battery that operated 12 grids for a period of 90 d. The grid did not short-circuit for any length of time, even during heavy rainstorms or when large insects were killed. The incorporation of moiré patterns and the use of the correct size, orientation, and placement of wires made the electrocuting grid itself attractive to stable flies. The traps were spaced at distances of up to 120 m from the battery and pulse circuit. The electrocuting traps were more effective than sticky traps and avoided the problems associated with chemicals. They are well suited for use around calf pens, dog kennels, or large animal shelters.

  20. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    PubMed Central

    2014-01-01

    Background: Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results: We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions: An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
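
    The shape of such a correction can be sketched as follows (illustrative Python, not the SDA source): inside the precomputed grid the tabulated potential is interpolated, and beyond the grid edge each protein is reduced to its net charge, with the interaction following the analytic screened-Coulomb (Debye-Hückel) form.

        import numpy as np

        COULOMB = 332.06  # kcal/mol * Angstrom / e^2 (vacuum Coulomb constant)

        def debye_huckel_energy(q1, q2, r, kappa, eps_r=78.5):
            """Screened Coulomb energy (kcal/mol) at separation r (Angstrom);
            kappa (1/Angstrom) encodes the ionic strength."""
            return COULOMB * q1 * q2 * np.exp(-kappa * r) / (eps_r * r)

        def pair_energy(r, q1, q2, kappa, grid_edge, grid_lookup):
            # hypothetical grid_lookup stands in for trilinear interpolation
            # of the precomputed electrostatic potential grid
            if r < grid_edge:
                return grid_lookup(r)
            return debye_huckel_energy(q1, q2, r, kappa)

        # e.g. net charges +6e and -8e at 60 A in ~100 mM 1:1 salt
        print(debye_huckel_energy(6, -8, 60.0, kappa=0.104))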

  1. Development of a Regional Structured and Unstructured Grid Methodology for Chemically Reactive Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Stefanski, Douglas Lawrence

    A finite volume method for solving the Reynolds-Averaged Navier-Stokes (RANS) equations on unstructured hybrid grids is presented. Capabilities for handling arbitrary mixtures of reactive gas species within the unstructured framework are developed. The modeling of turbulent effects is carried out via the 1998 Wilcox k-ω model. This unstructured solver is incorporated within VULCAN -- a multi-block structured grid code -- as part of a novel patching procedure in which non-matching interfaces between structured blocks are replaced by transitional unstructured grids. This approach provides a fully conservative alternative to VULCAN's non-conservative patching methods for handling such interfaces. In addition, the standalone unstructured solver is further developed toward large-eddy simulation (LES) applications. Dual time-stepping with a Crank-Nicolson formulation is added to recover time-accuracy, and modeling of sub-grid scale effects is incorporated to provide higher-fidelity LES solutions for turbulent flows. A switch based on the work of Ducros et al. is implemented to transition from a monotonicity-preserving flux scheme near shocks to a central-difference method in vorticity-dominated regions, in order to better resolve small-scale turbulent structures. The updated unstructured solver is used to carry out large-eddy simulations of a supersonic constrained mixing layer.
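
    The switch referenced above is commonly written as a dilatation/vorticity sensor; a hedged sketch in Python (the threshold and blending here are illustrative, not the solver's exact implementation):

        import numpy as np

        def ducros_sensor(div_u, vort_mag, eps=1e-30):
            """Ducros-type shock sensor: near 1 where dilatation dominates
            (shocks), near 0 in vorticity-dominated turbulent regions."""
            d2 = div_u ** 2
            return d2 / (d2 + vort_mag ** 2 + eps)

        def blended_flux(f_upwind, f_central, div_u, vort_mag, threshold=0.65):
            """Monotonicity-preserving flux near shocks, low-dissipation
            central flux elsewhere (threshold value is an assumption)."""
            shock = ducros_sensor(div_u, vort_mag) > threshold
            return np.where(shock, f_upwind, f_central)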

  2. A tool for optimization of the production and user analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores

    2011-12-01

    With the LHC and ALICE entering full operation and production mode, the amount of simulation, RAW data processing and end-user analysis tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data and methods for analyzing the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). LPM automatically submits jobs to the Grid based on triggers and conditions, for example after the completion of a physics run. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future it will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.

  3. Sino/American cooperation for rural electrification in China

    NASA Astrophysics Data System (ADS)

    Wallace, William L.; Tsuo, Y. Simon

    1997-02-01

    Rapid growth in economic development, coupled with the absence of an electric grid in large areas of the rural countryside, has created a need for new energy sources both in urban centers and in rural areas of China. The need is very large: 120 million people in remote regions, and the populations of over 300 coastal islands, have no access to an electric grid. Even in heavily populated regions of China that do have a grid, there are severe shortages of electric power and limited access to the grid for village populations. To meet energy demands in rural China, renewable energy in the form of solar, wind, and biomass resources is being utilized as a cost-effective alternative to grid extension and to the use of diesel and gasoline generators. An Energy Efficiency and Renewable Energy Protocol Agreement was signed by the U.S. Department of Energy with the Chinese State Science and Technology Commission in Beijing in February 1995. Under this agreement, projects using photovoltaics for rural electrification are being conducted in Gansu Province in western China and in Inner Mongolia in northern China, providing the basis for much wider deployment and use of photovoltaics to meet the growing rural energy demands of China.

  4. Effect of von Karman Vortex Shedding on Regular and Open-slit V-gutter Stabilized Turbulent Premixed Flames

    DTIC Science & Technology

    2012-04-01

    Excerpt (fragmentary DTIC abstract): with vortex shedding carrying reaction zones, the flame structure changes dramatically for both the regular and the open-slit V-gutter: both flame lengths shrink, and large-scale disruptions occur downstream. Vortex shedding reduces the flame length, and qualitatively the open-slit V-gutter appears to be more sensitive to it than the regular V-gutter.

  5. The GILDA t-Infrastructure: grid training activities in Africa and future opportunities

    NASA Astrophysics Data System (ADS)

    Ardizzone, V.; Barbera, R.; Ciuffo, L.; Giorgio, E.

    2009-04-01

    Scientists, educators, and students from many parts of the world are not able to take advantage of ICT because the digital divide is growing and prevents less developed countries from exploiting its benefits. Instead of becoming more empowered and involved in worldwide developments, they are becoming increasingly marginalised as the world of education and science becomes increasingly Internet-dependent. For almost five years, the Grid INFN Laboratory for Dissemination Activities (GILDA) has spread awareness of Grid technology to a large audience, training new communities and fostering new organisations to provide resources. The knowledge dissemination process guided by the training activities is a key factor in ensuring that all users can fully understand the characteristics of the Grid services offered by large existing e-Infrastructures. GILDA is becoming a de facto standard among training infrastructures (t-Infrastructures) and has been adopted by many grid projects worldwide. In this contribution we report on the latest status of GILDA services and on the training activities recently carried out in sub-Saharan Africa (Malawi and South Africa). Particular care will be devoted to showing how GILDA can be "cloned" to satisfy both the education and the research demands of African organisations. The opportunities to benefit from GILDA in the framework of the EPIKH project, as well as the plans of the European Commission on grid training and education for the 2010-2011 calls of its 7th Framework Programme, will be presented and discussed.

  6. A VO-Driven Astronomical Data Grid in China

    NASA Astrophysics Data System (ADS)

    Cui, C.; He, B.; Yang, Y.; Zhao, Y.

    2010-12-01

    With the implementation of many ambitious observation projects, including LAMOST, FAST, and the Antarctic observatory at Dome A, observational astronomy in China is stepping into a brand new era with an emerging data avalanche. In the era of e-Science, both these cutting-edge projects and traditional astronomy research need much more powerful data management, sharing and interoperability. Based on the data-grid concept and taking advantage of IVOA interoperability technologies, China-VO is developing a VO-driven astronomical data grid environment to enable multi-wavelength science and large-database science. In this paper, the latest progress and data flow of LAMOST, the architecture of the data grid, and its support for the VO are discussed.

  7. Vacuum insulation of the high energy negative ion source for fusion application.

    PubMed

    Kojima, A; Hanada, M; Hilmi, A; Inoue, T; Watanabe, K; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Kobayashi, S; Yamano, Y; Grisham, L R

    2012-02-01

    Vacuum insulation of a large negative ion accelerator with multiple extraction apertures and acceleration grids for fusion application was experimentally examined and designed. In the experiment, vacuum insulation characteristics were investigated in the JT-60 negative ion source, which has >1000 apertures on a grid with a surface area of ∼2 m². The sustainable voltages varied with the square root of the gap length between the grids, and decreased with the number of apertures and with the surface area of the grids. Based on the obtained results, the JT-60SA (super advanced) negative ion source is designed to produce 22 A, 500 keV D⁻ ion beams for 100 s.

  8. Construction method of pre assembled unit of bolt sphere grid

    NASA Astrophysics Data System (ADS)

    Hu, L. W.; Guo, F. L.; Wang, J. L.; Bu, F. M.

    2018-03-01

    The traditional construction of a bolt sphere grid has many disadvantages, such as high cost, a large amount of work at height, and a long construction period. To make up for these shortcomings, this paper explores a new and practical construction method: local scaffolding is set up, the starting frame of the bolt sphere grid is installed on the scaffolding, pre-assembled units of the grid are assembled on the ground, and small hoisting equipment then lifts each pre-assembled unit into position for installation at height. Compared with the traditional installation method, this construction method is highly practical and economically efficient, and it has achieved good social and economic benefits.

  9. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing, at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  10. GridWise Standards Mapping Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosquet, Mia L.

    ''GridWise'' is a concept of how advanced communications, information and controls technology can transform the nation's energy system--across the spectrum from large-scale central generation to common consumer appliances and equipment--into a collaborative network, rich in the exchange of decision-making information and an abundance of market-based opportunities (Widergren and Bosquet 2003), carrying the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts related to GridWise--those which could ultimately contribute significantly to advancements toward the GridWise vision, and those which represent today's technological basis upon which this vision must build.

  11. New datasets for quantifying snow-vegetation-atmosphere interactions in boreal birch and conifer forests

    NASA Astrophysics Data System (ADS)

    Reid, T. D.; Essery, R.; Rutter, N.; Huntley, B.; Baxter, R.; Holden, R.; King, M.; Hancock, S.; Carle, J.

    2012-12-01

    Boreal forests exert a strong influence on weather and climate by modifying the surface energy and radiation balance. However, global climate and numerical weather prediction models use forest parameter values from simple look-up tables or maps derived from limited satellite data, on large grid scales. In reality, Arctic landscapes are inherently heterogeneous, with highly variable land cover types and structures on a variety of spatial scales. There is value in collecting detailed field data for different areas of vegetation cover, to assess the accuracy of large-scale assumptions. To address these issues, a consortium of researchers funded by the UK's Natural Environment Research Council has collected extensive data on radiation, meteorology, snow cover and canopy structure at two contrasting Arctic forest sites. The chosen study sites were an area of boreal birch forest near Abisko, Sweden, in March/April 2011 and mixed conifer forest at Sodankylä, Finland, in March/April 2012. At both sites, arrays comprising ten shortwave pyranometers and four longwave pyrgeometers were deployed for periods of up to 50 days, under forest plots of varying canopy structures and densities. In addition, downwelling longwave irradiance and global and diffuse shortwave irradiances were recorded at nearby open sites representing top-of-canopy conditions. Meteorological data were recorded at all sub-canopy and open sites using automatic weather stations. Over the same periods, tree skin temperatures were measured on selected trees using contact thermocouples, infrared thermocouples and thermal imagery. Canopy structure was accurately quantified through manual surveys, extensive hemispherical photography and terrestrial laser scans of every study plot. Sub-canopy snow depth and snow water equivalent were measured on fine-scale grids at each study plot. Regular site maintenance ensured a high-quality dataset covering the important Arctic spring period. The data have several applications, for example in forest ecology, canopy radiative transfer models, snow hydrological modelling, and land surface schemes, for a variety of canopy types from sparse, leafless birch to dense pine and spruce. The work also allows the comparison of modern, highly detailed methods such as laser scanning and thermal imagery with older, well-established data collection methods. By combining these data with airborne and satellite remote sensing data, snow-vegetation-atmosphere interactions could be estimated over a wide area of the heterogeneous boreal landscape. This could improve estimates of crucial parameters such as land surface albedo on the grid scales required for global or regional weather and climate models.

  12. Numerical modeling of flow and transport in the far-field of a generic nuclear waste repository in fractured crystalline rock using updated fracture continuum model

    NASA Astrophysics Data System (ADS)

    Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.

    2016-12-01

    Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long-term isolation. Characterization of the natural barrier system is an important component of this disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method of McKenna and Reeves (2005) has been updated to provide capabilities that enhance the representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended to include three different methods: (1) the Sequential Gaussian Simulation (SGSIM) method uses spatial correlation to generate fractures and define their properties for the FCM; (2) the ELLIPSIM method randomly generates a specified number of ellipses, each representing a single fracture, with properties defined by probability distributions; (3) direct conversion of discrete fracture network (DFN) output. Test simulations of flow and transport were conducted using ELLIPSIM and direct conversion of DFN output. The simulations used a 1 km x 1 km x 1 km model domain and a structured grid with blocks of 10 m x 10 m x 10 m, resulting in a total of 10^6 grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN code (Hammond et al., 2014) was used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method for modeling fractured crystalline rocks, and a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains. SAND2016-7509 A
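
    A 2-D toy version of the fracture-to-grid mapping (our sketch in Python; permeabilities, counts, and the length distribution are placeholders, and the real FCM handles 3-D anisotropic permeability tensors):

        import numpy as np

        rng = np.random.default_rng(7)
        n, k_matrix, k_frac = 100, 1e-18, 1e-12   # cells per side; m^2
        perm = np.full((n, n), k_matrix)          # background (matrix) rock

        for _ in range(200):                      # stochastic fracture set
            x0, y0 = rng.uniform(0, n, 2)         # fracture centre (cell units)
            length = rng.lognormal(np.log(8.0), 0.5)
            theta = rng.uniform(0.0, np.pi)       # orientation
            t = np.linspace(-0.5, 0.5, max(2, int(4 * length))) * length
            xs = np.clip((x0 + t * np.cos(theta)).astype(int), 0, n - 1)
            ys = np.clip((y0 + t * np.sin(theta)).astype(int), 0, n - 1)
            perm[ys, xs] = k_frac                 # stamp fracture onto grid

        print(f"fractured cells: {(perm == k_frac).mean():.1%}")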

  13. Using MERRA Gridded Innovations for Quantifying Uncertainties in Analysis Fields and Diagnosing Observing System Inhomogeneities

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo; Redder, Christopher

    2010-01-01

    MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies are typically restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms can produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets in not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations, we propose a technique for diagnosing bias among the observing systems, and document how these contextual biases have evolved during the satellite era covered by MERRA.
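
    One standard way to split innovation variance into observation and background parts, of the kind alluded to above (a synthetic, Hollingsworth-Lönnberg-style sketch in Python; MERRA's actual estimation setup is not reproduced): the covariance of innovations d = y - H(x_b) at nonzero station separation carries only the spatially correlated background error, so extrapolating it to zero separation separates the two variances.

        import numpy as np

        rng = np.random.default_rng(0)
        nx, n_t = 60, 4000
        x = np.linspace(0.0, 3000.0, nx)              # station positions (km)
        Lb, sig_b, sig_o = 500.0, 1.0, 0.8            # assumed "true" values
        C = sig_b**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / Lb)
        eb = rng.multivariate_normal(np.zeros(nx), C, size=n_t)
        d = eb + sig_o * rng.standard_normal((n_t, nx))   # innovations

        cov = d.T @ d / n_t
        r = np.abs(x[:, None] - x[None, :])
        off = (r > 0) & (r < 1000.0)                  # off-diagonal pairs
        slope, intercept = np.polyfit(r[off], np.log(cov[off]), 1)
        sigb2_hat = np.exp(intercept)                 # extrapolate to r -> 0
        sigo2_hat = np.mean(np.diag(cov)) - sigb2_hat
        print(f"sigma_b ~ {np.sqrt(sigb2_hat):.2f}, sigma_o ~ {np.sqrt(sigo2_hat):.2f}")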

  15. European Forest Cover During the Past 12,000 Years: A Palynological Reconstruction Based on Modern Analogs and Remote Sensing

    PubMed Central

    Zanon, Marco; Davis, Basil A. S.; Marquer, Laurent; Brewer, Simon; Kaplan, Jed O.

    2018-01-01

    Characterization of land cover change in the past is fundamental to understanding the evolution and present state of the Earth system, the amount of carbon and nutrient stocks in terrestrial ecosystems, and the role played by land-atmosphere interactions in influencing climate. The estimation of land cover changes using palynology is a mature field, as thousands of sites in Europe have been investigated over the last century. Nonetheless, a quantitative land cover reconstruction at a continental scale has been largely missing. Here, we present a series of maps detailing the evolution of European forest cover during the last 12,000 years. Our reconstructions are based on the Modern Analog Technique (MAT): a calibration dataset is built by coupling modern pollen samples with the corresponding satellite-based forest-cover data. Fossil reconstructions are then performed by assigning to every fossil sample the average forest cover of its closest modern analogs. The occurrence of fossil pollen assemblages with no counterparts in modern vegetation represents a known limit of analog-based methods. To lessen the influence of no-analog situations, pollen taxa were converted into plant functional types prior to running the MAT algorithm. We then interpolate site-specific reconstructions for each timeslice using a four-dimensional gridding procedure to create continuous gridded maps at a continental scale. The performance of the MAT is compared against methodologically independent forest-cover reconstructions produced using the REVEALS method. MAT and REVEALS estimates are in good agreement at a trend level most of the time, yet MAT regularly underestimates the occurrence of densely forested situations, requiring the application of a bias correction procedure. The calibrated MAT-based maps draw a coherent picture of the establishment of forests in Europe in the Early Holocene, with the greatest forest-cover fractions reconstructed between ∼8,500 and 6,000 calibrated years BP. This forest maximum is followed by a general decline in all parts of the continent, likely as a result of anthropogenic deforestation. The continuous spatial and temporal nature of our reconstruction, its continental coverage, and its gridded format make it suitable for climate, hydrological, and biogeochemical modeling, among other uses. PMID:29568303
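
    The analog step itself is compact enough to sketch (illustrative Python; the distance metric, the value of k, and the PFT conversion are simplified relative to the paper):

        import numpy as np

        def mat_forest_cover(fossil_pfts, modern_pfts, modern_cover, k=5):
            """Reconstruct forest cover for one fossil sample as the mean
            cover of its k closest modern analogs under squared-chord
            distance between plant-functional-type proportions."""
            d = ((np.sqrt(fossil_pfts) - np.sqrt(modern_pfts)) ** 2).sum(axis=1)
            analogs = np.argsort(d)[:k]
            return float(modern_cover[analogs].mean())

        # toy calibration set: 3 PFT proportions per sample + satellite cover
        modern = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.1, 0.2, 0.7]])
        cover = np.array([0.85, 0.45, 0.10])
        print(mat_forest_cover(np.array([0.6, 0.3, 0.1]), modern, cover, k=2))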

  17. Improving Grid Resilience through Informed Decision-making (IGRID)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, Laurie; Stamber, Kevin L.; Jeffers, Robert Fredric

    The transformation of the distribution grid from a centralized to a decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While these changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that, for the foreseeable future, will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies, and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation; these patterns can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.

  18. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

    Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care, and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data at large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques while exploiting the computational power offered by grid technology. A prototype of DicomGrid is under evaluation; it permits the submission of jobs into the EELA-2 grid infrastructure and offers a simple interface that requires minimal understanding of grid operation.

  19. Optimal response to attacks on the open science grids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunay, M.; Leyffer, S.; Linderoth, J. T.

    2011-01-01

    Cybersecurity is a growing concern, especially in open grids, where attack propagation is easy because of prevalent collaborations among thousands of users and hundreds of institutions. The collaboration rules that typically govern large science experiments, as well as social networks of scientists, span institutional security boundaries. A common concern is that this increased openness may allow malicious attackers to spread more readily around the grid. We consider how to respond optimally to attacks in open grid environments. To show how and why attacks spread more readily around the grid, we first discuss how collaborations manifest themselves in the grids and form the collaboration network graph, and how this graph affects the security threat levels of grid participants. We present two mixed-integer programming (MIP) models that find the optimal response to attacks in open grid environments and calculate the threat level associated with each grid participant. Given an attack scenario, our optimal response model aims to minimize the threat levels at unaffected participants while maximizing uninterrupted scientific production (continuing collaborations). By adopting some of the collaboration rules (e.g., suspending a collaboration or shutting down a site), the model finds the optimal response to subvert an attack scenario.
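
    An illustrative (not the authors') way to write such a response model as a MIP, where y_i = 1 marks participant i as exposed, x_e = 1 keeps collaboration edge e alive, and A is the set of attacked sites:

        \[
          \min_{x,y}\; \sum_{i} \theta_i\, y_i \;-\; \gamma \sum_{e \in E} w_e\, x_e
          \quad \text{s.t.}\quad
          y_j \ge y_i + x_{ij} - 1 \;\;\forall (i,j) \in E, \qquad
          y_a = 1 \;\;\forall a \in A, \qquad x, y \in \{0,1\},
        \]

    so that threat propagates along any collaboration that is kept, and the weights trade residual threat against uninterrupted scientific production.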

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Thomas M.; Boudreau, Marie-Claude; Helsen, Lieve

    Recent advances in information and communications technology (ICT) have initiated the development of a smart electrical grid and smart buildings. Buildings consume a large portion of total electricity production worldwide, and to fully develop a smart grid they must be integrated with that grid. Buildings can now be 'prosumers' on the grid (both producers and consumers), and the continued growth of distributed renewable energy generation is raising new challenges for grid stability over various time scales. Buildings can contribute to grid stability by managing their overall electrical demand in response to current conditions. Facility managers must balance demand response requests by grid operators with the energy needed to maintain smooth building operations. For example, maintaining thermal comfort within an occupied building requires energy, and thus an optimized solution balancing energy use with indoor environmental quality (adequate thermal comfort, lighting, etc.) is needed. Successful integration of buildings and their systems with the grid also requires interoperable data exchange. However, the adoption and integration of newer control and communication technologies into buildings can be problematic with older legacy HVAC and building control systems. Public policy and economic structures have not kept up with the technical developments that have given rise to the budding smart grid, and further developments are needed in both technical and non-technical areas.

  1. Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.

  2. Organic light-emitting diodes using novel embedded Al grid transparent electrodes

    NASA Astrophysics Data System (ADS)

    Peng, Cuiyun; Chen, Changbo; Guo, Kunping; Tian, Zhenghao; Zhu, Wenqing; Xu, Tao; Wei, Bin

    2017-03-01

    This work demonstrates a novel transparent electrode using embedded Al grids fabricated by a simple and cost-effective approach based on photolithography and wet etching. The optical and electrical properties of the Al grids versus grid geometry have been systematically investigated. The Al grids exhibited a low sheet resistance of 70 Ω □⁻¹ and a light transmission of 69% at 550 nm, with advantages in terms of processing conditions and material cost as well as the potential for large-scale fabrication. An indium-tin-oxide-free green organic light-emitting diode (OLED) based on the Al grid transparent electrode was demonstrated, yielding a power efficiency >15 lm W⁻¹ and a current efficiency >39 cd A⁻¹ at a brightness of 2396 cd m⁻². Furthermore, reduced efficiency roll-off and higher brightness were achieved compared with the ITO-based device.

  3. Wildfire spread, hazard and exposure metric raster grids for central Catalonia.

    PubMed

    Alcasena, Fermín J; Ager, Alan A; Salis, Michele; Day, Michelle A; Vega-Garcia, Cristina

    2018-04-01

    We provide 40 m resolution wildfire spread, hazard and exposure metric raster grids for the 0.13 million ha, fire-prone Bages County in central Catalonia (northeastern Spain), corresponding to node influence grid (NIG), crown fraction burned (CFB) and fire transmission to residential houses (TR). Fire spread and behavior data (NIG, CFB and fire perimeters) were generated with fire simulation modeling considering extreme fire weather conditions of the wildfire season (97th percentile). Moreover, CFB was also generated for prescribed fire (Rx) mild weather conditions. The TR smoothed grid was obtained with a geospatial analysis considering large fire perimeters and individual residential structures located within the study area. We make these raster grids available to assist in the optimization of wildfire risk management plans within the study area and to help mitigate potential losses from catastrophic events.

  4. Turbulence decay downstream of an active grid

    NASA Astrophysics Data System (ADS)

    Bewley, Gregory; Bodenschatz, Eberhard

    2015-11-01

    A grid in a wind tunnel stirs up turbulence that has a certain large-scale structure. The moving parts in a so-called "active grid" can be programmed to produce different structures. We use a special active grid in which each of 129 paddles has its own position-controlled servomotor that can move independently of the others. We observe, among other things, that the anisotropy in the amplitude of the velocity fluctuations and in the correlation lengths can be set and varied with an algorithm that oscillates the paddles in a specified way. The variation in the anisotropies that we observe can be explained by our earlier analysis of anisotropic "soccer ball" turbulence (Bewley, Chang and Bodenschatz 2012, Phys. Fluids). We quantify the influence of this variation in structure on the downstream evolution of the turbulence.

  5. Integration of HTS Cables in the Future Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future transmission grid will carry electrical power generated by decentralized renewable sources together with large-scale generation units located in the coastal region, so electrical power will have to be distributed and transmitted over longer distances from generation to end user. Potential grid issues such as the amount of distributed power, grid stability, and electrical loss dissipation merit particular attention. High-temperature superconductors (HTS) can play an important role in solving these grid problems. The advantages of integrating HTS components at transmission voltages are numerous: more transmittable power together with lower emissions, intrinsic fault-current-limiting capability, lower AC loss, better control of power flow, reduced footprint, less magnetic field emission, etc. The main obstacle at present is the relatively high price of HTS conductor. However, as the price goes down, initial market penetration of several HTS components (e.g., cables and fault current limiters) is expected by 2015. In the full paper we present selected ways to integrate EHV AC HTS cables, depending on a particular future grid scenario in the Netherlands.

  6. Mathematical modeling of polymer flooding using the unstructured Voronoi grid

    NASA Astrophysics Data System (ADS)

    Kireev, T. F.; Bulgakova, G. T.; Khatmullin, I. F.

    2017-12-01

    Effective recovery of unconventional oil reserves necessitates the development of enhanced oil recovery techniques such as polymer flooding. This study investigates a model of polymer flooding that includes the effects of adsorption and water salinity. The model takes into account six components, extending the classic black oil model: polymer, salt, water, dead oil, dry gas and dissolved gas. The problem is solved by the finite volume method on an unstructured Voronoi grid using a fully implicit scheme and Newton's method. To compare several different grid configurations, numerical simulation of polymer flooding is performed. The oil rates obtained on a hexagonal, locally refined Voronoi grid are shown to be more accurate than those obtained on a rectangular grid with the same number of cells; this is due to the higher solution accuracy near the wells afforded by the local grid refinement. Minimization of the grid orientation effect by the hexagonal pattern is also demonstrated. However, in the inter-well regions with large Voronoi cells, the flood front tends to flatten and the moment of water breakthrough is smoothed.
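
    A sketch of building such a hexagonal, locally refined Voronoi (PEBI) grid with SciPy (illustrative spacings and well position; the paper's solver then assembles the finite-volume equations on these cells):

        import numpy as np
        from scipy.spatial import Voronoi

        def hex_seeds(nx, ny, d):
            """Seed points on a hexagonal lattice with spacing d (metres)."""
            xs, ys = np.meshgrid(np.arange(nx) * d,
                                 np.arange(ny) * d * np.sqrt(3.0) / 2.0)
            xs[1::2, :] += d / 2.0                 # offset every other row
            return np.column_stack([xs.ravel(), ys.ravel()])

        seeds = hex_seeds(20, 20, 50.0)            # ~50 m background spacing
        well = np.array([500.0, 430.0])
        ang = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
        ring = well + 15.0 * np.column_stack([np.cos(ang), np.sin(ang)])
        seeds = np.vstack([seeds, ring, well])     # local refinement at well

        vor = Voronoi(seeds)                       # cell geometry for the solver
        print(len(vor.point_region), "cells")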

  7. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this extensive network connectivity, a smart grid system faces potential security threats. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme that uses a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and of the existing common vulnerability scoring system clearly show that network connectivity must be considered for better-optimized vulnerability quantification. PMID:25152923
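
    The flavor of route-based scoring can be sketched in a few lines (an illustrative combination rule, not the exact AVQS formula from the paper):

        # per-hop scores in [0, 10] are read as compromise likelihoods and
        # combined multiplicatively along a candidate attack route
        def route_vulnerability(hop_scores):
            p = 1.0
            for s in hop_scores:
                p *= s / 10.0
            return 10.0 * p

        # a hypothetical 3-hop route through the advanced metering infrastructure
        print(route_vulnerability([7.5, 6.0, 9.0]))   # -> 4.05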

  8. Energy Systems Integration News - November 2016 | Energy Systems

    Science.gov Websites

    Newsletter excerpt (fragmentary): an NREL study finds that integrated utility control can improve grid voltage regulation. When a large solar photovoltaic (PV) system is connected to the electric grid and more PV power is fed into the line than is being used, voltage control issues can arise; the study examines a centralized control system as a way to address them.

  9. Active power control of solar PV generation for large interconnection frequency regulation and oscillation damping

    DOE PAGES

    Liu, Yong; Zhu, Lin; Zhan, Lingwei; ...

    2015-06-23

    Because of their zero greenhouse gas emissions and decreasing manufacturing cost, solar photovoltaic (PV) generation is expected to account for a significant portion of the future power grid generation portfolio. Because it is connected to the power grid indirectly, via power electronic devices, a solar PV generation system is fully decoupled from the grid, which influences the dynamic characteristics of the interconnected power system. In this study, the impact of solar PV penetration on large interconnected power system frequency response and inter-area oscillation is evaluated, taking the United States Eastern Interconnection (EI) as an example. Furthermore, based on the constructed solar PV electrical control model with additional active power control loops, the potential contributions of solar PV generation to power system frequency regulation and oscillation damping are examined. The advantages of solar PV frequency support over that of wind generators are also discussed. Finally, simulation results demonstrate that solar PV generation can effectively act as an 'actuator' in alleviating the negative impacts it brings about.
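
    A minimal sketch of the kind of active-power control loop involved (illustrative droop constant and deadband in Python; the paper's electrical control model is more detailed):

        def pv_power_command(f_hz, p_max_mw, reserve_mw,
                             f0=60.0, droop=0.05, deadband=0.036):
            """Frequency-droop command for a curtailed PV plant: hold some
            headroom, release it when frequency sags, curtail when it rises."""
            set_point = p_max_mw - reserve_mw
            df = f_hz - f0
            if abs(df) <= deadband:
                return set_point
            delta = -(df / (droop * f0)) * p_max_mw   # proportional droop
            return min(max(set_point + delta, 0.0), p_max_mw)

        print(pv_power_command(59.90, 100.0, 10.0))   # under-frequency: > 90 MW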

  10. A secure and efficiently searchable health information architecture.

    PubMed

    Yasnoff, William A

    2016-06-01

    Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be searched sequentially, since each record must be individually accessed and decrypted. To achieve reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary search speed.
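
    Because every record is encrypted and tested independently, the per-record workload described above parallelises trivially across shards; at 10 million records on 500 servers, each server scans about 20,000 records. The sketch below fans a query out over shards with a thread pool; decrypt and matches are hypothetical stand-ins for the architecture's actual cryptography and query logic.

        from concurrent.futures import ThreadPoolExecutor

        def search_shard(shard, decrypt, matches, query):
            """Sequentially decrypt and test each (record_id, blob) pair in one shard."""
            return [rid for rid, blob in shard if matches(decrypt(blob), query)]

        def parallel_search(shards, decrypt, matches, query, workers=500):
            # One task per shard, mirroring one on-demand virtual server per shard.
            with ThreadPoolExecutor(max_workers=workers) as pool:
                hits = pool.map(lambda s: search_shard(s, decrypt, matches, query), shards)
            return [rid for shard_hits in hits for rid in shard_hits]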

  11. Current Grid operation and future role of the Grid

    NASA Astrophysics Data System (ADS)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place, the Grid will remain limited to HEP; if, however, the current multitude of Grid-like systems converges to a generic, modular and extensible solution, the Grid will become true to its name.

  12. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids, allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment: based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with a comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework for accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
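
    The core subgrid relation, evaluating a coarse cell's wet volume directly from its high-resolution bathymetry for a given free-surface elevation, can be sketched in a few lines. This is a schematic of the idea only, not the cited Casulli formulation.

        import numpy as np

        def wet_volume(eta, z_bottom, sub_area):
            """Wet volume of one coarse cell: free-surface elevation eta, array of
            subgrid bottom elevations z_bottom, horizontal area sub_area per subpixel.
            Cells may be wet, partially wet, or dry; no drying threshold is needed."""
            depth = np.maximum(eta - z_bottom, 0.0)  # per-subpixel water depth
            return depth.sum() * sub_area

        # Example: a 10 x 10 subgrid of 2 m pixels inside one coarse cell.
        z = np.random.uniform(-3.0, 1.0, size=(10, 10))
        print(wet_volume(eta=0.5, z_bottom=z, sub_area=4.0))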

  13. Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling

    NASA Astrophysics Data System (ADS)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for height grids of different cell spacings at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large-scale Topographic Base map (BGT, map scale 1:1.000) and the national height model of the Netherlands, AHN2 (10 points per square meter on average). Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows that water barriers are better preserved with the new method. This research confirms the idea that topographical information, mainly the boundary locations and object classes, can enrich the height grid for this hydrological application.
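
    A minimal sketch of a label-aware coarsening step of the kind described above: plain averaging would erase narrow water barriers, so blocks containing barrier-class cells take the maximum barrier height instead. The block-wise rule and class codes are simplifying assumptions; the paper's neighbour-based procedure is more elaborate.

        import numpy as np

        def generalise(height, labels, block, barrier_classes):
            """Coarsen a height grid by integer factor `block`; preserve barriers by
            taking their max height, otherwise average the block."""
            n, m = height.shape[0] // block, height.shape[1] // block
            out = np.empty((n, m))
            for i in range(n):
                for j in range(m):
                    h = height[i*block:(i+1)*block, j*block:(j+1)*block]
                    l = labels[i*block:(i+1)*block, j*block:(j+1)*block]
                    mask = np.isin(l, barrier_classes)
                    out[i, j] = h[mask].max() if mask.any() else h.mean()
            return out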

  14. Smooth information flow in temperature climate network reflects mass transport

    NASA Astrophysics Data System (ADS)

    Hlinka, Jaroslav; Jajcay, Nikola; Hartman, David; Paluš, Milan

    2017-03-01

    A directed climate network is constructed by Granger causality analysis of air temperature time series from a regular grid covering the whole Earth. Using a winner-takes-all network thresholding approach, a structure of smooth information flow is revealed that was hidden from previous studies. The relevance of this observation is confirmed by comparison with the air mass transfer defined by the wind field. Their close relation illustrates that although the information transferred due to causal influence is not a physical quantity, the information transfer is tied to the transfer of mass and energy.
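
    A schematic of the winner-takes-all thresholding step: from a matrix of pairwise Granger-causality strengths, keep only the strongest link per grid node. Whether the strongest outgoing or incoming link is retained, and how strengths are estimated, are assumptions here.

        import numpy as np

        def winner_takes_all(strength):
            """strength[i, j]: Granger-causality strength from grid node i to node j.
            Returns a boolean adjacency keeping each node's strongest outgoing link."""
            s = strength.astype(float).copy()
            np.fill_diagonal(s, -np.inf)  # ignore self-links
            adj = np.zeros_like(s, dtype=bool)
            adj[np.arange(s.shape[0]), s.argmax(axis=1)] = True
            return adj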

  15. Analysis models for the estimation of oceanic fields

    NASA Technical Reports Server (NTRS)

    Carter, E. F.; Robinson, A. R.

    1987-01-01

    A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include analysis, i.e., the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimation of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
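
    For concreteness, the textbook Gauss-Markov (objective analysis) form underlying such statistically optimal estimates can be written as follows; the notation is generic and not necessarily that of the cited work. Given observations $\boldsymbol{\theta}_o$ at irregular points, the gridded estimate at location $\mathbf{x}$ and its error variance are

        \hat{\theta}(\mathbf{x}) = \mathbf{c}_{xo}^{\mathsf{T}}\,\mathbf{C}_{oo}^{-1}\,\boldsymbol{\theta}_o,
        \qquad
        \sigma^2(\mathbf{x}) = C_{xx} - \mathbf{c}_{xo}^{\mathsf{T}}\,\mathbf{C}_{oo}^{-1}\,\mathbf{c}_{xo},

    where $\mathbf{C}_{oo}$ is the observation-observation covariance matrix (including measurement noise) and $\mathbf{c}_{xo}$ is the covariance between the estimation point and the observations.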

  16. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that “Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  17. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates a large amount of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequently re-processing it. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems, covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms, focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions that leverage the benefits of non-relational databases to obtain more scalable and flexible frameworks.
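
    As a toy illustration of recasting the availability computation in map-reduce style: the map phase emits (service, pass/fail) pairs from raw probe results, and the reduce phase computes the passing fraction per service. The record layout is hypothetical.

        from collections import defaultdict

        def map_phase(records):
            """records: iterable of dicts like {'service': ..., 'status': 'OK'/...}."""
            for r in records:
                yield r["service"], 1 if r["status"] == "OK" else 0

        def reduce_phase(pairs):
            ok, total = defaultdict(int), defaultdict(int)
            for service, passed in pairs:
                ok[service] += passed
                total[service] += 1
            return {s: ok[s] / total[s] for s in total}  # availability per service

        probes = [{"service": "CE", "status": "OK"},
                  {"service": "CE", "status": "FAIL"},
                  {"service": "SE", "status": "OK"}]
        print(reduce_phase(map_phase(probes)))  # {'CE': 0.5, 'SE': 1.0}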

  18. A New Approach to Modeling Jupiter's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Katoh, Y.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2017-12-01

    The scales in planetary magnetospheres range from tens of planetary radii to kilometers. For a number of years we have studied the magnetospheres of Jupiter and Saturn by using 3-dimensional magnetohydrodynamic (MHD) simulations. However, we have not been able to reach even the limits of the MHD approximation because of the large amount of computer resources required. Recently, thanks to progress in supercomputer systems, we have obtained the capability to simulate Jupiter's magnetosphere with 1000 times the number of grid points used in our previous simulations. This has allowed us to combine the high-resolution global simulation with a micro-scale simulation of the Jovian magnetosphere. In particular, we can combine a hybrid (kinetic ions and fluid electrons) simulation with the MHD simulation. In addition, the new capability enables us to run multi-parameter survey simulations of the Jupiter-solar wind system. In this study we performed a high-resolution simulation of the Jovian magnetosphere to connect with the hybrid simulation, and lower-resolution simulations under various solar wind conditions to compare with Hisaki and Juno observations. In the high-resolution simulation we used a regular Cartesian grid with 0.15 RJ grid spacing and placed the inner boundary at 7 RJ. With these simulation settings, we provide the magnetic field out to around 20 RJ from Jupiter as a background field for the hybrid simulation. For the first time we have been able to resolve Kelvin-Helmholtz waves on the magnetopause. We have investigated solar wind dynamic pressures between 0.01 and 0.09 nPa for a number of IMF values. The raw data from these simulations are available for registered users to download. We have compared the results of these simulations with Hisaki auroral observations.

  19. Creative computing with Landlab: an open-source toolkit for building, coupling, and exploring two-dimensional numerical models of Earth-surface dynamics

    NASA Astrophysics Data System (ADS)

    Hobley, Daniel E. J.; Adams, Jordan M.; Nudurupati, Sai Siddhartha; Hutton, Eric W. H.; Gasparini, Nicole M.; Istanbulluoglu, Erkan; Tucker, Gregory E.

    2017-01-01

    The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures - including both regular and irregular grids - to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.
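
    To make the component-coupling pattern concrete, here is a minimal sketch in Landlab's documented component style, coupling flow routing to stream-power erosion on a regular raster grid; grid dimensions and parameter values are illustrative only.

        import numpy as np
        from landlab import RasterModelGrid
        from landlab.components import FlowAccumulator, FastscapeEroder

        grid = RasterModelGrid((50, 80), xy_spacing=100.0)  # regular grid, 100 m spacing
        z = grid.add_zeros("topographic__elevation", at="node")
        z += np.random.rand(z.size)  # small random relief to seed drainage

        fa = FlowAccumulator(grid, flow_director="D8")
        sp = FastscapeEroder(grid, K_sp=1e-5, m_sp=0.5, n_sp=1.0)

        dt = 1000.0  # years
        for _ in range(200):
            z[grid.core_nodes] += 0.001 * dt  # uniform uplift of core nodes
            fa.run_one_step()                 # route flow, accumulate drainage area
            sp.run_one_step(dt)               # erode via the stream power law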

  20. The importance of topography controlled sub-grid process heterogeneity in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.

    2015-12-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics; (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty; and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure, the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 % and 19 % respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representation were achieved for low-flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints, in combination with the transfer-function-based regularization approach of mHM, can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for the low-flow statistics.
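
    The overall performance measure used above can be illustrated as follows, assuming each hydrological signature is scored on [0, 1] with 1 optimal; the study's exact normalisation may differ.

        import numpy as np

        def distance_to_optimum(signature_scores):
            """Euclidean distance of a model run from the ideal point where every
            signature scores 1; smaller is better."""
            s = np.asarray(signature_scores, dtype=float)
            return float(np.sqrt(((1.0 - s) ** 2).sum()))

        print(distance_to_optimum([0.90, 0.70, 0.95]))  # one run, three signatures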
