NASA Astrophysics Data System (ADS)
Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.
2018-07-01
Traditional radio interferometric correlators produce samples of the true uv-distribution on a grid that is regular in time and frequency, by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregular-gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which grows with distance from the image phase centre. The effect is often referred to as `decorrelation' in uv-space; its equivalent in the source domain is `smearing'. This work discusses and implements a regular-gridded sampling scheme in uv-space (baseline-dependent sampling), together with windowing, that allows for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregular-gridded sampling in time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when baseline-dependent sampling and windowing are applied. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
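The core idea, choosing per-baseline averaging intervals so that every visibility sample spans the same arc length in uv-space, fits in a few lines. The sketch below is illustrative only: the parameter names, the Earth-rotation bound on uv-track speed, and the clipping limits are our assumptions, not the authors' implementation.

```python
import numpy as np

OMEGA_E = 7.292e-5   # Earth's sidereal rotation rate [rad/s]
C = 299792458.0      # speed of light [m/s]

def bda_intervals(baseline_m, freq_hz, max_du=50.0, t_min=0.5, t_max=60.0):
    """Per-baseline averaging times for baseline-dependent sampling.

    The uv-track speed of a baseline of length |b| is bounded by
    omega_E * |b| / lambda, so dt = max_du / (omega_E * |b| / lambda)
    makes every averaged sample cover (at most) the same uv arc max_du
    (in wavelengths) on every baseline, keeping decorrelation constant.
    """
    lam = C / freq_hz
    uv_speed = OMEGA_E * baseline_m / lam    # upper bound [wavelengths/s]
    dt = max_du / uv_speed                   # baseline-dependent interval
    return np.clip(dt, t_min, t_max)         # respect correlator limits

# short baselines may be averaged far longer than long ones
print(bda_intervals(np.array([100.0, 1e3, 8e3]), freq_hz=1.4e9))
```

Short baselines hit the upper clip and compress heavily; only the longest baselines need fine time sampling, which is where the data-compression gain comes from.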
Detector shape in hexagonal sampling grids
NASA Astrophysics Data System (ADS)
Baronti, Stefano; Capanni, Annalisa; Romoli, Andrea; Santurri, Leonardo; Vitulli, Raffaele
2001-12-01
Recent improvements in CCD technology make hexagonal sampling attractive for practical applications and bring new interest to this topic. In the following, the performance of hexagonal sampling is analyzed under general assumptions and compared with the performance of conventional rectangular sampling. This analysis will take into account both the lattice form (square, rectangular, hexagonal, and regular hexagonal) and the pixel shape. The analyzed hexagonal grid will not be based a priori on a regular hexagon tessellation, i.e., no constraints will be made on the ratio between the sampling frequencies in the two spatial directions. By assuming an elliptic support for the spectrum of the signal being sampled, sampling conditions will be expressed for a generic hexagonal sampling grid, and a comparison with the well-known sampling conditions for a comparable rectangular lattice will be performed. Further, by considering for the sake of clarity a spectrum with a circular support, the comparison will be performed under the assumption of the same number of pixels per unit of surface, and the particular case of the regular hexagonal sampling grid will also be considered. The regular hexagonal lattice with a regular hexagonal sensitivity shape of the detector elements will result as the best trade-off between the proposed sampling requirements. Concerning the detector shape, the hexagonal is more advantageous than the rectangular. To show this, a figure of merit is defined which takes into account that the MTF (modulation transfer function) of a hexagonal detector is not separable, unlike that of a rectangular detector. As a final result, octagonal-shaped detectors are compared to those with rectangular and hexagonal shapes under the two hypotheses of equal and ideal fill factor, respectively.
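The classic quantitative payoff of hexagonal sampling for circularly band-limited signals can be checked with a few lines of arithmetic. The sketch below is a standard textbook calculation, not code from the paper: the spatial sampling density equals the area of the spectral replication lattice's unit cell, and packing circular spectra of radius W hexagonally instead of on a square lattice needs about 13.4% fewer samples.

```python
import numpy as np

W = 1.0  # radius of the circular spectral support

# Columns of U are the spectral replication lattice vectors; non-overlap of
# the circular spectra requires nearest-neighbour replica distance >= 2W.
U_rect = np.array([[2 * W, 0.0],
                   [0.0, 2 * W]])             # square packing
U_hex = np.array([[2 * W, W],
                  [0.0, np.sqrt(3) * W]])     # hexagonal packing

# Samples per unit area in the spatial domain = |det U| (reciprocal lattices).
d_rect = abs(np.linalg.det(U_rect))
d_hex = abs(np.linalg.det(U_hex))
print(f"rectangular: {d_rect:.3f}, hexagonal: {d_hex:.3f} samples/area")
print(f"hexagonal saving: {100 * (1 - d_hex / d_rect):.1f}%")   # ~13.4%
```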
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular-grid stratification in the case of a spatially structured process. The distribution of varroa mites on the sticky board being observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n = 20,000) which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level.
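A minimal simulation conveys the estimator's logic: count mites inside circles centred on a regular grid and scale by the sampled area fraction. Everything below (board size, grid geometry, the clustered mite distribution) is an illustrative assumption, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
LX, LY = 40.0, 25.0                       # board dimensions [cm]

# simulate a spatially structured mite fall: a few clusters between frames
centers = rng.uniform([5, 3], [35, 22], size=(6, 2))
mites = np.vstack([c + rng.normal(0, 1.5, size=(150, 2)) for c in centers])

# regular-grid stratification: circles of radius r on an 8 x 5 grid
gx, gy, r = 8, 5, 1.2
cx = (np.arange(gx) + 0.5) * LX / gx
cy = (np.arange(gy) + 0.5) * LY / gy
XX, YY = np.meshgrid(cx, cy)

# count mites falling inside any sampling circle (circles do not overlap)
d2 = (mites[:, 0, None] - XX.ravel())**2 + (mites[:, 1, None] - YY.ravel())**2
counted = (d2.min(axis=1) <= r**2).sum()

sampled_frac = gx * gy * np.pi * r**2 / (LX * LY)
print(f"true: {len(mites)}, estimate: {counted / sampled_frac:.0f}")
```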
Optimizing "self-wicking" nanowire grids.
Wei, Hui; Dandey, Venkata P; Zhang, Zhening; Raczkowski, Ashleigh; Rice, William J; Carragher, Bridget; Potter, Clinton S
2018-05-01
We have developed a self-blotting TEM grid for use with a novel instrument for vitrifying samples for cryo-electron microscopy (cryoEM). Nanowires are grown on the copper surface of the grid using a simple chemical reaction, and the opposite, smooth side is used to adhere to a holey sample substrate support, for example carbon or gold. When small volumes of sample are applied to the nanowire grids, the wires effectively act as blotting paper to rapidly wick away the liquid, leaving behind a thin film. In this technical note, we present a detailed description of how we make these grids using a variety of substrates fenestrated with either lacey or regularly spaced holes. We explain how we characterize the quality of the grids, and we describe their behavior under a variety of conditions.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate the accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
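For readers new to sparse grids, the sketch below shows the Smolyak combination technique in its simplest form, estimating a moment under Gaussian inputs. It is a generic illustration only: it uses non-nested Gauss-Hermite rules with m_i = i points per level rather than the paper's nested Kronrod-Patterson-Hermite formulas, and it omits dimension adaptivity.

```python
import numpy as np
from itertools import product
from math import comb

def gauss_hermite_prob(n):
    """n-point rule for E[f(X)] with X ~ N(0, 1)."""
    x, w = np.polynomial.hermite.hermgauss(n)   # physicists' weight e^{-x^2}
    return np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def smolyak_mean(f, d, q):
    """Smolyak sparse quadrature A(q, d), q >= d, with m_i = i points."""
    total = 0.0
    for k in product(range(1, q + 1), repeat=d):
        s = sum(k)
        if not (q - d + 1 <= s <= q):
            continue                             # combination technique
        coef = (-1) ** (q - s) * comb(d - 1, q - s)
        rules = [gauss_hermite_prob(ki) for ki in k]
        for idx in product(*(range(ki) for ki in k)):
            pt = np.array([rules[j][0][i] for j, i in enumerate(idx)])
            wt = np.prod([rules[j][1][i] for j, i in enumerate(idx)])
            total += coef * wt * f(pt)
    return total

f = lambda x: np.exp(x.sum())       # for d = 2, E[f] = e ~ 2.71828
for q in range(2, 7):
    print(q, smolyak_mean(f, d=2, q=q))
```

Nested rules (as in the paper) reuse function evaluations between levels; the combination structure is the same.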
Schnek: A C++ library for the development of parallel simulation codes on regular grids
NASA Astrophysics Data System (ADS)
Schmitz, Holger
2018-05-01
A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
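Schnek itself is C++, but the ghost-cell exchange it wraps is easy to show in miniature. The mpi4py sketch below is our illustration of the pattern, not Schnek's API; it trades one ghost layer between neighbouring ranks of a 1-D domain decomposition.

```python
# Run with e.g.: mpiexec -n 4 python ghost_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc = 8                                  # interior cells per process
u = np.full(nloc + 2, float(rank))        # one ghost cell at each end

left = rank - 1 if rank > 0 else MPI.PROC_NULL     # no periodic wrap
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# send rightmost interior cell right while receiving left ghost from left
comm.Sendrecv(sendbuf=u[nloc:nloc + 1], dest=right,
              recvbuf=u[0:1], source=left)
# send leftmost interior cell left while receiving right ghost from right
comm.Sendrecv(sendbuf=u[1:2], dest=left,
              recvbuf=u[nloc + 1:], source=right)

print(f"rank {rank}: ghosts = ({u[0]}, {u[-1]})")
```

Using MPI.PROC_NULL turns the edge ranks' sends and receives into no-ops, so the boundary needs no special-casing.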
Hexagonal Pixels and Indexing Scheme for Binary Images
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
2004-01-01
A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid, and an associated tree-structured pixel-indexing scheme keyed to the level of resolution, have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, sample again onto the hexagonal grid, and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, rotate the rectangular-grid image by ±45° before sampling, to perform checking for line segments at angular intervals of 15°.
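A bare-bones version of the resampling step described above can be sketched as follows: generate hexagonal-lattice sample centres at one third of the rectangular cell width (the rule of thumb quoted in the text) and take the nearest rectangular pixel's binary value at each centre. The lattice construction and nearest-pixel lookup are our illustrative choices, not the scheme's specification.

```python
import numpy as np

def hex_resample(img, w_rect=1.0):
    """Resample a binary image on square pixels onto a hexagonal grid
    whose cell width is w_rect / 3 (fine enough to keep all detail)."""
    ny, nx = img.shape
    h = w_rect / 3.0                       # hexagonal cell width
    dy = h * np.sqrt(3) / 2                # row pitch of the hex lattice
    pts, y, row = [], 0.0, 0
    while y < ny * w_rect:
        x0 = (row % 2) * h / 2             # stagger alternate rows
        xs = np.arange(x0, nx * w_rect, h)
        pts.append(np.column_stack([xs, np.full_like(xs, y)]))
        y += dy
        row += 1
    pts = np.vstack(pts)

    ix = np.clip((pts[:, 0] / w_rect).astype(int), 0, nx - 1)
    iy = np.clip((pts[:, 1] / w_rect).astype(int), 0, ny - 1)
    return pts, img[iy, ix]                # nearest-pixel binary values

img = np.random.default_rng(0).random((32, 32)) > 0.5
centres, values = hex_resample(img)
print(len(centres), img.size)   # roughly 10x as many hex samples as pixels
```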
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, most conventional denoising methods presuppose that the noisy data are sampled on a uniform grid, making them unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed on every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) is achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient inversion algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation on non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries onto areas with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitude-longitude portion and overall smoothness of the grid-cell size transition. The second method addresses more modern and advanced grid-design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids can potentially achieve alignment of grid lines with large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the grids are orthogonal and curvilinear, they can be readily utilized by the majority of ocean general circulation models that are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where a complex land-sea distribution is present.
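As a flavour of how a conformal map turns a regular computational grid into an orthogonal curvilinear physical grid, the sketch below images a uniform (σ, τ) grid under the classical bipolar-coordinate map, which relocates the two coordinate poles to prescribed foci (the same trick dipolar ocean grids use to move poles onto land). This is a textbook map, not the authors' Schwarz-Christoffel construction.

```python
import numpy as np

def bipolar_grid(a=1.0, n_sigma=64, n_tau=33, tau_max=3.0):
    """Orthogonal curvilinear grid from the conformal bipolar map.

    x + iy = a * (sinh(tau) + i*sin(sigma)) / (cosh(tau) - cos(sigma));
    the poles of the grid sit at the foci (+/- a, 0), and conformality
    guarantees the image grid lines intersect at right angles.
    """
    sigma = np.linspace(0.1, 2 * np.pi - 0.1, n_sigma)  # avoid sigma = 0
    tau = np.linspace(-tau_max, tau_max, n_tau)
    S, T = np.meshgrid(sigma, tau)
    denom = np.cosh(T) - np.cos(S)
    return a * np.sinh(T) / denom, a * np.sin(S) / denom

X, Y = bipolar_grid()
print(X.shape, Y.shape)   # grid lines are circles through/around the foci
```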
The design of the second German national forest inventory
Gerald Kandler
2009-01-01
In Germany, a sample-based national forest inventory (NFI) took place for the first time from 1986 to 1990 (in West Germany only); the second one took place from 2001 to 2002. The inventory design is based on a systematic distribution of tracts on regular grids of regionally differing width. The primary sampling unit is a quadrangular tract with sides of 150 m. The...
Incompressible flow simulations on regularized moving meshfree grids
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2017-11-01
A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
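The grid-regularization idea, relaxing compressed pseudosprings with an explicit shift, can be sketched compactly. The neighbour-search radius, step size, and iteration count below are our assumptions, not the authors' scheme.

```python
import numpy as np
from scipy.spatial import cKDTree

def relax(nodes, r0, eta=0.2, n_iter=20):
    """Explicitly relax compressed pseudosprings between meshfree nodes.

    A spring between nodes closer than r0 pushes them apart with a force
    proportional to its compression (r0 - d); eta is the step size.
    """
    nodes = nodes.copy()
    for _ in range(n_iter):
        tree = cKDTree(nodes)
        shift = np.zeros_like(nodes)
        for i, j in tree.query_pairs(r0):
            dvec = nodes[i] - nodes[j]
            d = np.linalg.norm(dvec)
            if d > 1e-12:
                push = eta * (r0 - d) * dvec / d
                shift[i] += push
                shift[j] -= push
        nodes += shift
    return nodes

rng = np.random.default_rng(0)
pts = rng.random((400, 2))                 # clumpy random cloud
print(relax(pts, r0=0.05).std(axis=0))     # spacing becomes more even
```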
GLOBAL GRIDS FROM RECURSIVE DIAMOND SUBDIVISIONS OF THE SURFACE OF AN OCTAHEDRON OR ICOSAHEDRON
In recent years a number of methods have been developed for subdividing the surface of the earth to meet the needs of applications in dynamic modeling, survey sampling, and information storage and display. One set of methods uses the surfaces of Platonic solids, or regular polyhe...
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
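The phenomenon is easy to reproduce in one dimension: solve -u'' = f with the standard three-point stencil on randomly perturbed grids and watch the truncation error degrade to first order while the solution error stays second order. The model problem, perturbation size, and stencil below are our illustrative choices, not the authors' test cases.

```python
import numpy as np

def errors(n, rng):
    x = np.linspace(0.0, 1.0, n + 1)
    x[1:-1] += 0.3 * rng.uniform(-1, 1, n - 1) / n     # irregular grid
    ue = np.sin(np.pi * x)                             # exact solution
    b = np.pi**2 * np.sin(np.pi * x)                   # -u'' = f
    A = np.zeros((n + 1, n + 1))
    A[0, 0] = A[n, n] = 1.0
    b[0] = b[n] = 0.0
    for i in range(1, n):                              # 3-point stencil
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = -2.0 / (hl * (hl + hr))
        A[i, i] = 2.0 / (hl * hr)
        A[i, i + 1] = -2.0 / (hr * (hl + hr))
    u = np.linalg.solve(A, b)
    disc = np.abs(u - ue).max()                        # discretization error
    trunc = np.abs(A @ ue - b)[1:n].max()              # truncation error
    return disc, trunc

rng = np.random.default_rng(0)
for n in (32, 64, 128, 256):
    print(n, *errors(n, rng))   # disc ~ O(h^2) even though trunc ~ O(h)
```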
Hesford, Andrew J.; Waag, Robert C.
2010-01-01
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
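The FFT-accelerated neighbour interactions rest on a standard trick: a convolution with the sampled Green's function on a regular grid becomes a pointwise product after zero-padding to avoid wrap-around. The sketch below applies it to the 3-D Helmholtz kernel; the grid size, wavenumber, and the crude clipping of the r = 0 singularity are our assumptions, not the paper's quadrature.

```python
import numpy as np

n, h, k = 32, 0.02, 40.0      # grid points per axis, spacing, wavenumber
rng = np.random.default_rng(0)
src = rng.standard_normal((n, n, n))   # scattering-element source strengths

# Green's function sampled on all lags of a zero-padded (2n)^3 grid
lags = (np.arange(2 * n) - n) * h
X, Y, Z = np.meshgrid(lags, lags, lags, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2)
G = np.exp(1j * k * R) / (4 * np.pi * np.maximum(R, h / 2))  # crude r=0 fix

pad = np.zeros((2 * n,) * 3, dtype=complex)
pad[:n, :n, :n] = src
# ifftshift puts the zero lag at index 0; with 2x padding the circular
# convolution equals the linear one over the region of interest
field = np.fft.ifftn(np.fft.fftn(pad) * np.fft.fftn(np.fft.ifftshift(G)))
field = field[:n, :n, :n]              # O(N log N) instead of O(N^2)
print(field.shape, np.abs(field).max())
```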
3D data processing with advanced computer graphics tools
NASA Astrophysics Data System (ADS)
Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott
2012-09-01
Often, the 3-D raw data coming from an optical profilometer contain spiky noise and lie on an irregular grid, which makes them difficult to analyze and to store because of their enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data from an optical profilometer, and by rapidly re-sampling the raw data onto regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
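Both steps map onto a few SciPy calls: flag spikes that deviate from the local neighbourhood median, then scatter-interpolate the surviving points onto a regular (optionally rotated) grid. The thresholds and grid parameters below are illustrative assumptions, not the paper's graphics-hardware pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import griddata

def despike(xyz, k=8, thresh=5.0):
    """Drop points whose height deviates from the k-neighbour median
    by more than `thresh` times the typical deviation."""
    tree = cKDTree(xyz[:, :2])
    _, idx = tree.query(xyz[:, :2], k=k + 1)      # first match is self
    med = np.median(xyz[idx[:, 1:], 2], axis=1)
    dev = np.abs(xyz[:, 2] - med)
    return xyz[dev < thresh * np.median(dev) + 1e-12]

def regrid(xyz, pixel=0.1, angle_deg=0.0):
    """Resample scattered points onto a regular grid at any pixel size
    and orientation (grid axes rotated by angle_deg)."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    uv = xyz[:, :2] @ R                            # rotate coordinates
    gu = np.arange(uv[:, 0].min(), uv[:, 0].max(), pixel)
    gv = np.arange(uv[:, 1].min(), uv[:, 1].max(), pixel)
    GU, GV = np.meshgrid(gu, gv)
    return griddata(uv, xyz[:, 2], (GU, GV), method="linear")

rng = np.random.default_rng(0)
pts = np.column_stack([rng.random((5000, 2)) * 10,
                       np.sin(rng.random(5000) * 3)])
pts[::500, 2] += 50.0                              # inject spikes
z = regrid(despike(pts), pixel=0.25, angle_deg=15.0)
print(z.shape)
```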
On the Surprising Salience of Curvature in Grouping by Proximity
ERIC Educational Resources Information Center
Strother, Lars; Kubovy, Michael
2006-01-01
The authors conducted 3 experiments to explore the roles of curvature, density, and relative proximity in the perceptual organization of ambiguous dot patterns. To this end, they developed a new family of regular dot patterns that tend to be perceptually grouped into parallel contours, dot-sampled structured grids (DSGs). DSGs are similar to the…
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
reconstruction. The array topology samples the scene on a regular grid of phase centers, using a tiling of Boundary Arrays (BAs). Following a simple correction...hardware. Fig. 1 depicts the multistatic array topology. As seen, the topology is a tiled arrangement of Boundary Arrays (BAs). The BA is a well-known...sparse array layout comprised of two linear transmit arrays, and two linear receive arrays [6]. A slightly different tiled arrangement of BAs was used
Quadtree of TIN: a new algorithm of dynamic LOD
NASA Astrophysics Data System (ADS)
Zhang, Junfeng; Fei, Lifan; Chen, Zhen
2009-10-01
Currently, real-time visualization of large-scale digital elevation models mainly employs either regular GRID structures based on quadtrees or triangle-simplification methods based on triangulated irregular networks (TIN). Compared with GRID, TIN is a refined means of expressing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple method to realize terrain LOD but contains a larger triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. The algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and preserves detail according to the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution performance for large-scale terrain in real time.
Coverage-maximization in networks under resource constraints.
Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy
2010-06-01
Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to this network challenge is studied here for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^{(d-2)/d}) times faster, resulting in a significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random-walk strategies and, on regular grids, is shown to perform best in terms of the product metric of speed and efficiency.
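The flavour of the algorithm, random-walking message packets that occasionally proliferate, fits in a short simulation. The constant proliferation rate below is a simplifying assumption; the paper's key ingredient is a temporally modulated rate, which this sketch does not reproduce.

```python
import numpy as np

def cover(n=32, p_prolif=0.02, seed=0):
    """Coverage of an n x n periodic grid by proliferating random walkers.

    Returns (steps, packet_moves): packet_moves plays the role of the
    consumed bandwidth B, steps that of the time T.
    """
    rng = np.random.default_rng(seed)
    walkers = np.array([[n // 2, n // 2]])
    visited = np.zeros((n, n), dtype=bool)
    steps = moves = 0
    while not visited.all():
        steps += 1
        moves += len(walkers)
        hops = rng.integers(0, 4, len(walkers))        # N, S, E, W
        dx = np.array([0, 0, 1, -1])[hops]
        dy = np.array([1, -1, 0, 0])[hops]
        walkers = (walkers + np.column_stack([dx, dy])) % n
        visited[walkers[:, 0], walkers[:, 1]] = True
        clones = walkers[rng.random(len(walkers)) < p_prolif]
        walkers = np.vstack([walkers, clones])         # proliferate
    return steps, moves

print(cover())   # trade-off: more packets -> fewer steps, more bandwidth
```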
Mu, Guangyu; Liu, Ying; Wang, Limin
2015-01-01
Spatial pooling methods such as spatial pyramid matching (SPM) are crucial in the bag-of-features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of all visual words obeys a uniform distribution over these regular grids. In practice, however, we consider that different visual words should obey different spatial layout distributions. To improve SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extension of the Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms traditional spatial pooling methods and is competitive with state-of-the-art classification accuracy on several challenging image datasets.
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
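The essence of the method, interpolating the cumulative integral with a shape-preserving Hermite curve and differencing it at the new bin edges, can be emulated with SciPy's monotone PCHIP interpolant. This is a sketch in the spirit of the paper, not its algorithm: the paper uses a parametrized Hermitian curve with a user-tunable overshoot parameter, which PCHIP does not expose.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def conservative_rebin(edges_src, values, edges_dst):
    """Integral-conserving re-binning of histogrammed data.

    The cumulative integral is interpolated with a monotone Hermite
    (PCHIP) curve, so non-negative input cannot produce negative bins,
    and differencing the curve at the new edges conserves the integral.
    """
    cum = np.concatenate(([0.0], np.cumsum(values)))
    F = PchipInterpolator(edges_src, cum)
    return np.diff(F(np.clip(edges_dst, edges_src[0], edges_src[-1])))

src_edges = np.linspace(0.0, 10.0, 11)
counts = np.array([0, 0, 5, 40, 90, 30, 4, 1, 0, 0], dtype=float)
dst_edges = np.linspace(0.0, 10.0, 41)                 # 4x finer grid
fine = conservative_rebin(src_edges, counts, dst_edges)
print(fine.sum(), counts.sum())                        # integrals match
print(fine.min() >= 0.0)                               # no undershoot
```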
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches to nearfield acoustic holography when a very-nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. This study shows that by using a fixed/manual choice of the regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
Yaski, Osnat; Portugali, Juval; Eilam, David
2012-04-01
The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, an irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that the rats would favor travel in the area with regular order, but found that activity in the area with the irregular object layout did not differ from activity in the area with the grid layout, even when the irregular half included a salient landmark. Thus, the impact of the grid in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to it. Altogether, three environmental properties, (i) regular and predictable structure, (ii) salience of landmarks, and (iii) accessibility, hierarchically shape the paths of progression in a dark environment.
Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.
2014-01-01
Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity-rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle-Radar-Topography-Mission digital elevation values to evaluate whether the regular sampling arrangement in standard RAPELD (rapid assessments ("RAP") over the long term (LTER ["PELD" in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km²/3,232,500 ha (1293 × 25 km² sample areas) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation, and root-mean-square error were used to measure sample representativeness, similarity, and accuracy, respectively. Trends and thresholds of these responses in relation to sample size and standard deviation were modeled using generalized additive models and conditional inference trees, respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km²) of the Brazilian Amazon. PMID:25170894
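The representativeness check itself is a one-liner once the raster and the regular sample are in hand. Below is a toy version with a synthetic elevation surface; the study itself used SRTM tiles over 1293 real sample areas.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# synthetic 500 x 500 "elevation" raster with smooth meso-scale structure
x = np.linspace(0, 4 * np.pi, 500)
dem = (np.sin(x)[:, None] * np.cos(x / 2)[None, :] * 50
       + rng.normal(0, 5, (500, 500)) + 100)

# regular arrangement of n = 30 samples: a 5 x 6 grid of raster cells
rows = np.linspace(0, 499, 5).round().astype(int)
cols = np.linspace(0, 499, 6).round().astype(int)
sample = dem[np.ix_(rows, cols)].ravel()

# Kolmogorov-Smirnov test of the sample against the full raster
stat, p = ks_2samp(sample, dem.ravel())
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")   # large p: representative
```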
Effects of high-frequency damping on iterative convergence of implicit viscous solver
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses the effects of high-frequency damping on the iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. On both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and α = 4/3 are suitable values for robust and efficient computations, and α = 4/3 is recommended for the diffusion equation, for which it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, providing robust and efficient convergence for a wide range of α.
CFD analysis of turbopump volutes
NASA Technical Reports Server (NTRS)
Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken
1993-01-01
An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model, with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used than with a sufficiently refined regular grid, leading to (far) more efficient optimization or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual, expert-based, and one automated, image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.
Crăciun, Cora
2014-08-01
CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi-tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with a similar degree of homogeneity generate simulated spectra of different quality. This paper evaluates the grids from the EPR perspective, by defining two metrics that depend on the spin system characteristics and the grid's Voronoi tessellation. The first metric determines whether the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies whether adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics convey more information than the homogeneity quantities and are better related to the grids' EPR behaviour for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, using the original or magnetic-field-constrained variants of the Spherical Centroidal Voronoi Tessellation method.
A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece
2017-05-12
Large simulation data require a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.
NASA Technical Reports Server (NTRS)
Swinbank, Richard; Purser, James
2006-01-01
Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
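Generating such a grid takes only a few lines: latitudes are chosen so each point owns an equal area, and longitudes advance by the golden angle. The sketch below is one standard variant of this construction (several closely related offsets exist), not necessarily the exact convention used by the authors.

```python
import numpy as np

def fibonacci_grid(n):
    """n near-uniform points on the sphere via the golden-angle spiral."""
    i = np.arange(n)
    phi = (1 + np.sqrt(5)) / 2                    # golden ratio
    lat = np.arcsin(2 * (i + 0.5) / n - 1)        # equal-area latitude bands
    lon = 2 * np.pi * i / phi                     # golden-angle increments
    lon = (lon + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi)
    return lat, lon

lat, lon = fibonacci_grid(4000)
# every point owns (almost exactly) the same area: 4*pi / n steradians
print(lat.shape, 4 * np.pi / 4000)
```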
If Pythagoras Had a Geoboard...
ERIC Educational Resources Information Center
Ewbank, William A.
1973-01-01
Finding areas on square grid and on isometric grid geoboards is explained, then the Pythagorean Theorem is investigated when regular n-gons and when similar figures are erected on the sides of a right triangle. (DT)
Method of assembly of molecular-sized nets and scaffolding
Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.
1999-01-01
The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.
Variable Grid Traveltime Tomography for Near-surface Seismic Imaging
NASA Astrophysics Data System (ADS)
Cai, A.; Zhang, J.
2017-12-01
We present a new traveltime tomography algorithm for imaging the subsurface, with grids that vary automatically according to geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach for near-surface imaging. However, model regularization on a regular, even grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, while details along geological boundaries are difficult to resolve. Therefore, we solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within those structures for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion tends to resolve small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried-basalt model with one horizontal layer. Using variable-grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, with a smaller number of grid cells. The field data were collected in an oil field in China. The survey was performed in an area where the subsurface structures are predominantly layered. The data set includes 476 shots with a 10 m spacing and 1735 receivers with a 10 m spacing. The first-arrival traveltimes of the seismograms are picked for tomography. The reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography results in fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with resolved flat layers and fewer artifacts. In addition, the number of grid cells is reduced from 205,656 to 4,930, and the inversion produces higher resolution due to fewer unknowns and relatively fine grids in small structures. Variable-grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing, and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever for architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ε, SST BSL k-ω, and SST γ-Reθ, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind-tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model, in order to analyze grid independence. Three grid resolutions (fine, medium, and coarse) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160, and 80 × 20 × 80 for the computational domain, corresponding to nx × nz = 26 × 32, 13 × 16, and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that, among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependence at the fine and medium grid resolutions, in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ε model at the fine resolution and on regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ε model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
The ARM Best Estimate 2-dimensional Gridded Surface
Xie, Shaocheng; Qi, Tang
2015-06-15
The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
Distributed Wavelet Transform for Irregular Sensor Network Grids
2005-01-01
implement it in a multi-hop, wireless sensor network ; and illustrate with several simulations. The new transform performs on par with conventional wavelet methods in a head-to-head comparison on a regular grid of sensor nodes.
Global Static Indexing for Real-Time Exploration of Very Large Regular Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pascucci, V; Frank, R
2001-07-23
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays, at interactive rates, planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048³ grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192³ grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
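Reorderings of this kind are typically built on bit-interleaved (Z-order/Morton) traversals, which keep spatially nearby grid nodes nearby on disk at every scale. A minimal 3-D Morton encoder is sketched below using the standard 64-bit bit-spreading constants; the paper's actual index is a hierarchical variant, so treat this as the flavour, not the specification.

```python
def _part1by2(v: int) -> int:
    """Spread the low 21 bits of v so they occupy every third bit."""
    v &= 0x1FFFFF
    v = (v | v << 32) & 0x1F00000000FFFF
    v = (v | v << 16) & 0x1F0000FF0000FF
    v = (v | v << 8)  & 0x100F00F00F00F00F
    v = (v | v << 4)  & 0x10C30C30C30C30C3
    v = (v | v << 2)  & 0x1249249249249249
    return v

def morton3(i: int, j: int, k: int) -> int:
    """Z-order (Morton) index of grid node (i, j, k)."""
    return _part1by2(i) | _part1by2(j) << 1 | _part1by2(k) << 2

# nearby nodes get nearby indices; sorting nodes by morton3 yields a
# cache- and disk-friendly reordering of the regular grid
print(morton3(0, 0, 0), morton3(1, 0, 0), morton3(0, 1, 0), morton3(1, 1, 1))
```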
NASA Astrophysics Data System (ADS)
Megalingam, Mariammal; Hari Prakash, N.; Solomon, Infant; Sarma, Arun; Sarma, Bornali
2017-04-01
Experimental evidence of different kinds of oscillations in the floating-potential fluctuations of a glow-discharge magnetized plasma is reported. A spherical gridded cage is inserted into the ambient plasma volume to create plasma bubbles. Plasma is produced between the spherical mesh grid and the chamber. The spherical mesh grid, of 80% optical transparency, is connected to the positive terminal of the power supply and is considered the anode. Two Langmuir probes are kept in the ambient plasma to measure the floating-potential fluctuations at different positions within the system, viz., inside and outside the spherical mesh grid. Under certain conditions of discharge voltage (Vd) and magnetic field, a transition from irregular to regular modes appears, and it shows chronological changes with respect to the magnetic field. Further, various nonlinear analyses, such as recurrence plots, the Hurst exponent, and the Lyapunov exponent, have been carried out to investigate the dynamics of the oscillations over a range of discharge voltages and external magnetic fields. Determinism, entropy, and Lmax, important measures of recurrence quantification analysis, indicate an irregular-to-regular transition in the dynamics of the fluctuations. Furthermore, the behavior of the plasma oscillations is characterized by multifractal detrended fluctuation analysis to explore the nature of the fluctuations. It reveals that they have a multifractal nature and behave as a long-range correlated process.
NCAR global model topography generation software for unstructured grids
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.
2015-06-01
It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data
NASA Astrophysics Data System (ADS)
Zender, C. S.
2015-12-01
Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research-oriented agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. The remote sensing, weather, and climate modeling and analysis communities face similar problems in handling SLD, including how to easily: 1. specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids; 2. bin, interpolate, average, or re-map SLD to regular grids; 3. derive secondary data from given quality levels of SLD. These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO, users can 1. quickly project SLD onto the most useful regular grids for intercomparison, and 2. access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality-control metrics. These capabilities improve interoperability and software reuse and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly.
A multi-resolution approach to electromagnetic modeling.
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-04-01
We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
Sound-field measurement with moving microphones
Katzberg, Fabrice; Mazur, Radoslaw; Maass, Marco; Koch, Philipp; Mertins, Alfred
2017-01-01
Closed-room scenarios are characterized by reverberation, which decreases the performance of applications such as hands-free teleconferencing and multichannel sound reproduction. However, exact knowledge of the sound field inside a volume of interest enables the compensation of room effects and allows for a performance improvement within a wide range of applications. The sampling of sound fields involves the measurement of spatially dependent room impulse responses, where the Nyquist-Shannon sampling theorem applies in the temporal and spatial domains. The spatial measurement often requires a huge number of sampling points and entails other difficulties, such as the need for exact calibration of a large number of microphones. In this paper, a method for measuring sound fields using moving microphones is presented. The number of microphones is customizable, allowing for a tradeoff between hardware effort and measurement time. The goal is to reconstruct room impulse responses on a regular grid from data acquired with microphones between grid positions, in general. For this, the sound field at equidistant positions is related to the measurements taken along the microphone trajectories via spatial interpolation. The benefits of using perfect sequences for excitation, a multigrid recovery, and the prospects for reconstruction by compressed sensing are presented. PMID:28599533
Analysis models for the estimation of oceanic fields
NASA Technical Reports Server (NTRS)
Carter, E. F.; Robinson, A. R.
1987-01-01
A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include analysis, i.e., the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimation of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated previously that Tikhonov regularization produces spherical harmonic solutions from GRACE that exhibit very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes from the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
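For readers unfamiliar with the underlying tool, a generic Tikhonov-regularized least-squares solve looks as follows; this is only the textbook building block, not the CSR processing chain:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy ill-conditioned problem: damping suppresses the noisy small modes
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 50)) @ np.diag(np.logspace(0, -6, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 1e-4 * rng.standard_normal(100)
x_reg = tikhonov(A, b, 1e-6)
```

The regularization parameter lam trades fidelity to the data against suppression of noise-dominated components, which is the same trade that controls residual striping in the GRACE context.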
Unstructured viscous grid generation by advancing-front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1993-01-01
A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for the construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for the generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making the method extendible to tetrahedral viscous grid generation.
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform
Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos
2013-01-01
Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR, reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Grid scale drives the scale and long-term stability of place maps
Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M
2018-01-01
Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions that fit the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
Application of wavefield compressive sensing in surface wave tomography
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Li, Qingyang; Huang, Jianping
2018-06-01
Dense arrays allow sampling of the seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about the wavefield, irregular station spacing, and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. We then reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves the robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in the wavefield.
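A small sketch of the CS idea under stated assumptions (a real cosine plane-wave dictionary and scikit-learn's Lasso as the L1 solver; the paper's actual basis and solver may differ):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
# plane-wave dictionary: a 15x15 grid of horizontal wavenumbers
k = np.stack(np.meshgrid(np.linspace(-0.2, 0.2, 15),
                         np.linspace(-0.2, 0.2, 15)), -1).reshape(-1, 2)
xs = rng.uniform(0, 100, (60, 2))                  # irregular station positions
true_c = np.zeros(k.shape[0])
true_c[[30, 150]] = [1.0, 0.6]                     # wavefield = two plane waves
D = np.cos(xs @ k.T)                               # dictionary at the stations
d = D @ true_c + 0.01 * rng.standard_normal(60)    # observed (noisy) samples

# sparse recovery: far more atoms (225) than stations (60), so L1 is essential
coef = Lasso(alpha=1e-3, max_iter=50_000).fit(D, d).coef_

# evaluate the recovered wavefield on a dense regular grid
gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))
grid_pts = np.stack([gx.ravel(), gy.ravel()], -1)
field = (np.cos(grid_pts @ k.T) @ coef).reshape(101, 101)
```

The reconstructed regular-grid field can then feed Helmholtz tomography or gradiometry exactly as if it had been recorded by a regular array.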
Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.
2018-02-01
We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.
Membrane potential dynamics of grid cells
Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.
2014-01-01
During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in-vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that formed a characteristic signature tightly correlated with firing fields. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984
NASA Astrophysics Data System (ADS)
Gärdenäs, A.; Jarvis, N.; Alavi, G.
The spatial variability of soil characteristics was studied in a small agricultural catchment (Vemmenhög, 9 km2) at the field and catchment scales. This analysis serves as a basis for assumptions concerning upscaling approaches used to model pesticide leaching from the catchment with the MACRO model (Jarvis et al., this meeting). The work focused on the spatial variability of two key soil properties for pesticide fate in soil, organic carbon and clay content. The Vemmenhög catchment is formed in a glacial till deposit in southernmost Sweden. The landscape is undulating (30 - 65 m a.s.l.) and 95% of the area is used for crop production (winter rape, winter wheat, sugar beet and spring barley). The climate is warm temperate. Soil samples for organic C and texture were taken on a small regular grid at Näsby Farm (144 m x 144 m, sampling distance: 6-24 m, 77 points) and on an irregular large grid covering the whole catchment (sampling distance: 333 m, 46 points). At the field scale, it could be shown that the organic C content was strongly related to landscape position and height (R2 = 73%, p < 0.001, n = 50). The organic C content of hollows in the landscape is so high that they contribute little to the total loss of pesticides (Jarvis et al., this meeting). Clay content is also related to landscape position, being larger at the hilltop locations, resulting in lower near-saturated hydraulic conductivity. Hence, macropore flow can be expected to be more pronounced (see also Roulier & Jarvis, this meeting). The variability in organic C was similar for the field and catchment grids, which made it possible to krige the organic C content of the whole catchment using data from both grids and an uneven lag distance.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, we propose quantifying the uncertainty of VCs by a confidence interval based on the truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis-related theories of geographic information science.
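An illustrative sketch of the two volume rules and a Monte-Carlo estimate of PE on a Gauss-like synthetic surface, using recent SciPy; parameter values are arbitrary:

```python
import numpy as np
from scipy.integrate import simpson, trapezoid

x = np.linspace(-3, 3, 61)
y = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))                  # synthetic Gauss surface on a grid

def volume(z, rule):
    """Iterated (double) quadrature rule over the regular grid."""
    return rule(rule(z, x=x, axis=1), x=y)

v_tdr = volume(Z, trapezoid)                # trapezoidal double rule (TDR)
v_sdr = volume(Z, simpson)                  # Simpson's double rule (SDR)

# propagation error: perturb elevations with N(0, sigma) noise, watch the spread
sigma, rng = 0.01, np.random.default_rng(4)
vols = [volume(Z + sigma * rng.standard_normal(Z.shape), trapezoid)
        for _ in range(200)]
print(v_tdr, v_sdr, np.std(vols))           # std(vols) approximates the PE
```

Against the analytic volume of the Gauss surface, the gap between v_tdr and v_sdr isolates the ME/TE contribution, which is the quantity the proposed confidence interval is built on.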
High throughput profile-profile based fold recognition for the entire human proteome.
McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T
2006-06-07
In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on demand high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
Operation quality assessment model for video conference system
NASA Astrophysics Data System (ADS)
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
2018-01-01
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and that its generalization ability is superior to LM-BP and Bayesian BP neural networks.
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
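A minimal sketch of the first stage only, the distance-weighting initial guess; the parabolic leapfrog correction itself is not reproduced, and the weighting exponent is an assumption:

```python
import numpy as np

def idw(xk, yk, zk, gx, gy, power=2.0, eps=1e-12):
    """Inverse-distance weighting of known values zk at sites (xk, yk)
    onto grid points (gx, gy)."""
    d2 = (gx[..., None] - xk) ** 2 + (gy[..., None] - yk) ** 2
    w = 1.0 / (d2 ** (power / 2) + eps)     # eps guards the zero-distance case
    return (w @ zk) / w.sum(axis=-1)

rng = np.random.default_rng(5)
xk, yk = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)   # unevenly spaced sites
zk = np.sin(3 * xk) * np.cos(3 * yk)                    # recorded values
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
z0 = idw(xk, yk, zk, gx, gy)                            # initial guess field
```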
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes: two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of the node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options, with the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
C library for topological study of the electronic charge density.
Vega, David; Aray, Yosslen; Rodríguez, Jesús
2012-12-05
The topological study of the electronic charge density is useful for obtaining information about the kinds of bonds (ionic or covalent) and the atom charges in a molecule or crystal. For this study, it is necessary to calculate, at every space point, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, formulas for the gradient vector, the Hessian matrix, and the Laplacian are obtained for every space point. More complex functions, such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths), were also programmed. Since in some crystals the unit cell has angles different from 90°, the library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program, and CHGCAR from the VASP® program). Each of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
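A finite-difference analogue of the quantities the library computes at every grid point; note the library itself differentiates a Lagrange interpolating polynomial rather than using numpy.gradient, so this is only a conceptual stand-in:

```python
import numpy as np

h = 0.1
ax = np.arange(-2, 2 + h, h)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
rho = np.exp(-(X**2 + Y**2 + Z**2))      # stand-in for a charge density grid

gx, gy, gz = np.gradient(rho, h)         # gradient vector field
gxx = np.gradient(gx, h, axis=0)         # second derivatives forming the
gyy = np.gradient(gy, h, axis=1)         # diagonal of the Hessian
gzz = np.gradient(gz, h, axis=2)
laplacian = gxx + gyy + gzz              # sign flags charge accumulation/depletion
```

Critical points are then the grid neighbourhoods where all three gradient components cross zero, which is where a Newton-Raphson refinement would start.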
Measuring Skew in Average Surface Roughness as a Function of Surface Preparation
NASA Technical Reports Server (NTRS)
Stahl, Mark T.
2015-01-01
Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces grinding, saving both time and money, and allows the science requirements to be better defined. In this study, various materials are polished from a fine grind to a fine polish. Each sample's RMS surface roughness is measured at 81 locations in a 9x9 square grid using a Zygo white light interferometer at regular intervals during the polishing process. Each data set is fit with various standard distributions and tested for goodness of fit. We show that the skew in the RMS data changes as a function of polishing time.
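A sketch of the fitting-and-testing step with SciPy, assuming a skew-normal candidate distribution; the study fits several standard distributions, and synthetic values stand in here for the 81 measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# stand-in for the 81 RMS roughness values of one 9x9 measurement grid
rms = stats.skewnorm.rvs(a=4.0, loc=2.0, scale=0.5, size=81, random_state=rng)

a, loc, scale = stats.skewnorm.fit(rms)             # maximum-likelihood fit
ks = stats.kstest(rms, 'skewnorm', args=(a, loc, scale))  # goodness of fit
print(f"skew parameter a={a:.2f}, KS p-value={ks.pvalue:.3f}")
```

Tracking the fitted skew parameter across polishing intervals is one way to quantify the claimed change of skew with polishing time.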
An economic passive sampling method to detect particulate pollutants using magnetic measurements.
Cao, Liwan; Appel, Erwin; Hu, Shouyun; Ma, Mingming
2015-10-01
Identifying particulate matter (PM) emitted from industrial processes into the atmosphere is an important issue in environmental research. This paper presents a passive sampling method using simple artificial samplers that maintains the advantage of bio-monitoring, but overcomes some of its disadvantages. The samplers were tested in a heavily polluted area (Linfen, China) and compared to results from leaf samples. Spatial variations of magnetic susceptibility from the artificial passive samplers and the leaf samples show very similar patterns. Scanning electron microscopy suggests that the collected PM is mostly in the range of 2-25 μm; the frequent occurrence of spherical shapes indicates that industrial combustion dominates PM emission. Magnetic properties around power plants show different features than around other plants. This sampling method provides a suitable and economical tool for semi-quantifying the temporal and spatial distribution of air quality; the samplers can be installed in a regular grid and calibrated against the weight of PM. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.
1994-05-01
Our current research into the response of natural ecosystems to a hypothesized climatic change requires that we have estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations are located at airports, which tend to be concentrated on the coastlines of the world or near populated areas. We can also see that the spatial density of the station locations is extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, the Arabian, Gobi, and Australian deserts; likewise, the density is quite sparse in cold regions such as Antarctica, Northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to the population centers and the degree of industrial development of the country. We address the following problem here. Given values of meteorological variables, such as maximum monthly temperature, measured at the more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
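The scattered data problem in miniature with SciPy's griddata; station positions and temperatures below are toy values, not real meteorological data:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)
lon = rng.uniform(-180, 180, 5500)           # airport-like scattered stations
lat = rng.uniform(-60, 75, 5500)
tmax = 30 - 0.4 * np.abs(lat) + rng.standard_normal(5500)  # toy temperatures

# one-degree regular grid in latitude and longitude
glon, glat = np.meshgrid(np.arange(-180, 181), np.arange(-90, 91))
tgrid = griddata((lon, lat), tmax, (glon, glat), method='linear')
# cells outside the convex hull of stations come back NaN; the sparse
# desert and polar coverage described above is exactly where this bites
```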
Dynamic Testing and Automatic Repair of Reconfigurable Wiring Harnesses
2006-11-27
Switch: An M×N grid of switches configured to provide an M-input, N-output routing network. Permutation Network: A permutation network performs an... wiring reduces the effective advantage of their reduced switch count, particularly when considering that regular grids (crossbar switches being a... are connected to. The outline circuit shown in Fig. 20 shows how a suitable 'discovery probe' might be implemented. The circuit shows a UART
Integrating bathymetric and topographic data
NASA Astrophysics Data System (ADS)
Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat
2017-11-01
The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high-resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to determine the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
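One way such a comparison can be run, sketched with SciPy's RBFInterpolator for the MQ and TPS methods and a held-out RMSE score; SciPy has no built-in Kriging, so that method is omitted, and the toy bathymetry below is purely synthetic:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(8)
pts = rng.uniform(0, 10, (400, 2))                    # scattered soundings
depth = np.sin(pts[:, 0]) + 0.5 * np.cos(pts[:, 1])   # toy bathymetry
train, test = pts[:300], pts[300:]                    # hold out 100 points
z_train, z_test = depth[:300], depth[300:]

for kernel in ("multiquadric", "thin_plate_spline"):
    est = RBFInterpolator(train, z_train, kernel=kernel, epsilon=1.0)(test)
    rmse = np.sqrt(np.mean((est - z_test) ** 2))
    print(kernel, round(rmse, 4))
```

The same held-out RMSE scoring, applied per method, is the quantitative half of the comparison the paper describes; the qualitative half is visual comparison against the nautical chart.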
Hosseini, Marjan; Kerachian, Reza
2017-09-01
This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids
NASA Astrophysics Data System (ADS)
Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.
2017-12-01
Stabilized gradient-based methods have been proved efficient for inverse problems. In these methods, driving the gradient toward zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. Besides, a fuzzy c-means clustering method and a smooth operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1(b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
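A hedged sketch of the conventional depth weighting that such schemes build on, using a common w(z) = (z + z0)^(-beta/2) form; the paper's new weighted-model-vector gradient is not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

def depth_weighted_step(G, m, d_obs, z, z0=0.05, beta=2.0, step=0.1):
    """One steepest-descent step on ||G m - d||^2 taken in the weighted
    variable m~ = w * m, with w(z) = (z + z0)**(-beta/2). Mapping back to
    the physical model boosts deep updates by roughly (z + z0)**beta,
    counteracting the depth decay of the gravity kernel."""
    w = (z + z0) ** (-beta / 2.0)   # decays with depth, like the kernel
    r = G @ m - d_obs               # data residual
    grad_t = (G.T @ r) / w          # gradient w.r.t. the weighted variable
    m_t = w * m - step * grad_t     # descent step in weighted space
    return m_t / w                  # back to the physical model

# toy problem: exponential kernel mimics gravity's loss of depth sensitivity
z = np.linspace(0.05, 1.0, 40)                     # cell depths (arbitrary units)
G = np.exp(-np.outer(np.linspace(1, 30, 30), z))   # 30 data, 40 cells
m_true = np.zeros(40); m_true[25] = 1.0            # one deep dense cell
d = G @ m_true
m1 = depth_weighted_step(G, np.zeros(40), d, z)    # deep cells get larger updates
```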
NASA Astrophysics Data System (ADS)
Nhu Y, Do
2018-03-01
Vietnam has abundant wind power resources. Over time, both the installed capacity and the number of wind power projects in Vietnam have increased. As more wind power is fed into the national grid, research and analysis are needed to ensure the safety and reliability of wind power connections. In the national distribution grid, voltage sags occur regularly and can strongly affect the operation of wind power; the most serious consequence is disconnection. The paper presents an analysis of the distribution grid's transient process under voltage sag. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.
Modelling effects on grid cells of sensory input during self‐motion
Raudies, Florian; Hinman, James R.
2016-01-01
The neural coding of spatial location for memory function may involve grid cells in the medial entorhinal cortex, but the mechanism of generating the spatial responses of grid cells remains unclear. This review describes some current theories and experimental data concerning the role of sensory input in generating the regular spatial firing patterns of grid cells, and changes in grid cell firing fields with movement of environmental barriers. As described here, the influence of visual features on spatial firing could involve either computations of self‐motion based on optic flow, or computations of absolute position based on the angle and distance of static visual cues. Due to anatomical selectivity of retinotopic processing, the sensory features on the walls of an environment may have a stronger effect on ventral grid cells that have wider spaced firing fields, whereas the sensory features on the ground plane may influence the firing of dorsal grid cells with narrower spacing between firing fields. These sensory influences could contribute to the potential functional role of grid cells in guiding goal‐directed navigation. PMID:27094096
NASA Technical Reports Server (NTRS)
Homemdemello, Luiz S.
1992-01-01
An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as a production system. The global database consists of a hexagonal grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. Because the multiple hierarchies are maintained explicitly in the model, the choice of a particular hierarchy is made only when needed, allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph search and backtracking control strategies.
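A minimal sketch of the underlying idea that a hexagonal grid maps into a quadratic (2-D array) layout; the offset-coordinate convention below is one standard choice, not necessarily the planner's own:

```python
import numpy as np

grid = np.zeros((10, 10), dtype=int)   # 0 = empty, 1 = node/strut placed

def hex_neighbors(r, c):
    """Six neighbours of (r, c) in an 'odd-r' offset hexagonal layout,
    clipped to the array bounds."""
    shift = 0 if r % 2 == 0 else 1
    cand = [(r, c - 1), (r, c + 1),
            (r - 1, c - 1 + shift), (r - 1, c + shift),
            (r + 1, c - 1 + shift), (r + 1, c + shift)]
    return [(i, j) for i, j in cand
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]]

print(hex_neighbors(4, 4))   # rule preconditions can walk these fixed offsets
```

Because every node's six neighbours are reachable through fixed index offsets, testing a production rule's preconditions reduces to a handful of array lookups.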
NASA Astrophysics Data System (ADS)
Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.
2013-03-01
The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that reason, two horizontal resolutions were deployed in the model: a regular grid of 2.5°×1.25°, and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); in the third, the grid is regular with 39 vertical levels (RG39L); and finally, the fourth uses the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties on the intensity of the source released. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to Atlas), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-level run due to the increase of the levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good accordance with what was previously reported and in the same range as the recent accident in Japan. The high response of the LMDzORINCA model for 137Cs reinforces the importance of atmospheric modeling in emergency cases to gather information for protecting the population from the adverse effects of radiation.
NASA Astrophysics Data System (ADS)
Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.
2013-07-01
The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that reason, two horizontal resolutions were deployed in the model: a regular grid of 2.5° × 1.27°, and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but with realistic source injection heights (RG19L); in the third, the grid is regular with 39 vertical levels (RG39L); and finally, the fourth uses the stretched grid with 19 vertical levels (Z19L). The model is validated with the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition for 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties on the intensity of the source released. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-level run due to the increase of the levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good accordance with what was previously reported and in the same range as the recent accident in Japan. The high response of the LMDZORINCA model for 137Cs reinforces the importance of atmospheric modelling in emergency cases to gather information for protecting the population from the adverse effects of radiation.
Simulating incompressible flow on moving meshfree grids using General Finite Differences (GFD)
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2016-11-01
We simulate incompressible flow around an oscillating cylinder at different Reynolds numbers using General Finite Differences (GFD) on a meshfree grid. We evolve the meshfree grid by treating each grid node as a particle. To compute velocities and accelerations, we consider the particles at a particular instance as Eulerian observation points. The incompressible Navier-Stokes equations are directly discretized using GFD with boundary conditions enforced using a sharp interface treatment. Cloud sizes are set such that the local approximations use only 16 neighbors. To enforce incompressibility, we apply a semi-implicit approximate projection method. To prevent overlapping particles and formation of voids in the grid, we propose a particle regularization scheme based on a local minimization principle. We validate the GFD results for an oscillating cylinder against the lattice Boltzmann method and find good agreement. Financial support provided by National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which allows finding the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
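For contrast with CV-SES, the traditional two-parameter grid search that the paper improves upon can be sketched as follows. This is a hedged baseline illustration, not the authors' method: it assumes CS-SVM is realized through per-class misclassification costs, which scikit-learn exposes as class weights multiplying C.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)

# Two regularization parameters: an effective C for each class.
# Grid search only probes a finite lattice of (C-, C+) values, whereas
# CV-SES fits the entire two-dimensional solution surface.
grid = np.logspace(-2, 2, 9)
best = None
for c_neg in grid:
    for c_pos in grid:
        clf = SVC(kernel="rbf", C=1.0, class_weight={0: c_neg, 1: c_pos})
        err = 1.0 - cross_val_score(clf, X, y, cv=5).mean()  # K-fold CV error
        if best is None or err < best[0]:
            best = (err, c_neg, c_pos)

print("grid-search CV error %.3f at C-=%g, C+=%g" % best)
```

The cost of this baseline grows with the square of the grid density, which is precisely the inefficiency that the solution-surface approach avoids.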
Soil Sampling Techniques For Alabama Grain Fields
NASA Technical Reports Server (NTRS)
Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.
2003-01-01
Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and, 2) six composited cores collected randomly from a 3 x 3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
Bøcher, Peder Klith; McCloy, Keith R
2006-02-01
In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects, and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of sample scene phase. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.
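As an illustration of how an ALV plot can be produced (a sketch under our own assumptions, not the authors' code), one can block-average a synthetic scene to coarser resolutions and average the local variance in a small moving window at each resolution:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def average_local_variance(img, win=3):
    """Mean of the local variance in a win x win moving window."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img**2, win)
    return np.mean(mean_sq - mean**2)

def coarsen(img, f):
    """Degrade spatial resolution by averaging f x f pixel blocks."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# Checkerboard of 8x8 squares: object spacing (period) is 16 base pixels
i, j = np.indices((256, 256))
scene = ((i // 8 + j // 8) % 2).astype(float)
for f in (1, 2, 4, 8, 16):
    print(f, average_local_variance(coarsen(scene, f)))
# The ALV is expected to peak near f = 8, i.e. a pixel size of half the
# object spacing, consistent with the result described above.
```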
NASA Astrophysics Data System (ADS)
Lino, A. C. L.; Dal Fabbro, I. M.
2008-04-01
The conception of a tridimensional digital model of solid figures and plant organs started from topographic survey of virtual surfaces [1], followed by topographic survey of solid figures [2], fruit surface survey [3] and finally the generation of a 3D digital model [4] as presented by [1]. In this research work, i.e. step [4], the tested objects included cylinders, cubes, spheres and fruits. A Ronchi grid named G1 was generated in a PC, from which other grids, referred to as G2, G3, and G4, were set out of phase by 1/4, 1/2 and 3/4 of a period from G1. Grid G1 was then projected onto the sample surface; the projected grid was named Gd. The difference between Gd and G1, followed by filtration, generated the moiré fringes M1; the fringes M2, M3 and M4 were likewise obtained from Gd. The fringes are out of phase from one another by 1/4 of a period, and were processed with the Rising Sun Moiré software to produce the wrapped phase and, further on, the unwrapped fringes. The tested object was placed on a goniometer and rotated to generate the topography of four surfaces. These four surveyed surfaces were assembled by means of the SCILAB software, obtaining a three-column matrix corresponding to the object coordinates xi, with elevation values and coordinates corrected as well. The work includes conclusions on the reliability of the proposed method as well as on the simplicity and low cost of the setup.
Sampling Scattered Data Onto Rectangular Grids for Volume Visualization
1989-12-01
[Report front-matter fragments: contents entries "4.4 Building A Rectangular Grid ... 30", "4.5 Sampling Methods ... 34". Abstract excerpt: "... dimensional data have been developed recently. In computational fluid flow analysis, methods for constructing three-dimensional numerical grids are ... structure of rectangular grids. Because finite element analysis is useful in fields other than fluid flow analysis and the numerical grid has promising ..."]
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
NASA Astrophysics Data System (ADS)
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration, and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with the capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on a sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from climatological or numerical forecast data, those sources are problematic for climate applications due to the imprint of biases from past climate analyses or from model error.
Evaluation of global equal-area mass grid solutions from GRACE
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron
2015-04-01
The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.
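As background, the stabilizing effect of Tikhonov regularization on an ill-posed linear inversion can be illustrated with a generic toy problem (a minimal sketch; it is not CSR's GRACE processing chain):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha^2 ||x||^2 via the normal equations.

    Equivalent to x = (A^T A + alpha^2 I)^{-1} A^T b; the alpha^2 I term
    stabilizes the inversion at the cost of some signal damping.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: a smoothing kernel plus noisy observations
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 80)
A = np.exp(-30.0 * (t[:, None] - t[None, :])**2)   # smooths the input
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(t.size)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]     # amplifies the noise
x_reg = tikhonov_solve(A, b, alpha=1e-2)
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The choice of alpha trades noise suppression against signal attenuation, which is one reason regularized solutions such as these are evaluated against independent models and in-situ data.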
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach based on a geometric embedding for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
El Sebai, T; Lagacherie, B; Soulas, G; Martin-Laurent, F
2007-02-01
We assessed the spatial variability of isoproturon mineralization in relation to that of physicochemical and biological parameters in fifty soil samples regularly collected along a sampling grid delimited across a 0.36 ha field plot (40 x 90 m). Only faint relationships were observed between isoproturon mineralization and the soil pH, microbial C biomass, and organic nitrogen. Considerable spatial variability was observed for six of the nine parameters tested (isoproturon mineralization rates, organic nitrogen, genetic structure of the microbial communities, soil pH, microbial biomass and equivalent humidity). The map of the distribution of isoproturon mineralization rates was similar to those of soil pH, microbial biomass, and organic nitrogen, but different from those of the genetic structure of the microbial communities and equivalent humidity. Geostatistics revealed that the spatial heterogeneity in the rate of degradation of isoproturon corresponded to that of soil pH and microbial biomass.
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach has been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate the Pearson correlation of irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, measurements in both time series are taken at the same times, and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. The key idea of the kernel-based method is therefore to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between the observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches based on (linear) interpolation, the Lomb-Scargle Fourier transform, the sinc kernel and the Gaussian kernel, and investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low root-mean-square errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
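A minimal sketch of the Gaussian-kernel estimator described above might look as follows (function and parameter names are ours; the bandwidth default is illustrative):

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
    """Kernel-based Pearson correlation for irregularly sampled series.

    Products of standardized observations are weighted by a Gaussian
    kernel of the difference between the observation-time separation
    and `lag`, with bandwidth `h`.
    """
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]            # all pairwise time separations
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)  # Gaussian weights
    return np.sum(w * np.outer(xc, yc)) / np.sum(w)

# Two noisy copies of the same signal on independent irregular time axes
rng = np.random.default_rng(2)
tx = np.sort(rng.uniform(0, 100, 200))
ty = np.sort(rng.uniform(0, 100, 200))
x = np.sin(0.2 * tx) + 0.2 * rng.standard_normal(tx.size)
y = np.sin(0.2 * ty) + 0.2 * rng.standard_normal(ty.size)
print(gaussian_kernel_corr(tx, x, ty, y, h=2.0))  # strong positive correlation
```

No interpolation takes place, so the estimator avoids the artificial increase in auto-correlation that interpolation introduces.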
The abrupt development of adult-like grid cell firing in the medial entorhinal cortex
Wills, Thomas J.; Barry, Caswell; Cacucci, Francesca
2012-01-01
Understanding the development of the neural circuits subserving specific cognitive functions such as navigation remains a central problem in neuroscience. Here, we characterize the development of grid cells in the medial entorhinal cortex, which, by nature of their regularly spaced firing fields, are thought to provide a distance metric to the hippocampal neural representation of space. Grid cells emerge at the time of weaning in the rat, at around 3 weeks of age. We investigated whether grid cells in young rats are functionally equivalent to those observed in the adult as soon as they appear, or if instead they follow a gradual developmental trajectory. We find that, from the very youngest ages at which reproducible grid firing is observed (postnatal day 19): grid cells display adult-like firing fields that tessellate to form a coherent map of the local environment; that this map is universal, maintaining its internal structure across different environments; and that grid cells in young rats, as in adults, also encode a representation of direction and speed. To further investigate the developmental processes leading up to the appearance of grid cells, we present data from individual medial entorhinal cortex cells recorded across more than 1 day, spanning the period before and after the grid firing pattern emerged. We find that increasing spatial stability of firing was correlated with increasing gridness. PMID:22557949
High-density grids for efficient data collection from multiple crystals
Baxter, Elizabeth L.; Aguila, Laura; Alonso-Mori, Roberto; Barnes, Christopher O.; Bonagura, Christopher A.; Brehmer, Winnie; Brunger, Axel T.; Calero, Guillermo; Caradoc-Davies, Tom T.; Chatterjee, Ruchira; Degrado, William F.; Fraser, James S.; Ibrahim, Mohamed; Kern, Jan; Kobilka, Brian K.; Kruse, Andrew C.; Larsson, Karl M.; Lemke, Heinrik T.; Lyubimov, Artem Y.; Manglik, Aashish; McPhillips, Scott E.; Norgren, Erik; Pang, Siew S.; Soltis, S. M.; Song, Jinhu; Thomaston, Jessica; Tsai, Yingssu; Weis, William I.; Woldeyes, Rahel A.; Yachandra, Vittal; Yano, Junko; Zouni, Athina; Cohen, Aina E.
2016-01-01
Higher throughput methods to mount and collect data from multiple small and radiation-sensitive crystals are important to support challenging structural investigations using microfocus synchrotron beamlines. Furthermore, efficient sample-delivery methods are essential to carry out productive femtosecond crystallography experiments at X-ray free-electron laser (XFEL) sources such as the Linac Coherent Light Source (LCLS). To address these needs, a high-density sample grid useful as a scaffold for both crystal growth and diffraction data collection has been developed and utilized for efficient goniometer-based sample delivery at synchrotron and XFEL sources. A single grid contains 75 mounting ports and fits inside an SSRL cassette or uni-puck storage container. The use of grids with an SSRL cassette expands the cassette capacity up to 7200 samples. Grids may also be covered with a polymer film or sleeve for efficient room-temperature data collection from multiple samples. New automated routines have been incorporated into the Blu-Ice/DCSS experimental control system to support grids, including semi-automated grid alignment, fully automated positioning of grid ports, rastering and automated data collection. Specialized tools have been developed to support crystallization experiments on grids, including a universal adaptor, which allows grids to be filled by commercial liquid-handling robots, as well as incubation chambers, which support vapor-diffusion and lipidic cubic phase crystallization experiments. Experiments in which crystals were loaded into grids or grown on grids using liquid-handling robots and incubation chambers are described. Crystals were screened at LCLS-XPP and SSRL BL12-2 at room temperature and cryogenic temperatures. PMID:26894529
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
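The lumped-coefficients idea can be sketched compactly: on a regular grid the degree sums are computed once per parallel and reused for every longitude. The following is our own simplified illustration using SciPy's unnormalized associated Legendre functions, whereas GrafLab works with fully normalized fnALFs and far higher degrees:

```python
import numpy as np
from scipy.special import lpmn

def shs_grid(C, S, lats, lons):
    """Spherical harmonic synthesis on a regular grid via lumped coefficients.

    C, S: (nmax+1, nmax+1) coefficient arrays indexed [degree, order].
    For each parallel, the degree sums are "lumped" into A_m and B_m once,
    then reused for every longitude on that parallel.
    """
    nmax = C.shape[0] - 1
    m = np.arange(nmax + 1)
    cosml = np.cos(np.outer(lons, m))            # (nlon, nmax+1)
    sinml = np.sin(np.outer(lons, m))
    out = np.empty((lats.size, lons.size))
    for i, lat in enumerate(lats):
        P, _ = lpmn(nmax, nmax, np.sin(lat))     # P[order, degree], unnormalized
        A = np.einsum('nm,mn->m', C, P)          # lumped: A_m = sum_n C_nm P_nm
        B = np.einsum('nm,mn->m', S, P)
        out[i] = cosml @ A + sinml @ B           # order sums per longitude
    return out

# Usage: synthesize a single (n=2, m=1) cosine term on a 5-degree grid
nmax = 4
C = np.zeros((nmax + 1, nmax + 1)); S = np.zeros_like(C)
C[2, 1] = 1.0
lats = np.deg2rad(np.arange(-85.0, 90.0, 5.0))
lons = np.deg2rad(np.arange(0.0, 360.0, 5.0))
print(shs_grid(C, S, lats, lons).shape)          # (35, 72)
```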
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementation. The obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the latter technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain field restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that the actual noise in real strain maps must be characterized more specifically than in the current study to address higher noise levels with Wiener filtering.
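The two better-behaved techniques can be exercised with off-the-shelf tools. The sketch below uses scikit-image's Richardson-Lucy and Wiener implementations on a synthetic map; the Gaussian PSF and noise level are illustrative assumptions, and note that the basic wiener call here does not model noise autocorrelation, which the study found essential at higher noise levels:

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

# Synthetic "true" map: a sharp band on a smooth background (illustrative)
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
truth = 0.01 * np.exp(-x**2) + 0.02 * (np.abs(y) < 0.05)

# The phase-extraction envelope acts as a convolution kernel; a Gaussian
# window is a common choice, so it serves as the PSF in this sketch.
g = np.arange(-7, 8)
psf = np.exp(-0.5 * (g[None, :]**2 + g[:, None]**2) / 3.0**2)
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode='same')
noisy = blurred + 5e-5 * np.random.default_rng(3).standard_normal(blurred.shape)

rl = restoration.richardson_lucy(noisy, psf, 30)   # needs non-negative input
wi = restoration.wiener(noisy, psf, 0.005)         # balance: noise vs. sharpness
print(np.abs(rl - truth).mean(), np.abs(wi - truth).mean())
```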
NASA Astrophysics Data System (ADS)
Bogunović, Igor; Pereira, Paulo; Đurđević, Boris
2017-04-01
Information on the spatial distribution of soil nutrients in agroecosystems is critical for improving productivity and reducing environmental pressures in intensively farmed soils. In this context, spatial prediction of soil properties should be accurate. In this study we analyse 704 measurements of soil available phosphorus (AP) and potassium (AK); the data derive from soil samples collected across three arable fields in the Baranja region (Croatia) on different soil types: Cambisols (169 samples), Chernozems (131 samples) and Gleysols (404 samples). The samples were collected on a regular sampling grid (225 x 225 m spacing). Several interpolation techniques were tested: Inverse Distance Weighting (IDW) with the power of 1, 2 and 3; the Radial Basis Functions (RBF) Inverse Multiquadratic (IMT), Multiquadratic (MTQ), Completely Regularized Spline (CRS), Spline with Tension (SPT) and Thin Plate Spline (TPS); Local Polynomial (LP) with the power of 1 and 2; and two geostatistical techniques, Ordinary Kriging (OK) and Simple Kriging (SK). The most accurate spatial variability maps were selected using the criterion of lowest RMSE in cross-validation. Soil parameters varied considerably throughout the studied fields, with coefficients of variation ranging from 31.4% to 37.7% for soil AP and from 19.3% to 27.1% for AK. The experimental variograms indicate moderate spatial dependence for AP and strong spatial dependence for AK at all three locations. The best spatial predictor for AP at the Chernozem field was Simple Kriging (RMSE=61.711), and for AK the Inverse Multiquadratic RBF (RMSE=44.689); the least accurate techniques were the Thin Plate Spline (AP) and IDW with the power of 1 (AK). Radial Basis Function models (Spline with Tension for AP at the Gleysol and Cambisol fields and Completely Regularized Spline for AK at the Gleysol field) were the best predictors, while Thin Plate Spline models were the least accurate in all three cases. The best interpolator for AK at the Cambisol field was the Local Polynomial with the power of 2 (RMSE=33.943), while the least accurate was the Thin Plate Spline (RMSE=39.572).
Improving sub-grid scale accuracy of boundary features in regional finite-difference models
Panday, Sorab; Langevin, Christian D.
2012-01-01
As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving the sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand-side vector for a symmetric finite-difference Picard implementation, or on the left-hand-side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
Nine martian years of dust optical depth observations: A reference dataset
NASA Astrophysics Data System (ADS)
Montabone, Luca; Forget, Francois; Kleinboehl, Armin; Kass, David; Wilson, R. John; Millour, Ehouarn; Smith, Michael; Lewis, Stephen; Cantor, Bruce; Lemmon, Mark; Wolff, Michael
2016-07-01
We present a multi-annual reference dataset of the horizontal distribution of airborne dust from martian year 24 to 32 using observations of the martian atmosphere from April 1999 to June 2015 made by the Thermal Emission Spectrometer (TES) aboard Mars Global Surveyor, the Thermal Emission Imaging System (THEMIS) aboard Mars Odyssey, and the Mars Climate Sounder (MCS) aboard Mars Reconnaissance Orbiter (MRO). Our methodology to build the dataset works by gridding the available retrievals of column dust optical depth (CDOD) from TES and THEMIS nadir observations, as well as the estimates of this quantity from MCS limb observations. The resulting (irregularly) gridded maps (one per sol) were validated with independent observations of CDOD by PanCam cameras and Mini-TES spectrometers aboard the Mars Exploration Rovers "Spirit" and "Opportunity", by the Surface Stereo Imager aboard the Phoenix lander, and by the Compact Reconnaissance Imaging Spectrometer for Mars aboard MRO. Finally, regular maps of CDOD are produced by spatially interpolating the irregularly gridded maps using a kriging method. These latter maps are used as dust scenarios in the Mars Climate Database (MCD) version 5, and are useful in many modelling applications. The two datasets (daily irregularly gridded maps and regularly kriged maps) for the nine available martian years are publicly available as NetCDF files and can be downloaded from the MCD website at the URL: http://www-mars.lmd.jussieu.fr/mars/dust_climatology/index.html
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, kriging is the optimal interpolation method in statistical terms. The kriging algorithm produces an unbiased prediction, as well as the spatial distribution of uncertainty, allowing the error of the interpolation to be estimated at any particular point. Kriging is nevertheless not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files; this is due to the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. Also, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, which makes the technique feasible on almost any processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates, even for less dense data files.
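A compact sketch of the core computation, ordinary kriging of one location from a small grid neighborhood, is given below; the Gaussian covariance model and its parameters are assumptions for illustration:

```python
import numpy as np

def ordinary_krige(coords, vals, target, sill=1.0, rng_par=3.0):
    """Ordinary kriging prediction and variance at `target`.

    Uses a Gaussian covariance model C(h) = sill * exp(-(h/range)^2).
    On a regular grid the pairwise distances in `coords` repeat, so the
    left-hand-side matrix can be factored once and reused for every node,
    which is the saving the paper exploits.
    """
    cov = lambda h: sill * np.exp(-(h / rng_par) ** 2)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Ordinary kriging system with a Lagrange multiplier for unbiasedness
    K = np.ones((n + 1, n + 1)); K[:n, :n] = cov(d); K[n, n] = 0.0
    k = np.ones(n + 1); k[:n] = cov(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(K, k)
    return w[:n] @ vals, sill - w @ k   # estimate and kriging variance

# 4x4 neighborhood of grid values around an interpolation point
gx, gy = np.meshgrid(np.arange(4.0), np.arange(4.0))
coords = np.column_stack([gx.ravel(), gy.ravel()])
vals = np.sin(coords[:, 0]) + 0.1 * coords[:, 1]
print(ordinary_krige(coords, vals, target=np.array([1.5, 1.5])))
```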
REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.
Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng
2011-01-01
High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We unveil the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed and grasped. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, bearing interest of its own, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.
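For concreteness, the SCAD penalty derivative that drives such coordinate-wise updates has a simple closed form; this sketch follows the standard Fan-Li definition with a = 3.7 and is our illustration, not the authors' code:

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty evaluated at |t|.

    p'(t) = lam                    for t <= lam
          = (a*lam - t)/(a - 1)    for lam < t < a*lam
          = 0                      for t >= a*lam
    The flat tail removes the LASSO bias on large coefficients.
    """
    t = np.abs(t)
    mid = np.clip(a * lam - t, 0.0, None) / (a - 1.0)
    return np.where(t <= lam, lam, mid)

# LASSO penalizes all coefficients at rate lam; SCAD tapers to zero
ts = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
print(scad_deriv(ts, lam=1.0))   # [1.0, 1.0, 1.0, ~0.63, 0.0]
```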
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
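A minimal sketch of the Gabor-grid feature extraction step is shown below; the filter-bank parameters, grid spacing and image size are illustrative assumptions, not the module's actual settings:

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

def gabor_grid_features(face, grid_step=16, freqs=(0.1, 0.2), n_orient=4):
    """Sample Gabor magnitudes on a regular grid over a face image."""
    rows = np.arange(grid_step // 2, face.shape[0], grid_step)
    cols = np.arange(grid_step // 2, face.shape[1], grid_step)
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(frequency=f, theta=k * np.pi / n_orient)
            resp = np.abs(fftconvolve(face, kern, mode='same'))
            feats.append(resp[np.ix_(rows, cols)].ravel())  # grid samples
    return np.concatenate(feats)

def cosine_similarity(u, v):
    """Similarity measure used by the nearest-neighbor classifier."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Random stand-in for a pose- and size-normalized 128x128 face image
face = np.random.default_rng(4).random((128, 128))
vec = gabor_grid_features(face)
print(vec.shape, cosine_similarity(vec, vec))   # (512,) and 1.0
```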
Methods for Data-based Delineation of Spatial Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, John E.
In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are "clumped" together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
NASA Astrophysics Data System (ADS)
Frolov, Sergey; Garau, Bartolame; Bellingham, James
2014-08-01
Regular grid ("lawnmower") survey is a classical strategy for synoptic sampling of the ocean. Is it possible to achieve a more effective use of available resources if one takes into account a priori knowledge about variability in magnitudes of uncertainty and decorrelation scales? In this article, we develop and compare the performance of several path-planning algorithms: optimized "lawnmower," a graph-search algorithm (A*), and a fully nonlinear genetic algorithm. We use the machinery of the best linear unbiased estimator (BLUE) to quantify the ability of a vehicle fleet to synoptically map distribution of phytoplankton off the central California coast. We used satellite and in situ data to specify covariance information required by the BLUE estimator. Computational experiments showed that two types of sampling strategies are possible: a suboptimal space-filling design (produced by the "lawnmower" and the A* algorithms) and an optimal uncertainty-aware design (produced by the genetic algorithm). Unlike the space-filling designs that attempted to cover the entire survey area, the optimal design focused on revisiting areas of high uncertainty. Results of the multivehicle experiments showed that fleet performance predictors, such as cumulative speed or the weight of the fleet, predicted the performance of a homogeneous fleet well; however, these were poor predictors for comparing the performance of different platforms.
A Survey of Spatial and Seasonal Water Isotope Variability on the Juneau Icefield, Alaska
NASA Astrophysics Data System (ADS)
Dennis, D.; Carter, A.; Clinger, A. E.; Eads, O. L.; Gotwals, S.; Gunderson, J.; Hollyday, A. E.; Klein, E. S.; Markle, B. R.; Timms, J. R.
2015-12-01
The depletion of stable oxygen and hydrogen isotopes (δ18O and δD) is well correlated with temperature change, which is driven by variation in topography, climate, and atmospheric circulation. This study presents a survey of the spatial and seasonal variability of isotopic signatures on the Juneau Icefield (JI), Alaska, USA, which spans over 3,000 square kilometers. To examine small-scale variability in the previous year's accumulation, samples were taken at regular intervals from snow pits and a one-square-kilometer surficial grid. Surface snow samples were collected across the icefield to evaluate large-scale variability, spanning approximately 1,000 meters in elevation and 100 kilometers in distance. Individual precipitation events were also sampled to track percolation throughout the snowpack and temperature correlations. A survey of this extent has never been undertaken on the JI. Samples were analyzed in the field using a Los Gatos laser isotope analyzer. This survey helps us better understand isotope fractionation on temperate glaciers in coastal environments and provides preliminary information on the suitability of the JI for a future ice core drilling project.
The FORBIO Climate data set for climate analyses
NASA Astrophysics Data System (ADS)
Delvaux, C.; Journée, M.; Bertrand, C.
2015-06-01
In the framework of the interdisciplinary FORBIO Climate research project, the Royal Meteorological Institute of Belgium is in charge of providing high resolution gridded past climate data (i.e. temperature and precipitation). This climate data set will be linked to the measurements on seedlings, saplings and mature trees to assess the effects of climate variation on tree performance. This paper explains how the gridded daily temperature (minimum and maximum) data set was generated from a consistent station network between 1980 and 2013. After station selection, data quality control procedures were developed and applied to the station records to ensure that only valid measurements will be involved in the gridding process. Thereafter, the set of unevenly distributed validated temperature data was interpolated on a 4 km × 4 km regular grid over Belgium. The performance of different interpolation methods has been assessed. The method of kriging with external drift using correlation between temperature and altitude gave the most relevant results.
Cryo-electron microscopy and cryo-electron tomography of nanoparticles.
Stewart, Phoebe L
2017-03-01
Cryo-transmission electron microscopy (cryo-TEM or cryo-EM) and cryo-electron tomography (cryo-ET) offer robust and powerful ways to visualize nanoparticles. These techniques involve imaging of the sample in a frozen-hydrated state, allowing visualization of nanoparticles essentially as they exist in solution. Cryo-TEM grid preparation can be performed with the sample in aqueous solvents or in various organic and ionic solvents. Two-dimensional (2D) cryo-TEM provides a direct way to visualize the polydispersity within a nanoparticle preparation. Fourier transforms of cryo-TEM images can confirm the structural periodicity within a sample. While measurement of specimen parameters can be performed with 2D TEM images, determination of a three-dimensional (3D) structure often facilitates more spatially accurate quantization. 3D structures can be determined in one of two ways. If the nanoparticle has a homogeneous structure, then 2D projection images of different particles can be averaged using a computational process referred to as single particle reconstruction. Alternatively, if the nanoparticle has a heterogeneous structure, then a structure can be generated by cryo-ET. This involves collecting a tilt-series of 2D projection images for a defined region of the grid, which can be used to generate a 3D tomogram. Occasionally it is advantageous to calculate both a single particle reconstruction, to reveal the regular portions of a nanoparticle structure, and a cryo-electron tomogram, to reveal the irregular features. A sampling of 2D cryo-TEM images and 3D structures are presented for protein based, DNA based, lipid based, and polymer based nanoparticles. WIREs Nanomed Nanobiotechnol 2017, 9:e1417. doi: 10.1002/wnan.1417 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
Multivariate Spline Algorithms for CAGD
NASA Technical Reports Server (NTRS)
Boehm, W.
1985-01-01
Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underpin the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box, but of lower dimension; and (2) any simplex or box can easily be subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline can be expressed in terms of B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.
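The univariate case (s = 1) of these recursions is the familiar Cox-de Boor recurrence, sketched below as our own illustration (order k means degree k - 1):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Evaluate the univariate B-spline basis N_{i,k} at x (Cox-de Boor).

    Each recursion step expresses the B-spline as a combination of two
    B-splines of order k - 1, the univariate instance of the
    Mansfield-like recursions mentioned above.
    """
    if k == 1:                               # order-1 base case: piecewise constant
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] > t[i]:                  # skip zero-length knot spans
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Cubic B-splines (order 4) on a uniform knot vector sum to 1 in the domain
t = np.arange(8.0)
print(sum(bspline_basis(i, 4, t, 3.5) for i in range(4)))   # -> 1.0
```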
NASA Astrophysics Data System (ADS)
Martínez-Casasnovas, J. A.; Ramos, M. C.
2009-04-01
As suggested by previous research in the field of precision viticulture, intra-field yield variability depends on the variation of soil properties, and in particular on the soil moisture content. Since mapping this soil property in detail for precision viticulture applications is highly costly, the objective of the present research is to analyse its relationship with the normalised difference vegetation index (NDVI) from high-resolution satellite images, in order to use it in the definition of vineyard zonal management. The final aim is to improve irrigation in commercial vineyard blocks for better management of inputs and to deliver a more homogeneous fruit to the winery. The study was carried out in a vineyard block located in Raimat (NE Spain, Costers del Segre Designation of Origin). This is a semi-arid area with a continental Mediterranean climate and a total annual precipitation between 300-400 mm. The vineyard block (4.5 ha) is planted with Syrah vines in a 3x2 m pattern. The vines are irrigated by means of drips under a partial root drying schedule. Initially, the irrigation sectors had a quadrangular distribution, with a size of about 1 ha each. Yield is highly variable within the block, presenting a coefficient of variation of 24.9%. For the measurement of the soil moisture content a regular sampling grid of 30 x 40 m was defined. This represents a sample density of 8 samples ha-1. At the nodes of the grid, TDR (Time Domain Reflectometry) probe tubes were permanently installed down to 80 cm or down to a contrasting layer. Multi-temporal measurements were taken at different depths (every 20 cm) between November 2006 and December 2007. For each date, a map of the variability of the profile soil moisture content was interpolated by means of geostatistical analysis: from the measured values at the grid points the experimental variograms were computed and modelled, and global block kriging (10 m square blocks) was undertaken with a grid spacing of 3 m x 3 m. On the other hand, three Quickbird-2 satellite images were acquired and processed to monitor plant vigour. The dates of image acquisition were: 29-07-2004, 13-07-2005 and 13-07-2006. They are within the range of
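For reference, the NDVI mentioned above is computed per pixel from the red and near-infrared reflectance bands; a minimal sketch follows (the Quickbird-2 band assignment, red = band 3 and NIR = band 4, is stated here as an assumption):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised difference vegetation index: (NIR - Red) / (NIR + Red).

    Values near 1 indicate dense, vigorous canopy; bare soil is near 0.
    """
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-9, None)  # avoid divide-by-zero

# Stand-in reflectance arrays for one vineyard block scene
rng = np.random.default_rng(5)
red = rng.uniform(0.02, 0.2, (100, 100))
nir = rng.uniform(0.2, 0.6, (100, 100))
v = ndvi(red, nir)
print(v.mean(), v.min(), v.max())
```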
Soil Moisture Monitoring using Surface Electrical Resistivity measurements
NASA Astrophysics Data System (ADS)
Calamita, Giuseppe; Perrone, Angela; Brocca, Luca; Straface, Salvatore
2017-04-01
The relevant role played by soil moisture (SM) in global and local natural processes results in an explicit interest in its spatial and temporal estimation in the vadose zone from different scientific areas, i.e. eco-hydrology, hydrogeology, atmospheric research, soil and plant sciences, etc. A deeper understanding of natural processes requires the collection of data at a higher number of points and at increasingly higher spatial scales in order to validate hydrological numerical simulations. In order to take best advantage of Electrical Resistivity (ER) data, with their non-invasive and cost-effective properties, sequential Gaussian geostatistical simulations (sGs) can be applied to monitor the SM distribution in the soil by means of a few SM measurements and a dense, regular ER monitoring grid. With this aim, co-located SM measurements using mobile TDR probes (MiniTrase) and ER measurements, obtained by using a four-electrode device coupled with a geo-resistivimeter (Syscal Junior), were collected during two surveys carried out on a 200 × 60 m area. Data were collected at a depth of around 20 cm at more than 800 points, adopting a regular grid sampling scheme with steps (5 m) varying according to logistic and soil compaction constraints. The results of this study are robust due to the high number of measurements available for both variables, which strengthens confidence in the estimated covariance function. Moreover, the findings obtained using sGs show that it is possible to estimate soil moisture variations in the pedological zone by means of time-lapse electrical resistivity and a few SM measurements.
An RBF-FD closest point method for solving PDEs on surfaces
NASA Astrophysics Data System (ADS)
Petras, A.; Ling, L.; Ruuth, S. J.
2018-10-01
Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.
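The weight computation at the heart of RBF-FD can be sketched as follows; this is our illustration with a polyharmonic-spline kernel r^5 plus quadratic polynomial augmentation, and the paper's specific kernel and stencil choices may differ:

```python
import numpy as np

def rbf_fd_laplacian_weights(nodes, center):
    """RBF-FD weights approximating the 2D Laplacian at `center`.

    Polyharmonic spline phi(r) = r^5 augmented with quadratic polynomials;
    solving the saddle-point system yields stencil weights that are exact
    for the polynomial part.
    """
    d = nodes - center
    r = np.linalg.norm(d[:, None, :] - d[None, :, :], axis=-1)
    A = r**5                                        # phi(|xi - xj|)
    # Polynomial block: 1, x, y, x^2, xy, y^2 at the offsets
    P = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],
                         d[:, 0]**2, d[:, 0] * d[:, 1], d[:, 1]**2])
    n, m = A.shape[0], P.shape[1]
    M = np.block([[A, P], [P.T, np.zeros((m, m))]])
    rc = np.linalg.norm(d, axis=-1)
    rhs = np.concatenate([25.0 * rc**3,             # Lap r^5 = 25 r^3 in 2D
                          [0, 0, 0, 2, 0, 2]])      # Lap of 1,x,y,x^2,xy,y^2
    return np.linalg.solve(M, rhs)[:n]

# Weights on a scattered stencil reproduce Lap(x^2 + y^2) = 4
rng = np.random.default_rng(6)
center = np.zeros(2)
nodes = np.vstack([center, 0.1 * rng.standard_normal((15, 2))])
w = rbf_fd_laplacian_weights(nodes, center)
print(w @ (nodes[:, 0]**2 + nodes[:, 1]**2))        # ~4
```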
Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu
2017-07-03
The grid strapdown inertial navigation system (SINS) used in polar navigation includes three kinds of periodic oscillation errors, just as common SINS based on a geographic coordinate system do. For ships that can use external information to conduct a system reset regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. A Kalman filter based on the grid SINS error model appropriate for ships is established in this paper. The errors of the grid-level attitude angles can be accurately estimated when the external velocity contains a constant error, and correcting the errors of the grid-level attitude angles through feedback correction can then effectively damp the Schuler periodic oscillation. The simulation results show that, with the aid of an external reference velocity, the proposed external level damping algorithm based on the Kalman filter can suppress the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on a damping network, the algorithm proposed in this paper reduces the overshoot errors when the state of the grid SINS is switched from the non-damping state to the damping state, and this effectively improves the navigation accuracy of the system.
Colony mapping: A new technique for monitoring crevice-nesting seabirds
Renner, H.M.; Renner, M.; Reynolds, J.H.; Harping, A.M.A.; Jones, I.L.; Irons, D.B.; Byrd, G.V.
2006-01-01
Monitoring populations of auklets and other crevice-nesting seabirds remains problematic, although numerous methods have been attempted since the mid-1960s. Anecdotal evidence suggests several large auklet colonies have recently decreased in both abundance and extent, concurrently with vegetation encroachment and succession. Quantifying changes in the geographical extent of auklet colonies may be a useful alternative to monitoring population size directly. We propose a standardized method for colony mapping using a randomized systematic grid survey with two components: a simple presence/absence survey and an auklet evidence density survey. A quantitative auklet evidence density index was derived from the frequency of droppings and feathers. This new method was used to map the colony on St. George Island in the southeastern Bering Sea and results were compared to previous colony mapping efforts. Auklet presence was detected in 62 of 201 grid cells (each grid cell = 2500 m2) by sampling a randomly placed 16 m2 plot in each cell; estimated colony area = 155 000 m2. The auklet evidence density index varied by two orders of magnitude across the colony and was strongly correlated with means of replicated counts of birds socializing on the colony surface. Quantitatively mapping all large auklet colonies is logistically feasible using this method and would provide an important baseline for monitoring colony status. Regularly monitoring select colonies using this method may be the best means of detecting changes in distribution and population size of crevice-nesting seabirds. ?? The Cooper Ornithological Society 2006.
Can fractal objects operate as efficient inline mixers?
NASA Astrophysics Data System (ADS)
Laizet, Sylvain; Vassilicos, John; Turbulence, Mixing; Flow Control Group Team
2011-11-01
Recently, Hurst & Vassilicos (PoF 2007), Seoud & Vassilicos (PoF 2007) and Mazellier & Vassilicos (PoF 2010) used different multiscale grids to generate turbulence in a wind tunnel and showed that complex multiscale boundary/initial conditions can drastically influence the behaviour of a turbulent flow, and that the detailed specific nature of the multiscale geometry matters too. Multiscale (fractal) objects can be designed to be immersed in any fluid flow where there is a need to control and design the turbulence generated by the object. Different types of multiscale objects can be designed as different types of energy-efficient mixers with varying degrees of high turbulent intensities, small pressure drop, and downstream distance from the grid where the turbulence is most vigorous. Here, we present a 3D DNS study of the stirring and mixing of a passive scalar by turbulence generated with either a fractal square grid or a regular grid in the presence of a mean scalar gradient. The results show that: (1) there is a linear increase in the passive scalar variance for both grids, (2) the passive scalar variance is ten times bigger for the fractal grid, (3) the passive scalar flux is constant after the production region for both grids, (4) the passive scalar flux is enhanced by an order of magnitude for the fractal grid. We acknowledge support from EPSRC, UK.
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
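To make the general idea concrete, here is a minimal sketch (not the paper's renormalised operator, and without its least-squares correction of the first derivatives): a discrete Laplacian on scattered 2D points obtained by least-squares fitting a local quadratic Taylor expansion around each point. The neighbourhood radius and the test function are illustrative choices.

```python
import numpy as np

def laplacian_scattered(points, values, radius):
    """Approximate the Laplacian of `values` at each point of a scattered 2D cloud."""
    lap = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        d = points - p                                   # offsets to all points
        mask = (np.einsum('ij,ij->i', d, d) < radius**2) & (np.arange(len(points)) != i)
        dx, dy = d[mask, 0], d[mask, 1]
        if dx.size < 5:                                  # need >= 5 neighbours for 5 unknowns
            continue
        # Taylor: f(p+d) - f(p) ~ fx*dx + fy*dy + 0.5*fxx*dx^2 + fxy*dx*dy + 0.5*fyy*dy^2
        A = np.column_stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2])
        b = values[mask] - values[i]
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        lap[i] = coef[2] + coef[4]                       # fxx + fyy
    return lap

# Quick check on f(x, y) = x^2 + y^2, whose Laplacian is 4 everywhere.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(2000, 2))
f = pts[:, 0]**2 + pts[:, 1]**2
print(np.nanmedian(laplacian_scattered(pts, f, radius=0.08)))  # ~4
```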
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major issues need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations, and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using k-d tree decomposition compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
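A toy sketch of the load-balancing idea (the paper's parallel algorithm and split-point strategies are more involved): recursively split the point set at the median coordinate, alternating axes, so every block ends up with a nearly equal number of points even for heavily clustered inputs. All sizes below are illustrative.

```python
import numpy as np

def kdtree_regions(points, depth=0, max_points=1000):
    """Return a list of point blocks, each holding at most max_points points."""
    if len(points) <= max_points:
        return [points]
    axis = depth % points.shape[1]          # cycle through x, y, z, ...
    order = np.argsort(points[:, axis])
    half = len(points) // 2                 # median split => balanced halves
    left, right = points[order[:half]], points[order[half:]]
    return (kdtree_regions(left, depth + 1, max_points) +
            kdtree_regions(right, depth + 1, max_points))

# A strongly clustered (unbalanced) input: a regular-grid decomposition would
# put most points into a few blocks, while the k-d tree keeps block sizes even.
rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(0.0, 0.05, (90000, 3)),
                      rng.uniform(-1.0, 1.0, (10000, 3))])
blocks = kdtree_regions(pts, max_points=2000)
print(len(blocks), min(len(b) for b in blocks), max(len(b) for b in blocks))
```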
Pulsed laser-induced formation of silica nanogrids
2014-01-01
Silica grids with micron to sub-micron mesh sizes and wire diameters of 50 nm are fabricated on fused silica substrates. They are formed by single-pulse structured excimer laser irradiation of a UV-absorbing silicon suboxide (SiOx) coating through the transparent substrate. A polydimethylsiloxane (PDMS) superstrate (cover layer) coated on top of the SiOx film prior to laser exposure serves as confinement for controlled laser-induced structure formation. At sufficiently high laser fluence, this process leads to grids consisting of a periodic loop network connected to the substrate at regular positions. By an additional high-temperature annealing step, the residual SiOx is oxidized and a pure SiO2 grid is obtained. PACS 81.07.-b; 81.07.Gf; 81.65.Cf PMID:24581305
SymPix: A Spherical Grid for Efficient Sampling of Rotationally Invariant Operators
NASA Astrophysics Data System (ADS)
Seljebotn, D. S.; Eriksen, H. K.
2016-02-01
We present SymPix, a special-purpose spherical grid optimized for efficiently sampling rotationally invariant linear operators. This grid is conceptually similar to the Gauss-Legendre (GL) grid, aligning sample points with iso-latitude rings located on Legendre polynomial zeros. Unlike the GL grid, however, the number of grid points per ring varies as a function of latitude, avoiding expensive oversampling near the poles and ensuring nearly equal sky area per grid point. The ratio between the number of grid points in two neighboring rings is required to be a low-order rational number (3, 2, 1, 4/3, 5/4, or 6/5) to maintain a high degree of symmetry. Our main motivation for this grid is to solve linear systems using multi-grid methods, and to construct efficient preconditioners through pixel-space sampling of the linear operator in question. As a benchmark and representative example, we compute a preconditioner for a linear system that involves the operator $\hat{D} + \hat{B}^T N^{-1} \hat{B}$, where $\hat{B}$ and $\hat{D}$ may be described as both local and rotationally invariant operators, and $N$ is diagonal in the pixel domain. For a bandwidth limit of $\ell_{\max} = 3000$, we find that our new SymPix implementation yields average speed-ups of 360 and 23 for $\hat{B}^T N^{-1} \hat{B}$ and $\hat{D}$, respectively, compared with the previous state-of-the-art implementation.
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
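The core idea, stated generically, is that a deterministic quadrature rule can reproduce an expectation over the sub-grid PDF with far fewer evaluations than random sampling. The sketch below compares Gauss-Hermite quadrature with Monte Carlo for a grid-box average E[f(q)] under a Gaussian assumption; the rate function f is a generic threshold-type stand-in, not the actual Kessler autoconversion parameterization or the paper's PDF.

```python
import numpy as np

# Gauss-Hermite quadrature vs. Monte Carlo for E[f(q)], q ~ N(mu, sigma^2).
f = lambda q: np.maximum(q - 1.0, 0.0) ** 1.5      # threshold-type rate (stand-in)
mu, sigma = 1.2, 0.4

# E[f(q)] = (1/sqrt(pi)) * sum_i w_i * f(mu + sigma*sqrt(2)*x_i)
x, w = np.polynomial.hermite.hermgauss(8)          # only 8 deterministic points
quad = np.sum(w * f(mu + sigma * np.sqrt(2.0) * x)) / np.sqrt(np.pi)

rng = np.random.default_rng(0)
mc = f(rng.normal(mu, sigma, 100_000)).mean()      # 100 000 random samples
print(quad, mc)                                    # the two estimates agree closely
```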
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical boundary measurements. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.
Cohen, Aina E.; Baxter, Elizabeth L.
2018-01-16
An X-ray data collection grid device is provided that includes a magnetic base that is compatible with robotic sample mounting systems used at synchrotron beamlines, a grid element fixedly attached to the magnetic base, where the grid element includes at least one sealable sample window disposed through a planar synchrotron-compatible material, where the planar synchrotron-compatible material includes at least one automated X-ray positioning and fluid handling robot fiducial mark.
De Miguel, Eduardo; Barrio-Parra, Fernando; Elío, Javier; Izquierdo-Díaz, Miguel; García-González, Jerónimo Emilio; Mazadiego, Luis Felipe; Medina, Rafael
2018-06-02
The applicability of radon (²²²Rn) measurements to delineate non-aqueous phase liquid (NAPL) contamination in subsoil is discussed at a site with lithological discontinuities through a blind test. Three alpha spectroscopy monitors were used to measure radon in soil air in a 25,000-m² area, following a regular sampling design on a 20-m grid. Repeatability and reproducibility of the results were assessed by means of duplicate measurements at six sampling positions. Furthermore, three points not affected by oil spills were sampled to estimate the radon background concentration in soil air. Data histograms, Q-Q plots, variograms, and cluster analysis allowed us to recognize two data populations, associated with the possible path of a fault and a lithological discontinuity. Even though the concentration of radon in soil air was dominated by this discontinuity, the characterization of the background emanation in each lithological unit allowed us to distinguish areas potentially affected by NAPL, thus justifying the application of radon emanometry as a screening technique for the delineation of NAPL plumes at sites with lithological discontinuities.
NASA Astrophysics Data System (ADS)
Zhai, Xiaofang; Zhu, Xinyan; Xiao, Zhifeng; Weng, Jie
2009-10-01
Historically, cellular automata (CA) are discrete dynamical mathematical structures defined on a spatial grid. Research on cellular automata systems (CAS) has focused on rule sets and initial conditions and has not discussed adjacency; the main focus of our study is therefore the effect of adjacency on CA behavior. This paper compares rectangular grids with hexagonal grids in terms of their characteristics, strengths, and weaknesses. These have great influence on modeling effects and other applications, including the role of the nearest neighborhood in experimental design. Our research shows that rectangular and hexagonal grids have different characteristics and are suited to distinct applications. The regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have not been widely discussed. The rectangular grid is generally preferred because of its symmetry, especially in orthogonal co-ordinate systems, and because of the frequent use of raster data in Geographic Information Systems (GIS). However, for complex terrain and uncertain, multidirectional regions, we prefer hexagonal grids and methods, which facilitate and simplify the problem. Hexagonal grids can overcome directional warp and have some unique characteristics; for example, a hexagonal grid has a simpler and more symmetric nearest neighborhood, which avoids the ambiguities of rectangular grids. Movement paths and connectivity, together with the most compact arrangement of pixels, give hexagonal grids a clear advantage in modeling and analysis. The selection of an appropriate grid should be based on the requirements and objectives of the application. We use rectangular and hexagonal grids, respectively, to develop a city model, making use of remote sensing images to acquire the 2002 and 2005 land state of Wuhan. On the basis of the 2002 city land state, we use CA to simulate a plausible form of the city in 2005. These results provide a proof of concept for hexagonal grids, which show a clear advantage here.
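A small sketch of the adjacency difference the study emphasizes (the coordinates and offsets are the standard axial convention for hexagonal grids, not anything specific to this paper): a hexagonal cell has six equidistant neighbours, while a square cell's Moore neighbourhood mixes four edge neighbours with four more distant corner neighbours.

```python
# Neighborhoods for CA on hexagonal (axial "q, r" coordinates) vs. square grids.
HEX_OFFSETS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
MOORE_OFFSETS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]

def hex_neighbors(q, r):
    return [(q + dq, r + dr) for dq, dr in HEX_OFFSETS]

def square_neighbors(x, y):
    return [(x + dx, y + dy) for dx, dy in MOORE_OFFSETS]

print(len(hex_neighbors(0, 0)))     # 6, all at equal centre-to-centre distance
print(len(square_neighbors(0, 0)))  # 8, mixing edge and corner adjacency
```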
On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets
NASA Astrophysics Data System (ADS)
Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.
2001-05-01
There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relatively limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetric compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations from data sets not only with heterogeneous coverage but also with a wide range of accuracies. In combining these data into regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single-beam soundings with available metadata, to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata and, when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
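A compact sketch of the direct-simulation loop just described, with synthetic soundings in place of IBCAO data and SciPy's generic griddata interpolator standing in for the compilation's actual gridding scheme; the positions, depths, and error magnitudes are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

# Perturb each sounding by its assumed a-priori error, re-grid every
# realization with the same interpolator, and take the per-node spread
# across realizations as a standard-error grid.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(300, 2))        # sounding positions (km)
depth = 3000.0 + 50.0 * np.sin(xy[:, 0] / 15.0)    # synthetic depths (m)
sigma = rng.choice([5.0, 50.0], size=300)          # per-source error model (m)

gx, gy = np.meshgrid(np.linspace(5, 95, 60), np.linspace(5, 95, 60))
realizations = []
for _ in range(100):
    perturbed = depth + rng.normal(0.0, sigma)     # scaled normal variates
    realizations.append(griddata(xy, perturbed, (gx, gy), method='linear'))

stderr_grid = np.nanstd(np.stack(realizations), axis=0)
print(np.nanmin(stderr_grid), np.nanmax(stderr_grid))
```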
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (≈200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
Sea surface temperature and salinity from French research vessels, 2001–2013
Gaillard, Fabienne; Diverres, Denis; Jacquin, Stéphane; Gouriou, Yves; Grelet, Jacques; Le Menn, Marc; Tassel, Joelle; Reverdin, Gilles
2015-01-01
French Research vessels have been collecting thermo-salinometer (TSG) data since 1999 to contribute to the Global Ocean Surface Underway Data (GOSUD) programme. The instruments are regularly calibrated and continuously monitored. Water samples are taken on a daily basis by the crew and later analysed in the laboratory. We present here the delayed mode processing of the 2001–2013 dataset and an overview of the resulting quality. Salinity measurement error was a few hundredths of a unit or less on the practical salinity scale (PSS), due to careful calibration and instrument maintenance, complemented with a rigorous adjustment on water samples. In a global comparison, these data show excellent agreement with an ARGO-based salinity gridded product. The Sea Surface Salinity and Temperature from French REsearch SHips (SSST-FRESH) dataset is very valuable for the ‘calibration and validation’ of the new satellite observations delivered by the Soil Moisture and Ocean Salinity (SMOS) and Aquarius missions. PMID:26504523
The SAMI Galaxy Survey: cubism and covariance, putting round pegs into square holes
NASA Astrophysics Data System (ADS)
Sharp, R.; Allen, J. T.; Fogarty, L. M. R.; Croom, S. M.; Cortese, L.; Green, A. W.; Nielsen, J.; Richards, S. N.; Scott, N.; Taylor, E. N.; Barnes, L. A.; Bauer, A. E.; Birchall, M.; Bland-Hawthorn, J.; Bloom, J. V.; Brough, S.; Bryant, J. J.; Cecil, G. N.; Colless, M.; Couch, W. J.; Drinkwater, M. J.; Driver, S.; Foster, C.; Goodwin, M.; Gunawardhana, M. L. P.; Ho, I.-T.; Hampton, E. J.; Hopkins, A. M.; Jones, H.; Konstantopoulos, I. S.; Lawrence, J. S.; Leslie, S. K.; Lewis, G. F.; Liske, J.; López-Sánchez, Á. R.; Lorente, N. P. F.; McElroy, R.; Medling, A. M.; Mahajan, S.; Mould, J.; Parker, Q.; Pracy, M. B.; Obreschkow, D.; Owers, M. S.; Schaefer, A. L.; Sweet, S. M.; Thomas, A. D.; Tonini, C.; Walcher, C. J.
2015-01-01
We present a methodology for the regularization and combination of sparsely sampled and irregularly gridded observations from fibre-optic multiobject integral field spectroscopy. The approach minimizes interpolation and retains image resolution on combining subpixel dithered data. We discuss the methodology in the context of the Sydney-AAO multiobject integral field spectrograph (SAMI) Galaxy Survey underway at the Anglo-Australian Telescope. The SAMI instrument uses 13 fibre bundles to perform high-multiplex integral field spectroscopy across a 1° diameter field of view. The SAMI Galaxy Survey is targeting ~3000 galaxies drawn from the full range of galaxy environments. We demonstrate that the subcritical sampling of the seeing and the incomplete fill factor of the integral field bundles result in only a 10 per cent degradation in the final image resolution recovered. We also implement a new methodology for tracking covariance between elements of the resulting data cubes, which retains 90 per cent of the covariance information while incurring only a modest increase in the survey data volume.
de Vries, W; Wieggers, H J J; Brus, D J
2010-08-05
Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled on a regular grid. Here we present spatially averaged annual element leaching fluxes for three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, each consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata not being represented in the composite samples is eliminated. The sampling times were selected in such a way that the cumulative precipitation surplus of the time interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated by using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation and the resulting sampling errors can be quantified. Results of this new method were compared with the reference approach, in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregating to a year. The results show that the annual fluxes calculated with the reference method for the period 2003-2005, including all plots, elements, and depths, lie within the range of the average ±2 times the standard error of the new method in only 53% of the cases. Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO₄ and NO₃ with fluxes expressed in mol_c ha⁻¹ yr⁻¹. This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO₄ and NO₃.
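The time-selection rule lends itself to a few lines of code. A sketch under simple assumptions (a synthetic daily surplus series standing in for the 30-year climatological estimate): pick the first day on which the cumulative surplus crosses each of n equally spaced targets, so the intervals are constant in surplus rather than in calendar time.

```python
import numpy as np

# Choose sampling days so the cumulative precipitation surplus between
# consecutive sampling rounds is constant.
rng = np.random.default_rng(0)
daily_surplus = np.clip(rng.normal(1.0, 2.0, 365), 0.0, None)   # mm/day (synthetic)

def sampling_days(surplus, n_rounds):
    cum = np.cumsum(surplus)
    targets = np.linspace(cum[-1] / n_rounds, cum[-1], n_rounds)
    idx = np.searchsorted(cum, targets)     # first day reaching each target
    return np.minimum(idx, len(surplus) - 1)

print(sampling_days(daily_surplus, n_rounds=12))
# Irregular in calendar time, but each interval carries the same water flux.
```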
Sampling designs matching species biology produce accurate and affordable abundance indices
Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff
2013-01-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption that all individuals have an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on global positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically, or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample and whether bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates and probabilities of capture and recapture. One grid (49-km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used the most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with the least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher, than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290
NASA Astrophysics Data System (ADS)
Edwards, Nathaniel S.; Conley, Jerrod C.; Reichenberger, Michael A.; Nelson, Kyle A.; Tiner, Christopher N.; Hinson, Niklas J.; Ugorowski, Philip B.; Fronk, Ryan G.; McGregor, Douglas S.
2018-06-01
The propagation of electrons through several linear pore densities of reticulated vitreous carbon (RVC) foam was studied using a Frisch-grid parallel-plate ionization chamber pressurized to 1 psig of P-10 proportional gas. The operating voltages of the electrodes contained within the Frisch-grid parallel-plate ionization chamber were defined by measuring counting curves using a collimated ²⁴¹Am alpha-particle source with and without a Frisch grid. RVC foam samples with linear pore densities of 5, 10, 20, 30, 45, 80, and 100 pores per linear inch were separately positioned between the cathode and anode. Pulse-height spectra and count rates from a collimated ²⁴¹Am alpha-particle source positioned between the cathode and each RVC foam sample were measured and compared to a measurement without an RVC foam sample. The Frisch grid was positioned between the RVC foam sample and the anode. The measured pulse-height spectra were indiscernible from background and resulted in negligible net count rates for all RVC foam samples. The Frisch-grid parallel-plate ionization chamber measurement results indicate that electrons do not traverse the bulk of RVC foam and consequently do not produce a pulse.
NASA Astrophysics Data System (ADS)
Campagnolo, M.; Schaaf, C.
2016-12-01
Due to the necessity of time compositing and other user requirements, vegetation indices, as well as many other EOS-derived products, are distributed in a gridded format (level L2G or higher) using an equal-area sinusoidal grid, at grid sizes of 232 m, 463 m or 926 m. In this process, the actual surface signal suffers some degradation, caused by both the sensor's point spread function and the resampling from swath to the regular grid. The magnitude of that degradation depends on a number of factors, such as surface heterogeneity, band nominal resolution, observation geometry and grid size. In this research, the effect of grid size is quantified for MODIS and VIIRS (at five EOS validation sites with distinct land covers), for the full range of view zenith angles, and at grid sizes of 232 m, 253 m, 309 m, 371 m, 397 m and 463 m. This allows us to compare MODIS and VIIRS gridded products for the same scenes, and to determine the grid size at which these products are most similar. Towards that end, simulated MODIS and VIIRS bands are generated from Landsat 8 surface reflectance images at each site, and gridded products are then derived using maximum obscov resampling. Then, for every grid size, the original Landsat 8 NDVI and the derived MODIS and VIIRS NDVI products are compared. This methodology can be applied to other bands and products to determine which spatial aggregation is best suited overall for EOS to S-NPP product continuity. Results for MODIS (250 m bands) and VIIRS (375 m bands) NDVI products show that finer grid sizes tend to be better at preserving the original signal. Significant degradation of gridded NDVI occurs when the grid size is larger than 253 m (MODIS) and 371 m (VIIRS). Our results suggest that the current MODIS "500 m" (actually 463 m) grid size is best for product continuity. Note, however, that up to that grid size, MODIS gridded products are somewhat better at preserving the surface signal than VIIRS, except at very high VZA.
Comparison of measuring strategies for the 3-D electrical resistivity imaging of tumuli
NASA Astrophysics Data System (ADS)
Tsourlos, Panagiotis; Papadopoulos, Nikos; Yi, Myeong-Jong; Kim, Jung-Ho; Tsokas, Gregory
2014-02-01
Artificially erected hills like tumuli, mounds, barrows and kurgans are monuments of past human activity and offer opportunities to reconstruct habitation models regarding life and customs during their building period. These structures also host features of archeological significance like architectural relics, graves or chamber tombs. Tumulus exploration is a challenging geophysical problem due to the complex distribution of the subsurface physical properties, the size and burial depth of potential relics, and the uneven topographical terrain. Geoelectrical methods by means of three-dimensional (3-D) inversion are increasingly popular for tumulus investigation. Typically, data are obtained by establishing a regular rectangular grid and assembling the data collected by parallel two-dimensional (2-D) tomographies. In this work the application of a radial 3-D mode is studied, in which the data are collected by radially positioned Electrical Resistivity Tomography (ERT) lines. The relative advantages and disadvantages of this measuring mode over regular grid measurements were investigated, and optimum ways to perform 3-D ERT surveys for tumulus investigations are proposed. Comparative tests were performed by means of synthetic examples as well as tests with field data. Overall, all tested models verified the superiority of the radial mode in delineating bodies positioned at the central part of the tumulus, while the regular measuring mode proved superior in recovering bodies positioned away from the center of the tumulus. The combined use of radial and regular modes seems to produce superior results at the expense of the time required for data acquisition and processing.
NASA Astrophysics Data System (ADS)
Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.
2017-02-01
An Automatic Optical Inspection (AOI) system for imaging devices used in the automotive industry is described; it uses inspection optics of lower spatial resolution than the device under inspection. The system is robust and has no moving parts, and the cycle time is short. Its main advantage is that it is capable of detecting and quantifying defects in regular patterns, working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially fitted for regular patterns, like the ones that can be produced in image displays and Head-Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are the ones impairing the function of the display, and are preferred in AOI to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since it is this that allows quantitative inspection below Shannon-Nyquist. For HUDs, functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
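As a rough illustration of notch filtering a regular pattern (a single-image toy; the actual AOI system's multi-band filter, calibration, and optics are not modeled), the sketch below zeroes the pattern's harmonics in the 2-D Fourier domain so that a small functional defect dominates the residual. The pattern period and sizes are arbitrary.

```python
import numpy as np

# Toy notch (band-stop) filtering of a regular pattern: suppress the pattern's
# harmonics in the Fourier domain so deviations (defects) dominate the residual.
def notch_filter(image, period, half_width=1):
    image = image - image.mean()                  # drop DC so residual ~ defect
    F = np.fft.fftshift(np.fft.fft2(image))
    n = image.shape[0]
    c = n // 2                                    # zero-frequency bin
    k0 = n // period                              # fundamental frequency (bins)
    for h in range(1, c // k0):                   # harmonics on both axes
        for k in (c - h * k0, c + h * k0):
            F[k - half_width:k + half_width + 1, :] = 0
            F[:, k - half_width:k + half_width + 1] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

n, period = 256, 16
yy, xx = np.mgrid[0:n, 0:n]
pattern = (np.sin(2 * np.pi * xx / period) * np.sin(2 * np.pi * yy / period) > 0) * 1.0
pattern[100:104, 120:124] = 0.5                   # a small functional defect
residual = notch_filter(pattern, period)
print(np.abs(residual[100:104, 120:124]).mean())  # defect stands out ...
print(np.abs(residual).mean())                    # ... above the background
```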
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single-crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed from FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin Hypercube sampling) and model approaches, namely N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty in material property and crystallographic orientation estimates was assessed given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model closely represents the test data. One important finding was that when the model matches the test data well, a Kriging surrogate model using unsorted Latin Hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
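For intuition on the sampling schemes compared above, here is a sketch contrasting a regular-grid design with a Latin Hypercube design for a three-parameter surrogate training set, using SciPy's qmc module; the parameter names, bounds, and sample counts are illustrative, not the study's actual Mar-M247 settings.

```python
import numpy as np
from scipy.stats import qmc

# Regular-grid vs. Latin Hypercube designs for a 3-parameter surrogate model.
n_per_axis = 10
grid = np.stack(np.meshgrid(*[np.linspace(0.0, 1.0, n_per_axis)] * 3),
                axis=-1).reshape(-1, 3)            # 1000 FEM evaluations

sampler = qmc.LatinHypercube(d=3, seed=0)
lhs = sampler.random(n=120)                        # 120 evaluations, every 1-D margin stratified

# Rescale the unit-cube design to physical ranges, e.g. two stiffness
# parameters (GPa) and one misorientation angle (deg) -- assumed names/bounds.
lower, upper = [180.0, 80.0, 0.0], [260.0, 140.0, 10.0]
print(qmc.scale(lhs, lower, upper)[:3])
print(grid.shape, lhs.shape)                       # (1000, 3) vs (120, 3)
```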
Cascading failures in ac electricity grids.
Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan
2016-09-01
Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of simulations of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability of finding more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability of disconnecting more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark
2018-05-09
Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks: it is labor intensive, prone to human error, and creates the need for simplifying assumptions during the calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model that predicts whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost. Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce the annotation burden with quality comparable to human analysts.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.
Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core
NASA Astrophysics Data System (ADS)
Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey
2017-05-01
SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. Lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. This article presents the model's dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization, and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using the reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy for up to a 25% reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of the solution in the region of interest.
Pilly, Praveen K.; Grossberg, Stephen
2013-01-01
Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enable them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips comprised of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130
SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, M; Tobias, R; Pankuch, M
Purpose: The objective was to develop a method for dose distribution calculation of spatially-fractionated GRID radiotherapy (SFGRT) in the Eclipse treatment planning system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the iso-center level, together with matching beam geometries, to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence to the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially-fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create a virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.
From grid cells to place cells with realistic field sizes
2017-01-01
While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005
Im, Seokjin; Choi, JinTak
2014-06-17
In a pervasive computing environment using smart devices equipped with various sensors, a wireless data broadcasting system for spatial data items is a natural way to efficiently provide a location-dependent information service, regardless of the number of clients. A non-flat wireless broadcast system can support clients in quickly accessing their preferred data items by disseminating the preferred items more frequently than regular data on the wireless channel. To efficiently support the processing of spatial window queries in a non-flat wireless data broadcasting system, we propose a distributed air index based on a maximum boundary rectangle (MaxBR) over grid-cells (abbreviated DAIM), which uses MaxBRs for filtering out hot data items on the wireless channel. Unlike an existing index that repeats regular data items in close proximity to hot items at the same frequency as hot data items in a broadcast cycle, DAIM makes it possible to repeat only hot data items in a cycle and reduces the length of the broadcast cycle. Consequently, DAIM helps clients access the desired items quickly, improves the access time, and reduces energy consumption. In addition, a MaxBR helps clients decide whether they have to access regular data items or not. Simulation studies show the proposed DAIM outperforms existing schemes with respect to access time and energy consumption.
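A tiny sketch of the MaxBR decision a client can make (the index layout, grid-cells, and broadcast protocol of DAIM are not modeled; the rectangle values are made up): if the window query does not intersect the MaxBR of the hot items, the client can skip them and doze to save energy.

```python
# Rectangles are (xmin, ymin, xmax, ymax).
def intersects(a, b):
    """True when the two axis-aligned rectangles overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

max_br = (10.0, 10.0, 40.0, 30.0)        # MaxBR of the hot items (invented)
query1 = (35.0, 25.0, 50.0, 45.0)        # overlaps -> must read the hot items
query2 = (60.0, 40.0, 70.0, 50.0)        # disjoint -> doze, save energy

for q in (query1, query2):
    print(intersects(q, max_br))
```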
Quantitative characterization of the small-scale fracture patterns on the plains of Venus
NASA Technical Reports Server (NTRS)
Sammis, Charles G.; Bowman, David D.
1995-01-01
The objectives of this research project were to (1) compile a comprehensive database of the occurrence of regularly spaced kilometer scale lineations on the volcanic plains of Venus in an effort to verify the effectiveness of the shear-lag model developed by Banerdt and Sammis (1992), and (2) develop a model for the formation of irregular kilometer scale lineations such as typified in the gridded plains region of Guinevere Planitia. Attached to this report is the paper 'A Tectonic Model for the Formation of the Gridded Plains on Guinevere Planitia, Venus, and Implications for the Elastic Thickness of the Lithosphere'.
Real-Time Rotational Activity Detection in Atrial Fibrillation
Ríos-Muñoz, Gonzalo R.; Arenal, Ángel; Artés-Rodríguez, Antonio
2018-01-01
Rotational activations, or spiral waves, are one of the proposed mechanisms for atrial fibrillation (AF) maintenance. We present a system for assessing the presence of rotational activity from intracardiac electrograms (EGMs). Our system is able to operate in real-time with multi-electrode catheters of different topologies in contact with the atrial wall, and it is based on new local activation time (LAT) estimation and rotational activity detection methods. The EGM LAT estimation method is based on the identification of the highest sustained negative slope of unipolar signals. The method is implemented as a linear filter whose output is interpolated on a regular grid to match any catheter topology. Its operation is illustrated on selected signals and compared to the classical Hilbert-Transform-based phase analysis. After the estimation of the LAT on the regular grid, the detection of rotational activity in the atrium is done by a novel method based on the optical flow of the wavefront dynamics, and a rotation pattern match. The methods have been validated using in silico and real AF signals. PMID:29593566
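A simplified sketch of the LAT idea described above (the paper's linear-filter implementation and real EGMs are replaced by a synthetic waveform with invented parameters): differentiate a unipolar electrogram, smooth the slope over a short window so it must be sustained, and mark the activation time at the most negative value.

```python
import numpy as np

fs = 1000.0                                       # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
egm = (np.exp(-((t - 0.40) / 0.015) ** 2)         # RS-like biphasic deflection
       - np.exp(-((t - 0.42) / 0.015) ** 2))
egm += 0.05 * np.random.default_rng(0).standard_normal(t.size)   # noise

slope = np.gradient(egm, 1.0 / fs)                # dV/dt
win = 9                                           # ~9 ms sustain window
sustained = np.convolve(slope, np.ones(win) / win, mode='same')
lat = t[np.argmin(sustained)]                     # steepest sustained downslope
print(lat)                                        # ~0.41 s, between the two peaks
```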
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dong Sik; Lee, Sanggyun
2013-06-15
Purpose: Grid artifacts arise when the antiscatter grid is used in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted, especially for direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to the filters used to suppress the grid artifacts, grids rotated with respect to the sampling direction are employed, and min-max optimization problems for finding optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for grid artifact reduction based on band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested on digital x-ray images obtained from direct detectors with rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
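The multiplicative model makes the homomorphic trick easy to demonstrate in one dimension (a sketch only; the paper works in 2-D with rotated grids and carefully designed filters). With I(x) = S(x)·G(x), taking the logarithm turns the product into a sum, a narrow band-stop at the grid frequency removes log G, and exponentiation restores the image. All signal parameters below are invented.

```python
import numpy as np

# 1-D homomorphic grid-artifact removal: log -> band-stop -> exp.
n, f_grid = 1024, 85                      # samples per row, stripes per row
x = np.arange(n)
signal = 1.0 + 0.5 * np.exp(-((x - 500.0) / 120.0) ** 2)   # smooth "anatomy"
grid = 1.0 + 0.3 * np.cos(2.0 * np.pi * f_grid * x / n)    # antiscatter grid
image_row = signal * grid

L = np.fft.rfft(np.log(image_row))        # log: product -> sum
k = np.arange(L.size)
L[np.abs(k - f_grid) <= 2] = 0.0          # band-stop at the grid frequency
L[np.abs(k - 2 * f_grid) <= 2] = 0.0      # ... and at its second harmonic
restored = np.exp(np.fft.irfft(L, n))     # exp: back to the image domain

print(np.max(np.abs(restored - signal)))  # small residual vs. the 30% stripes
```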
Variability of 137Cs inventory at a reference site in west-central Iran.
Bazshoushtari, Nasim; Ayoubi, Shamsollah; Abdi, Mohammad Reza; Mohammadi, Mohammad
2016-12-01
The ¹³⁷Cs technique has been widely used for the evaluation of rates and patterns of soil erosion and deposition. This technique requires an accurate estimate of the ¹³⁷Cs inventory at the reference site. This study was conducted to evaluate the variability of the ¹³⁷Cs inventory with respect to the sampling program, including sample size, distance, and sampling method, at a reference site located in the vicinity of the Fereydan district in Isfahan province, west-central Iran. Two 3 × 8 grids were established, comprising a large grid (35 m length and 8 m width) and a small grid (24 m length and 6 m width). At each grid intersection two soil samples were collected, from the 0-15 cm and 15-30 cm depths, for a total of 96 soil samples from 48 sampling points. The coefficient of variation for the ¹³⁷Cs inventory in the soil samples was relatively low (CV = 15%), and the sampling distances and methods used did not significantly affect the ¹³⁷Cs inventories across the studied reference site. To obtain a satisfactory estimate of the mean ¹³⁷Cs activity at reference sites, particularly those located in semiarid regions, it is recommended to collect at least four samples in a grid pattern 3 m apart. Copyright © 2016 Elsevier Ltd. All rights reserved.
Petrovskaya, Natalia B.; Forbes, Emily; Petrovskii, Sergei V.; Walters, Keith F. A.
2018-01-01
Studies addressing many ecological problems require accurate evaluation of the total population size. In this paper, we revisit a sampling procedure used for the evaluation of the abundance of an invertebrate population from assessment data collected on a spatial grid of sampling locations. We first discuss how insufficient information about the spatial population density obtained on a coarse sampling grid may affect the accuracy of an evaluation of total population size. Such information deficit in field data can arise because of inadequate spatial resolution of the population distribution (spatially variable population density) when coarse grids are used, which is especially true when a strongly heterogeneous spatial population density is sampled. We then argue that the average trap count (the quantity routinely used to quantify abundance), if obtained from a sampling grid that is too coarse, is a random variable because of the uncertainty in sampling spatial data. Finally, we show that a probabilistic approach similar to bootstrapping techniques can be an efficient tool to quantify the uncertainty in the evaluation procedure in the presence of a spatial pattern reflecting a patchy distribution of invertebrates within the sampling grid. PMID:29495513
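A sketch of the bootstrap-style uncertainty quantification suggested above (the paper's probabilistic approach is developed for spatially structured patterns; resampling raw trap counts is only the simplest variant, and all numbers below are invented): resample the grid's counts with replacement and examine the spread of the resulting total-abundance estimates for a patchy distribution.

```python
import numpy as np

# Bootstrap the total-abundance estimate from trap counts on a coarse grid.
rng = np.random.default_rng(0)
counts = np.array([0, 0, 1, 0, 23, 31, 0, 2, 0, 0, 18, 0])  # patchy pattern
area_per_trap = 0.5          # m^2 effectively sampled by one trap (assumed)
field_area = 10_000.0        # m^2 (assumed)

totals = np.array([
    rng.choice(counts, size=counts.size, replace=True).mean()
    / area_per_trap * field_area
    for _ in range(10_000)
])
print(np.percentile(totals, [2.5, 50.0, 97.5]))
# The wide interval reflects the uncertainty a coarse grid leaves when the
# population is strongly patchy.
```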
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2013 CFR
2013-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories, which in turn allow more accurate identification of Lagrangian coherent structures.
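The guarantee that the field is analytically divergence-free comes from the vector identity ∇·(∇×A) = 0, not from the fitting step. A minimal symbolic check of that identity (with an arbitrary illustrative potential, not the paper's B-spline fit) might look like:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# A smooth vector potential; any choice works, this one is illustrative.
A = sp.Matrix([sp.sin(y) * z, sp.exp(-x**2) * z, sp.cos(x * y)])

# Velocity is the curl of the potential: v = nabla x A.
v = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])

# The divergence of a curl vanishes identically, so v is analytically
# divergence-free regardless of the potential chosen.
div_v = sp.simplify(sp.diff(v[0], x) + sp.diff(v[1], y) + sp.diff(v[2], z))
print(div_v)  # -> 0
```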
glideinWMS—a generic pilot-based workload management system
NASA Astrophysics Data System (ADS)
Sfiligoi, I.
2008-07-01
Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share and just-in-time resource matching. glideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool with the Condor daemons (startds) being started by pilot jobs, and real jobs being vanilla, standard or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.
TIGGERC: Turbomachinery Interactive Grid Generator for 2-D Grid Applications and Users Guide
NASA Technical Reports Server (NTRS)
Miller, David P.
1994-01-01
A two-dimensional multi-block grid generator has been developed for a new design and analysis system for studying multiple blade-row turbomachinery problems. TIGGERC is a mouse-driven, interactive grid generation program which can be used to modify boundary coordinates and grid packing, and generates surface grids using a hyperbolic tangent or algebraic distribution of grid points on the block boundaries. The interior points of each block grid are distributed using a transfinite interpolation approach. TIGGERC can generate a blocked axisymmetric H-grid, C-grid, I-grid or O-grid for studying turbomachinery flow problems. TIGGERC was developed for operation on Silicon Graphics workstations. A detailed discussion of the grid generation methodology, menu options, operational features and sample grid geometries is presented.
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.
Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.
A multi-resolution approach to electromagnetic modelling
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-07-01
We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
Goldrath, Dara A.; Kulongoski, Justin T.; Davis, Tracy A.
2016-09-01
Groundwater quality in the 3,016-square-mile Monterey–Salinas Shallow Aquifer study unit was investigated by the U.S. Geological Survey (USGS) from October 2012 to May 2013 as part of the California State Water Resources Control Board Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project. The GAMA Monterey–Salinas Shallow Aquifer study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the shallow-aquifer systems in parts of Monterey and San Luis Obispo Counties and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The shallow-aquifer system in the Monterey–Salinas Shallow Aquifer study unit was defined as those parts of the aquifer system shallower than the perforated depth intervals of public-supply wells, which generally corresponds to the part of the aquifer system used by domestic wells. Groundwater quality in the shallow aquifers can differ from the quality in the deeper water-bearing zones; shallow groundwater can be more vulnerable to surficial contamination.Samples were collected from 170 sites that were selected by using a spatially distributed, randomized grid-based method. The study unit was divided into 4 study areas, each study area was divided into grid cells, and 1 well was sampled in each of the 100 grid cells (grid wells). The grid wells were domestic wells or wells with screen depths similar to those in nearby domestic wells. A greater spatial density of data was achieved in 2 of the study areas by dividing grid cells in those study areas into subcells, and in 70 subcells, samples were collected from exterior faucets at sites where there were domestic wells or wells with screen depths similar to those in nearby domestic wells (shallow-well tap sites).Field water-quality indicators (dissolved oxygen, water temperature, pH, and specific conductance) were measured, and samples for analysis of inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids, and alkalinity) were collected at all 170 sites. In addition to these constituents, the samples from grid wells were analyzed for organic constituents (volatile organic compounds, pesticides and pesticide degradates), constituents of special interest (perchlorate and N-nitrosodimethylamine, or NDMA), radioactive constituents (radon-222 and gross-alpha and gross-beta radioactivity), and geochemical and age-dating tracers (stable isotopes of carbon in dissolved inorganic carbon, carbon-14 abundances, stable isotopes of hydrogen and oxygen in water, and tritium activities).Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 11 percent of the wells in the Monterey–Salinas Shallow Aquifer study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. With the exception of trace elements, blanks rarely contained detectable concentrations of any constituent, indicating that contamination from sample-collection procedures was not a significant source of bias in the data for the groundwater samples. Low concentrations of some trace elements were detected in blanks; therefore, the data were re-censored at higher reporting levels. Replicate samples generally were within the limits of acceptable analytical reproducibility. 
The median values of matrix-spike recoveries were within the acceptable range (70 to 130 percent) for the volatile organic compounds (VOCs) and N-nitrosodimethylamine (NDMA), but were only approximately 64 percent for pesticides and pesticide degradates. The sample-collection protocols used in this study were designed to obtain representative samples of groundwater. The quality of groundwater can differ from the quality of drinking water because water chemistry can change as a result of contact with plumbing systems or the atmosphere; because of treatment, disinfection, or blending with water from other sources; or some combination of these. Water quality in domestic wells is not regulated in California; however, to provide context for the water-quality data presented in this report, results were compared to benchmarks established for drinking-water quality. The primary comparison benchmarks were maximum contaminant levels established by the U.S. Environmental Protection Agency and the State of California (MCL-US and MCL-CA, respectively). Non-regulatory benchmarks were used for constituents without maximum contaminant levels (MCLs), including Health Based Screening Levels (HBSLs) developed by the USGS, and State of California secondary maximum contaminant levels (SMCL-CA) and notification levels. Most constituents detected in samples from the Monterey–Salinas Shallow Aquifer study unit had concentrations less than their respective benchmarks. Of the 148 organic constituents analyzed in the 100 grid-well samples, 38 were detected, and all concentrations were less than the benchmarks. Volatile organic compounds were detected in 26 of the grid wells, and pesticides and pesticide degradates were detected in 28 grid wells. The special-interest constituent NDMA was detected above the HBSL in three samples, one of which also had a perchlorate concentration greater than the MCL-CA. Of the inorganic constituents, 6 were detected at concentrations above their respective MCL benchmarks in grid-well samples: arsenic (5 grid wells above the MCL of 10 micrograms per liter, μg/L), selenium (3 grid wells, MCL of 50 μg/L), uranium (4 grid wells, MCL of 30 μg/L), nitrate (16 grid wells, MCL of 10 milligrams per liter, mg/L), adjusted gross alpha particle activity (10 grid wells, MCL of 15 picocuries per liter, pCi/L), and gross beta particle activity (1 grid well, MCL of 50 pCi/L). An additional 4 inorganic constituents were detected at concentrations above their respective HBSL benchmarks in grid-well samples: boron (1 grid well above the HBSL of 6,000 μg/L), manganese (8 grid wells, HBSL of 300 μg/L), molybdenum (6 grid wells, HBSL of 40 μg/L), and strontium (6 grid wells, HBSL of 4,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health-based SMCL benchmarks in grid-well samples: iron (9 grid wells above the SMCL of 300 μg/L), chloride (7 grid wells, SMCL of 500 mg/L), sulfate (14 grid wells, SMCL of 500 mg/L), and total dissolved solids (27 grid wells, SMCL of 1,000 mg/L). Of the inorganic constituents analyzed at the 70 shallow-well tap sites, 10 were detected at concentrations above the benchmarks. Of the inorganic constituents, 3 were detected at concentrations above their respective MCL benchmarks at shallow-well tap sites: arsenic (2 shallow-well tap sites above the MCL of 10 μg/L), uranium (2 shallow-well tap sites, MCL of 30 μg/L), and nitrate (24 shallow-well tap sites, MCL of 10 mg/L).
An additional 3 inorganic constituents were detected above their respective HBSL benchmarks in shallow-well tap sites: manganese (4 shallow-well tap sites above the HBSL of 300 μg/L), molybdenum (4 shallow-well tap sites, HBSL of 40 μg/L), and zinc (2 shallow-well tap sites, HBSL of 2,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health based SMCL benchmarks in shallow-well tap sites: iron (6 shallow-well tap sites above the SMCL of 300 μg/L), chloride (1 shallow-well tap site, SMCL of 500 mg/L), sulfate (9 shallow-well tap sites, SMCL of 500 mg/L), and total dissolved solids (15 shallow-well tap sites, SMCL of 1,000 mg/L).
Stochastic dynamic modeling of regular and slow earthquakes
NASA Astrophysics Data System (ADS)
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered, not only to represent real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the assumed conditions. However, even when the model space is discretized with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.
Mass production of extensive air showers for the Pierre Auger Collaboration using Grid Technology
NASA Astrophysics Data System (ADS)
Lozano Bahilo, Julio; Pierre Auger Collaboration
2012-06-01
When ultra-high-energy cosmic rays enter the atmosphere they interact, producing extensive air showers (EAS), which are the objects studied by the Pierre Auger Observatory. The number of particles involved in an EAS at these energies is of the order of billions, and the generation of a single simulated EAS requires many hours of computing time with current processors. In addition, the storage space consumed by the output of one simulated EAS is very large. Therefore we have to make use of Grid resources to be able to generate sufficient quantities of showers for our physics studies in reasonable time periods. We have developed a set of highly automated scripts, written in common software scripting languages, to deal with the large number of jobs that we have to submit regularly to the Grid. In spite of the low number of sites supporting our Virtual Organization (VO), we have reached the top spot in CPU consumption among non-LHC (Large Hadron Collider) VOs within EGI (European Grid Infrastructure).
Application of spatially gridded temperature and land cover data sets for urban heat island analysis
Gallo, Kevin; Xian, George Z.
2014-01-01
Two gridded data sets, comprising (1) daily mean temperatures from 2006 through 2011 and (2) satellite-derived impervious surface area, were combined for a spatial analysis of the urban heat-island effect within the Dallas-Ft. Worth, Texas region. The primary advantage of using these combined datasets was the capability to designate each 1 × 1 km grid cell of available temperature data as urban or rural based on the level of impervious surface area within the grid cell. Generally, the observed difference between urban and rural temperature increased as the impervious surface area threshold used to define an urban grid cell was increased. This result, however, was also dependent on the size of the sample area included in the analysis. As the spatial extent of the sample area increased and included a greater number of rural-defined grid cells, the observed difference between urban and rural temperatures also increased. A cursory comparison of the spatially gridded temperature observations with observations from climate stations suggests that the number and location of stations included in an urban heat-island analysis require consideration to ensure that representative samples of each environment (urban and rural) are included in the analysis.
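A minimal sketch of the cell-classification step described above, using synthetic arrays in place of the real temperature and impervious-surface grids (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1 km gridded inputs (illustrative): mean temperature (deg C)
# and impervious surface area, ISA (percent), on the same 100 x 100 grid.
temperature = 24.0 + rng.normal(0.0, 1.0, size=(100, 100))
isa = rng.uniform(0.0, 60.0, size=(100, 100))
temperature += 0.03 * isa          # synthetic heat-island signal

# Classify each grid cell as urban or rural by an ISA threshold, then take
# the urban-minus-rural mean temperature difference (the UHI metric).
for threshold in (10, 25, 40):
    urban = temperature[isa >= threshold]
    rural = temperature[isa < threshold]
    print(f"ISA >= {threshold:2d}%: UHI = {urban.mean() - rural.mean():.2f} K")
```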
NASA Astrophysics Data System (ADS)
Khaki, M.; Forootan, E.; Sharifi, M. A.; Awange, J.; Kuhn, M.
2015-09-01
Satellite radar altimetry observations are used to derive short-wavelength gravity anomaly fields over the Persian Gulf and the Caspian Sea, where in situ and ship-borne gravity measurements have limited spatial coverage. In this study the retracking algorithm `Extrema Retracking' (ExtR) was employed to improve sea surface height (SSH) measurements that are highly biased in the study regions due to land contamination in the footprints of the satellite altimetry observations. ExtR was applied to the waveforms sampled by five satellite radar altimetry missions: TOPEX/POSEIDON, JASON-1, JASON-2, GFO and ERS-1. Along-track slopes were estimated from the improved SSH measurements and used in an iterative process to estimate deflections of the vertical and, subsequently, the desired gravity anomalies. The main steps of the gravity anomaly computation involve estimating improved SSH using the ExtR technique, computing deflections of the vertical from SSHs interpolated on a regular grid using a biharmonic spline interpolation, and finally estimating gridded gravity anomalies. A remove-compute-restore algorithm, based on the fast Fourier transform, was applied to convert deflections of the vertical into gravity anomalies. Finally, spline interpolation was used to estimate regular gravity anomaly grids over the two study regions. Results were evaluated by comparing the estimated altimetry-derived gravity anomalies (with and without the ExtR algorithm) with ship-borne free-air gravity anomaly observations and free-air gravity anomalies from the Earth Gravitational Model 2008 (EGM2008). The comparison indicates a range of 3-5 mGal in the residuals, computed by taking the differences between the retracked altimetry-derived gravity anomalies and the ship-borne data. The comparison of retracked data with ship-borne data indicates a root-mean-square error (RMSE) between approximately 1.8 and 4.4 mGal and a bias between 0.4062 and 2.1413 mGal over different areas. A maximum RMSE of 4.4069 mGal, with a mean value of 0.7615 mGal, was obtained for the residuals. An average improvement of 5.2746 mGal (89.9 per cent) in the RMSE of the altimetry-derived gravity anomalies was obtained after applying the ExtR post-processing.
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
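For concreteness, here is a hedged sketch of two of the matrix-level regularizers compared in the paper (TSVD and Tikhonov), applied to a generic ill-conditioned least-squares problem; the toy matrix stands in for the LS_NUFFT interpolator design problem and is not the actual MRI operator:

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

def tikhonov_pinv(A, lam):
    """Tikhonov-regularized pseudoinverse: (A^T A + lam I)^-1 A^T."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

# Ill-conditioned toy design matrix (illustrative stand-in for the
# interpolator design problem).
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 40)) @ np.diag(np.logspace(0, -12, 40))
b = rng.normal(size=50)

for name, Ainv in [("TSVD", tsvd_pinv(A, 20)), ("Tikhonov", tikhonov_pinv(A, 1e-8))]:
    x = Ainv @ b
    print(f"{name}: residual = {np.linalg.norm(A @ x - b):.3e}, "
          f"||x|| = {np.linalg.norm(x):.3e}")
```

Both regularizers trade a slightly larger residual for a dramatically smaller (better conditioned) solution norm, which is the compromise the abstract describes.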
Fast and accurate 3D tensor calculation of the Fock operator in a general basis
NASA Astrophysics Data System (ADS)
Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.
2012-11-01
The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and nuclear potential operators by tensor operations, using a generic set of basis functions with low separation rank discretized on a fine N×N×N Cartesian grid. We prove a Ch² error estimate in terms of the mesh parameter, h = O(1/N), which guarantees the accuracy of the core Hamiltonian part of the Fock operator as h → 0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid size N of about several tens of thousands to ensure high resolution. Modern tensor-formatted arithmetic of complexity O(N), or even O(log N), practically relaxes the limitations on the grid size. Our tensor-based approach allows significant improvement of the standard basis sets in quantum chemistry by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N³ ∼ 10¹⁵.
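The storage saving that "low separation rank" buys can be seen with a rank-1 (fully separable) example: a separable function on an N×N×N grid needs only three 1D factors. This is only an illustration of the format, not the paper's canonical/Tucker tensor algorithms:

```python
import numpy as np

N = 256  # grid points per dimension; the full grid would hold N**3 values

# A rank-1 (separable) function, e.g. the Gaussian exp(-x^2 - y^2 - z^2),
# stored as three 1D factors instead of an N x N x N array.
x = np.linspace(-5.0, 5.0, N)
fx = np.exp(-x**2)                      # identical factors here for simplicity

# Storage: 3 * N numbers instead of N**3.
print(f"full grid: {N**3:,} values; rank-1 format: {3 * N:,} values")

# Any entry of the full tensor is recovered as a product of factors:
i, j, k = 10, 100, 200
full_value = fx[i] * fx[j] * fx[k]

# Check against explicit evaluation of the function.
assert np.isclose(full_value, np.exp(-(x[i]**2 + x[j]**2 + x[k]**2)))
```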
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
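A minimal stand-in for the idea of replacing binned force estimates with a smooth network prediction is sketched below using scikit-learn's MLPRegressor; the collective variable, toy force, and network size are all invented, and the paper's self-regularizing Bayesian network is not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Hypothetical stand-in for adaptive-biasing-force data: noisy mean-force
# estimates on a coarse grid of a collective variable xi, with some bins
# left unsampled entirely.
xi_bins = np.linspace(-np.pi, np.pi, 40)
true_force = np.sin(2 * xi_bins)                  # toy generalized force
observed = true_force + rng.normal(0.0, 0.3, xi_bins.size)
sampled = rng.random(xi_bins.size) > 0.3          # ~30% of bins unsampled

# Fit a small neural network to the sampled bins; its prediction gives a
# smooth, continuous force estimate, including in unsampled regions.
net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(xi_bins[sampled].reshape(-1, 1), observed[sampled])

xi_fine = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
smooth_force = net.predict(xi_fine)   # continuous estimate to bias with
```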
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
Rapid detection of Salmonella spp. in food by use of the ISO-GRID hydrophobic grid membrane filter.
Entis, P; Brodsky, M H; Sharpe, A N; Jarvis, G A
1982-01-01
A rapid hydrophobic grid-membrane filter (HGMF) method was developed and compared with the Health Protection Branch cultural method for the detection of Salmonella spp. in 798 spiked samples and 265 naturally contaminated samples of food. With the HGMF method, Salmonella spp. were isolated from 618 of the spiked samples and 190 of the naturally contaminated samples. The conventional method recovered Salmonella spp. from 622 spiked samples and 204 unspiked samples. The isolation rates from Salmonella-positive samples for the two methods were not significantly different (94.6% overall for the HGMF method and 96.7% for the conventional approach), but the HGMF results were available in only 2 to 3 days after sample receipt compared with 3 to 4 days by the conventional method. PMID:7059168
Networks of channels for self-healing composite materials
NASA Astrophysics Data System (ADS)
Bejan, A.; Lorente, S.; Wang, K.-M.
2006-08-01
This is a fundamental study of how to vascularize a self-healing composite material so that healing fluid reaches all the crack sites that may occur randomly through the material. The network of channels is built into the material and is filled with pressurized healing fluid. When a crack forms, the pressure drops at the crack site and fluid flows from the network into the crack. The objective is to discover the network configuration that is capable of delivering fluid to all the cracks the fastest. The crack site dimension and the total volume of the channels are fixed. It is argued that the network must be configured as a grid and not as a tree. Two classes of grids are considered and optimized: (i) grids with one channel diameter and regular polygonal loops (square, triangle, hexagon) and (ii) grids with two channel sizes. The best architecture of type (i) is the grid with triangular loops. The best architecture of type (ii) has a particular (optimal) ratio of diameters that departs from 1 as the crack length scale becomes smaller than the global scale of the vascularized structure from which the crack draws its healing fluid. The optimization of the ratio of channel diameters cuts in half the time of fluid delivery to the crack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D
In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of the pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of the linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time-stepping and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition, the solution produced is smoother in the far field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
High-resolution CSR GRACE RL05 mascons
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2016-10-01
The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
Application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Parks, Steven J.; Chan, William M.; Renze, Kevin J.
1992-01-01
Several issues relating to the application of Chimera overlapped grids to complex geometries and flowfields are discussed. These include the addition of geometric components with different grid topologies, gridding for intersecting pieces of geometry, and turbulence modeling in grid overlap regions. Sample results are presented for transonic flow about the Space Shuttle launch vehicle. Comparisons with wind tunnel and flight measured pressures are shown.
NASA Astrophysics Data System (ADS)
Juvela, Mika J.
The relationship between physical conditions of an interstellar cloud and the observed radiation is defined by the radiative transfer problem. Radiative transfer calculations are needed if, e.g., one wants to disentangle abundance variations from excitation effects or wants to model variations of dust properties inside an interstellar cloud. New observational facilities (e.g., ALMA and Herschel) will bring improved accuracy both in terms of intensity and spatial resolution. This will enable detailed studies of the densest sub-structures of interstellar clouds and star forming regions. Such observations must be interpreted with accurate radiative transfer methods and realistic source models. In many cases this will mean modelling in three dimensions. High optical depths and observed wide range of linear scales are, however, challenging for radiative transfer modelling. A large range of linear scales can be accessed only with hierarchical models. Figure 1 shows an example of the use of a hierarchical grid for radiative transfer calculations when the original model cloud (L=10 pc,
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution--especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
Sparse spikes super-resolution on thin grids II: the continuous basis pursuit
NASA Astrophysics Data System (ADS)
Duval, Vincent; Peyré, Gabriel
2017-09-01
This article analyzes the performance of the continuous basis pursuit (C-BP) method for sparse super-resolution. The C-BP has been recently proposed by Ekanadham, Tranchina and Simoncelli as a refined discretization scheme for the recovery of spikes in inverse problems regularization. One of the most well known discretization scheme, the basis pursuit (BP, also known as \
NASA Astrophysics Data System (ADS)
Wang, Qing; Zhao, Xinyu; Ihme, Matthias
2017-11-01
Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was proposed as the closure for the SGS dispersion in a counterflow spray that was studied numerically using a finite difference method on a structured mesh, with a presumed form of the LES filter. In the present study, this technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES are compared against experiments and direct numerical simulations (DNS).
Sensor network based solar forecasting using a local vector autoregressive ridge framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Yoo, S.; Heiser, J.
2016-04-04
The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud-induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy, but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models to a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions, and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
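A hedged sketch of a ridge-regularized VAR(p) forecaster over a sensor network follows; the lag order, penalty, and synthetic irradiance series are all illustrative, and the paper's local training-window selection is not reproduced:

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_local_var(X, p=3, alpha=1.0):
    """Fit a VAR(p) for all sensors jointly with ridge regularization.

    X: (T, n_sensors) array of irradiance history from a local window.
    Returns a model predicting the next time step from the last p steps.
    """
    T, n = X.shape
    # Stack lagged values into the design matrix: row t holds lags 1..p.
    design = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])
    target = X[p:]
    return Ridge(alpha=alpha).fit(design, target)

# Hypothetical 1-minute irradiance (W/m^2) from 5 sensors, 60-minute window.
rng = np.random.default_rng(7)
X = 600 + 50 * rng.standard_normal((60, 5)).cumsum(axis=0)

model = fit_local_var(X, p=3)
latest = np.hstack([X[-1], X[-2], X[-3]]).reshape(1, -1)
forecast = model.predict(latest)   # next-minute irradiance, all sensors
```

Refitting on a sliding local window is what keeps the learned coefficients adaptive to the current cloud conditions, while the ridge penalty controls overfitting on the short training window.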
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme, where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
Paciorek, Christopher J; Goring, Simon J; Thurman, Andrew L; Cogbill, Charles V; Williams, John W; Mladenoff, David J; Peters, Jody A; Zhu, Jun; McLachlan, Jason S
2016-01-01
We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.
Yoon, Jai-Woong; Park, Young-Guk; Park, Chun-Joo; Kim, Do-Il; Lee, Jin-Ho; Chung, Nag-Kun; Choe, Bo-Young; Suh, Tae-Suk; Lee, Hyoung-Koo
2007-11-01
The stationary grid commonly used with a digital x-ray detector causes a moiré interference pattern due to the inadequate sampling of the grid shadows by the detector pixels. There are limitations with the previous methods used to remove the moiré, such as imperfect electromagnetic interference shielding and the loss of image information. A new method is proposed for removing the moiré pattern by integrating a carbon-interspaced, high-precision x-ray grid with high grid-line uniformity with the detector for frequency matching. The grid was aligned to the detector by translating and rotating the x-ray grid with respect to the detector using a microcontrolled alignment mechanism. The gap between the grid and the detector surface was adjusted with micrometer precision to precisely match the projected grid-line pitch to the detector pixel pitch. Considering the magnification of the grid shadows on the detector plane, the grids were manufactured such that the grid-line frequency was slightly higher than the detector sampling frequency. This study examined the factors that affect the moiré pattern, particularly the line frequency and displacement. The frequency of the moiré pattern was found to be sensitive to the angular displacement of the grid with respect to the detector, while horizontal translation alters the phase but not the moiré frequency. The frequency of the moiré pattern also decreased with decreasing difference in frequency between the grid and the detector, and a moiré-free image was produced after complete matching for a given source-to-detector distance. Image quality factors, including the contrast, signal-to-noise ratio and uniformity, in images with and without the moiré pattern were investigated.
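The frequency-matching logic reduces to beat-frequency arithmetic: the moiré frequency is the difference between the projected grid-line frequency and the detector sampling frequency. A short worked example with invented pitches:

```python
# Moire arises from the beat between the projected grid-line frequency and
# the detector sampling frequency; the pitches below are illustrative only.
pixel_pitch = 0.143    # mm, detector sampling pitch
grid_pitch = 0.141     # mm, grid-line pitch projected onto the detector

f_detector = 1.0 / pixel_pitch   # sampling frequency, line pairs per mm
f_grid = 1.0 / grid_pitch        # grid-shadow frequency, line pairs per mm

# Beat (moire) frequency: the small residual after frequency matching.
f_moire = abs(f_grid - f_detector)
print(f"moire frequency: {f_moire:.3f} lp/mm "
      f"(period ~ {1.0 / f_moire:.1f} mm)")
# As grid_pitch approaches pixel_pitch (frequency matching), f_moire -> 0:
# the moire period grows beyond the detector size and the pattern vanishes.
```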
NASA Astrophysics Data System (ADS)
Wilde-Piorko, M.; Polkowski, M.
2016-12-01
Seismic wave travel-time calculation is the most common numerical operation in seismology. The most efficient approach is travel-time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differentiating local and regional structures. Whenever possible, travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray-path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. This presentation introduces the final release of the Python module pySeismicFMM: a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre, Poland, provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
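The abstract does not show pySeismicFMM's API, so the sketch below uses the scikit-fmm package, which exposes the same Fast Marching computation on a regular grid: one run per source yields first-arrival times at every cell. The velocity model is invented:

```python
import numpy as np
import skfmm  # scikit-fmm; pySeismicFMM's own API is not shown in the abstract

# Regular 2D velocity grid (km/s): a faster layer at depth (toy model).
nz, nx, dx = 200, 400, 0.5           # grid size and spacing (km)
speed = np.full((nz, nx), 4.0)
speed[100:, :] = 6.0

# phi encodes the source: its zero level set marks the source position.
phi = np.ones((nz, nx))
phi[0, 200] = -1.0                   # surface source at x = 100 km

# Fast Marching propagates first-arrival travel time to every grid cell,
# so a single run serves all receivers.
travel_time = skfmm.travel_time(phi, speed, dx=dx)
print(travel_time[0, 0], "s to the leftmost surface receiver")
```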
PDF added value of a high resolution climate simulation for precipitation
NASA Astrophysics Data System (ADS)
Soares, Pedro M. M.; Cardoso, Rita M.
2015-04-01
General Circulation Models (GCMs) are suitable for studying the global atmospheric system, its evolution and its response to changes in external forcing, namely increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer-scale features of the atmospheric flow related to complex topography, coastal processes and boundary-layer processes, and higher-resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs) and are widely used to downscale GCM results for many regions of the globe; they are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on the comparison of their results with observations, either from weather-station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties and assuming, most of the time, their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs when compared with the results of the driving models is usually named added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies for different applications and processes to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, highlighting that RCMs may add value to GCM results, but improvements depend basically on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations. Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible manner to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high-resolution simulation with the WRF model. Results are from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km; the simulation is forced by ERA-Interim. The observational data used cover rain-gauge precipitation records and observational regular grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese gridded precipitation dataset developed at 0.2° × 0.2° from observed rain-gauge daily precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher-resolution simulation, regarding both the full PDF and the extremes. The method shows high potential to be applied to other simulation exercises and to evaluate other variables.
NASA Astrophysics Data System (ADS)
Lari, L.; Wright, I.; Boyes, E. D.
2015-10-01
A very simple tomography sample holder was developed in-house at minimal cost. The holder is based on a JEOL single-tilt fast-exchange sample holder whose exchangeable tip was modified to allow high-angle tilting. The shape of the tip was designed to retain mechanical stability while minimising the lateral size of the tip. Samples can be mounted on standard 3 mm Cu grids as well as on semi-circular grids from FIB sample preparation. Applications of the holder to different sample systems are shown.
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574
NASA Astrophysics Data System (ADS)
Kumar, V.; Singh, A.; Sharma, S. P.
2016-12-01
Regular grid discretization is often utilized to define complex geological models. However, this subdivision strategy performs at lower precision in representing the topographical observation surface. We have developed a new 2D unstructured-grid-based inversion for magnetic data for models including topography. It consolidates prior parametric information into a deterministic inversion system to enhance the boundaries between the different lithologies based on the magnetic susceptibility distribution recovered from the inversion. The presented susceptibility model satisfies both the observed magnetic data and the parametric information and therefore can represent the earth better than geophysical inversion models that only honor the observed magnetic data. Geophysical inversion and lithology classification are generally treated as two autonomous methodologies and connected in a serial way. The presented inversion strategy integrates these two parts into a unified scheme. To reduce the storage space and computation time, the conjugate gradient method is used. This results in feasible and practical imaging inversion of magnetic data with a large number of triangular grid cells. The efficacy of the presented inversion is demonstrated using two synthetic examples and one field data example.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10⁶ × 10⁶ incomplete matrix with 10⁵ observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
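The core Soft-Impute iteration is compact enough to sketch. The NumPy version below uses a dense SVD for clarity, whereas the paper exploits low-rank structure to keep the cost linear in the matrix dimensions; the function name and stopping rule are assumptions.

```python
import numpy as np

def soft_impute(X, mask, lam, Z0=None, n_iter=500, tol=1e-4):
    """Minimal Soft-Impute sketch.

    X    : data matrix, arbitrary values at unobserved entries
    mask : boolean array, True where an entry of X is observed
    lam  : nuclear-norm regularisation parameter
    Z0   : optional warm start, e.g. the solution for the previous lam
    """
    Z = np.zeros_like(X, dtype=float) if Z0 is None else Z0.copy()
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)                  # impute missing entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt    # soft-threshold spectrum
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
            return Z_new
        Z = Z_new
    return Z
```

Solving over a decreasing grid of lam values, feeding each result back in as Z0, reproduces the warm-started regularization path the abstract mentions.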
Grid systems for Earth radiation budget experiment applications
NASA Technical Reports Server (NTRS)
Brooks, D. R.
1981-01-01
Spatial coordinate transformations are developed for several global grid systems of interest to the Earth Radiation Budget Experiment. The grid boxes are defined in terms of a regional identifier and longitude-latitude indexes. The transformations associate a longitude and latitude with a particular grid box. The reverse transformations identify the center location of a given grid box. Transformations are given to relate the rotating (Earth-based) grid systems to solar position expressed in an inertial (nonrotating) coordinate system. The FORTRAN implementations of the transformations are given, along with sample input and output.
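As a rough illustration of such forward and reverse transformations, here is a minimal sketch assuming a 2.5-degree equal-angle grid; the box size, indexing origin, and function names are assumptions, not the ERBE grid definition.

```python
def grid_index(lon, lat, dlon=2.5, dlat=2.5):
    """Forward transformation: longitude/latitude (degrees) -> box indexes."""
    i = int((lon % 360.0) // dlon)                              # eastward from 0 deg
    j = min(int((lat + 90.0) // dlat), int(180.0 / dlat) - 1)   # 0 at the south pole
    return i, j

def box_center(i, j, dlon=2.5, dlat=2.5):
    """Reverse transformation: box indexes -> centre longitude/latitude."""
    return (i + 0.5) * dlon, (j + 0.5) * dlat - 90.0

print(grid_index(*box_center(10, 20)))   # -> (10, 20): the pair round-trips
```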
NASA Astrophysics Data System (ADS)
Zlotnik, A. A.
2017-04-01
The multidimensional quasi-gasdynamic system written in the form of mass, momentum, and total energy balance equations for a perfect polytropic gas with allowance for a body force and a heat source is considered. A new conservative symmetric spatial discretization of these equations on a nonuniform rectangular grid is constructed (with the basic unknown functions—density, velocity, and temperature—defined on a common grid and with fluxes and viscous stresses defined on staggered grids). Primary attention is given to the analysis of entropy behavior: the discretization is specially constructed so that the total entropy does not decrease. This is achieved via a substantial revision of the standard discretization and applying numerous original features. A simplification of the constructed discretization serves as a conservative discretization with nondecreasing total entropy for the simpler quasi-hydrodynamic system of equations. In the absence of regularizing terms, the results also hold for the Navier-Stokes equations of a viscous compressible heat-conducting gas.
On unstructured grids and solvers
NASA Technical Reports Server (NTRS)
Barth, T. J.
1990-01-01
The fundamentals and the state-of-the-art technology for unstructured grids and solvers are highlighted. Algorithms and techniques pertinent to mesh generation are discussed. It is shown that grid generation and grid manipulation schemes rely on fast multidimensional searching. Flow solution techniques for the Euler equations, which can be derived from the integral form of the equations, are discussed. Sample calculations are also provided.
Zanon, Marco; Davis, Basil A. S.; Marquer, Laurent; Brewer, Simon; Kaplan, Jed O.
2018-01-01
Characterization of land cover change in the past is fundamental to understand the evolution and present state of the Earth system, the amount of carbon and nutrient stocks in terrestrial ecosystems, and the role played by land-atmosphere interactions in influencing climate. The estimation of land cover changes using palynology is a mature field, as thousands of sites in Europe have been investigated over the last century. Nonetheless, a quantitative land cover reconstruction at a continental scale has been largely missing. Here, we present a series of maps detailing the evolution of European forest cover during the last 12,000 years. Our reconstructions are based on the Modern Analog Technique (MAT): a calibration dataset is built by coupling modern pollen samples with the corresponding satellite-based forest-cover data. Fossil reconstructions are then performed by assigning to every fossil sample the average forest cover of its closest modern analogs. The occurrence of fossil pollen assemblages with no counterparts in modern vegetation represents a known limit of analog-based methods. To lessen the influence of no-analog situations, pollen taxa were converted into plant functional types prior to running the MAT algorithm. We then interpolate site-specific reconstructions for each timeslice using a four-dimensional gridding procedure to create continuous gridded maps at a continental scale. The performance of the MAT is compared against methodologically independent forest-cover reconstructions produced using the REVEALS method. MAT and REVEALS estimates are most of the time in good agreement at a trend level, yet MAT regularly underestimates the occurrence of densely forested situations, requiring the application of a bias correction procedure. The calibrated MAT-based maps draw a coherent picture of the establishment of forests in Europe in the Early Holocene with the greatest forest-cover fractions reconstructed between ∼8,500 and 6,000 calibrated years BP. This forest maximum is followed by a general decline in all parts of the continent, likely as a result of anthropogenic deforestation. The continuous spatial and temporal nature of our reconstruction, its continental coverage, and gridded format make it suitable for climate, hydrological, and biogeochemical modeling, among other uses. PMID:29568303
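The MAT step described here reduces to a k-nearest-neighbour average. A minimal sketch follows, assuming squared-chord distance on plant-functional-type proportions and k=5; the paper does not state its metric or number of analogs, so both are assumptions.

```python
import numpy as np

def mat_forest_cover(fossil, modern, modern_cover, k=5):
    """Assign a fossil sample the mean forest cover of its k closest analogs.

    fossil       : (p,) plant-functional-type proportions of one fossil sample
    modern       : (n, p) PFT proportions of the modern calibration samples
    modern_cover : (n,) satellite-based forest cover of the modern samples
    """
    # Squared-chord distance, a common dissimilarity for pollen data.
    d = np.sum((np.sqrt(modern) - np.sqrt(fossil)) ** 2, axis=1)
    nearest = np.argsort(d)[:k]
    return float(modern_cover[nearest].mean())
```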
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2009-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Results from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to the complexity of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or modifying the scheme stencil to reflect the direction of strong coupling.
NASA Astrophysics Data System (ADS)
Bonduà, Stefano; Battistelli, Alfredo; Berry, Paolo; Bortolotti, Villiam; Consonni, Alberto; Cormio, Carlo; Geloni, Claudio; Vasini, Ester Maria
2017-11-01
As is known, a full three-dimensional (3D) unstructured grid permits a great degree of flexibility when performing accurate numerical reservoir simulations. However, when the Integral Finite Difference Method (IFDM) is used for spatial discretization, constraints (arising from the required orthogonality between the segment connecting the blocks nodes and the interface area between blocks) pose difficulties in the creation of grids with irregular shaped blocks. The full 3D Voronoi approach guarantees the respect of IFDM constraints and allows generation of grids conforming to geological formations and structural objects and at the same time higher grid resolution in volumes of interest. In this work, we present dedicated pre- and post-processing gridding software tools for the TOUGH family of numerical reservoir simulators, developed by the Geothermal Research Group of the DICAM Department, University of Bologna. VORO2MESH is a new software coded in C++, based on the voro++ library, allowing computation of the 3D Voronoi tessellation for a given domain and the creation of a ready to use TOUGH2 MESH file. If a set of geological surfaces is available, the software can directly generate the set of Voronoi seed points used for tessellation. In order to reduce the number of connections and so to decrease computation time, VORO2MESH can produce a mixed grid with regular blocks (orthogonal prisms) and irregular blocks (polyhedron Voronoi blocks) at the point of contact between different geological formations. In order to visualize 3D Voronoi grids together with the results of numerical simulations, the functionality of the TOUGH2Viewer post-processor has been extended. We describe an application of VORO2MESH and TOUGH2Viewer to validate the two tools. The case study deals with the simulation of the migration of gases in deep layered sedimentary formations at basin scale using TOUGH2-TMGAS. A comparison between the simulation performances of unstructured and structured grids is presented.
Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells
Trimper, John B.; Trettel, Sean G.; Hwaun, Ernie; Colgin, Laura Lee
2017-01-01
At rest, hippocampal “place cells,” neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These “replay” events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay. PMID:28824388
Development and deployment of a Desktop and Mobile application on grid for GPS studies
NASA Astrophysics Data System (ADS)
Ntumba, Patient; Lotoy, Vianney; Djungu, Saint Jean; Fleury, Rolland; Petitdidier, Monique; Gemünd, André; Schwichtenberg, Horst
2013-04-01
GPS networks for scientific studies are developed all over the world, and large, regularly updated databases, like IGS, are also available. Many GPS stations have been installed in West and Central Africa during AMMA (African Monsoon Multidisciplinary Analysis), IHY (International Heliophysical Year) and many other projects since 2005. African scientists have been educated to use those data, especially for meteorological and ionospheric studies. Computing the annual variations of ionospheric parameters for a given station, or maps for a given region, is very computationally intensive. Grid or cloud computing may then be a solution to obtain results in a relatively short time. At the University of Kinshasa the chosen solution is a grid of several PCs. It has been deployed by using Globus Toolkit on a Condor pool in order to support the processing of GPS data for ionospheric studies. To be user-friendly, graphical user interfaces (GUIs) have been developed to help the user prepare and submit jobs. One is a Java GUI for desktop clients; the other is an Android GUI for mobile clients. The interest of a grid is the possibility to send a bunch of jobs with an adequate control agent in order to survey job execution and result storage. After the feasibility study the grid will be extended to a larger number of PCs. Other solutions will be explored in parallel.
Nonuniform depth grids in parabolic equation solutions.
Sanders, William M; Collins, Michael D
2013-04-01
The parabolic wave equation is solved using a finite-difference solution in depth that involves a nonuniform grid. The depth operator is discretized using Galerkin's method with asymmetric hat functions. Examples are presented to illustrate that this approach can be used to improve efficiency for problems in ocean acoustics and seismo-acoustics. For shallow water problems, accuracy is sensitive to the precise placement of the ocean bottom interface. This issue is often addressed with the inefficient approach of using a fine grid spacing over all depth. Efficiency may be improved by using a relatively coarse grid with nonuniform sampling to precisely position the interface. Efficiency may also be improved by reducing the sampling in the sediment and in an absorbing layer that is used to truncate the computational domain. Nonuniform sampling may also be used to improve the implementation of a single-scattering approximation for sloping fluid-solid interfaces.
Collective dynamics of 'small-world' networks.
Watts, D J; Strogatz, S H
1998-06-04
Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as 'six degrees of separation'). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
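The construction described here, a ring lattice whose edges are rewired with probability p, is easy to reproduce. A short sketch using networkx follows; the parameter values are illustrative, and the path-length computation assumes the rewired graph stays connected (practically guaranteed for these sizes).

```python
import networkx as nx

# Ring lattice of n nodes, each joined to its k nearest neighbours, with each
# edge rewired to a random target with probability p (the paper's model).
n, k = 1000, 10
for p in (0.0, 0.01, 0.1, 1.0):
    G = nx.watts_strogatz_graph(n, k, p, seed=1)
    C = nx.average_clustering(G)              # stays lattice-like for small p
    L = nx.average_shortest_path_length(G)    # drops sharply once p > 0
    print(f"p={p:<5} C={C:.3f} L={L:.2f}")
```

The small-world regime appears at intermediate p, where C remains high while L has already collapsed toward the random-graph value.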
Structured background grids for generation of unstructured grids by advancing front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1991-01-01
A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2017-12-01
Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, which can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K=0.1 cm/s) sand as slow sand (K=0.01 cm/s) and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with the flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error. Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce error associated with geometric and harmonic means.
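The comparison step reduces to computing three candidate averages per grid and their RMS deviation from Keq. A minimal sketch, with array shapes and names as assumptions:

```python
import numpy as np

def mean_errors(k_samples, k_eq):
    """RMS deviation of three candidate averages from the equivalent
    homogeneous conductivity of each grid.

    k_samples : (m, s) array of s sampled K values for each of m grids
    k_eq      : (m,) equivalent homogeneous K of the corresponding grids
    """
    estimates = {
        "arithmetic": k_samples.mean(axis=1),
        "geometric": np.exp(np.log(k_samples).mean(axis=1)),
        "harmonic": 1.0 / (1.0 / k_samples).mean(axis=1),
    }
    return {name: float(np.sqrt(np.mean((est - k_eq) ** 2)))
            for name, est in estimates.items()}
```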
Grid generation on trimmed Bezier and NURBS quilted surfaces
NASA Technical Reports Server (NTRS)
Woan, Chung-Jin; Clever, Willard C.; Tam, Clement K.
1995-01-01
This paper presents some recently added capabilities to RAGGS, the Rockwell Automated Grid Generation System. Included are the trimmed-surface handling and display capability and structured and unstructured grid generation on trimmed Bezier and NURBS (non-uniform rational B-spline) quilted surfaces. Samples are given to demonstrate the new capabilities.
Positron lifetime spectrometer using a DC positron beam
Xu, Jun; Moxom, Jeremy
2003-10-21
An entrance grid is positioned in the incident beam path of a DC beam positron lifetime spectrometer. The electrical potential difference between the sample and the entrance grid provides simultaneous acceleration of both the primary positrons and the secondary electrons. The result is a reduction in the time spread induced by the energy distribution of the secondary electrons. In addition, the sample, sample holder, entrance grid, and entrance face of the multichannel plate electron detector assembly are made parallel to each other, and are arranged at a tilt angle to the axis of the positron beam to effectively separate the path of the secondary electrons from the path of the incident positrons.
Self-Assembly of Large-Scale Shape-Controlled DNA Nano-Structures
2014-12-16
... discharged carbon-coated TEM grids for 4 min and then stained for 1 min using a 2% aqueous uranyl formate solution containing 25 mM NaOH. Imaging was ... temperature for 3 h in the dark. TEM imaging: a 2.5 µl annealed sample was adsorbed for 2 min onto glow-discharged, carbon-coated TEM grids ... For TEM imaging, a 3.5 µL sample (1-5 nM) was adsorbed onto glow-discharged carbon-coated TEM grids for 4 min and then stained for 1 min ...
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2010-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Tests from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that the accuracies of the node-centered and the best cell-centered schemes are comparable at an equivalent number of degrees of freedom.
An IBM-compatible program for interactive three-dimensional gravity modeling
NASA Astrophysics Data System (ADS)
Broome, John
1992-04-01
G3D is a 3-D interactive gravity modeling program for IBM-compatible microcomputers. The program allows a model to be created interactively by defining multiple tabular bodies with horizontal tops and bottoms. The resulting anomaly is calculated using Plouff's algorithm at up to 2000 predefined random or regularly located points. In order to display the anomaly as a color image, the point data are interpolated onto a regular grid and quantized into discrete intervals. Observed and residual gravity field images also can be generated. Adjustments to the model are made using a graphics cursor to move, insert, and delete body points or whole bodies. To facilitate model changes, planview body outlines can be overlain on any of the gravity field images during editing. The model's geometry can be displayed in planview or along a user-defined vertical section. G3D is written in Microsoft® FORTRAN and utilizes the Halo-Professional® (or Halo-88®) graphics subroutine library. The program is written for use on an IBM-compatible microcomputer equipped with hard disk, numeric coprocessor, and VGA, Number Nine Revolution (Halo-88® only), or TIGA® compatible graphics cards. A mouse or digitizing tablet is recommended for cursor positioning. Program source code, a user's guide, and sample data are available as Geological Survey of Canada Open File (G3D: A Three-dimensional Gravity Modeling Program for IBM-compatible Microcomputers).
Biomass energy inventory and mapping system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasile, J.D.
1993-12-31
A four-stage biomass energy inventory and mapping system was conducted for the entire State of Ohio. The product is a set of maps and an inventory of the State's energy biomass resource, referenced to a one-kilometer grid square basis on the Universal Transverse Mercator (UTM) system. Each square kilometer is identified and mapped showing total British Thermal Unit (BTU) energy availability. Land cover percentages and BTU values are provided for each of nine biomass strata types for each one-kilometer grid square. LANDSAT satellite data was used as the primary stratifier. The second-stage sampling was the photointerpretation of randomly selected one-kilometer grid squares that exactly corresponded to the LANDSAT one-kilometer grid square classification orientation. Field sampling comprised the third stage of the energy biomass inventory system and was combined with the fourth-stage sample of laboratory biomass energy analysis using a bomb calorimeter; the results were then used to assign BTU values to the photointerpretation and to adjust the LANDSAT classification. The sampling error for the whole system was 3.91%.
Optimized Routing of Intelligent, Mobile Sensors for Dynamic, Data-Driven Sampling
2016-09-27
nonstationary random process that requires nonuniform sampling. The approach incorporates complementary representations of an unknown process: the first ... lookup table as follows. A uniform grid is created in the r-domain and mapped to the R-domain, which produces a nonuniform grid of locations in the R ... vehicle coverage algorithm that invokes the coordinate transformation from the previous section to generate nonuniform sampling trajectories [54]. We ...
Two variants of minimum discarded fill ordering
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, E.F.; Forsyth, P.A.; Tang, Wei-Pai
1991-01-01
It is well known that the ordering of the unknowns can have a significant effect on the convergence of Preconditioned Conjugate Gradient (PCG) methods. There has been considerable experimental work on the effects of ordering for regular finite difference problems. In many cases, good results have been obtained with preconditioners based on diagonal, spiral or natural row orderings. However, for finite element problems having unstructured grids or grids generated by a local refinement approach, it is difficult to define many of the orderings for more regular problems. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding high quality Incomplete LU (ILU) preconditioners, especially for problems arising from unstructured finite element grids. Testing indicates this algorithm can identify a rather complicated physical structure in an anisotropic problem and orders the unknowns in the "preferred" direction. The MDF technique may be viewed as the numerical analogue of the minimum deficiency algorithm in sparse matrix technology. At any stage of the partial elimination, the MDF technique chooses the next pivot node so as to minimize the amount of discarded fill. In this work, two efficient variants of the MDF technique are explored to produce cost-effective high-order ILU preconditioners. The Threshold MDF orderings combine MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques identify an ordering that encourages fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is closely related to the minimum degree algorithm. The MUM ordering is especially suited for large problems arising from Navier-Stokes problems. Some interesting pictures of the orderings are presented using a visualization tool. 22 refs., 4 figs., 7 tabs.
2017-01-01
Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589
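The attenuation effect can be reproduced with a few lines of simulation: two perfectly synchronised population series whose estimates carry independent sampling error show a correlation that decays as the error grows. All numbers below are illustrative, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 200                                        # time points per series
signal = rng.normal(size=t)
pop_a = 100.0 + 10.0 * signal                  # two perfectly synchronised
pop_b = 100.0 + 10.0 * signal                  # population time series
for err_sd in (0.0, 5.0, 10.0, 20.0):          # evaluation-error magnitude
    est_a = pop_a + rng.normal(scale=err_sd, size=t)
    est_b = pop_b + rng.normal(scale=err_sd, size=t)
    r = np.corrcoef(est_a, est_b)[0, 1]
    print(err_sd, round(float(r), 2))          # r falls from 1.0 toward 0
```

With error variance equal to the signal variance (err_sd = 10 here), the expected correlation is already down to about 0.5, matching the abstract's observation for coarse sampling grids.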
High Order Numerical Simulation of Waves Using Regular Grids and Non-conforming Interfaces
2013-10-06
We study the propagation of waves over large regions of space with smooth, but not necessarily constant, material characteristics, separated into sub-domains by interfaces of arbitrary shape.
1992-07-09
This sharp, cloud free view of San Antonio, Texas (29.5N, 98.5W) illustrates the classic pattern of western cities. The city has a late nineteenth century Anglo grid pattern overlaid onto an earlier, less regular Hispanic settlement. A well marked central business district having streets laid out north/south and east/west is surrounded by blocks of suburban homes and small businesses set between the older colonial radial transportation routes.
Baker, Daniel H; Meese, Tim S
2016-07-27
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
Asbestos Air Monitoring Results at Eleven Family Housing Areas throughout the United States.
1991-05-23
limits varied depending on sampling volumes and grid openings scanned. Therefore, the detection limits presented in the results summary tables vary ... The minimum sampling volume follows from (1 f/10 grid squares) × (855 mm²) × (1 liter) / [(0.005 f/cc) × (0.0056 mm²) × (1000 cc)] = 3054 liters, where: 1 f/10 grid squares is the maximum recommended ... diameter filter; 0.0056 mm² is the area of each grid square (75 µm per side) in a 200-mesh electron microscope grid. This value will vary from 0.0056 ...
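A quick check of the quoted figure, using the values recovered above:

```python
# Minimum air volume needed so that 1 fibre per 10 grid openings corresponds
# to the 0.005 f/cc target analytical sensitivity.
filter_area_mm2 = 855.0       # effective collection area of the filter
grid_square_mm2 = 0.0056      # one 200-mesh grid square, 75 um per side
fibres, squares = 1.0, 10.0   # 1 fibre per 10 grid squares scanned
sensitivity_f_cc = 0.005      # target sensitivity, fibres per cc
cc_per_litre = 1000.0

litres = (fibres * filter_area_mm2) / (
    squares * grid_square_mm2 * sensitivity_f_cc * cc_per_litre)
print(round(litres))          # -> 3054
```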
Surface topography of the Greenland Ice Sheet from satellite radar altimetry
NASA Technical Reports Server (NTRS)
Bindschadler, Robert A.; Zwally, H. Jay; Major, Judith A.; Brenner, Anita C.
1989-01-01
Surface elevation maps of the southern half of the Greenland subcontinent are produced from radar altimeter data acquired by the Seasat satellite. A summary of the processing procedure and examples of return waveform data are given. The elevation data are used to generate a regular grid which is then computer contoured to provide an elevation contour map. Ancillary maps show the statistical quality of the elevation data and various characteristics of the surface. The elevation map is used to define ice flow directions and delineate the major drainage basins. Regular maps of the Jakobshavns Glacier drainage basin and the ice divide in the vicinity of Crete Station are presented. Altimeter derived elevations are compared with elevations measured both by satellite geoceivers and optical surveying.
Self-Avoiding Walks over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves, which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n log n), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
NASA Astrophysics Data System (ADS)
Erdmann, G.
2015-08-01
The following text is an introduction into the economic theory of electricity supply and demand. The basic approach of economics has to reflect the physical peculiarities of electric power that is based on the directed movement of electrons from the minus pole to the plus pole of a voltage source. The regular grid supply of electricity is characterized by a largely constant frequency and voltage. Thus, from a physical point of view electricity is a homogeneous product. But from an economic point of view, electricity is not homogeneous. Wholesale electricity prices show significant fluctuations over time and between regions, because this product is not storable (in relevant quantities) and there may be bottlenecks in the transmission and distribution grids. The associated non-homogeneity is the starting point of the economic analysis of electricity markets.
Change Detection of Mobile LIDAR Data Using Cloud Computing
NASA Astrophysics Data System (ADS)
Liu, Kun; Boehm, Jan; Alis, Christian
2016-06-01
Change detection has long been a challenging problem, although a lot of research has been conducted in different fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend voxel grids and Apache Spark together to propose an efficient method to address the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of the same size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform which allows fault tolerance and memory caching. These features can significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
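Stripped of the Spark distribution layer, the voxel-grid comparison at the heart of such a method looks roughly like the sketch below; the voxel size, the count threshold, and the function names are assumptions.

```python
import numpy as np

def voxel_counts(points, origin, size):
    """Count points per voxel for an (n, 3) point array."""
    idx = np.floor((points - origin) / size).astype(np.int64)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(k): int(c) for k, c in zip(keys, counts)}

def changed_voxels(epoch1, epoch2, origin, size=0.5, min_diff=10):
    """Voxels whose occupancy differs notably between two scanning epochs."""
    a = voxel_counts(epoch1, origin, size)
    b = voxel_counts(epoch2, origin, size)
    return [v for v in set(a) | set(b)
            if abs(a.get(v, 0) - b.get(v, 0)) >= min_diff]
```

Because each point maps to its voxel independently, the counting step parallelises naturally, which is what makes the voxel representation a good fit for a map-reduce platform like Spark.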
Interactive grid generation for turbomachinery flow field simulations
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Eiseman, Peter R.; Reno, Charles
1988-01-01
The control point form of algebraic grid generation presented provides the means that are needed to generate well structured grids for turbomachinery flow simulations. It uses a sparse collection of control points distributed over the flow domain. The shape and position of coordinate curves can be adjusted from these control points while the grid conforms precisely to all boundaries. An interactive program called TURBO, which uses the control point form, is being developed. Basic features of the code are discussed and sample grids are presented. A finite volume LU implicit scheme is used to simulate flow in a turbine cascade on the grid generated by the program.
Transect versus grid trapping arrangements for sampling small-mammal communities
Dean E. Pearson; Leonard F. Ruggiero
2003-01-01
We compared transect and grid trapping arrangements for assessing small-mammal community composition and relative abundance for 2 years in 2 forest cover types in west-central Montana, USA. Transect arrangements yielded more total captures, more individual captures, and more species than grid arrangements in both cover types in both years. Differences between...
ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.
Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles
2018-04-19
Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminarily rejected in the early image processing step are recognized by running a second segmentation step. We developed a novel de-arraying approach for TMA analysis. By combining wavelet-based detection, active contour segmentation, and thin-plate spline interpolation, our approach is able to handle TMA images with high dynamic, poor signal-to-noise ratio, complex background and non-linear deformation of the TMA grid. In addition, the deformation estimation produces quantitative information to assess the manufacturing quality of TMAs.
Fiducial marker for correlating images
Miller, Lisa Marie [Rocky Point, NY; Smith, Randy J [Wading River, NY; Warren, John B [Port Jefferson, NY; Elliott, Donald [Hampton Bays, NY
2011-06-21
The invention relates to a fiducial marker having a marking grid that is used to correlate and view images produced by different imaging modalities or different imaging and viewing modalities. More specifically, the invention relates to the fiducial marking grid that has a grid pattern for producing either a viewing image and/or a first analytical image that can be overlaid with at least one other second analytical image in order to view a light path or to image different imaging modalities. Depending on the analysis, the grid pattern has a single layer of a certain thickness or at least two layers of certain thicknesses. In either case, the grid pattern is imageable by each imaging or viewing modality used in the analysis. Further, when viewing a light path, the light path of the analytical modality cannot be visualized by the viewing modality (e.g., a light microscope objective). By correlating these images, the ability to analyze a thin sample that is, for example, biological in nature but yet contains trace metal ions is enhanced. Specifically, it is desired to analyze both the organic matter of the biological sample and the trace metal ions contained within the biological sample without adding or using extrinsic labels or stains.
Orientation domains: A mobile grid clustering algorithm with spherical corrections
NASA Astrophysics Data System (ADS)
Mencos, Joana; Gratacós, Oscar; Farré, Mercè; Escalante, Joan; Arbués, Pau; Muñoz, Josep Anton
2012-12-01
An algorithm has been designed and tested which was devised as a tool assisting the analysis of geological structures solely from orientation data. More specifically, the algorithm was intended for the analysis of geological structures that can be approached as planar and piecewise features, like many folded strata. Input orientation data are expressed as pairs of angles (azimuth and dip). The algorithm starts by considering the data in Cartesian coordinates. This is followed by a search for an initial clustering solution, which is achieved by comparing the results output from the systematic shift of a regular rigid grid over the data. This initial solution is optimal (achieves minimum square error) once the grid size and the shift increment are fixed. Finally, the algorithm corrects for the variable spread that is generally expected from the data type using a reshaped non-rigid grid. The algorithm is size-oriented, which implies the application of conditions on cluster size throughout the process, in contrast to density-oriented algorithms, which are also widely used when dealing with spatial data. Results are derived in a few seconds and, when tested on synthetic examples, they were found to be consistent and reliable. This makes the algorithm a valuable alternative to the time-consuming traditional approaches available to geologists.
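The first step, taking (azimuth, dip) pairs to the Cartesian unit vectors that the rigid grid then sweeps, might look as follows. The pole convention (upward-pointing normals in an east-north-up frame) is an assumption, since conventions vary between structural-geology packages.

```python
import numpy as np

def pole_to_xyz(azimuth_deg, dip_deg):
    """Convert (dip direction, dip) pairs in degrees to upward-pointing unit
    poles in an east-north-up frame, the Cartesian form that gets clustered."""
    az, dip = np.radians(azimuth_deg), np.radians(dip_deg)
    return np.column_stack([np.sin(dip) * np.sin(az),   # east
                            np.sin(dip) * np.cos(az),   # north
                            np.cos(dip)])               # up

poles = pole_to_xyz(np.array([120.0, 122.0]), np.array([35.0, 34.0]))
angle = np.degrees(np.arccos(poles[0] @ poles[1]))
print(angle)   # ~2 degrees apart: the pair belongs to one orientation domain
```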
Big Geo Data Services: From More Bytes to More Barrels
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Baumann, Peter
2016-04-01
The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), which is currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The Holy Grail envisioned is that these approaches can be merged into a single service interface at some time. We present both grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.
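For illustration, here is how a WCPS datacube query might be submitted to a rasdaman-style WCS endpoint over HTTP. The endpoint URL, coverage name, and subset bounds are placeholders; only the shape of the ProcessCoverages request and the query syntax follow the OGC WCS/WCPS standards.

```python
import requests

endpoint = "https://example.org/rasdaman/ows"   # placeholder endpoint
query = """
for $c in (SeismicCube)
return encode($c[Lat(51.0:52.0), Long(2.0:3.0)], "image/tiff")
"""
resp = requests.get(endpoint, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": query})
resp.raise_for_status()
with open("subset.tif", "wb") as f:
    f.write(resp.content)   # server-side subsetting: only the cutout travels
```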
Improving ATLAS grid site reliability with functional tests using HammerCloud
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan
2012-12-01
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user or production jobs from being sent to problematic sites.
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail for the specified degree of ambiguity, with the total number of sought medium parameters of order n × 10³. A priori and a posteriori estimates of the degree of ambiguity of the approximate solutions are calculated. The method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
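The core idea (training a network to approximate the inverse operator, mapping data to grid-block model parameters) can be illustrated with a toy regression; everything below (the stand-in forward operator, layer sizes, noise level) is an assumption for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy sketch of approximating an inverse operator: learn model
# parameters m from synthetic data d = F(m) + noise, with F a
# stand-in forward operator (the real method uses geophysical
# forward modelling on a regularized parameterization grid).
rng = np.random.default_rng(0)
m = rng.uniform(0.0, 1.0, size=(5000, 8))      # grid-block parameters
A = rng.normal(size=(8, 16))
d = np.tanh(m @ A) + 0.01 * rng.normal(size=(5000, 16))
inverse_op = MLPRegressor(hidden_layer_sizes=(64, 64),
                          max_iter=500).fit(d, m)
m_est = inverse_op.predict(d[:10])             # approximate inverse
```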
Redistribution population data across a regular spatial grid according to buildings characteristics
NASA Astrophysics Data System (ADS)
Calka, Beata; Bielecka, Elzbieta; Zdunkiewicz, Katarzyna
2016-12-01
Population data are generally provided by state census organisations at predefined census enumeration units. However, these datasets are very often required at user-defined spatial units that differ from the census output levels. A number of population estimation techniques have been developed to address this problem. This article is one such attempt, aimed at improving county-level population estimates by using spatial disaggregation models supported by building characteristics derived from the national topographic database and by the average area of a flat. The experimental gridded population surface was created for Opatów county, a sparsely populated rural region located in Central Poland. The method relies on the geolocation of population counts in buildings, taking into account building volume and structural building type, and then aggregating the population totals in a 1 km quadrilateral grid. The overall quality of the population distribution surface, expressed by the RMSE, equals 9 persons, and the MAE equals 0.01. We also discovered that nearly 20% of the total county area is unpopulated and that 80% of the people live on 33% of the county territory.
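A minimal sketch of this kind of dasymetric disaggregation, assuming per-building volume is the only weight (the paper also uses structural type and average flat area) and using hypothetical array names:

```python
import numpy as np

def disaggregate_to_grid(census_total, building_volume, cell_index,
                         n_cells):
    """Split a census count over buildings in proportion to building
    volume, then sum the per-building counts into grid cells.
    cell_index maps each building to its 1 km cell."""
    weights = building_volume / building_volume.sum()
    per_building = census_total * weights
    return np.bincount(cell_index, weights=per_building,
                       minlength=n_cells)

grid_pop = disaggregate_to_grid(1000.0,
                                np.array([300.0, 120.0, 580.0]),
                                np.array([0, 0, 1]), n_cells=4)
```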
Two decades [1992-2012] of surface wind analyses based on satellite scatterometer observations
NASA Astrophysics Data System (ADS)
Desbiolles, Fabien; Bentamy, Abderrahim; Blanke, Bruno; Roy, Claude; Mestas-Nuñez, Alberto M.; Grodsky, Semyon A.; Herbette, Steven; Cambon, Gildas; Maes, Christophe
2017-04-01
Surface winds (equivalent neutral wind velocities at 10 m) from scatterometer missions since 1992 have been used to build up a 20-year climate series. Optimal interpolation and kriging methods have been applied to continuously provide surface wind speed and direction estimates over the global ocean on a regular grid in space and time. The use of other data sources such as radiometer data (SSM/I) and atmospheric wind reanalyses (ERA-Interim) has allowed the construction of a blended product available at 1/4° spatial resolution and every 6 h from 1992 to 2012. Sampling issues throughout the different missions (ERS-1, ERS-2, QuikSCAT, and ASCAT) and their possible impact on the homogeneity of the gridded product are discussed. In addition, we carefully assess the quality of the blended product in the absence of scatterometer data (1992 to 1999). Data selection experiments show that the description of the surface wind is significantly improved by including the scatterometer winds. The blended winds compare well with buoy winds (1992-2012) and resolve finer spatial scales than atmospheric reanalyses, which makes them suitable for studying air-sea interactions at the mesoscale. The seasonal cycle and interannual variability of the product compare well with other long-term wind analyses. The product is used to calculate 20-year trends in wind speed, as well as in zonal and meridional wind components. These trends show an important asymmetry between the southern and northern hemispheres, which may be significant for climate studies.
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
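As a toy illustration of the continuous, linear radial weighting factors described above (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def radial_weight(r, r_ref=1.0, w_min=1.0):
    """Continuous, linear radial weighting factor: a simulated molecule
    at radius r stands for roughly w(r) real molecules, with the weight
    growing linearly in r so that the small-volume cells near the axis
    keep adequate numbers of simulated molecules. r_ref and w_min are
    illustrative parameters, not values from the paper."""
    return w_min + np.asarray(r, dtype=float) / r_ref

print(radial_weight([0.0, 0.5, 1.0]))
```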
Integrating TITAN2D Geophysical Mass Flow Model with GIS
NASA Astrophysics Data System (ADS)
Namikawa, L. M.; Renschler, C.
2005-12-01
TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, using a GIS to implement simulations of complex, physically based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need specific optimizations, such as adaptive grids and parallel processing. A GIS developed for general use therefore cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every computed cell. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, the latter usually from a remote sensor image. The data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data. Of the several existing methods to estimate slope and curvature from elevation, the one selected is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 neighboring grid values. The method was implemented, simulation results were compared to derivatives estimated by a simplified version of the method (which uses only 4 neighboring cells), and the full method was shown to perform better. TITAN2D uses an adaptive mesh grid, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope, and curvature are required at a resolution higher (smaller cell size) than the original, and using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information the nearest-neighbor method is used, since interpolation of categorical data is meaningless. The low-fidelity requirements of visualization allow the nearest-neighbor method to be used for texture as well. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers; its performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
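A minimal sketch of the bilinear interpolation step described above, assuming unit grid spacing and in-bounds fractional coordinates:

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation on a regular grid with unit spacing;
    (x, y) are fractional (column, row) coordinates. The value is the
    distance-weighted average of the four nearest cell-centre values."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    v00, v10 = grid[y0, x0], grid[y0, x0 + 1]
    v01, v11 = grid[y0 + 1, x0], grid[y0 + 1, x0 + 1]
    return ((1 - fx) * (1 - fy) * v00 + fx * (1 - fy) * v10 +
            (1 - fx) * fy * v01 + fx * fy * v11)

z = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear(z, 0.5, 0.5))   # 1.5
```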
Evaluation of Statistical Methodologies Used in U. S. Army Ordnance and Explosive Work
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostrouchov, G
2000-02-14
Oak Ridge National Laboratory was tasked by the U.S. Army Engineering and Support Center (Huntsville, AL) to evaluate the mathematical basis of existing software tools used to assist the Army with the characterization of sites potentially contaminated with unexploded ordnance (UXO). These software tools are collectively known as SiteStats/GridStats. The first purpose of the software is to guide sampling of underground anomalies to estimate a site's UXO density. The second purpose is to delineate areas of homogeneous UXO density that can be used in the formulation of response actions. It was found that SiteStats/GridStats does adequately guide the sampling so that the UXO density estimator for a sector is unbiased. However, the software's techniques for delineation of homogeneous areas perform less well than visual inspection, which is frequently used to override the software in the overall sectorization methodology. The main problems with the software lie in the criteria used to detect nonhomogeneity and those used to recommend the number of homogeneous subareas. SiteStats/GridStats is not a decision-making tool in the classical sense. Although it does provide information to decision makers, it does not require a decision based on that information. SiteStats/GridStats provides information that is supplemented by visual inspections, land-use plans, and risk estimates prior to making any decisions. Although the sector UXO density estimator is unbiased regardless of UXO density variation within a sector, its variability increases with increased sector density variation. For this reason, the current practice of visual inspection of individual sampled grid densities (as provided by SiteStats/GridStats) is necessary to ensure approximate homogeneity, particularly at sites with medium to high UXO density. Together with SiteStats/GridStats override capabilities, this provides a sufficient mechanism for homogeneous sectorization and thus yields representative UXO density estimates. Objections raised by various parties to the use of a numerical "discriminator" in SiteStats/GridStats likely arose because the statistical technique concerned is customarily applied for a different purpose and because of poor documentation. The "discriminator" in SiteStats/GridStats is a "tuning parameter" for the sampling process, and it affects the precision of the grid density estimates through changes in the required sample size. It is recommended that sector characterization in terms of a map showing contour lines of constant UXO density with an expressed uncertainty or confidence level is a better basis for remediation decisions than a sector UXO density point estimate. A number of spatial density estimation techniques could be adapted to the UXO density estimation problem.
Single and double grid long-range alpha detectors
MacArthur, Duncan W.; Allander, Krag S.
1993-01-01
Alpha particle detectors capable of detecting alpha radiation from distant sources. In one embodiment, a voltage is generated in a single electrically conductive grid while a fan draws air containing air molecules ionized by alpha particles through an air passage and across the conductive grid. The current in the conductive grid can be detected and used for measurement or alarm. Another embodiment builds on this concept and provides an additional grid so that air ions of both polarities can be detected. The detector can be used in many applications, such as for pipe or duct, tank, or soil sample monitoring.
Eisele, Thomas P; Keating, Joseph; Swalm, Chris; Mbogo, Charles M; Githeko, Andrew K; Regens, James L; Githure, John I; Andrews, Linda; Beier, John C
2003-12-10
BACKGROUND: Remote sensing technology provides detailed spectral and thermal images of the earth's surface from which surrogate ecological indicators of complex processes can be measured. METHODS: Remote sensing data were overlaid onto georeferenced entomological and human ecological data randomly sampled during April and May 2001 in the cities of Kisumu (population ≈ 320,000) and Malindi (population ≈ 81,000), Kenya. Grid cells of 270 m × 270 m were used to generate spatial sampling units for each city for the collection of entomological and human ecological field-based data. Multispectral Thermal Imager (MTI) satellite data in the visible spectrum at five meter resolution were acquired for Kisumu and Malindi during February and March 2001, respectively. The MTI data were fit and aggregated to the 270 m × 270 m grid cells used in field-based sampling using a geographic information system. The normalized difference vegetation index (NDVI) was calculated and scaled from MTI data for selected grid cells. Regression analysis was used to assess associations between NDVI values and entomological and human ecological variables at the grid cell level. RESULTS: Multivariate linear regression showed that as household density increased, mean grid cell NDVI decreased (global F-test = 9.81, df 3,72, P < 0.01; adjusted R² = 0.26). Given household density, the number of potential anopheline larval habitats per grid cell also increased with increasing values of mean grid cell NDVI (global F-test = 14.29, df 3,36, P < 0.01; adjusted R² = 0.51). CONCLUSIONS: NDVI values obtained from MTI data were successfully overlaid onto georeferenced entomological and human ecological data spatially sampled at a scale of 270 m × 270 m. Results demonstrate that NDVI at such a scale was sufficient to describe variations in entomological and human ecological parameters across both cities.
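A minimal sketch of the NDVI computation and its aggregation to sampling cells; the band arrays and the pixel-to-cell index are hypothetical stand-ins for the MTI processing chain:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - red) / (NIR + red), per pixel."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def cell_mean(values, cell_index, n_cells):
    """Mean of pixel values per sampling cell, where cell_index maps
    each pixel to its 270 m x 270 m grid cell."""
    sums = np.bincount(cell_index, weights=values, minlength=n_cells)
    counts = np.bincount(cell_index, minlength=n_cells)
    return sums / np.maximum(counts, 1)
```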
Fine-Scale Survey of Right and Humpback Whale Prey Abundance and Distribution
2011-09-30
information, we accomplished: (1) Identification of the prey type (e.g. copepod, krill, fish) and numerical abundance of zooplankton and nekton in... primarily copepods in this area) and nekton (small fish such as sand lance or herring). The general approach is to conduct a regular grid-like... correlated right whale location in the water column with the distribution of copepods measured acoustically which has resulted in a high-profile, peer
NASA Astrophysics Data System (ADS)
Jahandari, H.; Farquharson, C. G.
2017-11-01
Unstructured grids enable representing arbitrary structures more accurately and with fewer cells than regular structured grids. These grids also allow more efficient refinement than rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model, and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step, and the sensitivity matrix-vector products required by this solver are calculated using pseudo-forward problems. This alleviates the need to explicitly form the Hessian or Jacobian matrices, which significantly reduces the required memory. Forward problems are formulated using an edge-based finite-element approach, and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration, which greatly reduces the computation time. Two examples are presented to show the capability of the algorithm: the first uses a benchmark model, while the second represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions satisfactorily recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
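The matrix-free idea described above (sensitivity products via pseudo-forward and adjoint solves, never forming J or the Hessian) can be sketched with a conjugate-gradient solve of damped normal equations; `jvp`, `jtvp` and the damping weight `beta` are placeholders for the paper's actual operators and regularization, not its implementation.

```python
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(jvp, jtvp, residual, n_model, beta=1e-2):
    """One model-space Gauss-Newton step without forming J explicitly.
    jvp(v) returns J @ v (a pseudo-forward solve); jtvp(w) returns
    J.T @ w (an adjoint solve). Solves (J.T J + beta I) dm = -J.T r
    with CG, so only matrix-vector products are ever needed."""
    A = LinearOperator((n_model, n_model),
                       matvec=lambda v: jtvp(jvp(v)) + beta * v)
    dm, _ = cg(A, -jtvp(residual), maxiter=200)
    return dm
```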
NASA Astrophysics Data System (ADS)
Sun, K.; Zhu, L.; Gonzalez Abad, G.; Nowlan, C. R.; Miller, C. E.; Huang, G.; Liu, X.; Chance, K.; Yang, K.
2017-12-01
It has been well demonstrated that regridding Level 2 products (satellite observations from individual footprints, or pixels) from multiple sensors/species onto regular spatial and temporal grids makes the data more accessible for scientific studies and can even lead to additional discoveries. However, synergizing multiple species retrieved from multiple satellite sensors faces many challenges, including differences in spatial coverage, viewing geometry, and data filtering criteria. These differences will lead to errors and biases if not treated carefully. Operational gridded products are often at 0.25°×0.25° resolution on a global scale, which is too coarse for local heterogeneous emission sources (e.g., urban areas), and at fixed temporal intervals (e.g., daily or monthly). We propose a consistent framework to fully use and properly weight the information of all possible individual satellite observations. A key aspect of this work is an accurate knowledge of the spatial response function (SRF) of the satellite Level 2 pixels. We found that the conventional overlap-area-weighting method (tessellation) is accurate only when the SRF is homogeneous within the parameterized pixel boundary and zero outside the boundary. A tessellation error arises if the SRF is a smooth distribution and this distribution is not properly accounted for. On the other hand, discretizing the SRF on the destination grid will also induce errors. By balancing these error sources, we found that the SRF should be used when gridding OMI data to resolutions as fine as 0.2°. Case studies merging multiple species and wind data into a 0.01° grid will be shown in the presentation.
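A sketch of the SRF-weighted alternative to tessellation: each Level 2 value is spread over the destination grid with weights from its discretized SRF, and the gridded field is the weight-normalized sum. The `srf` callable and the observation records are hypothetical; a real SRF would come from instrument characterization.

```python
import numpy as np

def srf_weighted_grid(shape, observations, srf):
    """Grid Level 2 values with SRF weights instead of overlap areas.
    observations: iterable of (value, meta) records; srf(meta) returns
    that observation's spatial response discretized on the destination
    grid (an array of `shape`). Both are hypothetical stand-ins."""
    num, den = np.zeros(shape), np.zeros(shape)
    for value, meta in observations:
        w = srf(meta)          # discretized spatial response function
        num += w * value
        den += w
    out = np.full(shape, np.nan)
    out[den > 0] = num[den > 0] / den[den > 0]
    return out
```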
Uncertainty in the profitability of fertilizer management based on various sampling designs.
NASA Astrophysics Data System (ADS)
Muhammed, Shibu; Marchant, Ben; Webster, Richard; Milne, Alice; Dailey, Gordon; Whitmore, Andrew
2016-04-01
Many farmers sample their soil to measure the concentrations of plant nutrients, including phosphorus (P), so as to decide how much fertilizer to apply. Now that fertilizer can be applied at variable rates, farmers want to know whether maps of nutrient concentration made from grid samples or from field subdivisions (zones within their fields) are merited: do such maps lead to greater profit than would a single measurement on a bulked sample for each field when all costs are taken into account? We have examined the merits of grid-based and zone-based sampling strategies over single field-based averages using continuous spatial data on wheat yields at harvest in six fields in southern England and simulated concentrations of P in the soil. Features of the spatial variation in the yields provide predictions about which sampling scheme is likely to be most cost-effective, but there is uncertainty associated with these predictions that must be communicated to farmers. Where variograms of the yield have large variances and long effective ranges, grid sampling and mapping nutrients are likely to be cost-effective. Where effective ranges are short, sampling must be dense to reveal the spatial variation and may be expensive. In these circumstances variable-rate application of fertilizer is likely to be impracticable and almost certainly not cost-effective. We have explored several methods for communicating these results and found that the most effective was to use probability maps that show the likelihood of grid-based and zone-based sampling being more profitable than a field-based estimate.
Nondestructive Quantitative Sampling for Freshwater Mussels in Variable Substrate Streams
John B. Richardson; Winston Paul Smith
1994-01-01
Unionid mussels were sampled in the Big South Fork of the Cumberland River, Tennessee and Kentucky, from July to October 1988 with a chain grid of 10 1-m² quadrats. The chain grid was used to define 100-m² areas along the stream bed by repeatedly moving the 10-m² rectangle upstream. Within each 100-m
Forest resources of southeast Alaska, 2000: results of a single-phase systematic sample.
Willem W.S. van Hees
2003-01-01
A baseline assessment of forest resources in southeast Alaska was made by using a single-phase, unstratified, systematic-grid sample, with ground plots established at each grid intersection. Ratio-of-means estimators were used to develop population estimates. Forests cover an estimated 48 percent of the 22.9-million-acre southeast Alaska inventory unit. Dominant forest...
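As a sketch of the ratio-of-means estimation mentioned above, assuming a known population total of an auxiliary variable (the variable names are illustrative, not the inventory's actual variables):

```python
import numpy as np

def ratio_of_means_total(y_sample, x_sample, x_pop_total):
    """Estimate the population total of y by scaling the known total
    of an auxiliary variable x by the sample ratio mean(y)/mean(x)."""
    ratio = np.mean(y_sample) / np.mean(x_sample)
    return ratio * x_pop_total
```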
A new statistic to express the uncertainty of kriging predictions for purposes of survey planning.
NASA Astrophysics Data System (ADS)
Lark, R. M.; Lapworth, D. J.
2014-05-01
It is well known that one advantage of kriging for spatial prediction is that, given the random effects model, the prediction error variance can be computed a priori for alternative sampling designs. This allows one to compare sampling schemes, in particular sampling at different densities, and so to decide on one which meets requirements in terms of the uncertainty of the resulting predictions. However, the planning of sampling schemes must account not only for statistical considerations but also for logistics and cost. This requires effective communication between statisticians, soil scientists and data users/sponsors such as managers, regulators or civil servants. In our experience the latter parties are not necessarily able to interpret the prediction error variance as a measure of uncertainty for decision making. In some contexts (particularly the solution of very specific problems at large cartographic scales, e.g. site remediation and precision farming) it is possible to translate the uncertainty of predictions into a loss function directly comparable with the cost incurred in increasing precision. Often, however, sampling must be planned for more generic purposes (e.g. baseline or exploratory geochemical surveys). In this latter context the prediction error variance may be of limited value to a non-statistician who has to make a decision on sample intensity and associated cost. We propose an alternative criterion for these circumstances to aid communication between statisticians and data users about the uncertainty of geostatistical surveys based on different sampling intensities. The criterion is the consistency of estimates made from two non-coincident instantiations of a proposed sample design. We consider square sample grids; one instantiation is offset from the second by half the grid spacing along the rows and along the columns. If a sample grid is coarse relative to the important scales of variation in the target property, then the consistency of predictions from the two instantiations is expected to be small, and it can be increased by reducing the grid spacing. The measure of consistency is the correlation between estimates from the two instantiations of the sample grid, averaged over a grid cell. We call this the offset correlation; it can be calculated from the variogram. We propose that this measure is easier to grasp intuitively than the prediction error variance, and it has the advantage of an upper bound (1.0), which will aid its interpretation. This quality measure is illustrated for some hypothetical examples, considering both ordinary kriging and factorial kriging of the variable of interest. It is also illustrated using data on metal concentrations in the soil of north-east England.
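The paper computes the offset correlation analytically from the variogram; the sketch below is a Monte Carlo analogue under stated assumptions (simple kriging with a known exponential covariance, illustrative spacing and range), correlating the two grids' estimates across simulated fields and averaging over a cell's prediction points.

```python
import numpy as np

rng = np.random.default_rng(1)

def expcov(d, sill=1.0, eff_range=8.0):
    """Exponential covariance; sill and range are illustrative."""
    return sill * np.exp(-d / eff_range)

spacing = 10.0
grid_a = np.array([(x, y) for x in np.arange(-20.0, 31.0, spacing)
                          for y in np.arange(-20.0, 31.0, spacing)])
grid_b = grid_a + spacing / 2.0          # offset by half the spacing
cell = np.array([(x, y) for x in np.arange(0.5, 10.0, 2.0)
                        for y in np.arange(0.5, 10.0, 2.0)])

pts = np.vstack([grid_a, grid_b, cell])
C = expcov(np.linalg.norm(pts[:, None] - pts[None], axis=-1))
ia = np.arange(len(grid_a))
ib = np.arange(len(grid_a), len(grid_a) + len(grid_b))
ic = np.arange(len(grid_a) + len(grid_b), len(pts))

# Simple-kriging weights (known covariance) for each prediction point
wa = np.linalg.solve(C[np.ix_(ia, ia)], C[np.ix_(ia, ic)])
wb = np.linalg.solve(C[np.ix_(ib, ib)], C[np.ix_(ib, ic)])

# Correlate the two grids' estimates across simulated fields, then
# average the correlation over the cell's prediction points
Z = rng.multivariate_normal(np.zeros(len(pts)), C, size=2000)
pred_a, pred_b = Z[:, ia] @ wa, Z[:, ib] @ wb
offset_corr = np.mean([np.corrcoef(pred_a[:, j], pred_b[:, j])[0, 1]
                       for j in range(len(ic))])
print(offset_corr)
```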
Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen
1997-01-01
In the second year, we continued to build upon and improve the scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data change but the grid does not, and we are now working on extending it to more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique is at present limited to regular grids, we are pursuing algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation. We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences of the images under the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the auto-correlation function and the Fourier transform of the image and of the image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
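A small sketch in the spirit of the idtools comparisons (per-channel RGB RMSE plus the Fourier power spectrum of the difference image); the function name and metric choices are illustrative, not the actual tool:

```python
import numpy as np

def image_diff_stats(img_a, img_b):
    """Compare two (H, W, 3) RGB images: per-channel RMSE and the
    centered 2D power spectrum of the grayscale difference."""
    diff = img_a.astype(float) - img_b.astype(float)
    rmse = np.sqrt((diff ** 2).mean(axis=(0, 1)))   # one value per channel
    gray = diff.mean(axis=2)
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    return rmse, power
```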
The Space-Wise Global Gravity Model from GOCE Nominal Mission Data
NASA Astrophysics Data System (ADS)
Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.
2011-12-01
In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular, the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, partly to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However, this model showed an over-regularization at the highest degrees of the spherical harmonic expansion, due to the technique used to combine intermediate solutions (each based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude. These grids have an information content very similar to that of the original along-orbit data, but they are much easier to handle. In addition, they are estimated by local least-squares collocation and therefore, although computed with a unique global covariance function, they could yield more information at the local level than the spherical harmonic coefficients of the global model. For this reason these grids seem useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work, together with proposals for possible future improvements. A test comparing the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, and the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data, with the weights correcting for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
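A minimal sketch of the inverse transform sampling used for the VSM's position and energy PDFs, assuming a tabulated (histogram-style) PDF; the example spectrum is made up for illustration:

```python
import numpy as np

def sample_from_tabulated_pdf(x, pdf, n, rng=None):
    """Draw n samples from a PDF tabulated at points x: build the
    CDF, draw uniforms, and invert the CDF by interpolation."""
    rng = rng or np.random.default_rng()
    cdf = np.cumsum(np.asarray(pdf, dtype=float))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, x)

# Example: sample energies from a crude, made-up 6 MV-like spectrum
e = np.linspace(0.1, 6.0, 60)
samples = sample_from_tabulated_pdf(e, e * np.exp(-e), 10_000)
```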
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open-source software CloudCompare.
Dual-Beam Sample Preparation | Materials Science | NREL
Images show cutting of trenches to remove a wafer section and transferring that section to a grid post: the wafer section is lifted out, extracted from the wafer, then transferred and welded to a TEM grid post. Final thinning down to a thickness
ERIC Educational Resources Information Center
Alem, Jaouad; Boudreau-Lariviere, Celine
2012-01-01
The objective of the present study is to analyze four metric qualities of an assessment grid for internship placements used by professionals to evaluate a sample of 110 Franco-Ontarian student interns registered between 2006 and 2009 at Laurentian University in the School of Human Kinetics. The evaluation grid was composed of 26 criteria. The four…
Spacecraft hazard avoidance utilizing structured light
NASA Technical Reports Server (NTRS)
Liebe, Carl Christian; Padgett, Curtis; Chapsky, Jacob; Wilson, Daniel; Brown, Kenneth; Jerebets, Sergei; Goldberg, Hannah; Schroeder, Jeffrey
2006-01-01
At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the 3-dimensional structure of nearby objects using a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that is projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle. The distances to all beam intersections with the host are calculated by triangulation.
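The per-beam range computation reduces to law-of-sines triangulation; a sketch under the assumption that both angles are known (from the grating geometry and from camera calibration, which the flight sensor would supply):

```python
import numpy as np

def triangulated_range(baseline_m, alpha, beta):
    """Range from the camera to a laser spot. alpha: angle (rad)
    between the baseline and the projected beam; beta: angle (rad)
    between the baseline and the camera's line of sight to the spot.
    By the law of sines, range = b * sin(alpha) / sin(alpha + beta)."""
    return baseline_m * np.sin(alpha) / np.sin(alpha + beta)

# Example: 0.2 m baseline, beam at 80 deg, camera ray at 70 deg
print(triangulated_range(0.2, np.radians(80), np.radians(70)))
```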
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors, including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because doing so reduced forest AGB sampling errors by 15-38%. Furthermore, spaceborne global-scale accuracy requirements were achieved. At least 80% of the grid cells at the 100 m, 250 m, 500 m, and 1 km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR alone, accuracy requirements were met at the 500 m and 250 m grid levels, respectively.
Evolution of passive scalar statistics in a spatially developing turbulence
NASA Astrophysics Data System (ADS)
Paul, I.; Papadakis, G.; Vassilicos, J. C.
2018-02-01
We investigate the evolution of passive scalar statistics in a spatially developing turbulence using direct numerical simulation. Turbulence is generated by a square grid element, which is heated continuously, and the passive scalar is temperature. The square element is the fundamental building block for both regular and fractal grids. We trace the dominant mechanisms responsible for the dynamical evolution of scalar-variance and its dissipation along the bar and grid-element centerlines. The scalar-variance is generated predominantly by the action of the mean scalar gradient behind the bar and is transported laterally by turbulent fluctuations to the grid-element centerline. The scalar-variance dissipation (proportional to the scalar-gradient variance) is produced primarily by the compression of the fluctuating scalar-gradient vector by the turbulent strain rate, while the contribution of the mean velocity and scalar fields is negligible. Close to the grid element the scalar spectrum exhibits a well-defined -5/3 power law, even though the basic premises of the Kolmogorov-Obukhov-Corrsin theory are not satisfied (the fluctuating scalar field is highly intermittent, inhomogeneous, and anisotropic, and the local Corrsin-microscale Péclet number is small). At this location, the PDF of scalar-gradient production is only slightly skewed towards positive values, and the fluctuating scalar-gradient vector aligns only with the compressive strain-rate eigenvector. The scalar-gradient vector is stretched or compressed more strongly than the vorticity vector by the turbulent strain rate throughout the grid-element centerline. However, the alignment of the former changes much earlier in space than that of the latter, causing the scalar-variance dissipation to decay earlier along the grid-element centerline than the turbulent kinetic energy dissipation. The universal alignment behavior of the scalar-gradient vector is found far downstream, although the local Reynolds and Péclet numbers (based on the Taylor and Corrsin length scales, respectively) are low.
NASA Astrophysics Data System (ADS)
Kyselý, Jan; Plavcová, Eva
2010-12-01
The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in the tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are on average almost 2°C too warm in E-OBS. A large bias is found also for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning the validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and the limitations on its applicability for evaluating RCMs stem primarily from (1) the insufficient density of station observations used for the interpolation, including the fact that the stations available may not be representative of a wider area, and (2) the inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essential for more reliable validation of climate models against recent climate on a continental scale.
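A sketch of the cold-tail comparison described above, for two co-registered daily Tmin arrays of shape (time, y, x); the argument names and the per-gridpoint 10% threshold convention are illustrative:

```python
import numpy as np

def cold_tail_bias(tmin_ref, tmin_test, q=0.10):
    """Mean Tmin difference (test - reference) over days below the
    reference's q-quantile at each grid point."""
    thresh = np.quantile(tmin_ref, q, axis=0)
    mask = tmin_ref < thresh
    diff_sum = np.where(mask, tmin_test - tmin_ref, 0.0).sum(axis=0)
    return diff_sum / np.maximum(mask.sum(axis=0), 1)
```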
Galway, Lp; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, Wiliam M; Rajaratnam, Julie; Takaro, Tim K
2012-04-27
Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
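A minimal sketch of the two-stage selection under simplifying assumptions (stage 1 as probability-proportional-to-size selection with replacement over gridded population, stage 2 as simple random sampling of households; the study's exact rules may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

def select_clusters(grid_population, n_clusters):
    """Stage 1: pick grid cells with probability proportional to
    their gridded population estimates."""
    p = grid_population / grid_population.sum()
    return rng.choice(grid_population.size, size=n_clusters, p=p)

def select_households(n_households_in_cell, n_per_cluster):
    """Stage 2: simple random sample of households within a selected
    cell (done against an overlaid sampling grid in the study)."""
    return rng.choice(n_households_in_cell, size=n_per_cluster,
                      replace=False)

cells = select_clusters(np.array([120.0, 40.0, 900.0, 300.0]), 2)
```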
NASA Astrophysics Data System (ADS)
Eberle, Detlef G.; Daudi, Elias X. F.; Muiuane, Elônio A.; Nyabeze, Peter; Pontavida, Alfredo M.
2012-01-01
The National Geology Directorate of Mozambique (DNG) and Maputo-based Eduardo-Mondlane University (UEM) entered a joint venture with the South African Council for Geoscience (CGS) to conduct a case study over the meso-Proterozoic Alto Ligonha pegmatite field in the Zambézia Province of northeastern Mozambique to support the local exploration and mining sectors. Rare-metal minerals, i.e. tantalum and niobium, as well as rare-earth minerals have been mined in the Alto Ligonha pegmatite field for decades, but due to the civil war (1977-1992) production nearly ceased. The Government now strives to promote mining in the region as a contribution to poverty alleviation. This study was undertaken to facilitate the extraction of geological information from the high resolution airborne magnetic and radiometric data sets recently acquired through a World Bank funded survey and mapping project. The aim was to generate a value-added map from the airborne geophysical data that is easier to read and use by the exploration and mining industries than mere airborne geophysical grid data or maps. As a first step towards clustering, thorium (Th) and potassium (K) concentrations were determined from the airborne geophysical data, as well as apparent magnetic susceptibility and first vertical magnetic gradient data. These four datasets were projected onto a 100 m spaced regular grid to assemble 850,000 four-element (multivariate) sample vectors over the study area. Classification of the sample vectors using crisp clustering, based upon the Euclidean distance between sample and class centre, provided a (pseudo-)geology map, or value-added map, displaying the spatial distribution of six different classes in the study area. To assess the quality of the sample allocation, the degree of membership of each sample vector was determined using a posteriori discriminant analysis. Geophysical ground-truth control was essential for allocating geological/geophysical attributes to the six classes. The highest probability of encountering pegmatite bodies is in close vicinity to (magnetic) amphibole schist occurring in areas where depletion of potassium, as an indication of metasomatic processes, is evident from the airborne radiometric data. Clustering has proven to be a fast and effective method for compiling value-added maps from multivariate geophysical datasets. Experience gained in the Alto Ligonha pegmatite field encourages adopting this new methodology for mapping other parts of the Mozambique Fold Belt.
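The crisp, Euclidean-distance clustering described above can be sketched with k-means on the four co-registered grids. The random arrays below are stand-ins for the levelled survey grids, and the paper's own class-centre scheme may differ in detail from k-means:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Stand-in arrays for the four co-registered 100 m grids (Th, K,
# apparent susceptibility, first vertical gradient)
rng = np.random.default_rng(0)
th, k40, susc, vgrad = (rng.normal(size=(500, 400)) for _ in range(4))

X = np.column_stack([g.ravel() for g in (th, k40, susc, vgrad)])
X = StandardScaler().fit_transform(X)       # balance the four variables
labels = KMeans(n_clusters=6, n_init=10).fit_predict(X)
class_map = labels.reshape(th.shape)        # the "pseudo-geology" map
```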
A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids
Boschitsch, Alexander H.; Fenley, Marcia O.
2011-01-01
An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid points is several orders of magnitude less than that required in a conventional lattice grid used in current PBE solvers, thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, the ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous assessment of the solution accuracy; (ii) a pair of low dielectric charged spheres embedded in an ionic solvent, to compute electrostatic interaction free energies as a function of the distance between sphere centers; (iii) surface potentials of proteins, nucleic acids and their larger-scale assemblies such as ribosomes; and (iv) electrostatic solvation free energies and their salt sensitivities – obtained with both the linear and nonlinear Poisson-Boltzmann equations – for a large set of proteins. These latter results along with timings can serve as benchmarks for comparing the performance of different PBE solvers. PMID:21984876
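A 2-D toy of the adaptive Cartesian refinement idea: cells are recursively split near charge sites down to a minimum size, which conveys the flavor of the hierarchical grid the solver builds (the real ACG is 3-D and also adapts to dielectric boundaries; all parameters here are illustrative):

```python
def refine(cell, charges, min_size):
    """Recursively split a square cell (x, y, size) while any charge
    lies inside and the cell is larger than min_size."""
    x, y, s = cell
    inside = [(cx, cy) for cx, cy in charges
              if x <= cx < x + s and y <= cy < y + s]
    if not inside or s <= min_size:
        return [cell]
    half = s / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += refine((x + dx, y + dy, half), inside, min_size)
    return leaves

leaves = refine((0.0, 0.0, 1.0), [(0.30, 0.70), (0.31, 0.72)], 1.0 / 64)
print(len(leaves))   # few cells overall, small ones near the charges
```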
NASA Astrophysics Data System (ADS)
Chu, Chunlei; Stoffa, Paul L.
2012-01-01
Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
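The paper derives implicit operators of arbitrary stencil width; as a simpler illustration of how nonuniform spacing enters finite difference coefficients, here is the explicit second-order first-derivative stencil on three nonuniform points:

```python
import numpy as np

def d1_weights(h1, h2):
    """Second-order weights for f'(x_i) on a nonuniform 3-point stencil
    with h1 = x_i - x_{i-1} and h2 = x_{i+1} - x_i. (This explicit
    stencil only illustrates how nonuniform spacing enters the
    coefficients; it is not the paper's implicit scheme.)"""
    return (-h2 / (h1 * (h1 + h2)),
            (h2 - h1) / (h1 * h2),
            h1 / (h2 * (h1 + h2)))

# Sanity check on f(x) = x**2 (the stencil is exact for quadratics)
x = np.array([0.0, 0.3, 1.0])
w = d1_weights(x[1] - x[0], x[2] - x[1])
assert np.isclose(sum(wi * xi**2 for wi, xi in zip(w, x)), 2 * x[1])
```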
Small Scale Biodiversity of an Alkaline Hot Spring in Yellowstone National Park
NASA Astrophysics Data System (ADS)
Walther, K.; Oiler, J.; Meyer-Dombard, D. R.
2012-12-01
To date, many phylogenetic diversity studies have been conducted in Yellowstone National Park (YNP) [1-7] focusing on the amplification of the 16S rRNA gene and "metagenomic" datasets. However, few reports focus on diversity at small scales. Here, we report on a small scale biodiversity study of sediment and biofilm communities within a confined area of a YNP hot spring, and compare and contrast these communities with other sediment and biofilm communities from previous studies [1-7] and with other sediment and biofilm communities in the same system. Sediment and biofilm samples were collected using a 30 x 50 cm sampling grid divided into 5 x 5 cm squares, which was placed in the outflow channel of "Bat Pool", an alkaline (pH 7.9) hot spring in YNP. Accompanying geochemical data included a full range of spectrophotometry measurements along with major ions, trace elements, and DIC/DOC. In addition, in situ temperature and conductivity arrays were placed within the grid location. The temperature array closest to the source varied between 83-88°C, while the temperature array 40 cm downstream varied between ~83.5-86.5°C. The two conductivity arrays yielded measurements of 5632 μS and 5710 μS, showing little variation within the sampling area. Within the grid space, DO ranged from 0.5-1.33 mg/L, with relatively similar, but slightly lower values down the outflow channel. Sulfide values within the grid ranged from 1020-1671 μg/L, while sulfide values outside of the grid region fluctuated, but generally followed the trend of decreasing from source down the outflow. Despite the relative homogeneity of chemical and physical parameters in the grid space, there was biological diversity in sediments and biofilms at the 5 cm scale. Small scale biodiversity was analyzed by selecting a representative number of samples from within the grid. DNA was extracted and variable regions V3 and V6 (Archaea and Bacteria, respectively) were sequenced with 454 pyrosequencing. The datasets from each of the samples were randomly subsampled and the same number of sequences was taken from each dataset so that the samples could be directly compared. Using the Ribosomal Database Project Pyrosequencing Pipeline (http://rdp.cme.msu.edu/), the sequences were aligned, complete linkage clustering was performed, Shannon and Chao1 indices were calculated, and rarefaction curves were made. The RDP Classifier tool afforded classification in a taxonomical hierarchy, and the samples were compared at the order level to determine the variation of the microbial communities within the sampling grid. Additional alpha and beta diversity indices were also established. By comparing the samples at the order level, it was determined that there is variation within a small sampling area despite similar geochemical and temperature conditions at the time of sampling. This variation is seen in both the sediment and biofilm communities, primarily among Bacteria. [1] Barns, S.M. et al. (1994) PNAS. 91: 1609-1613. [2] Barns, S.M. et al. (1996) PNAS. 93: 9188-9193. [3] Hall, J.R. et al. (2008) AEM. 74(15): 4910-4922. [4] Hugenholtz, P. et al. (1998) JofBac. 180(2): 366-376. [5] Meyer-Dombard, D. R. et al. (2005) Geobio. 3: 211-227. [6] Meyer-Dombard, D.R. et al. (2011) EM. 13(8): 2216-2231. [7] Reysenbach, A.L. et al. (1994) AEM. 60 (6): 2113-2119.
MAGNA (Materially and Geometrically Nonlinear Analysis). Part II. Preprocessor Manual.
1982-12-01
AGRID can accept a virtually arbitrary collection of point coordinates which lie on a surface of interest, and generate a regular grid of mesh points...in the form of a collection of such patches to be translated into an assemblage of biquadratic surface elements (see Subsection 2.1, Figure 2.2...using IMPRESS can be converted for use with the present preprocessor by means of the IMPRINT translator. IMPRINT is a collection of conversion routines
A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems
Ying, Wenjun; Henriquez, Craig S.
2013-01-01
This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with the standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
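For readers unfamiliar with the structured-grid building block the KFBI iteration relies on, here is a generic FFT-based Poisson solver on a periodic square; this is a sketch assuming numpy, not the authors' code, and KFBI additionally corrects the right-hand side at irregular nodes near the embedded boundary.

```python
import numpy as np

def fft_poisson(f, L=2 * np.pi):
    """Solve -laplace(u) = f with periodic BCs on an n x n grid of side L
    by diagonalizing the Laplacian in Fourier space; returns the zero-mean u."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                 # placeholder to avoid dividing by zero
    uhat = np.fft.fft2(f) / k2
    uhat[0, 0] = 0.0               # pin the free constant: zero-mean solution
    return np.real(np.fft.ifft2(uhat))

# Check against u = sin(x) cos(2y), for which -laplace(u) = 5u
n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2 * Y)
err = np.max(np.abs(fft_poisson(5 * u_exact) - u_exact))
print(f"max error: {err:.2e}")     # spectrally accurate for smooth f
```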
Exploration of exposure conditions with a novel wireless detector for bedside digital radiography
NASA Astrophysics Data System (ADS)
Bosmans, Hilde; Nens, Joris; Delzenne, Louis; Marshall, Nicholas; Pauwels, Herman; De Wever, Walter; Oyen, Raymond
2012-03-01
We propose, apply, and validate an optimization scheme for a new wireless CsI based DR detector in combination with a regular mobile X-ray system for bedside imaging applications. Three different grids were tested in this combination. Signal-difference-to-noise ratio (SDNR) was investigated in two ways: using a 1 mm Cu piece in combination with different thicknesses of PMMA, and by means of the CDRAD phantom using 10 images per condition and an automated evaluation method. A Figure of Merit (FOM), namely SDNR²/imparted energy, was calculated for a large range of exposure conditions, without and with a grid in place. Misalignment of the grids was evaluated via the same FOMs. This optimization study was validated with comparative X-ray acquisitions performed on dead bodies. An experienced radiologist scored the quality of several specific aspects for all these exposures. Signal difference to noise ratios measured with the Cu method correlated well with the threshold contrasts from the CDRAD analysis (R² > 0.9). The analysis showed optimal FOM at detector air kerma rates typically used in clinical practice. Lower tube voltages provide a higher FOM than higher voltages, but their practical use depends on the limitations of X-ray tubes, which are linked to patient motion artefacts. The use of high resolution grids should be encouraged, as the FOM increases by 47% at 75 kV. The scores from the visual grading study confirmed the results obtained with the FOM. The switch to (wireless) DR technology for bedside imaging could benefit from devices to improve grid positioning or any scatter reduction technique.
Solar activity and economic fundamentals: Evidence from 12 geographically disparate power grids
NASA Astrophysics Data System (ADS)
Forbes, Kevin F.; St. Cyr, O. C.
2008-10-01
This study uses local (ground-based) magnetometer data as a proxy for geomagnetically induced currents (GICs) to address whether there is a space weather/electricity market relationship in 12 geographically disparate power grids: Eirgrid, the power grid that serves the Republic of Ireland; Scottish and Southern Electricity, the power grid that served northern Scotland until April 2005; Scottish Power, the power grid that served southern Scotland until April 2005; the power grid that serves the Czech Republic; E.ON Netz, the transmission system operator in central Germany; the power grid in England and Wales; the power grid in New Zealand; the power grid that serves the vast proportion of the population in Australia; ISO New England, the power grid that serves New England; PJM, a power grid that over the sample period served all or parts of Delaware, Maryland, New Jersey, Ohio, Pennsylvania, Virginia, West Virginia, and the District of Columbia; NYISO, the power grid that serves New York State; and the power grid in the Netherlands. This study tests the hypothesis that GIC levels (proxied by the time variation of local magnetic field measurements, dH/dt) and electricity grid conditions are related, using Pearson's chi-squared statistic. The metrics of power grid conditions include measures of electricity market imbalances, energy losses, congestion costs, and actions by system operators to restore grid stability. The results of the analysis indicate that real-time market conditions in these power grids are statistically related to the GIC proxy.
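As a sketch of the kind of test involved, the snippet below applies Pearson's chi-squared statistic to a contingency table of hourly observations; the counts are entirely hypothetical, and the thresholds (95th-percentile |dH/dt|, "stress event") are illustrative stand-ins for the study's metrics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical hourly counts: rows split on the GIC proxy (|dH/dt| above or
# below its 95th percentile), columns on grid state (stress event or normal).
table = np.array([[ 40,  400],     # high |dH/dt|: events, no events
                  [160, 8160]])    # low  |dH/dt|: events, no events

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value rejects independence, i.e. grid conditions and the
# geomagnetic proxy are statistically related, as the study reports.
```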
Routine single particle CryoEM sample and grid characterization by tomography
Noble, Alex J; Brasch, Julia; Chase, Jillian; Acharya, Priyamvada; Tan, Yong Zi; Zhang, Zhening; Kim, Laura Y; Scapin, Giovanna; Rapp, Micah; Eng, Edward T; Rice, William J; Cheng, Anchi; Negro, Carl J; Shapiro, Lawrence; Kwong, Peter D; Jeruzalmi, David; des Georges, Amedee; Potter, Clinton S
2018-01-01
Single particle cryo-electron microscopy (cryoEM) is often performed under the assumption that particles are not adsorbed to the air-water interfaces and are embedded in thin, vitreous ice. In this study, we performed fiducial-less tomography on over 50 different cryoEM grid/sample preparations to determine the particle distribution within the ice and the overall geometry of the ice in grid holes. Surprisingly, by studying particles in holes in 3D from over 1000 tomograms, we have determined that the vast majority of particles (approximately 90%) are adsorbed to an air-water interface. The implications of this observation are wide-ranging, with potential ramifications regarding protein denaturation, conformational change, and preferred orientation. We also show that fiducial-less cryo-electron tomography on single particle grids may be used to determine ice thickness, optimal single particle collection areas and strategies, particle heterogeneity, and de novo models for template picking and single particle alignment. PMID:29809143
Technology for Elevated Temperature Tests of Structural Panels
NASA Technical Reports Server (NTRS)
Thornton, E. A.
1999-01-01
A technique for full-field measurement of surface temperature and in-plane strain using a single grid imaging technique was demonstrated on a sample subjected to thermally-induced strain. The technique is based on digital imaging of a sample marked by an alternating line array of La2O2S:Eu(+3) thermographic phosphor and chromium illuminated by a UV lamp. Digital images of this array in unstrained and strained states were processed using a modified spin filter. Normal strain distribution was determined by combining unstrained and strained grid images using a single grid digital moiré technique. Temperature distribution was determined by ratioing images of phosphor intensity at two wavelengths. The combined strain and temperature measurements demonstrated on the thermally heated sample were Δε = ±250 microstrain and ΔT = ±5 K, respectively, with a spatial resolution of 0.8 mm.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
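The TSA ranking itself is compact; a minimal sketch of the classic calculation follows, with synthetic data standing in for the wireless-network observations and a common (not necessarily the authors') selection heuristic.

```python
import numpy as np

def temporal_stability(theta):
    """theta: (n_times, n_points) matrix of soil moisture observations.
    Returns the mean relative difference (MRD) of each point from the grid
    mean and its standard deviation (SDRD); time-stable representative
    points have MRD near zero and small SDRD."""
    grid_mean = theta.mean(axis=1, keepdims=True)   # grid-mean at each time
    rel_diff = (theta - grid_mean) / grid_mean
    return rel_diff.mean(axis=0), rel_diff.std(axis=0, ddof=1)

rng = np.random.default_rng(0)
theta = 0.25 + 0.05 * rng.standard_normal((60, 20))   # synthetic network data
mrd, sdrd = temporal_stability(theta)
best = np.argmin(np.abs(mrd) + sdrd)                  # one common heuristic
print(f"representative point {best}: MRD = {mrd[best]:+.3f}, SDRD = {sdrd[best]:.3f}")
```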
Ashtiani, Dariush; Venugopal, Hari; Belousoff, Matthew; Spicer, Bradley; Mak, Johnson; Neild, Adrian; de Marco, Alex
2018-04-06
Cryo-Electron Microscopy (cryo-EM) has become an invaluable tool for structural biology. Over the past decade, the advent of direct electron detectors and automated data acquisition has established it as a central method in the field. However, challenges remain in the reliable and efficient preparation of samples in a manner that is compatible with high time resolution. The delivery of sample onto the grid is recognized as a critical step in the workflow, as it is a source of variability and loss of material due to the blotting that is usually required. Here, we present a method for sample delivery and plunge freezing based on the use of Surface Acoustic Waves to deploy 6-8 µm droplets to the EM grid. This method minimises the sample dead volume and ensures vitrification within 52.6 ms from the moment the sample leaves the microfluidics chip. We demonstrate a working protocol to minimise the atomised volume, apply it to plunge-freeze three different samples, and provide proof that no damage occurs due to the interaction between the sample and the acoustic waves. Copyright © 2018 Elsevier Inc. All rights reserved.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
Lausch, V; Hermann, P; Laue, M; Bannert, N
2014-06-01
Successive application of negative staining transmission electron microscopy (TEM) and tip-enhanced Raman spectroscopy (TERS) is a new correlative approach that could be used to rapidly and specifically detect and identify single pathogens including bioterrorism-relevant viruses in complex samples. Our objective is to evaluate the TERS-compatibility of commonly used electron microscopy (EM) grids (sample supports), chemicals and negative staining techniques and, if required, to devise appropriate alternatives. While phosphotungstic acid (PTA) is suitable as a heavy metal stain, uranyl acetate, paraformaldehyde in HEPES buffer and alcian blue are unsuitable due to their relatively high Raman scattering. Moreover, the low thermal stability of the carbon-coated pioloform film on copper grids (pioloform grids) negates their utilization. The silicon in the cantilever of the silver-coated atomic force microscope tip used to record TERS spectra suggested that Si-based grids might be employed as alternatives. Of all the evaluated Si-based TEM grids, the silicon nitride (SiN) grid was found to be best suited, with almost no background Raman signals in the relevant spectral range, a low surface roughness, and good particle adhesion properties that could be further improved by glow discharge. Charged SiN grids have excellent particle adhesion properties. The use of these grids in combination with PTA for contrast in the TEM is suitable for subsequent analysis by TERS. The study reports fundamental modifications and optimizations of the negative staining EM method that allow a combination with near-field Raman spectroscopy to acquire a spectroscopic signature from nanoscale biological structures. This should facilitate a more precise diagnosis of single viral particles and other micro-organisms previously localized and visualized in the TEM. © 2014 The Society for Applied Microbiology.
Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence
NASA Astrophysics Data System (ADS)
Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem
2017-04-01
Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community for possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] such that each generation of length scale of turbulence grid elements is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart - typically the blockage ratio could be reduced from, say, 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work has begun on the 3DSGT experimentally using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization located in Gottingen, Germany and also at the Technical University of Catalonia (UPC) in Spain, and numerically using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method to compare the experimental results with, and we are studying different codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions: for example, the number of scale generations, the spacing between frames, the size ratio of grid elements, inflow conditions, etc. We will report upon the first set of findings from the 3DSGT by the time of the conference. Acknowledgements: This work has been supported partly by the EuHIT grant, 'Turbulence Generated by Sparse 3D Multi-Scale Grid (M3SG)', 2017. References: [1] S. Laizet, J. C. Vassilicos. DNS of Fractal-Generated Turbulence. Flow Turbulence Combust 87:673-705, (2011). [2] N. A. Malik. Sparse 3D Multi-Scale Grid Turbulence Generator. USPTO Application no. 14/710,531, Patent Pending, (2015). [3] J. Tellez, M. Gomez, B. Russo, J.M. Redondo. Surface Flow Image Velocimetry (SFIV) for hydraulics applications. 18th Int. Symposium on the Application of Laser Imaging Techniques in Fluid Mechanics, Lisbon, Portugal (2016).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Weilmuenster, Kenneth J.
1993-01-01
As the size and complexity of three-dimensional volume grids increase, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present-day solvers are limited by computational speed and do not combine capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization in one code. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections: the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since it was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).
Implicit finite difference methods on composite grids
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1987-01-01
Techniques for eliminating time lags in the implicit finite-difference solution of partial differential equations are investigated analytically, with a focus on transient fluid dynamics problems on overlapping multicomponent grids. The fundamental principles of the approach are explained, and the method is shown to be applicable to both rectangular and curvilinear grids. Numerical results for sample problems are compared with exact solutions in graphs, and good agreement is demonstrated.
GridMass: a fast two-dimensional feature detection method for LC/MS.
Treviño, Victor; Yañez-Garza, Irma-Luz; Rodriguez-López, Carlos E; Urrea-López, Rafael; Garza-Rodriguez, Maria-Lourdes; Barrera-Saldaña, Hugo-Alberto; Tamez-Peña, José G; Winkler, Robert; Díaz de-la-Garza, Rocío-Isabel
2015-01-01
One of the initial and critical procedures for the analysis of metabolomics data using liquid chromatography and mass spectrometry is feature detection. Feature detection is the process of detecting the boundaries of features on the mass surface from raw data, which consist of detected abundances arranged in a two-dimensional (2D) matrix of mass/charge and elution time. MZmine 2 is one of the leading software environments that provide a full analysis pipeline for these data. However, the feature detection algorithms provided in MZmine 2 are based mainly on the analysis of one dimension at a time. We propose GridMass, an efficient algorithm for 2D feature detection. The algorithm is based on probes landed across the chromatographic space that are moved to find local maxima, providing accurate boundary estimations. We tested GridMass on a controlled marker experiment, on plasma samples, on plant fruits, and on a proteome sample. Compared with other algorithms, GridMass is faster and may achieve comparable or better sensitivity and specificity. As a proof of concept, GridMass has been implemented in Java under the MZmine 2 environment and is available at http://www.bioinformatica.mty.itesm.mx/GridMass and MASSyPup. It has also been submitted to the MZmine 2 developing community. Copyright © 2015 John Wiley & Sons, Ltd.
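A toy version of the probe idea (not the published Java implementation): probes landed on a coarse regular grid hill-climb a 2D intensity surface until each reaches a local maximum, which is how feature apexes can be seeded before boundary estimation.

```python
import numpy as np

def probe_maxima(surface, stride=5):
    """Land probes every `stride` cells and let each climb its 3x3
    neighbourhood until no strictly larger neighbour exists."""
    n, m = surface.shape
    peaks = set()
    for i in range(0, n, stride):
        for j in range(0, m, stride):
            r, c = i, j
            while True:
                r0, r1 = max(r - 1, 0), min(r + 2, n)
                c0, c1 = max(c - 1, 0), min(c + 2, m)
                window = surface[r0:r1, c0:c1]
                dr, dc = np.unravel_index(np.argmax(window), window.shape)
                if window[dr, dc] <= surface[r, c]:
                    break                     # probe reached a local maximum
                r, c = r0 + dr, c0 + dc
            peaks.add((r, c))
    return sorted(peaks)

# Two Gaussian "features" on a mass/charge x elution-time grid
y, x = np.mgrid[0:60, 0:60]
surface = (np.exp(-((x - 15)**2 + (y - 20)**2) / 30.0)
           + 0.8 * np.exp(-((x - 42)**2 + (y - 40)**2) / 20.0))
print(probe_maxima(surface))   # probes converge on the two feature apexes
```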
Davis, Tracy A.; Kulongoski, Justin T.; Belitz, Kenneth
2013-01-01
Groundwater quality in the 48-square-mile Santa Barbara study unit was investigated by the U.S. Geological Survey (USGS) from January to February 2011, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The Santa Barbara study unit was the thirty-fourth study unit to be sampled as part of the GAMA-PBP. The GAMA Santa Barbara study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as those parts of the aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the Santa Barbara study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the Santa Barbara study unit located in Santa Barbara and Ventura Counties, groundwater samples were collected from 24 wells. Eighteen of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and six wells were selected to aid in evaluation of water-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds); constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]); naturally occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and arsenic, chromium, and iron species); and radioactive constituents (radon-222 and gross alpha and gross beta radioactivity). Naturally occurring isotopes (stable isotopes of hydrogen and oxygen in water, stable isotopes of inorganic carbon and boron dissolved in water, isotope ratios of dissolved strontium, tritium activities, and carbon-14 abundances) and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 281 constituents and water-quality indicators were measured. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 12 percent of the wells in the Santa Barbara study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 82 percent of the compounds.
This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 18 grid wells in the Santa Barbara study unit were detected at concentrations less than drinking-water benchmarks. Of the 220 organic and special-interest constituents sampled for at the 18 grid wells, 13 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and non-regulatory health-based benchmarks. In total, VOCs were detected in 61 percent of the 18 grid wells sampled, pesticides and pesticide degradates were detected in 11 percent, and perchlorate was detected in 67 percent. Polar pesticides and their degradates, pharmaceutical compounds, and NDMA were not detected in any of the grid wells sampled in the Santa Barbara study unit. Eighteen grid wells were sampled for trace elements, major and minor ions, nutrients, and radioactive constituents; most detected concentrations were less than health-based benchmarks. Exceptions are one detection of boron greater than the CDPH notification level (NL-CA) of 1,000 micrograms per liter (μg/L) and one detection of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L). Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in three grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in seven grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in four grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in eight grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 17 grid wells, and concentrations in six of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.
Sparsely sampling the sky: Regular vs. random sampling
NASA Astrophysics Data System (ADS)
Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.
2015-09-01
Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.
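The qualitative difference between the two window functions is easy to reproduce with a one-dimensional toy mask (numpy assumed; patch counts and sizes are arbitrary):

```python
import numpy as np

n, patch, n_patches = 4096, 32, 16
rng = np.random.default_rng(1)

regular = np.zeros(n)
for start in range(0, n, n // n_patches):              # evenly spaced patches
    regular[start:start + patch] = 1.0

random_mask = np.zeros(n)
for start in rng.choice(n - patch, n_patches, replace=False):
    random_mask[start:start + patch] = 1.0             # randomly placed patches

def window_power(mask):
    w = np.abs(np.fft.rfft(mask))**2
    return w / w[0]                                    # normalize by the mean mode

wreg, wrand = window_power(regular), window_power(random_mask)
print("regular mask, largest off-zero peak:", wreg[1:].max())
print("random mask,  largest off-zero peak:", wrand[1:].max())
# The regular mask concentrates power in sharp harmonics of the patch
# separation; the random mask spreads comparable power over all scales.
```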
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
NPP-VIIRS DNB-based reallocating subpopulations to mercury in Urumqi city cluster, central Asia
NASA Astrophysics Data System (ADS)
Zhou, X.; Feng, X. B.; Dai, W.; Li, P.; Ju, C. Y.; Bao, Z. D.; Han, Y. L.
2017-02-01
Accurate and up-to-date assignment of population-related environmental matters onto fine grid cells in oasis cities of arid areas remains challenging. We present an approach based on the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) to reallocate population onto a regular finer surface. The population potentially exposed to mercury was reallocated onto a 0.1 x 0.1 km reference grid in the Urumqi city cluster of China’s Xinjiang, central Asia. Monte Carlo modelling indicated that the range of 0.5 to 2.4 million people was reliable. The study highlights that the NPP-VIIRS DNB-based multi-layered, dasymetric, spatial method enhances our ability to remotely estimate the distribution and size of a target population at the street-level scale and has the potential to transform control strategies for epidemiology, public policy, and other socioeconomic fields.
A deep learning-based reconstruction of cosmic ray-induced air showers
NASA Astrophysics Data System (ADS)
Erdmann, M.; Glombitza, J.; Walz, D.
2018-01-01
We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detectors' responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
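A minimal PyTorch sketch of such a network is given below; the 9 x 9 station grid, the two input channels (first-particle arrival time and total signal), and the architecture are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ShowerNet(nn.Module):
    """Toy CNN mapping per-station features on a 9x9 detector grid to the
    cosmic ray arrival direction, expressed as a 3D unit vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # pool local patterns over the array
        )
        self.head = nn.Linear(64, 3)

    def forward(self, x):
        v = self.head(self.features(x).flatten(1))
        return v / v.norm(dim=1, keepdim=True)   # constrain to a unit vector

net = ShowerNet()
events = torch.randn(8, 2, 9, 9)                 # a batch of simulated events
print(net(events).shape)                         # torch.Size([8, 3])
```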
Compressed Sensing in On-Grid MIMO Radar.
Minner, Michael F
2015-01-01
The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances of Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
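For intuition about the recovery step, the sketch below implements iterative soft-thresholding (ISTA) for the generic ℓ1-regularized least-squares problem; this is a standard member of the algorithm family compared in such studies, not the paper's ℓ1-squared nonnegative method, and the Gaussian matrix is a stand-in for the azimuth-delay-Doppler dictionary.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(3)
m, n, k = 64, 256, 4                       # few targets on a discretized grid
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 2.0
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y)
print("recovered support:", np.sort(np.flatnonzero(np.abs(x_hat) > 0.5)))
print("true support:     ", np.sort(np.flatnonzero(x_true)))
```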
A dynamic analysis of rotary combustion engine seals
NASA Technical Reports Server (NTRS)
Knoll, J.; Vilmann, C. R.; Schock, H. J.; Stumpf, R. P.
1984-01-01
Real-time work cell pressures are incorporated into a dynamic analysis of the gas sealing grid in Rotary Combustion Engines. The analysis, which utilizes only first-principles concepts, accounts for apex seal separation from the trochoidal bore, apex seal shifting between the sides of its restraining channel, and apex seal rotation within the restraining channel. The results predict that apex seals do separate from the trochoidal bore and shift between the sides of their channels. The results also show that these two motions are regularly initiated by a seal rotation. The predicted motion of the apex seals compares favorably with experimental results. Frictional losses associated with the sealing grid are also calculated and compare well with measurements obtained in a similar engine. A comparison of frictional losses when using steel and carbon apex seals has also been made, as well as of friction losses for single and dual side sealing.
NASA Technical Reports Server (NTRS)
Hammond, Ernest C., Jr.; Peters, Kevein; Boone, Kevin
1995-01-01
The Laboratory for Astronomy and Solar Physics currently sends rockets and satellites, and in the near future will send shuttle flights, to the upper reaches of the Earth's atmosphere, where they are subjected to the atomic particles and electromagnetic radiation produced by the Sun and other cosmic radiation. It is therefore appropriate to examine the effect of neutrons, gamma rays, beta particles, and X-rays on the film currently being used by the Laboratory for current and future research requirements. It is also hoped that by examining these particles and their effects we will have simulated the space environment of the rockets, satellites, and shuttles. Several samples of the IIaO film were exposed to a neutron howitzer with a source strength of approximately 10^6 neutrons/steradian. We exposed several samples of the film to a 10 second blast of neutrons in both metal and plastic containers; the metal containers exhibited higher density readings, which indicated the possibility of some secondary nuclear interactions between the neutrons and the aluminum container. The plastic container showed some variations at the higher densities. Exposure of the samples of IIaO film to a neutron beam of approximately 10^6 neutrons per steradian for eight minutes produced approximately a 13% difference in the density readings of the dark density grids. It is noticeable that at the lighter density grids the neutrons have minimal effects, but on the whole the darker density grids of the eight-minute exposed IIaO film differed from the control by 7.1%. Further analysis is anticipated by increasing the exposure time. Two sets of film were exposed to a beta source in a plastic container. The beta source was placed at the bottom so that the rays striking the film would form a cone, for a period of seven days. In the films designated 4a and 4b, a dramatic increase in the grid densities was observed. The attenuation of beta particles due to the presence of air was observed. The darker density grids, whose positions were the furthest from the beta source, displayed minimal fluctuations as compared with the control. It is suspected that the orientation of the film in the canister with the beta source is the key factor responsible for the dramatic increases of the lighter density grids. Emulsions 3a and 3b, exposed for a period of six days with the grid orientation reversed, produced substantial differences in the darker grids, as shown in the graphs. There is a great deal of fluctuation in this sample between the beta-exposed density grids and the control density grids. The lighter density grids whose orientations were reversed displayed minimal fluctuations due to the presence of this beta source and the attenuation that is taking place.
Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on speed of convergence is documented through several example test cases.
A weakly-compressible Cartesian grid approach for hydrodynamic flows
NASA Astrophysics Data System (ADS)
Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.
2017-11-01
The present article aims at proposing an original strategy to solve hydrodynamic flows. In the introduction, the motivations for this strategy are developed. It aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR compatible treatment. The method proposed uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
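The smooth imposition can be pictured with a generic regularized indicator built from a signed distance function; the tanh profile and its width below are illustrative assumptions, not the article's specific regularization function.

```python
import numpy as np

def smooth_indicator(phi, eps):
    """Regularized solid indicator from a signed distance phi (negative
    inside the body): ~1 in the solid, ~0 in the fluid, smooth over ~eps."""
    return 0.5 * (1.0 - np.tanh(phi / eps))

x = np.linspace(-1.0, 1.0, 201)
phi = np.abs(x) - 0.4                  # signed distance to a slab, half-width 0.4
chi = smooth_indicator(phi, eps=0.05)
# chi can weight a penalization/forcing term so the immersed boundary acts
# smoothly on the Cartesian grid instead of as a sharp voxelized step.
print(chi[0], chi[100], chi[-1])       # ~0 in the fluid, ~1 at the slab centre
```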
Grid-cell-based crop water accounting for the famine early warning system
Verdin, J.; Klaver, R.
2002-01-01
Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996–97 and 1997–98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996–97 season in Zimbabwe. Agreement at a useful level (r = 0.80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline.
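A single-bucket sketch of the per-grid-cell accounting is shown below with hypothetical dekadal inputs; operational WRSI implementations additionally use soil water-holding-capacity maps, effective rainfall, and phenology-dependent crop coefficients.

```python
import numpy as np

def wrsi(rain, water_req, soil_capacity=100.0):
    """Simplified water requirement satisfaction index for one grid cell:
    a soil bucket is recharged by rainfall and drawn down by the crop's
    dekadal requirement; WRSI = 100 * supplied / required."""
    soil, supplied = 0.0, 0.0
    for r, wr in zip(rain, water_req):
        soil = min(soil + r, soil_capacity)   # recharge; excess runs off
        take = min(wr, soil)                  # crop takes what is available
        soil -= take
        supplied += take
    return 100.0 * supplied / np.sum(water_req)

# Hypothetical dekadal series (mm) over a maize season
rain      = [30, 45, 10,  0,  5, 60, 40, 30, 10,  5]
water_req = [15, 20, 30, 40, 45, 45, 40, 30, 20, 10]
print(f"WRSI = {wrsi(rain, water_req):.0f}")  # 100 would mean no water limitation
```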
NASA Astrophysics Data System (ADS)
Korobova, Elena; Romanov, Sergey; Beriozkin, Victor; Dogadkin, Nikolay
2016-04-01
The main goal of the study performed in 2014-2015 at the test site located in the abandoned zone of the Iput river basin was to study detailed patterns of Cs-137 redistribution along the terrace slope and the adjacent floodplain depression almost 30 years after the Chernobyl accident. Cs-137 surface activity was measured with the help of a modified field gamma-spectrometer, Violinist III (USA), in a 2 m x 2 m grid within the test plot sized 10 m x 24 m. Gamma-spectrometry was accompanied by a topographical survey. Cs-137 depth distribution was studied by soil core sampling in increments of 2 cm and 5 cm down to 40 cm depth. Cs-137 activity in soil samples was measured in laboratory conditions with a Nokia gamma-spectrometer. The results showed a distinct natural dissimilarity of Cs-137 surface activity within the undisturbed soil of the slope. Cs-137 depth migration in successive soil cores showed different patterns correlated with the position in the relief. In particular cores, the Cs-137 depth variation correlated with the water regime, which shows that the processes of secondary redistribution of Cs-137 along the slope depend upon water migration. The finding is important for understanding the regularities in patterns of radiocesium spatial distribution.
Stackable differential mobility analyzer for aerosol measurement
Cheng, Meng-Dawn [Oak Ridge, TN]; Chen, Da-Ren [Creve Coeur, MO]
2007-05-08
A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at a plurality of different distances along its length. A volume between the first and the second electrode or grid between the inlet or injection slit and a distal one of the plurality of sampling outlets forms a classifying region, the first and second electrodes for charging to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and classifies different size ranges of charged particles based on electric mobility simultaneously.
Testing & Validating: 3D Seismic Travel Time Tomography (Detailed Shallow Subsurface Imaging)
NASA Astrophysics Data System (ADS)
Marti, David; Marzan, Ignacio; Alvarez-Marron, Joaquina; Carbonell, Ramon
2016-04-01
A detailed, full three-dimensional P-wave seismic velocity model was constrained by a high-resolution seismic tomography experiment. A regular and dense grid of shots and receivers was used to image a 500 x 500 x 200 m volume of the shallow subsurface. Ten GEODEs, providing a 240-channel recording system, and a 250 kg weight drop were used for the acquisition. The recording geometry consisted of a 10 x 20 m geophone grid and a 20 x 20 m staggered source grid, for a total of 1200 receivers and 676 source points. The study area is located within the Iberian Meseta, in Villar de Cañas (Cuenca, Spain). The lithological/geological target was a Neogene sedimentary sequence formed, from bottom to top, by a transition from gypsum to siltstones. The main objectives were to resolve the underground structure (contacts/discontinuities) and to constrain the 3D geometry of the lithology (possible cavities, faults/fractures). These targets were achieved by mapping the 3D distribution of the physical properties (P-wave velocity). The regularly spaced, dense acquisition grid forced the survey to be acquired in different stages and under a variety of weather conditions; therefore, careful quality control was required. More than half a million first arrivals were inverted to provide a 3D Vp velocity model that reached depths of 120 m in the areas with the highest ray coverage. An extended borehole campaign, which included borehole geophysical measurements in some wells, provided unique tight constraints on the lithology and a validation scheme for the tomographic results. The final image reveals a laterally variable structure consisting of four different lithological units. In this methodological validation test, travel-time tomography demonstrates a high capacity for imaging in detail the lithological contrasts of complex structures located at very shallow depths.
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
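A hedged sketch of the deterministic construction: each gap is set to the (sine-weighted) average that Poisson-gap sampling would produce, and the gap scale is tuned by bisection to approximately meet the sample budget. The weighting and scaling details below are illustrative, not the paper's exact gap equation.

```python
import numpy as np

def deterministic_gap_schedule(n_grid, n_samples):
    """Deterministic sine-weighted gap sampling on a Nyquist grid: gaps grow
    toward the tail of the grid; `scale` is bisected to hit the budget."""
    def schedule(scale):
        points, pos = [], 0
        while pos < n_grid:
            points.append(pos)
            theta = 0.5 * np.pi * pos / n_grid        # weight gaps toward the tail
            pos += 1 + int(round(scale * np.sin(theta)))
        return points

    lo, hi = 0.0, float(n_grid)
    for _ in range(60):                               # bisection on the gap scale
        mid = 0.5 * (lo + hi)
        if len(schedule(mid)) > n_samples:
            lo = mid
        else:
            hi = mid
    return schedule(0.5 * (lo + hi))

pts = deterministic_gap_schedule(n_grid=256, n_samples=64)
print(len(pts), pts[:6], pts[-3:])   # dense early rows, sparse late rows
```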
A method for grounding grid corrosion rate prediction
NASA Astrophysics Data System (ADS)
Han, Juan; Du, Jingyi
2017-06-01
Prediction of grounding grid corrosion involves a variety of factors and is complicated by uncertainty in the data acquisition process. We propose an effective grounding grid corrosion rate prediction model that combines EAHP (extended AHP) with the fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each corrosion factor of the grounding grid; since different sample classes contribute differently to the corrosion rate, the principle of fuzzy nearness is then combined to predict the corrosion rate. The application results show that the model can better capture data variation and thus improve validity, achieving higher prediction precision.
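The AHP weighting step can be illustrated with the standard principal-eigenvector computation; the 4 x 4 judgment matrix below (Saaty's 1-9 scale over hypothetical corrosion factors) is invented for illustration, and the authors' extended AHP differs in detail.

```python
import numpy as np

# Hypothetical pairwise judgment matrix over four corrosion factors:
# soil resistivity, moisture, pH, stray current (Saaty 1-9 scale).
J = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 3.0, 1/2],
              [1/5, 1/3, 1.0, 1/4],
              [1/2, 2.0, 4.0, 1.0]])

vals, vecs = np.linalg.eig(J)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                          # factor weights (sum to 1)
ci = (vals.real[k] - J.shape[0]) / (J.shape[0] - 1)   # consistency index
print("weights:", np.round(w, 3), " CI:", round(ci, 3))
```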
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging
Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.
2012-01-01
Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, "CS+GRAPPA," to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS and averaged the results to get a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
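The decomposition step itself is easy to sketch (numpy assumed; the acceleration factor and subset count are illustrative): regularly undersampled phase-encode indices are shuffled into subsets, each of which alone forms an irregular, CS-friendly pattern.

```python
import numpy as np

def decompose(sample_lines, n_subsets, rng):
    """Split a set of equidistant k-space line indices into random subsets;
    each subset is reconstructed with CS and the results are averaged."""
    return np.array_split(rng.permutation(sample_lines), n_subsets)

rng = np.random.default_rng(7)
acquired = np.arange(0, 256, 2)           # R = 2 equidistant phase encodes
for s in decompose(acquired, 2, rng):
    print(len(s), np.sort(s)[:6], "...")  # irregular spacing within each subset
```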
Shelton, Jennifer L.; Fram, Miranda S.; Munday, Cathy M.; Belitz, Kenneth
2010-01-01
Groundwater quality in the approximately 25,500-square-mile Sierra Nevada study unit was investigated in June through October 2008, as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Sierra Nevada study was designed to provide statistically robust assessments of untreated groundwater quality within the primary aquifer systems in the study unit, and to facilitate statistically consistent comparisons of groundwater quality throughout California. The primary aquifer systems (hereinafter, primary aquifers) are defined by the depth of the screened or open intervals of the wells listed in the California Department of Public Health (CDPH) database of wells used for public and community drinking-water supplies. The quality of groundwater in shallower or deeper water-bearing zones may differ from that in the primary aquifers; shallow groundwater may be more vulnerable to contamination from the surface. In the Sierra Nevada study unit, groundwater samples were collected from 84 wells (and springs) in Lassen, Plumas, Butte, Sierra, Yuba, Nevada, Placer, El Dorado, Amador, Alpine, Calaveras, Tuolumne, Madera, Mariposa, Fresno, Inyo, Tulare, and Kern Counties. The wells were selected on two overlapping networks by using a spatially-distributed, randomized, grid-based approach. The primary grid-well network consisted of 30 wells, one well per grid cell in the study unit, and was designed to provide statistical representation of groundwater quality throughout the entire study unit. The lithologic grid-well network is a secondary grid that consisted of the wells in the primary grid-well network plus 53 additional wells and was designed to provide statistical representation of groundwater quality in each of the four major lithologic units in the Sierra Nevada study unit: granitic, metamorphic, sedimentary, and volcanic rocks. One natural spring that is not used for drinking water was sampled for comparison with a nearby primary grid well in the same cell. Groundwater samples were analyzed for organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (N-nitrosodimethylamine [NDMA] and perchlorate), naturally occurring inorganic constituents (nutrients, major ions, total dissolved solids, and trace elements), and radioactive constituents (radium isotopes, radon-222, gross alpha and gross beta particle activities, and uranium isotopes). Naturally occurring isotopes and geochemical tracers (stable isotopes of hydrogen and oxygen in water, stable isotopes of carbon, carbon-14, strontium isotopes, and tritium), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. Three types of quality-control samples (blanks, replicates, and samples for matrix spikes) each were collected at approximately 10 percent of the wells sampled for each analysis, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection, handling, and analytical procedures was not a significant source of bias in the data for the groundwater samples. 
Differences between replicate samples were within acceptable ranges, with few exceptions. Matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, groundwater typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory benchmarks apply to finished drinking water that is served to the consumer, not to untreated groundwater.
Optimal Wind Energy Integration in Large-Scale Electric Grids
NASA Astrophysics Data System (ADS)
Albaijat, Mohammad H.
The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, any expansion of transmission line capacity must be evaluated against methods that ensure optimal electric grid operation: the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much" transmission line capacity to add, and at "which" voltage level. Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors whose main interest is to generate revenue by building new transmission lines. Adding new transmission capacity helps relieve transmission system congestion, creates profit for investors who rent out their transmission capacity, and provides cheaper electricity for end users. We propose a hybrid method, based on a heuristic and a deterministic method, to determine new transmission line additions and increased transmission capacity. Renewable energy resources (RES) have zero operating cost, which makes them very attractive for generation companies and market participants. In addition, RES have zero carbon emissions, which helps relieve concerns about the environmental impacts of electricity generation. RES include wind, solar, hydro, biomass, and geothermal. By 2030, the expectation is that more than 30% of electricity in the U.S. will come from RES. One major contributor to RES generation will be wind energy resources (WES), and WES will be an important component of the future generation portfolio. However, WES exhibit high intermittency and volatility. Because of the expected high WES penetration and the nature of such resources, researchers have focused on studying the effects of these resources on electric grid operation and adequacy from different aspects. Additionally, current market operations of electric grids add another complication to integrating RES (specifically WES). Mandates by market rules and long-term analysis of renewable penetration in large-scale electric grids have also been the focus of researchers in recent years.
We advocate a method for studying high-wind-resource penetration in large-scale electric grid operations. A PMU is a global positioning system (GPS)-based device that provides immediate and precise measurements of the voltage angle in a high-voltage transmission system. PMUs can update the status of a transmission line and related measurements (e.g., voltage magnitude and voltage phase angle) frequently: a PMU can provide 30 samples of measurements per second, compared to traditional systems (e.g., the supervisory control and data acquisition [SCADA] system), which provide one sample every 2 to 5 seconds. Because PMUs provide more measurement data samples, they can improve electric grid reliability and observability. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.
2017-10-01
We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin, not only to test the framework thoroughly but also to perform a first-ever benchmark of flexible and regular grids on a large scale. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, when benchmarking inundation extent between DFM and LFP over the entire study area, a critical success index of 0.46 was obtained, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique contribute to deviations in simulated inundation extent, as we control for model forcing and boundary conditions. This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed as open access and easily extendable, and we thus hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, this would capture more locally relevant processes and allow for more robust model inter-comparison, benchmarking, and ensemble simulations of flood hazard on a large scale.
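To make the coupling pattern concrete, here is a minimal sketch of the kind of time loop such a framework runs: the hydrological model advances one step, its runoff is regridded onto the hydrodynamic mesh, and the hydrodynamic model advances over the same interval. All object names, methods, and variable names below are illustrative stand-ins, not GLOFRIM's actual interface.

```python
def couple(hydrology, hydrodynamics, mapping, t_end, dt):
    # Hypothetical coupling loop. `mapping` is a (sparse) regridding matrix
    # from hydrological cells to hydrodynamic mesh nodes, built once from
    # the geometries of the two grids.
    t = 0.0
    while t < t_end:
        hydrology.update(dt)                        # e.g. a PCR-GLOBWB step
        runoff = hydrology.get_value("runoff")      # per-cell runoff volumes
        lateral = mapping @ runoff                  # regrid cells -> mesh
        hydrodynamics.set_value("lateral_inflow", lateral)
        hydrodynamics.update(dt)                    # e.g. a DFM or LFP step
        t += dt
```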
Mathany, Timothy M.; Wright, Michael T.; Beuttel, Brandon S.; Belitz, Kenneth
2012-01-01
Groundwater quality in the 12,103-square-mile Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts (CLUB) study unit was investigated by the U.S. Geological Survey (USGS) from December 2008 to March 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CLUB study unit was the twenty-eighth study unit to be sampled as part of the GAMA-PBP. The GAMA CLUB study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer systems, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer systems (hereinafter referred to as primary aquifers) are defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the CLUB study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifers; shallow groundwater may be more vulnerable to surficial contamination. In the CLUB study unit, groundwater samples were collected from 52 wells in 3 study areas (Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts) in San Bernardino, Riverside, Kern, San Diego, and Imperial Counties. Forty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and three wells were selected to aid in evaluation of water-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally-occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and species of inorganic chromium), and radioactive constituents (radon-222, radium isotopes, and gross alpha and gross beta radioactivity). Naturally-occurring isotopes (stable isotopes of hydrogen, oxygen, boron, and strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance) and dissolved noble gases also were measured to help identify the sources and ages of sampled groundwater. In total, 223 constituents and 12 water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 10 percent of the wells in the CLUB study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Median matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 85 percent of the compounds. 
This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 49 grid wells were detected at concentrations less than drinking-water benchmarks. In addition, all detections of organic constituents from the CLUB study-unit grid-well samples were less than health-based benchmarks. In total, VOCs were detected in 17 of the 49 grid wells sampled (approximately 35 percent), pesticides and pesticide degradates were detected in 5 of the 47 grid wells sampled (approximately 11 percent), and perchlorate was detected in 41 of 49 grid wells sampled (approximately 84 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells, and radioactive constituents were sampled for at 23 grid wells; most detected concentrations were less than health-based benchmarks. Exceptions in the grid-well samples include seven detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L); four detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L; six detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L; two detections of uranium greater than the MCL-US of 30 μg/L; nine detections of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L); one detection of nitrite plus nitrate (NO2-+NO3-), as nitrogen, greater than the MCL-US of 10 mg/L; and four detections of gross alpha radioactivity (72-hour count), and one detection of gross alpha radioactivity (30-day count), greater than the MCL-US of 15 picocuries per liter. Results for constituents with non-regulatory benchmarks set for aesthetic concerns showed that a manganese concentration greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 50 μg/L was detected in one grid well. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were detected in three grid wells, and one of these wells also had a concentration that was greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in six grid wells. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 20 grid wells, and concentrations in 2 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.
Impacts of Marcellus Shale Natural Gas Production on Regional Air Quality
NASA Astrophysics Data System (ADS)
Swarthout, R.; Russo, R. S.; Zhou, Y.; Mitchell, B.; Miller, B.; Lipsky, E. M.; Sive, B. C.
2012-12-01
Natural gas is a clean-burning alternative to other fossil fuels, producing lower carbon dioxide (CO2) emissions during combustion. Gas deposits located within shale rock or tight sand formations are difficult to access using conventional drilling techniques. However, horizontal drilling coupled with hydraulic fracturing is now widely used to enhance natural gas extraction. Potential environmental impacts of these practices are currently being assessed because of the rapid expansion of natural gas production in the U.S. Natural gas production has contributed to the deterioration of air quality in several regions, such as in Wyoming and Utah, that were near or downwind of natural gas basins. We conducted a field campaign in southwestern Pennsylvania on 16-18 June 2012 to investigate the impact of gas production operations in the Marcellus Shale on regional air quality. A total of 235 whole air samples were collected in 2-liter electropolished stainless-steel canisters throughout southwestern Pennsylvania in a regular grid pattern that covered an area of approximately 8500 square km. Day and night samples were collected at each grid point, and additional samples were collected near active wells, flaring wells, fluid retention reservoirs, transmission pipelines, and a processing plant to assess the influence of different stages of the gas production operation on emissions. The samples were analyzed at Appalachian State University for methane (CH4), CO2, C2-C10 nonmethane hydrocarbons (NMHCs), C1-C2 halocarbons, C1-C5 alkyl nitrates and selected reduced sulfur compounds. In-situ measurements of ozone (O3), CH4, CO2, nitric oxide (NO), total reactive nitrogen (NOy), formaldehyde (HCHO), and a range of volatile organic compounds (VOCs) were carried out at an upwind site and a site near active gas wells using a mobile lab. Emissions associated with gas production were observed throughout the study region. Elevated mixing ratios of CH4 and CO2 were observed in the southwest and northeast portions of the study area, indicating multiple emission sources. We also present comparisons of VOC fingerprints observed in the Marcellus Shale to our previous observations of natural gas emissions from the Denver-Julesburg Basin in northeast Colorado to identify tracers for these different natural gas sources.
ARC SDK: A toolbox for distributed computing and data applications
NASA Astrophysics Data System (ADS)
Skou Andersen, M.; Cameron, D.; Lindemann, J.
2014-06-01
Grid middleware suites provide tools to perform the basic tasks of job submission and retrieval and data access; however, these tools tend to be low-level, operating on individual jobs or files and lacking in higher-level concepts. User communities therefore generally develop their own application-layer software catering to their specific communities' needs on top of the Grid middleware. It is thus important for the Grid middleware to provide a friendly, well-documented, and simple-to-use interface for the applications to build upon. The Advanced Resource Connector (ARC), developed by NorduGrid, provides a Software Development Kit (SDK) which enables applications to use the middleware for job and data management. This paper presents the architecture and functionality of the ARC SDK along with an example graphical application developed with the SDK. The SDK consists of a set of libraries accessible through Application Programming Interfaces (API) in several languages. It contains extensive documentation and example code and is available on multiple platforms. The libraries provide generic interfaces and rely on plugins to support a given technology or protocol, and this modular design makes it easy to add a new plugin if the application requires supporting additional technologies. The ARC Graphical Clients package is a graphical user interface built on top of the ARC SDK and the Qt toolkit, and it is presented here as a fully functional example of an application. It provides a graphical interface to enable job submission and management at the click of a button, and allows data on any Grid storage system to be manipulated using a visual file system hierarchy, as if it were a regular file system.
NASA Astrophysics Data System (ADS)
Sargent, Benjamin A.; Srinivasan, S.; Meixner, M.
2011-02-01
To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the "Grid of Red supergiant and Asymptotic giant branch star ModelS" (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10^3 to 10^6 L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10^-4 to 26. From an initial grid of ~1200 2Dust models, we create a larger grid of ~69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.
A grid-embedding transonic flow analysis computer program for wing/nacelle configurations
NASA Technical Reports Server (NTRS)
Atta, E. H.; Vadyak, J.
1983-01-01
An efficient grid-interfacing zonal algorithm was developed for computing the three-dimensional transonic flow field about wing/nacelle configurations. The algorithm uses the full-potential formulation and the AF2 approximate factorization scheme. The flow field solution is computed using a component-adaptive grid approach in which separate grids are employed for the individual components in the multi-component configuration, where each component grid is optimized for a particular geometry such as the wing or nacelle. The wing and nacelle component grids are allowed to overlap, and flow field information is transmitted from one grid to another through the overlap region using trivariate interpolation. This report presents a discussion of the computational methods used to generate both the wing and nacelle component grids, the technique used to interface the component grids, and the method used to obtain the inviscid flow solution. Computed results and correlations with experiment are presented. Also presented are discussions on the organization of the wing grid generation (GRGEN3) and nacelle grid generation (NGRIDA) computer programs, the grid interface (LK) computer program, and the wing/nacelle flow solution (TWN) computer program. Descriptions of the respective subroutines, definitions of the required input parameters, a discussion on interpretation of the output, and sample cases illustrating application of the analysis are provided for each of the four computer programs.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data
Dazard, Jean-Eudes; Rao, J. Sunil
2012-01-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
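For orientation, the simplest member of the estimator family the abstract compares against, a common-value variance-shrinkage estimator, can be sketched in a few lines. This is not the MVR clustering procedure itself, and the pooling weight nu is an arbitrary illustrative choice.

```python
import numpy as np

def common_value_shrinkage(x, nu=10.0):
    # x: (n_samples, n_variables) with n_variables >> n_samples.
    # Shrinks each variable-wise sample variance toward the pooled mean
    # variance; nu (hypothetical) sets the strength of the pooling.
    s2 = x.var(axis=0, ddof=1)             # noisy variable-wise variances
    n = x.shape[0]
    w = (n - 1.0) / (n - 1.0 + nu)         # weight on the individual estimate
    return w * s2 + (1.0 - w) * s2.mean()  # regularized variance estimates
```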
Chen, Ying; Pham, Tuan D
2013-05-15
We apply for the first time the sample entropy (SampEn) and regularity dimension model for measuring signal complexity to quantify the structural complexity of the brain on MRI. The concept of the regularity dimension is based on the theory of chaos for studying nonlinear dynamical systems, where power laws and entropy measures are adopted to develop the regularity dimension for modeling a mathematical relationship between the frequencies with which information about signal regularity changes across various scales. The sample entropy and regularity dimension of MRI-based brain structural complexity are computed for elderly adults with early Alzheimer's disease (AD) and age- and gender-matched non-demented controls, as well as for a wide range of ages from young people to elderly adults. A significantly higher global cortical structure complexity is detected in AD individuals (p<0.001). Increases in SampEn and the regularity dimension are also found to accompany aging, which might indicate an age-related exacerbation of cortical structural irregularity. The provided model can potentially be used as an imaging biomarker for early prediction of AD and age-related cognitive decline. Copyright © 2013 Elsevier B.V. All rights reserved.
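Sample entropy itself is a standard quantity, so a brute-force sketch is easy to give; the defaults m = 2 and r = 0.2 SD are the usual conventions, not values taken from this paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn: -log of the conditional probability that two subsequences
    # matching for m points (Chebyshev distance <= tol) also match for
    # m + 1 points. Memory is O(N^2); fine for short signals.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def pair_count(length):
        tmpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(tmpl[:, None, :] - tmpl[None, :, :]).max(axis=2)
        return (d <= tol).sum() - len(tmpl)   # exclude self-matches
    return -np.log(pair_count(m + 1) / pair_count(m))
```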
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking
Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun
2017-01-01
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sampling strategy to alleviate the redundancies in training samples caused by the cyclical shifts and to eliminate the inconsistencies between training samples and detection samples, and introduces space anisotropic regularization to constrain the correlation filter, alleviating drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed to obtain robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs). PMID:29231876
Zare, Mohammad Reza; Mostajaboddavati, Mojtaba; Kamali, Mahdi; Tari, Marziyeh; Mosayebi, Sanaz; Mortazavi, Mohammad Seddigh
2015-03-15
This study aims to establish a managed sampling plan for rapid estimation of natural radionuclide diffusion along the northern coast of the Oman Sea. First, natural radioactivity was analysed in 36 high-volume surface water samples using portable high-resolution gamma-ray spectrometry. Second, the oceanic currents along the northern coast were investigated. Then, the third-generation spectral SWAN model was utilized to simulate wave parameters. The direction of natural radioactivity propagation was coupled with the prevailing wave vectors and oceanic current directions that any marine pollution would face; these last two factors contribute to the increase or decrease of pollution in each grid. The results indicated that the natural radioactivity between grids 8600 and 8604 gathers in grid 8600, while that between grids 8605 and 8608 propagates toward the middle part of the Oman Sea. Copyright © 2014 Elsevier Ltd. All rights reserved.
Numerical modelling of needle-grid electrodes for negative surface corona charging system
NASA Astrophysics Data System (ADS)
Zhuang, Y.; Chen, G.; Rotaru, M.
2011-08-01
Surface potential decay measurement is a simple and low-cost tool to examine the electrical properties of insulation materials. During the corona charging stage, a needle-grid electrode system is often used to achieve uniform charge distribution on the surface of the sample. In this paper, a model using COMSOL Multiphysics has been developed to simulate the gas discharge. A well-known hydrodynamic drift-diffusion model was used. The model consists of a set of continuity equations accounting for the movement, generation and loss of charge carriers (electrons, positive and negative ions), coupled with Poisson's equation to take into account the effect of space and surface charges on the electric field. Four models with the grid electrode in different positions and several mesh sizes are compared with a model that has only the needle electrode. The results for impulse current and surface charge density on the sample clearly show the effect of the extra grid electrode at various positions.
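For reference, the hydrodynamic drift-diffusion model invoked here is commonly written as continuity equations for electrons and for positive and negative ions, coupled to Poisson's equation. This is the generic textbook form; the paper's exact source terms and boundary treatment may differ:

\[
\frac{\partial n_e}{\partial t} + \nabla \cdot \left( -n_e \mu_e \mathbf{E} - D_e \nabla n_e \right) = S_e, \qquad
\frac{\partial n_\pm}{\partial t} + \nabla \cdot \left( \pm n_\pm \mu_\pm \mathbf{E} - D_\pm \nabla n_\pm \right) = S_\pm,
\]
\[
\nabla \cdot \left( \varepsilon \nabla \phi \right) = -e \left( n_+ - n_- - n_e \right), \qquad \mathbf{E} = -\nabla \phi,
\]

where the n are number densities, μ mobilities, D diffusion coefficients, and the S source/sink terms for ionization, attachment, and recombination.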
Folding Proteins at 500 ns/hour with Work Queue.
Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2012-10-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants, called Accelerated Weighted Ensemble Dynamics (AWE), for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, and grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
[Poliomyelitis case surveillance data management in Burkina Faso].
Drabo, Koiné Maxime; Nana, Félicité; Kouassi, Kouassi Lazare; Konfé, Salifou; Hien, Hervé; Saizonou, Jacques; Ouedraogo, Tinoaga Laurent
2015-01-01
The global initiative for poliomyelitis eradication can only remain relevant if survey systems are regularly assessed. In order to identify shortcomings and to propose improvements, the data collection and transmission during case investigation were assessed in the Banfora health district in Burkina Faso. The survey targeted six (6) primary health centres, the district laboratory and the national laboratory, all involved in the poliomyelitis surveillance system. Data from registers, forms documenting suspected cases, stool sample forms and weekly reports were collected by means of a data grid. Data from actors involved in the poliomyelitis case investigation system were collected by means of an individual questionnaire. The reactivity of investigating suspected cases was satisfactory, with a median alert questionnaire notification time of 18 hours. The completeness of the reporting system was satisfactory. Nevertheless, the promptness of data management by primary health centres and the national laboratory remained unsatisfactory. Evaluation of data management revealed logistic and organizational shortcomings. The overall efficacy of poliomyelitis surveillance could be improved by using management tools for laboratory supplies, collecting data related to the homes of suspected cases and implementing a cold chain maintenance plan.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H^3-regular problem as well as an anisotropic problem, are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
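The extrapolation step rests on the classical two-grid Richardson formula: if the FE solution converges at order p, the solutions on grids of spacing 2h and h combine into a higher-order approximation,

\[
\tilde{u} = \frac{2^{p} u_h - u_{2h}}{2^{p} - 1} = \frac{4 u_h - u_{2h}}{3} \quad (p = 2),
\]

which the paper then transfers to the next finer grid with quadratic FE interpolation to serve as the JCG initial guess. The formula above is the generic identity, not the paper's full operator.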
Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.
2017-01-01
The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computations from scratch after resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job Scheduling with Fault Tolerance in Grid Computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented Fault Tolerance scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
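The checkpointing idea is easy to isolate: persist the job state at a fixed interval so that a resource failure costs at most one interval of recomputation. A minimal sketch, with hypothetical task and file names:

```python
import pickle

def run_with_checkpoints(step_fn, state, steps, path="job.ckpt", every=10):
    # Save (step, state) every `every` steps; after a failure the job
    # resumes from the last checkpoint instead of restarting from scratch.
    for step in range(steps):
        state = step_fn(state)
        if (step + 1) % every == 0:
            with open(path, "wb") as f:
                pickle.dump((step + 1, state), f)
    return state

def resume(path="job.ckpt"):
    # Roll back to the most recently saved state.
    with open(path, "rb") as f:
        return pickle.load(f)
```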
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Soh, Woo-Yung; Yoon, Seokkwan
1989-01-01
A finite-volume lower-upper (LU) implicit scheme is used to simulate an inviscid flow in a turbine cascade. This approximate factorization scheme requires only the inversion of sparse lower and upper triangular matrices, which can be done efficiently without extensive storage. As an implicit scheme it allows a large time step to reach the steady state. An interactive grid generation program (TURBO), which is being developed, is used to generate grids. This program uses the control point form of algebraic grid generation, which uses a sparse collection of control points from which the shape and position of coordinate curves can be adjusted. A distinct advantage of TURBO compared with other grid generation programs is that it allows the easy change of local mesh structure without affecting the grid outside the domain of independence. Sample grids are generated by TURBO for a compressor rotor blade and a turbine cascade. The turbine cascade flow is simulated by using the LU implicit scheme on the grid generated by TURBO.
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether the chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or when limited budgets dictate that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
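The core of such a simulation study reduces to drawing repeated samples under each design and comparing the spread of the resulting density estimates. The sketch below is illustrative only, covers just the two non-adaptive designs, and assumes the population is a gridded abundance array:

```python
import numpy as np

rng = np.random.default_rng(2008)

def estimator_sd(pop, n, design, reps=1000):
    # Monte Carlo spread of the mean-density estimate under simple random
    # vs. grid-based systematic sampling; smaller SD = more precise design.
    N = pop.size
    ests = []
    for _ in range(reps):
        if design == "random":
            idx = rng.choice(N, size=n, replace=False)
        else:                       # systematic: every k-th cell, random start
            k = N // n
            idx = np.arange(rng.integers(k), N, k)[:n]
        ests.append(pop.flat[idx].mean())
    return np.std(ests)
```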
MIB Galerkin method for elliptic interface problems.
Xia, Kelin; Zhan, Meng; Wei, Guo-Wei
2014-12-15
Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, which are commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and has attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities, such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian-grid-based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there were no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest-order interface jump conditions is enforced on the interface, which, in turn, determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low-regularity solutions and grid resolutions. Extensive numerical studies confirm the designed second-order convergence of the MIB Galerkin method in the L∞ and L2 errors. Some of the best results are obtained in the present work when the interface is C1 or Lipschitz continuous and the solution is C2 continuous.
Smart electric vehicle (EV) charging and grid integration apparatus and methods
Gadh, Rajit; Mal, Siddhartha; Prabhu, Shivanand; Chu, Chi-Cheng; Sheikh, Omar; Chung, Ching-Yen; He, Lei; Xiao, Bingjun; Shi, Yiyu
2015-05-05
An expert system manages a power grid wherein charging stations are connected to the power grid, with electric vehicles connected to the charging stations, whereby the expert system selectively backfills power from connected electric vehicles to the power grid through a grid tie inverter (if present) within the charging stations. In more traditional usage, the expert system allows for electric vehicle charging, coupled with user preferences as to charge time, charge cost, and charging station capabilities, without exceeding the power grid capacity at any point. A robust yet accurate state of charge (SOC) calculation method is also presented, whereby initially an open circuit voltage (OCV) based on sampled battery voltages and currents is calculated, and then the SOC is obtained based on a mapping between a previously measured reference OCV (ROCV) and SOC. The OCV-SOC calculation method accommodates virtually any battery type with any current profile.
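The OCV-to-SOC mapping step admits a compact illustration. The reference curve, internal resistance, and sign convention below are hypothetical placeholders, not values from the patent:

```python
import numpy as np

# Hypothetical reference OCV (ROCV) curve for one cell; in practice this
# is measured per battery chemistry, as the described method assumes.
ROCV_V = np.array([3.2, 3.5, 3.6, 3.7, 3.8, 4.0, 4.2])   # volts
SOC    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 0.9, 1.0])   # fraction

def soc_estimate(v_terminal, i_load, r_internal=0.05):
    # Back out the open-circuit voltage from a loaded terminal measurement
    # (discharge current positive), then interpolate the ROCV-SOC mapping.
    ocv = v_terminal + i_load * r_internal
    return float(np.interp(ocv, ROCV_V, SOC))
```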
A COMPARISON OF INTERCELL METRICS ON DISCRETE GLOBAL GRID SYSTEMS
A discrete global grid system (DGGS) is a spatial data model that aids in global research by serving as a framework for environmental modeling, monitoring and sampling across the earth at multiple spatial scales. Topological and geometric criteria have been proposed to evaluate a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
P.C. Weaver
2009-02-17
Conduct verification surveys of grids at the DWI 1630 Site in Knoxville, Tennessee. The independent verification team (IVT) from ORISE conducted verification activities in whole and partial grids, as completed by BJC. ORISE site activities included gamma surface scans and soil sampling within 33 grids: G11 through G14; H11 through H15; X14, X15, X19, and X21; J13 through J15 and J17 through J21; K7 through K9 and K13 through K15; L13 through L15; and M14 through M16.
Soil as an archive of coal-fired power plant mercury deposition.
Rodríguez Martín, José Antonio; Nanos, Nikos
2016-05-05
Mercury pollution is a global environmental problem that has serious implications for human health. One of the most important sources of anthropogenic mercury emissions is coal-burning power plants. Hg accumulation in soil is associated with atmospheric deposition. Our study provides the first assessment of soil Hg across the entire surface of Spain obtained with a single sampling protocol. The Hg spatial distribution was analysed with topsoil samples taken from 4000 locations in a regular sampling grid. The other aim was to use geostatistical techniques to verify the extent of soil contamination by Hg and to evaluate presumed Hg enrichment near the seven Spanish power plants with installed capacity above 1000 MW. The Hg concentration in Spanish soil fell within the range of 1-7564 μg kg(-1) (mean 67.2), and 50% of the samples had a concentration below 37 μg kg(-1). Evidence of human activity was found near all the coal-fired power plants, reflecting that metals have accumulated in the basin over many years. Values over 1000 μg kg(-1) were found in soils in the vicinity of the Aboño, Soto de Ribera and Castellon power plants. However, soil Hg enrichment was detectable only close to the emission source, within an approximate range of only 15 km from the power plants. We associated this effect with airborne emissions and subsequent deposition, the range corresponding to the distance travelled by fly ash: Hg associated with ash particles tends to be deposited near coal combustion sources. Copyright © 2016 Elsevier B.V. All rights reserved.
Mascons, GRACE, and Time-variable Gravity
NASA Technical Reports Server (NTRS)
Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.
2006-01-01
The GRACE mission has been in orbit for three years and now regularly produces snapshots of the Earth's gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
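The extrapolation idea can be shown independently of PARINT: evaluate the regulated integral at several regulator values ε and extrapolate the sequence to ε → 0. A plain polynomial fit is sketched below; the production codes use more sophisticated linear and nonlinear sequence transformations.

```python
import numpy as np

def extrapolate_to_zero(eps, vals):
    # Fit I(eps) ~ I0 + c1*eps + c2*eps**2 + ... through the computed
    # values and return the constant term as the regularized estimate.
    coeffs = np.polyfit(eps, vals, deg=len(eps) - 1)
    return float(np.polyval(coeffs, 0.0))

# Toy check: I(eps) = 3 + 2*eps - eps**2 recovers I0 = 3 at eps = 0.
eps = np.array([0.4, 0.2, 0.1, 0.05])
print(extrapolate_to_zero(eps, 3.0 + 2.0 * eps - eps**2))   # ~3.0
```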
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
NASA Astrophysics Data System (ADS)
Britt, Darrell Steven, Jr.
Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions. This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
Q-Space Truncation and Sampling in Diffusion Spectrum Imaging
Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D.; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L.; McNab, Jennifer A.
2015-01-01
Purpose To characterize the effects of q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). Methods DSI data were acquired using the MGH-USC connectome scanner (Gmax=300mT/m) with bmax=30,000s/mm2, 17×17×17, 15×15×15 and 11×11×11 grids in ex vivo human brains and bmax=10,000s/mm2, 11×11×11 grid in vivo. An additional in vivo scan using bmax=7,000s/mm2, 11×11×11 grid was performed with a derated gradient strength of 40mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from down-sampled data by factors of two and three. Results Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with the first replicas or their ringing, leading to obscured orientations in ODFs. Conclusion The minimum required q-space sampling density corresponds to a field-of-view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11×11×11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. PMID:26762670
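The stated minimum sampling density follows from the Fourier duality between q-space and the displacement PDF: the PDF field of view is the reciprocal of the q-space sampling interval, so the conclusion amounts to

\[
\mathrm{FOV}_{\mathrm{PDF}} = \frac{1}{\Delta q} \gtrsim 2\,\mathrm{MDD}
\quad \Longleftrightarrow \quad
\Delta q \lesssim \frac{1}{2\,\mathrm{MDD}},
\]

with the down-sampled results showing exactly the aliasing (interference with the first PDF replicas) expected when this inequality is violated.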
Byers, S R; Beemer, O M; Lear, A S; Callan, R J
2014-01-01
Persistent hyperglycemia is common in alpacas and typically requires insulin administration for resolution; however, little is known about alpacas' response to different insulin formulations. To evaluate the effects of 3 insulin formulations on blood glucose concentrations and the use of a continuous glucose monitoring (CGM) system in alpacas. Six healthy alpacas. The CGM was installed in the left paralumbar fossa at the start of this crossover study and recorded data every 5 minutes. Regular insulin, NPH insulin, insulin glargine, and dextrose were administered to each alpaca over a 2-week period. Blood samples were collected for glucose testing at 0, 1, 2, 4, 6, 8, and 12 hours, and then every 6 hours after each administration of insulin or dextrose. Data were compared by using method comparison techniques, error grid plots, and ANOVA. Blood glucose concentrations decreased most rapidly after regular insulin administration when administered IV or SC as compared to the other formulations. The NPH insulin produced the longest suppression of blood glucose. The mean CGM interstitial compartment glucose concentrations were typically lower than the intravascular compartment glucose concentrations. The alpacas had no adverse reactions to the different insulin formulations. The NPH insulin might be more appropriate for long-term use in hyperglycemic alpacas because of its extended duration of action. A CGM is useful in monitoring glucose trends and reducing blood collection events, but it should not be the sole method for determining treatment protocols. Copyright © 2014 by the American College of Veterinary Internal Medicine.
TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE
NASA Technical Reports Server (NTRS)
Vu, B. T.
1994-01-01
TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
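A one-sided form of such a hyperbolic tangent stretching function is sketched below; the exact parameterization TDIGG uses may differ, and beta is an illustrative knob controlling how strongly points cluster toward s = 0:

```python
import numpy as np

def tanh_stretch(n, beta=2.0):
    # Map uniform s in [0, 1] to grid points clustered near 0;
    # larger beta gives stronger clustering at that end.
    s = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)

print(tanh_stretch(6, beta=3.0))   # spacing grows away from the 0 end
```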
A data fusion-based methodology for optimal redesign of groundwater monitoring networks
NASA Astrophysics Data System (ADS)
Hosseini, Marjan; Kerachian, Reza
2017-09-01
In this paper, a new data fusion-based methodology is presented for spatio-temporal (S-T) redesigning of Groundwater Level Monitoring Networks (GLMNs). The kriged maps of three different criteria (i.e. marginal entropy of water table levels, estimation error variances of mean values of water table levels, and estimated values of long-term changes in water level) are combined to determine monitoring sub-areas of high and low priority, so that a different spatial pattern can be considered for each sub-area. The best spatial sampling scheme is selected by applying a new method in which a regular hexagonal gridding pattern and the Thiessen polygon approach are utilized in sub-areas of high and low monitoring priority, respectively. An Artificial Neural Network (ANN) model and an S-T kriging model are used to simulate water level fluctuations. To improve the accuracy of the predictions, results of the ANN and S-T kriging models are combined using a data fusion technique. The concept of Value of Information (VOI) is utilized to determine the two stations with maximum information value in the sub-areas of high and low monitoring priority. The observed groundwater level data of these two stations are used to assess trend detection power and to estimate periodic fluctuations and mean values of the stationary components, which in turn determine non-uniform sampling frequencies for the sub-areas. The proposed methodology is applied to the Dehgolan plain in northwestern Iran. The results show that a new sampling configuration with 35 and 7 monitoring stations and sampling intervals of 20 and 32 days, respectively, in sub-areas of high and low monitoring priority, leads to a more efficient monitoring network than the existing one containing 52 monitoring stations and monthly temporal sampling.
Stackable differential mobility analyzer for aerosol measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Meng-Dawn; Chen, Da-Ren
2007-05-08
A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at a plurality of different distances along its length. A volume between the first and the second electrode or grid between the inlet or injection slit and a distal one of the plurality of sampling outlets forms a classifying region, the first and second electrodes being charged to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and classifies different size ranges of charged particles based on electric mobility simultaneously.
Brillouin zone grid refinement for highly resolved ab initio THz optical properties of graphene
NASA Astrophysics Data System (ADS)
Warmbier, Robert; Quandt, Alexander
2018-07-01
Optical spectra of materials can in principle be calculated within numerical frameworks based on Density Functional Theory. The huge numerical effort involved in these methods severely constrains the accuracy achievable in practice. In the case of the THz spectrum of graphene, the primary limitation lies in the density of the reciprocal-space sampling. In this letter we develop a non-uniform sampling scheme using grid refinement to achieve a high local sampling density with only moderate numerical effort. The resulting THz electron energy loss spectrum shows a plasmon signal below 50 meV with an ω(q) ∝ √q dispersion relation.
Rapid Decimation for Direct Volume Rendering
NASA Technical Reports Server (NTRS)
Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane
1997-01-01
An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.
Grid-cell-based crop water accounting for the famine early warning system
NASA Astrophysics Data System (ADS)
Verdin, James; Klaver, Robert
2002-06-01
Rainfall monitoring is a regular activity of food security analysts for sub-Saharan Africa due to the potentially disastrous impact of drought. Crop water accounting schemes are used to track rainfall timing and amounts relative to phenological requirements, to infer water limitation impacts on yield. Unfortunately, many rain gauge reports are available only after significant delays, and the gauge locations leave large gaps in coverage. As an alternative, a grid-cell-based formulation for the water requirement satisfaction index (WRSI) was tested for maize in Southern Africa. Grids of input variables were obtained from remote sensing estimates of rainfall, meteorological models, and digital soil maps. The spatial WRSI was computed for the 1996-97 and 1997-98 growing seasons. Maize yields were estimated by regression and compared with a limited number of reports from the field for the 1996-97 season in Zimbabwe. Agreement at a useful level (r = 0.80) was observed. This is comparable to results from traditional analysis with station data. The findings demonstrate the complementary role that remote sensing, modelling, and geospatial analysis can play in an era when field data collection in sub-Saharan Africa is suffering an unfortunate decline. Published in 2002 by John Wiley & Sons, Ltd.
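For readers unfamiliar with the index, a per-grid-cell WRSI can be written as a simple soil-water bucket model. The sketch below follows the generic definition (100 minus the accumulated water deficit as a fraction of seasonal crop demand); the rainfall, reference evapotranspiration, crop coefficients and water holding capacity are invented, and the operational FEWS NET formulation may differ in detail.

```python
import numpy as np

def wrsi(rain, pet, kc, whc=100.0):
    """rain, pet, kc: per-dekad arrays; whc: soil water holding capacity (mm)."""
    soil, deficit, demand = whc, 0.0, 0.0
    for r, p, k in zip(rain, pet, kc):
        wr = k * p                         # crop water requirement this dekad
        soil = min(soil + r, whc)          # rainfall tops up the bucket
        aet = min(soil, wr)                # actual evapotranspiration
        soil -= aet
        deficit += wr - aet
        demand += wr
    return 100.0 * (1.0 - deficit / demand)

rain = np.array([30, 10, 0, 25, 40, 5, 0, 15], float)     # mm per dekad
pet  = np.full(8, 45.0)                                    # mm per dekad
kc   = np.array([0.3, 0.5, 0.8, 1.1, 1.2, 1.0, 0.7, 0.5])  # maize-like curve
print(f"WRSI = {wrsi(rain, pet, kc):.0f}")  # 100 = no water limitation
```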
Ackermann, Roland; Kammel, Robert; Merker, Marina; Kamm, Andreas; Tünnermann, Andreas; Nolte, Stefan
2013-01-01
Optical side-effects of fs-laser treatment in refractive surgery are investigated by means of a model eye. We show that rainbow glare is the predominant perturbation, which can be avoided by randomly distributing laser spots within the lens. For corneal applications such as fs-LASIK, even a regular grid with spot-to-spot distances of ~3 µm is sufficient to minimize rainbow glare perception. Contrast sensitivity is affected, when the lens is treated with large 3D-patterns. PMID:23413236
Smooth information flow in temperature climate network reflects mass transport
NASA Astrophysics Data System (ADS)
Hlinka, Jaroslav; Jajcay, Nikola; Hartman, David; Paluš, Milan
2017-03-01
A directed climate network is constructed by Granger causality analysis of air temperature time series from a regular grid covering the whole Earth. Using a winner-takes-all network thresholding approach, a structure of smooth information flow, hidden from previous studies, is revealed. The relevance of this observation is confirmed by comparison with the air mass transfer defined by the wind field. Their close relation illustrates that although the information transferred due to the causal influence is not a physical quantity, the information transfer is tied to the transfer of mass and energy.
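The pairwise building block of such a network can be sketched with standard tools. The example below uses statsmodels' Granger causality test on synthetic series in which point B lags point A by two steps; the series, lags and threshold are assumptions for the demo, not the study's configuration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
a = rng.standard_normal(500)                         # temperature at grid point A
b = np.roll(a, 2) + 0.5 * rng.standard_normal(500)   # B follows A with lag 2

# statsmodels tests whether the SECOND column Granger-causes the first
res = grangercausalitytests(np.column_stack([b, a]), maxlag=3, verbose=False)
pvals = {lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()}
print(pvals)   # small p-values at lag >= 2 indicate a directed A -> B edge
```

Repeating this test over all ordered grid-point pairs and keeping, per node, only the strongest links is the essence of the winner-takes-all thresholding mentioned above.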
NASA Astrophysics Data System (ADS)
Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.
2015-12-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representations were achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability, as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus
2016-03-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
NASA Astrophysics Data System (ADS)
MacDonald, I. R.; Garcia-Pineda, O. G.; Solow, A.; Daneshgar, S.; Beet, A.
2013-12-01
Oil discharged as a result of the Deepwater Horizon disaster was detected on the surface of the Gulf of Mexico by synthetic aperture radar (SAR) satellites from 25 April 2010 until 4 August 2010. SAR images were not restricted by daylight or cloud cover. The distribution of this material is a tracer for potential environmental impacts and an indicator of impact mitigation due to response efforts and physical forcing factors. We used a texture-classifying neural network algorithm for semi-supervised processing of 176 SAR images from the ENVISAT, RADARSAT-1, and COSMO-SkyMed satellites. This yielded an estimate of the proportion of oil-covered water within the region sampled by each image with a nominal resolution of 10,000 sq m (100 m pixels), which was compiled on a 5-km equal-area grid covering the northern Gulf of Mexico. Few images covered the entire impact area, so analysis was required to compile a regular time series of the oil cover. A Gaussian kernel with a bandwidth of 2 d was used to estimate oil cover percent in each grid cell at noon and midnight throughout the interval. Variance and confidence intervals were calculated for each grid cell and for the global 12-h totals. Results animated across the impact region show the spread of oil under the influence of physical factors. Oil cover reached an early peak of 17032.26 sq km (sd 460.077) on 18 May, decreasing to 27% of this total on 4 June, followed by a sharp increase to an overall maximum of 18424.56 sq km (sd 424.726) on 19 June. There was a significant negative correlation between average wind stress and the total area of oil cover throughout the time series. Correlation between response efforts, including aerial and subsurface application of dispersants and burning of gathered oil, was negative, positive, or indeterminate at different time segments during the event. (Figure: daily totals for oil-covered surface waters of the Gulf of Mexico during 25 April - 9 August 2010, with upper and lower 0.95 confidence limits; no oil visible after 4 August.)
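The 12-hourly kernel step described above amounts to Nadaraya-Watson smoothing of irregular per-image estimates onto a regular time base. A minimal sketch with invented observation times and oil-cover values (the study's variance and confidence-interval machinery is omitted):

```python
import numpy as np

def kernel_estimate(t_obs, y_obs, t_grid, bandwidth=2.0):
    """Nadaraya-Watson estimate with a Gaussian kernel (times in days)."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / bandwidth) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0, 30, 40))            # irregular image times (d)
y_obs = 15000 + 3000 * np.sin(t_obs / 5) + 500 * rng.standard_normal(40)
t_grid = np.arange(0, 30, 0.5)                     # noon and midnight
estimate = kernel_estimate(t_obs, y_obs, t_grid)   # regular oil-cover series
```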
A New Stellar Atmosphere Grid and Comparisons with HST/STIS CALSPEC Flux Distributions
NASA Astrophysics Data System (ADS)
Bohlin, Ralph C.; Mészáros, Szabolcs; Fleming, Scott W.; Gordon, Karl D.; Koekemoer, Anton M.; Kovács, József
2017-05-01
The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits are compared for complete LTE grids by Castelli & Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz & Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.
Davis, Tracy A.; Shelton, Jennifer L.
2014-01-01
Results for constituents with nonregulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in samples from 19 grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 27 grid wells. Chloride was detected at a concentration greater than the SMCL-CA upper benchmark of 500 mg/L in one grid well. TDS concentrations in three grid wells were greater than the SMCL-CA upper benchmark of 1,000 mg/L.
Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.
Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C
2017-10-01
The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (palabos).
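To make the regularization idea concrete, here is a minimal D2Q9 sketch of the generic Hermite-projection step: the non-equilibrium part of the populations is projected onto the second-order Hermite mode before BGK relaxation. This is the textbook regularized collision, not the paper's specific (recursive) operator or the Palabos implementation, and the relaxation time is an arbitrary demo value.

```python
import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0                                    # lattice speed of sound squared

def feq(rho, u):
    """Second-order equilibrium populations for density rho and velocity u."""
    cu = c @ u
    return w * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def regularized_collision(f, tau):
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    fneq = f - feq(rho, u)
    pi = np.einsum("i,ia,ib->ab", fneq, c, c)      # 2nd-order moment of f_neq
    Q = np.einsum("ia,ib->iab", c, c) - cs2 * np.eye(2)   # Hermite tensor H2
    fneq_reg = w / (2 * cs2**2) * np.einsum("iab,ab->i", Q, pi)
    return feq(rho, u) + (1 - 1/tau) * fneq_reg    # regularized BGK update

f = feq(1.0, np.array([0.05, 0.02]))
f = f * (1 + 0.01 * np.random.default_rng(0).standard_normal(9))  # perturbed state
f_post = regularized_collision(f, tau=0.8)
```

Because the higher, non-hydrodynamic Hermite modes of f_neq are discarded, the scheme filters the spurious modes that degrade accuracy and stability at low Mach numbers.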
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
Joint image and motion reconstruction for PET using a B-spline motion model.
Blume, Moritz; Navab, Nassir; Rafecas, Magdalena
2012-12-21
We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
Hanwella, Raveen; Jayasekera, Nicholas E L W; de Silva, Varuni A
2014-01-01
The main aim of this study was to assess the mental health status of the Navy Special Forces and regular forces three and a half years after the end of combat operations in mid-2009, and compare it with the findings in 2009. This cross-sectional study was carried out in the Sri Lanka Navy (SLN), three and a half years after the end of combat operations. Representative samples of SLN Special Forces and regular forces deployed in combat areas were selected using simple random sampling. Only personnel who had served continuously in combat areas during the one-year period prior to the end of combat operations were included in the study. The sample consisted of 220 Special Forces and 275 regular forces personnel. Compared to the regular forces, a significantly higher number of Special Forces personnel had experienced potentially traumatic events. Compared to the period immediately after the end of combat operations, in the Special Forces the prevalence of psychological distress and fatigue showed a marginal increase, while hazardous drinking and multiple physical symptoms showed a marginal decrease. In the regular forces, the prevalence of psychological distress, fatigue and multiple somatic symptoms declined, and the prevalence of hazardous drinking increased from 16.5% to 25.7%. During the same period the prevalence of smoking doubled in both Special Forces and regular forces. Prevalence of PTSD fell from 1.9% to 0.9% in the Special Forces and from 2.07% to 1.1% in the regular forces. Three and a half years after the end of combat operations, mental health problems had declined among SLN regular forces while there was no significant change among Special Forces.
From the grid to the smart grid, topologically
NASA Astrophysics Data System (ADS)
Pagani, Giuliano Andrea; Aiello, Marco
2016-05-01
In its more visionary conception, the smart grid is a model of energy management in which users are engaged in producing energy as well as consuming it, while information systems are fully aware of the energy demand-response of the network and of dynamically varying prices. A natural question is then: to make the smart grid a reality, will the distribution grid have to be upgraded? We assume a positive answer to the question and consider the lower layers of medium and low voltage to be the most affected by the change. In our previous work, we analyzed samples of the Dutch distribution grid (Pagani and Aiello, 2011) and considered possible evolutions of these using synthetic topologies modeled after studies of complex systems in other technological domains (Pagani and Aiello, 2014). In this paper, we take an important extra step by defining a methodology for evolving any existing physical power grid to a good smart grid model, thus laying the foundations for a decision support system for utilities and governmental organizations. In doing so, we consider several possible evolution strategies and apply them to the Dutch distribution grid. We show how increasing connectivity is beneficial in realizing more efficient and reliable networks. Our proposal is topological in nature, enhanced with economic considerations of the costs of such evolutions in terms of cabling expenses and the economic benefits of evolving the grid.
Visible-near infrared spectroscopy as a tool to improve mapping of soil properties
NASA Astrophysics Data System (ADS)
Evgrafova, Alevtina; Kühnel, Anna; Bogner, Christina; Haase, Ina; Shibistova, Olga; Guggenberger, Georg; Tananaev, Nikita; Sauheitl, Leopold; Spielvogel, Sandra
2017-04-01
Spectroscopic measurements, which are non-destructive, precise and rapid, can be used to predict soil properties and help estimate the spatial variability of soil properties at the pedon scale. These estimations are required for quantifying soil properties with higher precision, identifying changes in soil properties and ecosystem responses to climate change, as well as increasing the estimation accuracy of soil-related models. Our objectives were to (i) predict soil properties for nested samples (n = 296) using the laboratory-based visible-near infrared (vis-NIR) spectra of air-dried (<2 mm) soil samples and the values of measured soil properties for gridded samples (n = 174) as calibration and validation sets; (ii) estimate the precision and predictive accuracy of an empirical spectral model using (a) our own spectral library and (b) the global spectral library; (iii) support the global spectral library with the obtained vis-NIR spectral data on permafrost-affected soils. The soil samples were collected from three soil profiles underlain by permafrost at various depths between 23 cm and 57.5 cm below the surface (Cryosols) and one soil profile with no permafrost within the upper 100 cm (Cambisol), in order to characterize the spatial distribution and variability of soil properties. The gridded soil samples (n = 174) were collected using an 80 cm wide grid with a mesh size of 10 cm on both axes. In addition, 300 nested soil samples were collected on a 12 cm by 12 cm grid (25 samples per grid) from holes 1 cm in diameter spaced 1 cm apart. Due to the small amount of available soil material (<1.5 g), 296 nested soil samples were analyzed using vis-NIR spectroscopy only. The air-dried mineral gridded soil samples (n = 174) were sieved through a 2-mm sieve and ground with an agate mortar prior to the elemental analysis. The soil organic carbon and total nitrogen concentrations (in %) were determined using a dry combustion method on the Vario EL cube analyzer (Elementar Analysensysteme GmbH, Germany). Inorganic C was removed from the mineral soil samples with pH values higher than 7 prior to the elemental analysis using the volatilization method (HCl, 6 hours). The pH of the soil samples was measured in 0.01 M CaCl2 using a 1:2 soil:solution ratio; for soil samples with a high organic matter content, however, a 1:10 ratio was applied. We also measured oxalate- and dithionite-extracted iron, aluminum and manganese oxides and hydroxides using inductively coupled plasma optical emission spectroscopy (Varian Vista MPX ICP-OES, Agilent Technologies, USA). We predicted the above-mentioned soil properties for all nested samples using partial least squares regression, performed in R. We conclude that vis-NIR spectroscopy can be used effectively to describe, estimate and further map the spatial patterns of soil properties using geostatistical methods. This research could also help to improve the global soil spectral library, given that only few previous applications of vis-NIR spectroscopy have addressed permafrost-affected soils of Northern Siberia. Keywords: Visible-near infrared spectroscopy, vis-NIR, permafrost-affected soils, Siberia, partial least squares regression.
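The calibration step can be sketched in a few lines. The study used partial least squares regression in R; the equivalent scikit-learn workflow is shown below, with random placeholder arrays standing in for the 174 gridded calibration spectra and a measured property such as organic carbon, and an assumed component count that would in practice be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((174, 500))        # spectra: samples x wavelengths
y = 0.3 * X[:, 100] + 0.2 * X[:, 350] + 0.1 * rng.standard_normal(174)

pls = PLSRegression(n_components=8)        # component count: an assumption
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
r2 = 1 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"cross-validated R^2 = {r2:.2f}")

pls.fit(X, y)                              # final model; would then predict the
# y_nested = pls.predict(X_nested)         # 296 nested-sample spectra
```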
A Diagnostic Study of Computer Application of Structural Communication Grid
ERIC Educational Resources Information Center
Bahar, Mehmet; Aydin, Fatih; Karakirik, Erol
2009-01-01
In this article, Structural Communication Grid (SCG), an alternative measurement and evaluation technique, is first summarised, and the design, development and implementation of a computer-based SCG system are introduced. The system is then tested on a sample of 154 participants consisting of candidate students, science teachers and…
NASA Astrophysics Data System (ADS)
Machalek, P.; Kim, S. M.; Berry, R. D.; Liang, A.; Small, T.; Brevdo, E.; Kuznetsova, A.
2012-12-01
We describe how the Climate Corporation uses Python and Clojure, a language implemented on top of Java, to generate climatological forecasts for precipitation based on the Advanced Hydrologic Prediction Service (AHPS) radar-based daily precipitation measurements. A 2-year-long forecast is generated on each of the ~650,000 CONUS land-based 4-km AHPS grid cells by constructing 10,000 ensembles sampled from a 30-year reconstructed AHPS history for each grid cell. The spatial and temporal correlations between neighboring AHPS grid cells and the sampling of the analogues are handled in Python. The parallelization across all 650,000 CONUS grid cells is achieved by utilizing the MAP-REDUCE framework (http://code.google.com/edu/parallel/mapreduce-tutorial.html). Each full-scale computational run requires hundreds of nodes with up to 8 processors each on the Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) distributed computing service, resulting in 3-terabyte datasets. We further describe how we have put a monthly run of the simulation process into production at the full scale of the 4-km AHPS grids, and how the resulting terabyte-sized datasets are handled.
Meng, Yuguang; Lei, Hao
2010-06-01
An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in that the transformation matrix for gridding (T) is constructed as the convolution of two sparse matrices, of which the former is determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in the Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the method for different applications. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging of rat brain at 4.7 T. (c) 2010 Wiley-Liss, Inc.
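The computational structure, if not the physics, is easy to sketch: form T as a product of two sparse matrices and solve the normal equations with conjugate gradients. In the sketch below the two factors are random stand-ins (constructing the true off-resonance and trajectory-interpolation factors is the paper's actual contribution), and the Tikhonov term is an assumption added for numerical safety.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n_k, n_grid = 2000, 1024                       # k-space samples, grid points
A = sp.random(n_k, n_k, density=0.002, format="csr")    # off-resonance factor
B = sp.random(n_k, n_grid, density=0.01, format="csr")  # trajectory/grid factor
T = A @ B                                      # product of sparse matrices stays sparse

data = np.random.default_rng(0).standard_normal(n_k)    # measured samples
normal = LinearOperator((n_grid, n_grid), dtype=float,
                        matvec=lambda x: T.T @ (T @ x) + 1e-6 * x)
image, info = cg(normal, T.T @ data, maxiter=200)       # iterative CG solve
print("cg converged" if info == 0 else f"cg info = {info}")
```

Because only sparse matrix-vector products appear inside the CG loop, the cost per iteration scales with the number of non-zeros in T rather than with the full matrix size, which is the efficiency argument made in the abstract.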
Polcicová, Gabriela; Tino, Peter
2004-01-01
We introduce topographic versions of two latent class models (LCM) for collaborative filtering. Latent classes are topologically organized on a square grid. Topographic organization of latent classes makes orientation in the rating/preference patterns captured by the latent classes easier and more systematic. The variation in film rating patterns is modelled by multinomial and binomial distributions with varying independence assumptions. In the first stage of topographic LCM construction, self-organizing maps with a neural field organized according to the LCM topology are employed. We apply our system to a large collection of user ratings for films. The system can provide useful visualization plots unveiling user preference patterns buried in the data, without losing its potential to be a good recommender model. It appears that the multinomial distribution is most adequate if the model is regularized by tight grid topologies. Since we deal with probabilistic models of the data, we can readily use tools from probability and information theory to interpret and visualize the information extracted by our system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Q; Xie, S
This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variation at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interest. The upcoming megasite at SGP with its enhanced spatial density will facilitate studies at even finer scales. Currently, however, data are reported only at individual site locations, at different time resolutions for different datastreams. It is difficult for users to locate all the data they need, and extra effort is required to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate data application.
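The merge-and-grid step can be illustrated with standard scattered-data interpolation. The site coordinates and temperatures below are invented, and the product's actual interpolation scheme may differ; this simply shows the kind of transformation involved.

```python
import numpy as np
from scipy.interpolate import griddata

sites_xy = np.array([[-98.5, 36.6], [-97.5, 36.8], [-97.0, 36.3],
                     [-98.0, 35.9], [-96.8, 37.0]])      # lon, lat of sites
temp = np.array([22.1, 21.4, 23.0, 24.2, 20.7])          # one synchronized step

lon = np.linspace(-99.0, -96.5, 50)
lat = np.linspace(35.5, 37.5, 40)
glon, glat = np.meshgrid(lon, lat)
field = griddata(sites_xy, temp, (glon, glat), method="linear")

# Points outside the convex hull of the sites come back NaN; a nearest-
# neighbour pass fills them if full grid coverage is required.
fill = griddata(sites_xy, temp, (glon, glat), method="nearest")
field = np.where(np.isnan(field), fill, field)
```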
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; meanwhile, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid and has much higher accuracy than the conventional FDTD method.
Theoretical prediction of the energy stability of graphene nanoblisters
NASA Astrophysics Data System (ADS)
Glukhova, O. E.; Slepchenkov, M. M.; Barkov, P. V.
2018-04-01
The paper presents the results of a theoretical prediction of the energy stability of graphene nanoblisters with various geometrical parameters. As a criterion for evaluating the stability of the investigated carbon objects, we propose to consider the local stress in the nanoblister atomic grid. Numerical evaluation of the stresses experienced by atoms of the graphene blister framework was carried out by means of an original method for calculating local stresses based on an energy approach. Atomistic models of graphene nanoblisters corresponding to natural experimental data were built for the first time in this work. New regularities governing the influence of topology on the thermodynamic stability of nanoblisters were established from analysis of the numerical experiment data. We computed the distribution of local stresses for graphene blister structures whose atomic grids contain a variety of structural defects, and showed how the concentration and location of defects affect the distribution of the maximum stresses experienced by the atoms of the nanoblisters.
Detection of faults in rotating machinery using periodic time-frequency sparsity
NASA Astrophysics Data System (ADS)
Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.
2016-11-01
This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain, where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are designed to promote periodicity. To solve the proposed optimization problem, we develop an algorithm called the augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data, and to real data as a tool for diagnosing faults in bearings and gearboxes, and is compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
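A much-simplified sketch of the core idea follows: in the STFT domain, the penalty is lowered on atoms that fall on the assumed periodic grid, so periodic impacts survive thresholding while broadband noise does not. Plain weighted soft-thresholding stands in for the paper's SALSA/MM solver, and the signal, fault period and threshold are invented.

```python
import numpy as np
from scipy.signal import stft, istft

fs, T0 = 10_000, 0.05                  # sample rate (Hz), assumed fault period (s)
t = np.arange(0, 1, 1 / fs)
bursts = np.sin(2 * np.pi * 3000 * t) * np.exp(-300 * (t % T0))  # periodic impacts
x = bursts + 0.8 * np.random.default_rng(0).standard_normal(t.size)

f, tt, X = stft(x, fs=fs, nperseg=256)
hop = tt[1] - tt[0]
comb = (np.mod(tt, T0) < hop).astype(float)       # binary weights: 1 near k*T0
lam = 0.5
mag = np.maximum(np.abs(X) - lam * (1.0 - 0.9 * comb), 0.0)  # weighted shrinkage
X_den = mag * np.exp(1j * np.angle(X))
_, x_den = istft(X_den, fs=fs, nperseg=256)       # denoised, periodicity-enhanced
```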
A method of boundary equations for unsteady hyperbolic problems in 3D
NASA Astrophysics Data System (ADS)
Petropavlovsky, S.; Tsynkov, S.; Turkel, E.
2018-07-01
We consider interior and exterior initial boundary value problems for the three-dimensional wave (d'Alembert) equation. First, we reduce a given problem to an equivalent operator equation with respect to unknown sources defined only at the boundary of the original domain. In doing so, the Huygens' principle enables us to obtain the operator equation in a form that involves only finite and non-increasing pre-history of the solution in time. Next, we discretize the resulting boundary equation and solve it efficiently by the method of difference potentials (MDP). The overall numerical algorithm handles boundaries of general shape using regular structured grids with no deterioration of accuracy. For long simulation times it offers sub-linear complexity with respect to the grid dimension, i.e., is asymptotically cheaper than the cost of a typical explicit scheme. In addition, our algorithm allows one to share the computational cost between multiple similar problems. On multi-processor (multi-core) platforms, it benefits from what can be considered an effective parallelization in time.
High-resolution computer-aided moire
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1991-12-01
This paper presents a high-resolution computer-assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problems associated with recovering the displacement field from sampled values of the grid intensity are discussed. A two-dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example is given of the application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen.
Bergkvist, Jonas; Ekström, Simon; Wallman, Lars; Löfgren, Mikael; Marko-Varga, György; Nilsson, Johan; Laurell, Thomas
2002-04-01
A recently introduced silicon microextraction chip (SMEC), used for on-line proteomic sample preparation, has proved to facilitate the process of protein identification by sample clean-up and enrichment of peptides. It is demonstrated that a novel grid-SMEC design improves the operating characteristics for solid-phase microextraction by reducing dispersion effects and thereby improving the sample preparation conditions. The structures investigated in this paper are treated both numerically and experimentally. The numerical approach is based on finite element analysis of the microfluidic flow in the microchip. The analysis is accomplished by use of the computational fluid dynamics module FLOTRAN in the ANSYS software package. The modeling and analysis of the previously reported weir-SMEC design indicate some severe drawbacks that can be reduced by changing the microextraction chip geometry to the grid-SMEC design. The overall analytical performance was thereby improved, and this was also verified by experimental work. Matrix-assisted laser desorption/ionization mass spectra of model peptides extracted from both the weir-SMEC and the new grid-SMEC support the numerical analysis results. Further use of numerical modeling and analysis of the SMEC structures is also discussed and suggested in this work.
Tritium in water vapor in the shallow unsaturated zone at the Amargosa Desert Research Site
Healy, Richard W.; Striegl, Robert G.; Michel, Robert L.; Prudic, David E.; Andraski, Brian J.; Morganwalp, David W.; Buxton, Herbert T.
1999-01-01
Samples of water vapor in soil gas were obtained at the U.S. Geological Survey's Amargosa Desert Research Site in 1997 and 1998 from a depth of 1.5 m (meters) within a 300 m by 300 m grid that lies immediately to the south and west of a low-level radioactive-waste disposal site. The gas samples were analyzed for tritium. Fifty-eight samples were collected in May 1997; 61 samples were collected in June 1998. Measured tritium concentrations ranged from 16 ± 9 TU (tritium units) to 36,900 ± 300 TU in 1997, and from 6 ± 6 TU to 37,360 ± 450 TU in 1998. Concentrations decreased from northeast to southwest across the grid. In general, there was very little difference in tritium concentrations between the two sampling periods.
Low-Z polymer sample supports for fixed-target serial femtosecond X-ray crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feld, Geoffrey K.; Heymann, Michael; Benner, W. Henry
X-ray free-electron lasers (XFELs) offer a new avenue to the structural probing of complex materials, including biomolecules. Delivery of precious sample to the XFEL beam is a key consideration, as the sample of interest must be serially replaced after each destructive pulse. The fixed-target approach to sample delivery involves depositing samples on a thin-film support and subsequent serial introduction via a translating stage. Some classes of biological materials, including two-dimensional protein crystals, must be introduced on fixed-target supports, as they require a flat surface to prevent sample wrinkling. A series of wafer and transmission electron microscopy (TEM)-style grid supports constructed of low-Z plastic have been custom-designed and produced. Aluminium TEM grid holders were engineered, capable of delivering up to 20 different conventional or plastic TEM grids using fixed-target stages available at the Linac Coherent Light Source (LCLS). As proof of principle, X-ray diffraction has been demonstrated from two-dimensional crystals of bacteriorhodopsin and three-dimensional crystals of anthrax toxin protective antigen mounted on these supports at the LCLS. In conclusion, the benefits and limitations of these low-Z fixed-target supports are discussed; it is the authors' belief that they represent a viable and efficient alternative to previously reported fixed-target supports for conducting diffraction studies with XFELs.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function (PDF) of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the accuracy in approximating the PDF of predictions while avoiding the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
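The two ingredients can be mimicked in a few lines. In the sketch below, a dense Chebyshev tensor grid stands in for a true sparse grid (adequate in two dimensions, where no sparsification is needed), and a Sobol sequence supplies the quasi-Monte Carlo samples; the forward model, observation and flat prior are toy assumptions.

```python
import numpy as np
from scipy.stats import qmc

def forward(theta):                  # stand-in for an expensive simulator
    return np.sin(3 * theta[..., 0]) + theta[..., 1] ** 2

# --- 1. polynomial surrogate on a structured grid of Chebyshev nodes ---
nodes_1d = np.cos(np.pi * (np.arange(9) + 0.5) / 9)       # nodes in [-1, 1]
g1, g2 = np.meshgrid(nodes_1d, nodes_1d)
pts = np.stack([g1.ravel(), g2.ravel()], axis=1)
V = np.polynomial.chebyshev.chebvander2d(pts[:, 0], pts[:, 1], [8, 8])
coef = np.linalg.lstsq(V, forward(pts), rcond=None)[0]

def surrogate(theta):                # cheap replacement for forward()
    V = np.polynomial.chebyshev.chebvander2d(theta[:, 0], theta[:, 1], [8, 8])
    return V @ coef

# --- 2. quasi-Monte Carlo evaluation of the surrogate posterior ---
y_obs, noise = 0.7, 0.1
samples = qmc.scale(qmc.Sobol(d=2, seed=0).random(2**12), [-1, -1], [1, 1])
log_post = -0.5 * ((surrogate(samples) - y_obs) / noise) ** 2   # flat prior
weights = np.exp(log_post - log_post.max())
post_mean = (weights[:, None] * samples).sum(0) / weights.sum()
```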
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of surface points which, when displayed on a computer screen, form a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. Surface subdivision methods, on the other hand, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The cost of software that reads control points and calculates the surface lies in its run time, due to the fact that the surface structure required for handling arbitrary topological grids is very complicated. Many software programs related to the implementation of subdivision surfaces have been developed; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, the most popular of the subdivision methods, is employed to illustrate the algorithm.
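Since few implementations are documented, a compact reference sketch of one Catmull-Clark step may be useful. It handles only closed all-quad meshes (boundaries, creases and non-quad faces are omitted), and the data layout is illustrative rather than the paper's.

```python
import numpy as np

def catmull_clark(verts, faces):
    """One subdivision step for a closed all-quad mesh.
    verts: (n,3) floats; faces: length-4 index tuples with consistent winding."""
    V = np.asarray(verts, float)
    F = [tuple(f) for f in faces]
    face_pt = np.array([V[list(f)].mean(axis=0) for f in F])

    # adjacency: the two faces flanking each undirected edge
    e_faces = {}
    for fi, f in enumerate(F):
        for a, b in zip(f, f[1:] + f[:1]):
            e_faces.setdefault(frozenset((a, b)), []).append(fi)
    edges = list(e_faces)
    e_index = {e: i for i, e in enumerate(edges)}
    edge_pt = np.array(
        [(V[list(e)].sum(0) + face_pt[e_faces[e]].sum(0)) / 4 for e in edges])

    # smooth original vertices: (F_avg + 2*R_avg + (n-3)*P) / n, n = valence
    v_faces = {i: [] for i in range(len(V))}
    v_edges = {i: [] for i in range(len(V))}
    for fi, f in enumerate(F):
        for a in f:
            v_faces[a].append(fi)
    for e in edges:
        for a in e:
            v_edges[a].append(e)
    new_V = np.zeros_like(V)
    for i in range(len(V)):
        n = len(v_faces[i])
        Favg = face_pt[v_faces[i]].mean(0)
        Ravg = np.mean([V[list(e)].mean(0) for e in v_edges[i]], axis=0)
        new_V[i] = (Favg + 2 * Ravg + (n - 3) * V[i]) / n

    # each quad splits into four quads through its face point
    pts = np.vstack([new_V, face_pt, edge_pt])
    nf = len(V) + np.arange(len(F))          # indices of face points in pts
    ne = len(V) + len(F)                     # offset of edge points in pts
    new_faces = []
    for fi, f in enumerate(F):
        for k, a in enumerate(f):
            e_prev = e_index[frozenset((f[k - 1], a))]
            e_next = e_index[frozenset((a, f[(k + 1) % 4]))]
            new_faces.append((a, ne + e_next, nf[fi], ne + e_prev))
    return pts, new_faces

# one step on a cube: 8 verts, 6 quads -> 26 verts, 24 quads
cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
cube_f = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
          (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
v2, f2 = catmull_clark(cube_v, cube_f)
print(len(v2), len(f2))   # 26 24
```

A time-efficient production version, as the abstract discusses, would replace the Python dictionaries with precomputed index arrays so each refinement level is a fixed set of vectorized gathers.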
Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations
NASA Technical Reports Server (NTRS)
Chan, William M.
2004-01-01
Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.
NASA Astrophysics Data System (ADS)
Kennedy, A. M.; Lane, J.; Ebert, M. A.
2014-03-01
Plan review systems often allow dose volume histogram (DVH) recalculation as part of a quality assurance process for trials. A review of the algorithms provided by a number of systems indicated that they are often very similar. One notable point of variation between implementations is in the location and frequency of dose sampling. This study explored the impact such variations can have on DVH-based plan evaluation metrics (Normal Tissue Complication Probability (NTCP), min, mean and max dose) for a plan with small structures placed over areas of high dose gradient. The dose grids considered were exported from the original planning system at a range of resolutions. We found that for the CT-based resolutions used in all but one of the plan review systems (CT, and CT with a guaranteed minimum number of sampling voxels in the x and y directions), results were very similar and changed in a similar manner with changes in the dose grid resolution, despite the extreme conditions. Differences became noticeable, however, when resolution was increased in the axial (z) direction. Evaluation metrics also varied differently with changing dose grid for CT-based resolutions compared to dose-grid-based resolutions. This suggests that if DVHs are being compared between systems that use a different basis for selecting sampling resolution, it may become important to confirm that a similar resolution was used during calculation.
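As a concrete reference, the cumulative-DVH step that all such systems share can be written directly from dose samples; the systems compared in the study differ precisely in where those samples are taken. Everything below (dose distribution, bin width, sample count) is illustrative.

```python
import numpy as np

def cumulative_dvh(dose_samples, bin_width=0.1):
    """Volume fraction receiving at least each dose level (Gy)."""
    levels = np.arange(0.0, dose_samples.max() + bin_width, bin_width)
    volume = (dose_samples[None, :] >= levels[:, None]).mean(axis=1)
    return levels, volume

# invented dose samples inside a small structure on a steep gradient
rng = np.random.default_rng(0)
doses = np.clip(rng.normal(60.0, 4.0, 5000), 0.0, None)
d, v = cumulative_dvh(doses)
print(f"min {doses.min():.1f}  mean {doses.mean():.1f}  max {doses.max():.1f} Gy")
print(f"fraction of volume >= 60 Gy: {np.interp(60.0, d, v):.2f}")
```

Min, mean, max and NTCP-style metrics all derive from these sampled doses, which is why the sampling location and density dominate the between-system differences reported above.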
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
Hoshino, Taiki; Kikuchi, Moriya; Murakami, Daiki; Harada, Yoshiko; Mitamura, Koji; Ito, Kiminori; Tanaka, Yoshihito; Sasaki, Sono; Takata, Masaki; Jinnai, Hiroshi; Takahara, Atsushi
2012-11-01
The performance of a fast pixel array detector with a grid-mask resolution enhancer has been demonstrated for X-ray photon correlation spectroscopy (XPCS) measurements investigating fast dynamics on a microscopic scale. A detecting system in which each pixel of a single-photon-counting pixel array detector, PILATUS, is covered by grid-mask apertures was constructed for XPCS measurements of silica nanoparticles in polymer melts. The experimental results were confirmed to be consistent with other independent experiments. With this method, XPCS measurements can be carried out by customizing the hole size of the grid mask to suit the experimental conditions, such as beam size, detector size, and sample-to-detector distance.
Images of the Retailing Environment: An Example of the Use of the Repertory Grid Methodology
ERIC Educational Resources Information Center
Hudson, Ray
1974-01-01
A necessary condition for studying cognitive images of environments is an appropriate method to define and measure these. Using a sample of students in Bristol, the Repertory Grid method was used to measure images of the retailing environment. The empirical results are discussed and possible future research is outlined. (BT)
Measuring the Flatness of Focal Plane for Very Large Mosaic CCD Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jiangang; Estrada, Juan; Cease, Herman
2010-06-08
Large mosaic multi-CCD cameras are the key instruments for modern digital sky surveys. DECam is an extremely red-sensitive 520-megapixel camera designed for the incoming Dark Energy Survey (DES). It consists of sixty-two 4k x 2k and twelve 2k x 2k 250-micron-thick fully depleted CCDs, with a focal plane of 44 cm in diameter and a field of view of 2.2 square degrees. It will be attached to the Blanco 4-meter telescope at CTIO. The DES will cover 5000 square degrees of the southern galactic cap in 5 color bands (g, r, i, z, Y) over 5 years starting from 2011. To achieve the science goal of constraining the Dark Energy evolution, stringent requirements are laid down for the design of DECam. Among them, the flatness of the focal plane needs to be controlled within a 60-micron envelope in order to achieve the specified PSF variation limit. It is very challenging to measure the flatness of the focal plane to such precision when it is placed in a high-vacuum dewar at 173 K. We developed two image-based techniques to measure the flatness of the focal plane. By imaging a regular grid of dots on the focal plane, the CCD offset along the optical axis is converted to a variation of the grid spacings at different positions on the focal plane. After extracting the patterns and comparing the change in spacings, we can measure the flatness to high precision. In method 1, the regular dots are positioned to sub-micron precision and cover the whole focal plane. In method 2, no high precision for the grid is required. Instead, a precise XY stage moves the pattern across the whole focal plane, and the variations of the spacing are compared when the pattern is imaged by different CCDs. Simulation and real measurements show that the two methods work very well for our purpose and are in good agreement with the direct optical measurements.
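The spacing-to-height conversion can be pictured with a simple projection model (an illustrative assumption, not the DECam optical prescription): a fractional change in imaged dot spacing maps to an axial offset scaled by an effective projection distance L:

```python
# Toy model: for a grid projected from effective distance L, an axial CCD
# offset dz changes the imaged spacing s by roughly ds/s ~ dz/L. All
# numbers are invented for illustration.

def height_offset_um(s_ref_um, s_meas_um, L_mm=500.0):
    """Estimate axial offset (microns) from a fractional spacing change."""
    return (s_meas_um - s_ref_um) / s_ref_um * L_mm * 1000.0

# A 0.01% spacing change at L = 500 mm corresponds to ~50 microns.
print(height_offset_um(100.0, 100.01))
```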
Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre
2013-01-01
Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at a location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google databases might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors, such as the coverage of the Google Street View network relative to sampling grid size and the spatial distribution of host trees relative to the road network, may also be determinant.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed by using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
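A minimal sketch of the stepwise expansion just described (the cell layout, four-neighbor rule, and names are assumptions made for illustration):

```python
import numpy as np

def grow_region(samples, start, n_min=50):
    """Grow a cell region greedily, minimizing pooled variance.

    samples: dict mapping (i, j) grid cells to 1-D arrays of sigma0 values.
    Starting from `start`, adjacent cells are added one at a time, each
    time picking the neighbor that minimizes the variance of the pooled
    data, until at least n_min samples are collected.
    """
    region = {start}
    pooled = list(samples[start])
    while len(pooled) < n_min:
        frontier = {(i + di, j + dj) for (i, j) in region
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        frontier = [c for c in frontier if c in samples and c not in region]
        if not frontier:
            break                                  # ran out of cells
        best = min(frontier, key=lambda c: np.var(pooled + list(samples[c])))
        region.add(best)
        pooled += list(samples[best])
    return region, np.asarray(pooled)

# Toy usage with random per-cell sample counts.
rng = np.random.default_rng(1)
cells = {(i, j): rng.normal(0.0, 1.0, rng.integers(5, 15))
         for i in range(4) for j in range(4)}
region, pooled = grow_region(cells, (0, 0), n_min=40)
print(len(region), pooled.size)
```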
Fast and precise dense grid size measurement method based on coaxial dual optical imaging system
NASA Astrophysics Data System (ADS)
Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei
2015-10-01
Test sieves with a dense grid structure are widely used in many fields, and accurate grid size calibration is critical for the success of grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and from an insufficient number of sampled grids, which introduces risk into quality judgments. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low- and high-magnification probes is obtained from the corresponding grids in the captured images. With this, all grid dimensions in the low-magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method can measure test sieves with higher efficiency than traditional methods, measuring 0.15 million grids (grid size 0.1 mm) within 60 seconds, and it can precisely measure grid sizes ranging from 20 μm to 5 mm. In short, the presented method can calibrate the grid size of a test sieve automatically with high efficiency and accuracy. With it, surface evaluation based on statistical methods can be effectively implemented, and quality judgments become more reasonable.
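The cross-calibration step can be sketched as follows (variable names and numbers are illustrative assumptions, not values from the paper): a few grids measured in both probes fix a scale ratio, which then converts every low-magnification pixel measurement to a physical size:

```python
import numpy as np

def scale_ratio(sizes_high_px, sizes_low_px):
    """Median pixel-size ratio between the two probes for the same grids."""
    return float(np.median(np.asarray(sizes_high_px) /
                           np.asarray(sizes_low_px)))

um_per_px_high = 0.5                       # trusted high-mag calibration
ratio = scale_ratio([200.4, 199.8, 200.1], # same grids seen at high mag
                    [20.0, 19.9, 20.1])    # ... and at low mag (~10x ratio)

grid_low_px = np.array([19.8, 20.3, 20.0])  # all grids in the low-mag image
grid_um = grid_low_px * ratio * um_per_px_high
print(grid_um)                              # physical grid sizes, ~100 um
```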
NASA Astrophysics Data System (ADS)
Kabas, T.; Leuprecht, A.; Bichler, C.; Kirchengast, G.
2010-12-01
South-eastern Austria experiences a rich variety of weather and climate patterns. For this reason, the county of Feldbach was selected by the Wegener Center as a focus area for a pioneering observation experiment at very high resolution: the WegenerNet climate station network (in brief, WegenerNet) comprises 151 meteorological stations within an area of about 20 km × 15 km (~1.4 km × 1.4 km station grid). All stations measure the main parameters temperature, humidity, and precipitation at 5-minute sampling. Selected further stations include measurements of wind speed and direction, complemented by soil parameters as well as air pressure and net radiation. The collected data are integrated in an automatic processing system comprising data transfer, quality control, product generation, and visualization. Each station is equipped with an internet-attached data logger, and the measurements are transferred as binary files via GPRS to the WegenerNet server at 1-hour intervals. The incoming raw data files of measured parameters, as well as several operating values of the data logger, are stored in a relational database (PostgreSQL). Next, the raw data pass the Quality Control System (QCS), in which the data are checked for technical and physical plausibility (e.g., sensor specifications, temporal and spatial variability). Taking the data quality (quality flag) into account, the Data Product Generator (DPG) produces weather and climate data products on various temporal scales (from 5 min to annual) for single stations and regular grids. Gridded data are derived by vertical scaling and squared inverse distance interpolation (1 km × 1 km and 0.01° × 0.01° grids). Both subsystems (QCS and DPG) are implemented in the programming language Python. For application purposes, the resulting data products are available via the bilingual (German/English) WegenerNet data portal (www.wegenernet.org). At this time, the main interface is still an online system in which MapServer is used to import spatial data through its database interface and to generate images in static geographic formats. However, a Java applet is additionally needed to display these images on the user's local host. Furthermore, station data are visualized as time series by the scripting language PHP. Since February 2010, the visualization of gridded data products has been a first step toward a new data portal based on OpenLayers. In this GIS framework, all geographic information (e.g., OpenStreetMap) is displayed with MapServer. Furthermore, the visualizations of all meteorological parameters are generated on the fly by a Python CGI script and transparently overlaid on the maps. Hence, station data and gridded data are visualized and further prepared for download in common data formats (csv, NetCDF). In conclusion, measured data and generated data products are provided with a data latency of less than 1-2 hours in standard operation (near real time). Following an introduction to the processing system along the lines above, resulting data products are presented online at the WegenerNet data portal.
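As a flavor of the plausibility checks such a QCS performs (the thresholds and flag scheme below are invented for illustration, not the operational WegenerNet limits):

```python
def qc_flags(values, lo, hi, max_step):
    """Flag each 5-min sample: 0 = ok, 1 = out of sensor range, 2 = spike."""
    flags = []
    prev = None
    for v in values:
        if not (lo <= v <= hi):
            flags.append(1)                  # violates sensor specification
        elif prev is not None and abs(v - prev) > max_step:
            flags.append(2)                  # implausible temporal jump
        else:
            flags.append(0)
        prev = v
    return flags

# Temperature-like series with one spike and one out-of-range value.
print(qc_flags([12.1, 12.3, 25.0, 12.4, -60.0], lo=-40, hi=50, max_step=5))
```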
NASA Astrophysics Data System (ADS)
Li, Y.; McDougall, T. J.
2016-02-01
Coarse-resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Some researchers have shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated using model data directly. In this study, data from a high-resolution ocean model have been used to estimate the accuracy of this approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine-grid fields. The transport calculated on the coarse grid is then compared to the transport on the original high-resolution grid accumulated over a corresponding number of grid boxes. Preliminary results show that the estimates on coarse-resolution grids roughly match the corresponding transports on high-resolution grids.
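The sub-sampling/box-car step and the quantity at stake can be sketched in a few lines (purely illustrative fields): the coarse grid misses the eddy part of the flux, i.e., the difference between the mean of the product and the product of the means:

```python
import numpy as np

def boxcar(field, k):
    """Average a 2-D field over k x k boxes (trimming any remainder)."""
    ny, nx = field.shape
    return field[:ny - ny % k, :nx - nx % k] \
        .reshape(ny // k, k, nx // k, k).mean(axis=(1, 3))

rng = np.random.default_rng(0)
v = rng.normal(size=(64, 64))              # velocity-like fine-grid field
T = rng.normal(size=(64, 64))              # temperature-like fine-grid field
k = 8                                      # coarsening factor

flux_fine = boxcar(v * T, k)               # mean of the product ("truth")
flux_coarse = boxcar(v, k) * boxcar(T, k)  # product of the means (coarse model)
print(np.abs(flux_fine - flux_coarse).mean())  # sub-grid (eddy) contribution
```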
Burton, Carmen A.; Belitz, Kenneth
2008-01-01
Ground-water quality in the approximately 3,800-square-mile Southeast San Joaquin Valley study unit (SESJ) was investigated from October 2005 through February 2006 as part of the Priority Basin Assessment Project of the Ground-Water Ambient Monitoring and Assessment (GAMA) Program. The GAMA Statewide Basin Assessment project was developed in response to the Ground-Water Quality Monitoring Act of 2001 and is being conducted by the California State Water Resources Control Board (SWRCB) in collaboration with the U.S. Geological Survey (USGS) and the Lawrence Livermore National Laboratory (LLNL). The SESJ study was designed to provide a spatially unbiased assessment of raw ground-water quality within SESJ, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 99 wells in Fresno, Tulare, and Kings Counties, 83 of which were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 16 of which were sampled to evaluate changes in water chemistry along ground-water flow paths or across alluvial fans (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine, and 1,2,3-trichloropropane), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon) and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at approximately 10 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control data resulted in censoring of less than 1 percent of the detections of constituents measured in ground-water samples. This study did not attempt to evaluate the quality of drinking water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, and (or) blended with other waters to maintain acceptable drinking-water quality. Regulatory thresholds apply to the treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and other health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH), and with thresholds established for aesthetic concerns by CDPH. Two VOCs were detected above health-based thresholds: 1,2-dibromo-3-chloropropane (DBCP) and benzene. DBCP was detected above the USEPA maximum contaminant level (MCL-US) in three grid wells and five understanding wells. Benzene was detected above the CDPH maximum contaminant level (MCL-CA) in one grid well. All pesticide detections were below health-based thresholds. Perchlorate was detected above its California maximum contaminant level in one grid well.
Nitrate was detected above the MCL-US in six samples from understanding wells, one of which was a public supply well. Two trace elements were detected above MCLs-US: arsenic and uranium. Arsenic was detected above the MCL-US in four grid wells and two understanding wells; uranium was detected above the MCL-US in one grid well and one understanding well. Gross alpha radiation was detected above the MCL-US in five samples, four of them from understanding wells, and uranium isotope activity was greater than the MCL-US in one understanding well.
Tuorila, H; Kramer, F M; Engell, D
2001-08-01
Non-restrained and restrained American women (N=157) chose a portion of a fat-free or regular-fat hot fudge, to be eaten on a portion of fat-free or regular-fat (depending on the experimental condition) ice cream. The subjects had tasted and rated samples of both fudge and ice cream earlier in the same session and, prior to the choice, they were informed of their own hedonic ratings of both fudges and of the respective fat contents ("fat-free" vs. "regular-fat"). The higher the hedonic difference (fat-free minus regular-fat) between hot fudge samples and the higher the individual restraint score, the more likely was the choice of the fat-free option. Also, the less hungry the subjects were prior to testing, the more likely they were to choose the fat-free version. On average, the hedonic difference between the hot fudge samples was roughly -0.5 for those choosing the fat-free option, while the corresponding value for subjects choosing the regular-fat version was -3 (9-point scale). The type of ice cream did not affect the choice. The data demonstrate the effects of a food (its hedonic quality), person (restrained status), and context (perceived hunger) on food choice. Copyright 2000 Academic Press.
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high-resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support of the acquired projections be reconstructed, thus precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high-resolution MBIR, we propose a multiresolution penalized weighted least squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, together with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels, and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either region for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling corresponds to a reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high-resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field of view. PMID:27694701
Landlab: A numerical modeling framework for evolving Earth surfaces from mountains to the coast
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Adams, J. M.; Tucker, G. E.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.
2016-02-01
Landlab is an open-source, user-friendly, component-based modeling framework for exploring the evolution of Earth's surface. Landlab itself is not a model. Instead, it is a computational framework that facilitates the development of numerical models of coupled earth surface processes. The Landlab Python library includes a gridding engine and process components, along with support functions for tasks such as reading in DEM data and input variables, setting boundary conditions, and plotting and outputting data. Each user of Landlab builds his or her own unique model. The first step in building a Landlab model is generally to initialize a grid, either regular (raster) or irregular (e.g., Delaunay or radial), and the process components. This initialization involves reading in relevant parameter values and data. The process components act on the grid to alter grid properties over time. For example, a component exists that can track the growth, death, and succession of vegetation over time. There are also several components that evolve surface elevation through processes such as fluvial sediment transport and linear diffusion, among others. Users can also build their own process components, taking advantage of existing Landlab functions such as those that identify grid connectivity and calculate gradients and flux divergence. The general nature of the framework makes it applicable to diverse environments - from bedrock rivers to a pile of sand - and to processes acting over a range of spatial and temporal scales. In this poster we illustrate how a user builds a model using Landlab and propose a number of ways in which Landlab can be applied in coastal environments - from dune migration to channelization of barrier islands. We seek input from the coastal community as to how the process component library can be expanded to explore the diverse phenomena that act to shape coastal environments.
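A minimal sketch of this workflow, assuming Landlab's documented RasterModelGrid and LinearDiffuser components (argument names may differ between Landlab versions):

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import LinearDiffuser

# Initialize a regular raster grid and attach an elevation field at nodes.
grid = RasterModelGrid((25, 25), xy_spacing=10.0)
z = grid.add_zeros("topographic__elevation", at="node")
z += np.random.rand(grid.number_of_nodes)       # rough initial surface

# A process component acts on the grid, evolving elevation over time.
diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)
for _ in range(100):
    diffuser.run_one_step(1000.0)               # advance 1000-unit time steps
```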
Earth Observations taken by the Expedition 35 Crew
2013-03-16
ISS035-E-005438 (16 March 2013) --- One of the Expedition 35 crew members on the International Space Station used a still camera with a 400 millimeter lens to record this nocturnal image of the Phoenix, Arizona area. Like many large urban areas of the central and western United States, the Phoenix metropolitan area is laid out along a regular grid of city blocks and streets. While visible during the day, this grid is most evident at night, when the pattern of street lighting is clearly visible from above – in the case of this photograph, from the low Earth orbit vantage point of the International Space Station. The urban grid form encourages growth of a city outwards along its borders, by providing optimal access to new real estate. Fueled by the adoption of widespread personal automobile use during the 20th century, the Phoenix metropolitan area today includes 25 other municipalities (many of them largely suburban and residential in character) linked by a network of surface streets and freeways. The image area includes parts of several cities in the metropolitan area including Phoenix proper (right), Glendale (center), and Peoria (left). While the major street grid is oriented north-south, the northwest-southeast oriented Grand Avenue cuts across it at image center. Grand Avenue is a major transportation corridor through the western metropolitan area; the lighting patterns of large industrial and commercial properties are visible along its length. Other brightly lit properties include large shopping centers, strip centers, and gas stations which tend to be located at the intersections of north-south and east-west trending streets. While much of the land area highlighted in this image is urbanized, there are several noticeably dark areas. The Phoenix Mountains at upper right are largely public park and recreational land. To the west (image lower left), agricultural fields provide a sharp contrast to the lit streets of neighboring residential developments. The Salt River channel appears as a dark ribbon within the urban grid at lower right.
Mathany, Timothy M.; Landon, Matthew K.; Shelton, Jennifer L.; Belitz, Kenneth
2013-01-01
Groundwater quality in the approximately 2,170-square-mile Western San Joaquin Valley (WSJV) study unit was investigated by the U.S. Geological Survey (USGS) from March to July 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The WSJV study unit was the twenty-ninth study unit to be sampled as part of the GAMA-PBP. The GAMA Western San Joaquin Valley study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated groundwater quality throughout California. The primary aquifer system is defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the WSJV study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the WSJV study unit, groundwater samples were collected from 58 wells in 2 study areas (Delta-Mendota subbasin and Westside subbasin) in Stanislaus, Merced, Madera, Fresno, and Kings Counties. Thirty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and 19 wells were selected to aid in the understanding of aquifer-system flow and related groundwater-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], low-level fumigants, and pesticides and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally occurring inorganic constituents (trace elements, nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], alkalinity, total arsenic and iron [unfiltered] and arsenic, chromium, and iron species [filtered]). Isotopic tracers (stable isotopes of hydrogen, oxygen, and boron in water, stable isotopes of nitrogen and oxygen in dissolved nitrate, stable isotopes of sulfur in dissolved sulfate, isotopic ratios of strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance), dissolved standard gases (methane, carbon dioxide, nitrogen, oxygen, and argon), and dissolved noble gases (argon, helium-4, krypton, neon, and xenon) were measured to help identify sources and ages of sampled groundwater. In total, 245 constituents and 8 water-quality indicators were measured. Quality-control samples (blanks, replicates, or matrix spikes) were collected at 16 percent of the wells in the WSJV study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples all were within acceptable limits of variability. 
Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 87 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and with non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 39 grid wells were detected at concentrations less than health-based benchmarks. Detections of organic and special-interest constituents from grid wells sampled in the WSJV study unit also were less than health-based benchmarks. In total, VOCs were detected in 12 of the 39 grid wells sampled (approximately 31 percent), pesticides and pesticide degradates were detected in 9 grid wells (approximately 23 percent), and perchlorate was detected in 15 grid wells (approximately 38 percent). Trace elements, major and minor ions, and nutrients were sampled at 39 grid wells; most concentrations were less than health-based benchmarks. Exceptions include two detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L), 20 detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L, 2 detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L, 1 detection of selenium greater than the MCL-US of 50 μg/L, 2 detections of strontium greater than the HAL-US of 4,000 μg/L, and 3 detections of nitrate greater than the MCL-US of 10 milligrams per liter (mg/L). Results for inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, sulfate, and TDS) showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in five grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 16 grid wells. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 milligrams per liter (mg/L) were detected in 14 grid wells, and concentrations in 5 of these wells also were greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in 21 grid wells, and concentrations in 13 of these wells also were greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 36 grid wells, and concentrations in 20 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.
NASA Astrophysics Data System (ADS)
Singh, Gurjeet; Panda, Rabindra K.; Mohanty, Binayak P.; Jana, Raghavendra B.
2016-05-01
Strategic ground-based sampling of soil moisture across multiple scales is necessary to validate remotely sensed quantities such as NASA's Soil Moisture Active Passive (SMAP) product. In the present study, in-situ soil moisture data were collected at two nested scale extents (0.5 km and 3 km) to understand the trend of soil moisture variability across these scales. This ground-based soil moisture sampling was conducted in the 500 km2 Rana watershed situated in eastern India. The study area has a sub-humid, sub-tropical climate with an average annual rainfall of about 1456 mm. Three 3 km × 3 km square grids were sampled intensively once a day at 49 locations each, at a spacing of 0.5 km. These intensive sampling locations were selected on the basis of differences in topography, soil properties, and vegetation characteristics. In addition, measurements were made at 9 locations around each intensive sampling grid at 3 km spacing to cover a 9 km × 9 km square grid. Intensive fine-scale soil moisture sampling as well as coarser-scale sampling were carried out using both impedance probes and gravimetric analyses in the study watershed. The ground-based soil moisture sampling was conducted during the day, concurrent with the SMAP descending overpass. An analysis of soil moisture spatial variability in terms of the areal mean soil moisture and the statistics of higher-order moments, i.e., the standard deviation and the coefficient of variation, is presented. Results showed that the standard deviation and coefficient of variation of measured soil moisture decreased with increasing mean soil moisture at each extent scale.
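The reported statistics reduce to a short computation per extent scale (the numbers below are synthetic, not the Rana watershed data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic volumetric soil moisture samples at the two nested extents.
grids = {
    "0.5 km grid (49 pts)": rng.beta(4, 8, 49) * 0.5,
    "3 km grid (9 pts)":    rng.beta(4, 8, 9) * 0.5,
}
for name, sm in grids.items():
    mean, sd = sm.mean(), sm.std(ddof=1)
    print(f"{name}: mean={mean:.3f}, sd={sd:.3f}, CV={sd / mean:.2f}")
```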
Wetherbee, Gregory A.; Rhodes, Mark F.
2013-01-01
The U.S. Geological Survey Branch of Quality Systems operates the Precipitation Chemistry Quality Assurance project (PCQA) to provide independent, external quality assurance for the National Atmospheric Deposition Program (NADP). NADP is composed of five monitoring networks that measure the chemical composition of precipitation and ambient air. PCQA and the NADP Program Office completed five short-term studies to investigate the effects of equipment performance on National Trends Network (NTN) and Mercury Deposition Network (MDN) data quality: sample evaporation from NTN collectors; sample volume and mercury loss from MDN collectors; mercury adsorption to MDN collector glassware; grid-type precipitation sensors for precipitation collectors; and the effects of an NTN collector wind shield on sample catch efficiency. Sample-volume evaporation from an NTN Aerochem Metrics (ACM) collector ranged from 1.1 to 33 percent, with a median of 4.7 percent. The results suggest that weekly NTN sample evaporation is small relative to sample volume. MDN sample evaporation occurs predominantly in western and southern regions of the United States and more frequently with modified ACM collectors than with N-CON Systems Inc. collectors, due to differences in airflow through the collectors. Variations in mercury concentrations, measured to be as high as 47.5 percent per week with a median of 5 percent, are associated with MDN sample-volume loss. Small amounts of mercury are also lost from MDN samples by adsorption to collector glassware, irrespective of collector type. MDN 11-grid sensors were found to open collectors sooner, keep them open longer, and cause fewer lid cycles than NTN 7-grid sensors. Wind-shielding an NTN ACM collector resulted in collection of larger quantities of precipitation while also preserving sample integrity.
Bryophyte colonization history of the virgin volcanic island Surtsey, Iceland
NASA Astrophysics Data System (ADS)
Ingimundardóttir, G. V.; Weibull, H.; Cronberg, N.
2014-03-01
The island Surtsey was formed in a volcanic eruption south of Iceland in 1963-1967 and has since then been protected and monitored by scientists. The first two moss species were found on Surtsey as early as 1967, and several new bryophyte species were discovered every year until 1973, when regular sampling ended. Systematic bryophyte inventories in a grid of 100 m × 100 m quadrats were made in 1971 and 1972. The number of observed species doubled between years, with 36 species found in 1971 and 72 species in 1972. Here we report results from an inventory in 2008, when every other quadrat of the grid was searched for bryophytes. Despite lower sampling intensity than in 1972, distributional expansion and contraction of earlier colonists was revealed, as well as the presence of new colonists. A total of 38 species were discovered, 15 of which were not encountered in 1972 and eight of which had never been reported from Surtsey before (Bryum elegans, Ceratodon heterophyllus, Didymodon rigidulus, Eurhynchium praelongum, Schistidium confertum, S. papillosum, Tortula hoppeana and T. muralis). Habitat loss due to erosion and reduced thermal activity, in combination with successional vegetation changes, is likely to have played a significant role in the decline of some bryophyte species which were abundant in 1972 (Leptobryum pyriforme, Schistidium apocarpum coll., Funaria hygrometrica, Philonotis spp., Pohlia spp., Schistidium strictum, Sanionia uncinata), while others have continued to thrive and expand (e.g. Schistidium maritimum, Racomitrium lanuginosum, R. ericoides, R. fasciculare and Bryum argenteum). Some species (especially Bryum spp.) benefit from the formation of new habitats, such as grassland within a gull colony, which was established in 1984. Several newcomers rarely produce sporophytes in Iceland and are unlikely to have been dispersed by airborne spores. They are more likely to have been introduced to Surtsey by seagulls in the form of vegetative fragments or dispersal agents (Bryum elegans, Didymodon rigidulus, Eurhynchium praelongum, Ceratodon heterophyllus and Ulota phyllantha). The establishment of the gull colony also means that leakage of nutrients from the nesting area is, at least locally, downplaying the relative importance of nitrogen fixation by cyanobacteria growing in bryophyte shoots.
Bryophyte colonization history of the virgin volcanic island Surtsey, Iceland
NASA Astrophysics Data System (ADS)
Ingimundardóttir, G. V.; Weibull, H.; Cronberg, N.
2014-08-01
The island Surtsey was formed in a volcanic eruption south of Iceland in 1963-1967 and has since then been protected and monitored by scientists. The first two moss species were found on Surtsey as early as 1967, and several new bryophyte species were discovered every year until 1973, when regular sampling ended. Systematic bryophyte inventories in a grid of 100 m × 100 m quadrats were made in 1971 and 1972: the number of observed species doubled, with 36 species found in 1971 and 72 species in 1972. Here we report results from an inventory in 2008, when every other quadrat of the grid was searched for bryophytes. Despite lower sampling intensity than in 1972, distributional expansion and contraction of earlier colonists was revealed, as well as the presence of new colonists. A total of 38 species were discovered, 15 of which were not encountered in 1972 and eight of which had never been reported from Surtsey before (Bryum elegans, Ceratodon heterophyllus, Didymodon rigidulus, Eurhynchium praelongum, Schistidium confertum, S. papillosum, Tortula hoppeana and T. muralis). Habitat loss due to erosion and reduced thermal activity, in combination with successional vegetation changes, is likely to have played a significant role in the decline of some bryophyte species which were abundant in 1972 (Leptobryum pyriforme, Schistidium apocarpum coll., Funaria hygrometrica, Philonotis spp., Pohlia spp., Schistidium strictum, Sanionia uncinata), while others have continued to thrive and expand (e.g. Schistidium maritimum, Racomitrium lanuginosum, R. ericoides, R. fasciculare and Bryum argenteum). Some species (especially Bryum spp.) benefit from the formation of new habitats, such as grassland within a gull colony, which was established in 1984. Several newcomers rarely produce sporophytes in Iceland and are unlikely to have been dispersed by airborne spores. They are more likely to have been introduced to Surtsey by seagulls in the form of vegetative fragments or dispersal agents (Bryum elegans, Didymodon rigidulus, Eurhynchium praelongum, Ceratodon heterophyllus and Ulota phyllantha). The establishment of the gull colony also means that leakage of nutrients from the nesting area is, at least locally, downplaying the relative importance of nitrogen fixation by cyanobacteria growing in bryophyte shoots.
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... § 761.125(c) (2) through (4). Using its best engineering judgment, EPA may sample a statistically valid random or grid sampling technique, or both. When using engineering judgment or random “grab” samples, EPA...
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sampling scheme and the guidance document are available on EPA's PCB Web site at http://www.epa.gov/pcb, or... § 761.125(c) (2) through (4). Using its best engineering judgment, EPA may sample a statistically valid random or grid sampling technique, or both. When using engineering judgment or random “grab” samples, EPA...
Okubo, Torahiko; Osaki, Takako; Nozaki, Eriko; Uemura, Akira; Sakai, Kouhei; Matushita, Mizue; Matsuo, Junji; Nakamura, Shinji; Kamiya, Shigeru; Yamaguchi, Hiroyuki
2017-01-01
Although human occupancy is a source of airborne bacteria, the role of walkers in shaping bacterial communities in built environments is poorly understood. Therefore, we visualized the impact of walker occupancy combined with other factors (temperature, humidity, atmospheric pressure, dust particles) on airborne bacterial features in the Sapporo underground pedestrian space in Sapporo, Japan. Air samples (n = 18; 4,800 L each) were collected from 8:00 h to 20:00 h on 3 days (regular sampling) and in the early morning / late night (5:50 h to 7:50 h / 22:15 h to 24:45 h) on one day (baseline sampling), and the numbers of CFUs (colony-forming units), OTUs (operational taxonomic units), and other factors were determined. The results revealed that temperature, humidity, and atmospheric pressure changed with the weather. The number of walkers increased greatly in the morning and evening on each regular sampling day, although total walker numbers did not differ significantly among regular sampling days. A slight increase in small dust particles (0.3-0.5 μm) was observed on the days with higher temperature, regardless of regular or baseline sampling. During regular sampling, CFU levels varied irregularly among days, and OTUs of 22 phylum types were observed, with the majority being from Firmicutes or Proteobacteria (γ-), including Staphylococcus sp. derived from human individuals. The data obtained from regular sampling revealed that, although no direct interaction of walker occupancy and airborne CFU and OTU features was observed upon Pearson's correlation analysis, cluster analysis indicated an obvious lineage consisting of walker occupancy, CFU numbers, OTU types, small dust particles, and seasonal factors (including temperature and humidity). Meanwhile, during baseline sampling, both walker and CFU numbers were similarly minimal. Taken together, the results revealed a positive correlation of walker occupancy with airborne bacteria that increased with increases in temperature and humidity in the presence of small airborne particles. Moreover, the results indicated that small dust particles at high temperature and humidity may be a crucial factor responsible for stabilizing the bacteria released from walkers in built environments. The findings presented herein advance our knowledge and understanding of the relationship between humans and bacterial communities in built environments, and will help improve public health in urban communities.
Communication requirements of sparse Cholesky factorization with nested dissection ordering
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1989-01-01
Load distribution schemes for minimizing the communication requirements of the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems are presented. The total data traffic in factoring an n x n sparse symmetric positive definite matrix representing an n-vertex regular two-dimensional grid graph using n^α (α ≤ 1) processors is shown to be O(n^(1+α/2)). It is O(n) when n^α (α ≥ 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal.
Generalized Sheet Transition Condition FDTD Simulation of Metasurface
NASA Astrophysics Data System (ADS)
Vahabzadeh, Yousef; Chamanara, Nima; Caloz, Christophe
2018-01-01
We propose an FDTD scheme based on Generalized Sheet Transition Conditions (GSTCs) for the simulation of polychromatic, nonlinear, and space-time varying metasurfaces. This scheme consists in placing the metasurface at a virtual nodal plane introduced between the regular nodes of the staggered Yee grid and inserting the fields determined by the GSTCs in this plane into the standard FDTD algorithm. The resulting update equations are an elegant generalization of the standard FDTD equations. Indeed, in the limiting case of a null surface susceptibility ($\chi_\text{surf}=0$), they reduce to the latter, while in the next limiting case of a time-invariant metasurface $\chi_\text{surf}$ …
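For reference, a minimal 1-D Yee-grid update loop in normalized units (the textbook baseline that the GSTC update equations generalize, not the authors' metasurface-node modification):

```python
import numpy as np

nz, nt = 200, 400
S = 0.5                        # Courant number c*dt/dz
Ex = np.zeros(nz)              # E-field at integer grid nodes
Hy = np.zeros(nz - 1)          # H-field staggered half a cell (Yee grid)

for n in range(nt):
    Hy += S * (Ex[1:] - Ex[:-1])                # update H from the curl of E
    Ex[1:-1] += S * (Hy[1:] - Hy[:-1])          # update E from the curl of H
    Ex[20] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
```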
A Grid and Group Explanation of Emotional Intelligence in Selected Belizean Primary Schools
ERIC Educational Resources Information Center
Coc, Simeon
2012-01-01
Scope and Method of Study: The purpose of this qualitative study was to use grid and group theory to investigate and explain the contextual meaning and manifestation of Emotional Intelligence (EI) among faculty members in two Belizean primary schools. Purposive sampling was used to select five faculty members from each of the two schools to…
Raster profile development for the spatial data transfer standard
Szemraj, John A.
1993-01-01
The Spatial Data Transfer Standard (SDTS), recently approved as Federal Information Processing Standard (FIPS) Publication 173, is designed to transfer various types of spatial data. Implementing all of the standard's options at one time is impractical. Profiles, or limited subsets of the SDTS, are the mechanisms by which the standard will be implemented. The development of a raster profile is being coordinated by the U.S. Geological Survey's (USGS) SDTS Task Force. This raster profile is intended to accommodate digital georeferenced image data and regularly spaced, georeferenced gridded data. The USGS's digital elevation models (DEMs) and digital orthophoto quadrangles (DOQs), the National Oceanic and Atmospheric Administration's (NOAA) advanced very high resolution radiometer (AVHRR) and Landsat data, and the National Aeronautics and Space Administration's (NASA) Earth Observing System (EOS) data are among the candidate data sets for this profile. Other raster profiles, designed to support nongeoreferenced and other types of "raw" sensor data, will be considered in the future. As with the Topological Vector Profile (TVP) for the SDTS, development of the raster profile includes designing a prototype profile, testing the prototype using sample data sets, and finally requesting and receiving FIPS approval.
Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo
NASA Astrophysics Data System (ADS)
Kosempa, M.; Chambers, D. P.
2016-02-01
Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids and compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
Unstable bidimensional grids of liquid filaments: Drop pattern after breakups
NASA Astrophysics Data System (ADS)
Diez, Javier; Cuellar, Ingrith; Ravazzoli, Pablo; Gonzalez, Alejandro
2017-11-01
A rectangular grid formed by liquid filaments on a partially wetting substrate evolves through a series of breakups, leading to arrays of drops with different shapes distributed in a rather regular bidimensional pattern. Our study is focused on the configuration produced when two long parallel filaments of silicone oil, placed upon a glass substrate previously coated with a fluorinated solution, are crossed perpendicularly by another pair of long parallel filaments. A remarkable feature of this kind of grid is that there are two qualitatively different types of drops. While one set is formed at the crossing points, the rest are a consequence of the breakup of the shorter filaments formed between the crossings. Here, we analyze the main geometric features of all types of drops, such as the shape of the footprint and the contact angle distribution along the drop periphery. The formation of a series of short filaments with similar geometric and physical properties allows us to run quasi-identical experiments simultaneously to study the subsequent breakups. We develop a simple hydrodynamic model to predict the number of drops that results from a filament of given initial length and width. This model is able to yield the length intervals corresponding to a small number of drops. We acknowledge support from CONICET-Argentina (Grant PIP 844/2012) and ANPCyT-Argentina (Grant PICT 931/2012).
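The last point can be pictured with a toy counting argument (not the authors' hydrodynamic model; the breakup wavelength prefactor C is an assumed order-of-magnitude value): if a filament of width w breaks with a characteristic wavelength lam ~ C*w, a filament of length L yields roughly round(L/lam) drops, so the interval of lengths giving exactly n drops is about [(n - 0.5)*lam, (n + 0.5)*lam):

```python
def drop_count(L, w, C=9.0):
    """Toy estimate: number of drops from a filament of length L, width w."""
    lam = C * w                  # assumed characteristic breakup wavelength
    return max(1, round(L / lam))

for L in (5.0, 10.0, 20.0, 40.0):   # lengths in mm, width 1 mm
    print(L, drop_count(L, 1.0))
```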
Micro/Nano-scale Strain Distribution Measurement from Sampling Moiré Fringes.
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi
2017-05-23
This work describes the measurement procedure and principles of a sampling moiré technique for full-field micro/nano-scale deformation measurements. The developed technique can be performed in two ways: using the reconstructed multiplication moiré method or the spatial phase-shifting sampling moiré method. When the specimen grid pitch is around 2 pixels, 2-pixel sampling moiré fringes are generated to reconstruct a multiplication moiré pattern for a deformation measurement. Both the displacement and strain sensitivities are twice as high as in the traditional scanning moiré method in the same wide field of view. When the specimen grid pitch is around or greater than 3 pixels, multi-pixel sampling moiré fringes are generated, and a spatial phase-shifting technique is combined for a full-field deformation measurement. The strain measurement accuracy is significantly improved, and automatic batch measurement is easily achievable. Both methods can measure the two-dimensional (2D) strain distributions from a single-shot grid image without rotating the specimen or scanning lines, as in traditional moiré techniques. As examples, the 2D displacement and strain distributions, including the shear strains of two carbon fiber-reinforced plastic specimens, were measured in three-point bending tests. The proposed technique is expected to play an important role in the non-destructive quantitative evaluations of mechanical properties, crack occurrences, and residual stresses of a variety of materials.
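A one-dimensional sketch of the spatial phase-shifting variant (synthetic grating; the pitch, T, and sign conventions are illustrative assumptions): down-sampling every T-th pixel with T different start offsets yields T phase-shifted moiré fringes, whose per-pixel phase follows from a discrete Fourier sum:

```python
import numpy as np

T = 4                                     # sampling pitch in pixels
x = np.arange(1024)
pitch = 4.04                              # specimen grating pitch (~T pixels)
I = 1 + np.cos(2 * np.pi * x / pitch)     # recorded grating intensity

# T phase-shifted moire fringes, each interpolated back to full length.
moire = np.array([np.interp(x, x[k::T], I[k::T]) for k in range(T)])

# Per-pixel moire phase from the T-step phase-shifting formula.
k = np.arange(T)[:, None]
phase = -np.arctan2((moire * np.sin(2 * np.pi * k / T)).sum(0),
                    (moire * np.cos(2 * np.pi * k / T)).sum(0))
# The gradient of the unwrapped phase is proportional to strain.
```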
SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jinfeng; Cao, Ruifen; Dai, Yumei
Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model into the DPM code, extending the ability of DPM to handle arbitrary incident angles and irregular, inhomogeneous fields. Methods: A virtual source and an energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of an irregular, inhomogeneous irradiation field. The irradiation source model of the accelerator was substituted by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was decided by the grid intensity, and its direction by the combination of the virtual source and the emitter's emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminant electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data; the differences were acceptable (<2% inside the field, 2–3 mm in the penumbra). The dose calculation of the irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were both within the range of permitted error. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000); National Natural Science Foundation of China (81101132)
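For context on the 95.1% figure: a gamma passing rate scores each measured point against nearby calculated doses under combined dose-difference and distance-to-agreement criteria. A brute-force sketch for matched 2-D dose grids (global normalization; edge wrap-around from np.roll is ignored for brevity; a generic illustration, not the evaluation code used in the study):

import numpy as np

def gamma_pass_rate(measured, calculated, spacing_mm,
                    dd=0.03, dta_mm=3.0, search_mm=9.0):
    # Global gamma: for each point, minimize the combined dose/distance
    # metric over calculated points within the search radius.
    r = int(round(search_mm / spacing_mm))
    dd_abs = dd * calculated.max()              # global normalization
    gamma2 = np.full(measured.shape, np.inf)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(calculated, dy, 0), dx, 1)
            dose2 = ((measured - shifted) / dd_abs) ** 2
            dist2 = (dy * dy + dx * dx) * spacing_mm ** 2 / dta_mm ** 2
            gamma2 = np.minimum(gamma2, dose2 + dist2)
    return 100.0 * np.mean(np.sqrt(gamma2) <= 1.0)

m = np.random.default_rng(0).normal(1.0, 0.02, (60, 60))   # toy "measured"
print(gamma_pass_rate(m, np.ones((60, 60)), spacing_mm=1.0))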
Low-cost cryo-light microscopy stage fabrication for correlated light/electron microscopy.
Carlson, David B; Evans, James E
2011-06-05
The coupling of cryo-light microscopy (cryo-LM) and cryo-electron microscopy (cryo-EM) offers a number of advantages for understanding cellular dynamics and ultrastructure. First, cells can be imaged in a near-native environment for both techniques. Second, due to the vitrification process, samples are preserved by rapid physical immobilization rather than slow chemical fixation. Third, imaging the same sample with both cryo-LM and cryo-EM provides correlation of data from a single cell, rather than a comparison of "representative samples". While these benefits are well known from prior studies, the widespread use of correlative cryo-LM and cryo-EM remains limited due to the expense and complexity of buying or building a suitable cryogenic light microscopy stage. Here we demonstrate the assembly and use of an inexpensive cryogenic stage that can be fabricated in any lab for less than $40 with parts found at local hardware and grocery stores. This cryo-LM stage is designed for use with reflected light microscopes that are fitted with long-working-distance air objectives. For correlative cryo-LM and cryo-EM studies, we adapt the use of carbon-coated standard 3-mm cryo-EM grids as specimen supports. After adsorbing the sample to the grid, previously established protocols for vitrifying the sample and transferring/handling the grid are followed to permit multi-technique imaging. As a result, this setup allows any laboratory with a reflected light microscope to have access to direct correlative imaging of frozen hydrated samples.
Global Discrete Artificial Boundary Conditions for Time-Dependent Wave Propagation
NASA Astrophysics Data System (ADS)
Ryaben'kii, V. S.; Tsynkov, S. V.; Turchaninov, V. I.
2001-12-01
We construct global artificial boundary conditions (ABCs) for the numerical simulation of wave processes on unbounded domains using a special nondeteriorating algorithm that has been developed previously for the long-term computation of wave-radiation solutions. The ABCs are obtained directly for the discrete formulation of the problem; in so doing, neither a rational approximation of “nonreflecting kernels” nor discretization of the continuous boundary conditions is required. The extent of temporal nonlocality of the new ABCs appears fixed and limited; in addition, the ABCs can handle artificial boundaries of irregular shape on regular grids with no fitting/adaptation needed and no accuracy loss induced. The nondeteriorating algorithm, which is the core of the new ABCs, is inherently three-dimensional, it guarantees temporally uniform grid convergence of the solution driven by a continuously operating source on arbitrarily long time intervals and provides unimprovable linear computational complexity with respect to the grid dimension. The algorithm is based on the presence of lacunae, i.e., aft fronts of the waves, in wave-type solutions in odd-dimensional spaces. It can, in fact, be built as a modification on top of any consistent and stable finite-difference scheme, making its grid convergence uniform in time and at the same time keeping the rate of convergence the same as that of the unmodified scheme. In this paper, we delineate the construction of the global lacunae-based ABCs in the framework of a discretized wave equation. The ABCs are obtained for the most general formulation of the problem that involves radiation of waves by moving sources (e.g., radiation of acoustic waves by a maneuvering aircraft). We also present systematic numerical results that corroborate the theoretical design properties of the ABC algorithm.
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were 2%-5% higher on average than the respective %/DTA composite analysis (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower on average than with global maximum normalization (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors as well. Conclusions: Dose plane QA analysis can be greatly affected by the choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of the calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density.
Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
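The repositioning experiment is straightforward to emulate. A toy sketch, with a synthetic high-density pass/fail map standing in for the EPID data and a 10-pixel spacing standing in for the 1 detector/cm² grid; the spread of pass rates across shifts is the sampling variability that the binomial model is meant to capture:

import numpy as np

rng = np.random.default_rng(0)
passmap = rng.random((200, 200)) < 0.95   # synthetic high-density pass/fail

rates = []
for _ in range(100):                      # 100 simulated grid positions
    oy, ox = rng.integers(0, 10, size=2)  # random shift of the sparse grid
    rates.append(100.0 * passmap[oy::10, ox::10].mean())
print(np.mean(rates), np.std(rates))      # spread = sampling uncertainty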
ERIC Educational Resources Information Center
Chiner, Esther; Cardona, Maria Cristina
2013-01-01
This study examined regular education teachers' perceptions of inclusion in elementary and secondary schools in Spain and how these perceptions may differ depending on teaching experience, skills, and the availability of resources and supports. Stratified random sampling procedures were used to draw a representative sample of 336 general education…
An irregular lattice method for elastic wave propagation
NASA Astrophysics Data System (ADS)
O'Brien, Gareth S.; Bean, Christopher J.
2011-12-01
Lattice methods are a class of numerical schemes that represent a medium as a connection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's law, including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation in which this lattice restriction is removed. The methodology is outlined, and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body and with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane wave analysis, showing that the scheme is more dispersive than a regular lattice method; the computational costs of using an irregular lattice are therefore higher. However, by removing the regular lattice structure, the anisotropic nature of fracture propagation in such methods can be removed.
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
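A minimal sketch of the preferred Jeffreys interval: it is just a pair of quantiles of a Beta(k + 1/2, n − k + 1/2) distribution, with the usual endpoint adjustments at k = 0 and k = n (the numbers in the example are hypothetical):

from scipy.stats import beta

def jeffreys_interval(k, n, conf=0.95):
    # Equal-tailed quantiles of the Beta(k + 1/2, n - k + 1/2) posterior
    # under the Jeffreys prior; endpoints pinned at 0 and 1 as usual.
    a = (1.0 - conf) / 2.0
    lo = beta.ppf(a, k + 0.5, n - k + 0.5) if k > 0 else 0.0
    hi = beta.ppf(1.0 - a, k + 0.5, n - k + 0.5) if k < n else 1.0
    return lo, hi

print(jeffreys_interval(7, 30))   # e.g. high concentrations in 7 of 30 cells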
Meter Scale Heterogeneities in the Oceanic Mantle Revealed in Ophiolites Peridotites
NASA Astrophysics Data System (ADS)
Haller, M. B.; Walker, R. J.; Day, J. M.; O'Driscoll, B.; Daly, J. S.
2016-12-01
Mid-ocean ridge basalts and other oceanic mantle-derived rocks do not capture the depleted end-member isotopic compositions present in oceanic peridotites. Ophiolites are especially useful in interrogating this issue, as field-based observations can be paired with geochemical investigations over a wide range of geologic time. Grid sampling methods (3 m × 3 m) at the 497 Ma Leka Ophiolite Complex (LOC), Norway, and the 1.95 Ga Jormua Ophiolite Complex (JOC), Finland, offer an opportunity to study mantle domains at the meter and kilometer scales, and over a one-billion-year timespan. The lithology of each locality predominantly comprises harzburgite, hosting layers and lenses of dunite and pyroxenite. Here, we combine highly siderophile element (HSE) and Re-Os isotopic analyses of these rocks with major and trace element measurements. Harzburgites at individual LOC grid sites show variations in γOs(497 Ma) (-2.1 to +2.2) at the meter scale. Analyses of adjacent, more radiogenic dunites within the same LOC grid reveal that dunites may have γOs values either similar to or different from those of their host harzburgite, implying that interactions between spatially associated rock types may differ at the meter scale. Averaged γOs values for the mantle sections of two LOC grid sites (+1.3 and -0.4) separated by 5 km indicate km-scale heterogeneity in the convecting upper mantle. Pd/Ir and Ru/Ir ratios are scattered and do not obviously correlate with γOs values. Analyses of pyroxenites within LOC grid sections, thin-section observations of relict olivine grains, and whole-rock major and trace element data are also examined to shed light on the causes of the isotopic heterogeneities in the LOC. Data from JOC grid sampling will be presented as well.
NASA Astrophysics Data System (ADS)
Han, Hao; Gao, Hao; Xing, Lei
2017-08-01
Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... developed by the Midwest Research Institute (MRI) for use in enforcement inspections: “Verification of PCB... the MRI report “Field Manual for Grid Sampling of PCB Spill Sites to Verify Cleanup.” Both the MRI...
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... developed by the Midwest Research Institute (MRI) for use in enforcement inspections: “Verification of PCB... the MRI report “Field Manual for Grid Sampling of PCB Spill Sites to Verify Cleanup.” Both the MRI...
40 CFR 761.130 - Sampling requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... developed by the Midwest Research Institute (MRI) for use in enforcement inspections: “Verification of PCB... the MRI report “Field Manual for Grid Sampling of PCB Spill Sites to Verify Cleanup.” Both the MRI...
Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker
2016-01-01
Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...
NASA Astrophysics Data System (ADS)
Benjamin, Christopher J.; Wright, Kyle J.; Bolton, Scott C.; Hyun, Seok-Hee; Krynski, Kyle; Grover, Mahima; Yu, Guimei; Guo, Fei; Kinzer-Ursem, Tamara L.; Jiang, Wen; Thompson, David H.
2016-10-01
We report the fabrication of transmission electron microscopy (TEM) grids bearing graphene oxide (GO) sheets that have been modified with Nα, Nα-dicarboxymethyllysine (NTA) and deactivating agents to block non-selective binding between GO-NTA sheets and non-target proteins. The resulting GO-NTA-coated grids with these improved antifouling properties were then used to isolate His6-T7 bacteriophage and His6-GroEL directly from cell lysates. To demonstrate the utility and simplified workflow enabled by these grids, we performed cryo-electron microscopy (cryo-EM) of His6-GroEL obtained from clarified E. coli lysates. Single particle analysis produced a 3D map with a gold standard resolution of 8.1 Å. We infer from these findings that TEM grids modified with GO-NTA are a useful tool that reduces background and improves both the speed and simplicity of biological sample preparation for high-resolution structure elucidation by cryo-EM.
The 3DGRAPE book: Theory, users' manual, examples
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1989-01-01
A users' manual for a new three-dimensional grid generator called 3DGRAPE is presented. The program, written in FORTRAN, is capable of making zonal (blocked) computational grids in or about almost any shape. Grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. The smoothness for which elliptic methods are known is seen here, including smoothness across zonal boundaries. An introduction giving the history, motivation, capabilities, and philosophy of 3DGRAPE is presented first. Then follows a chapter on the program itself. The input is then described in detail. A chapter on reading the output and debugging follows. Three examples are then described, including sample input data and plots of output. Last is a chapter on the theoretical development of the method.
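The elliptic idea behind 3DGRAPE can be illustrated in two dimensions. A drastically simplified sketch: with the inhomogeneous (control) terms set to zero, the Poisson system reduces to Laplace's equations, relaxed here by simple point-iterative sweeps; 3DGRAPE's automatic control terms for boundary orthogonality and cell height, its zonal capability, and the third dimension are all omitted.

import numpy as np

def laplace_grid(x, y, iters=3000):
    # Relax interior nodes toward the average of their four neighbours,
    # the simplest iterative solution of Laplace's equations for the
    # physical coordinates of the grid.
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# single-zone example: a channel with a wavy bottom wall
ni, nj = 17, 33
x, y = np.zeros((ni, nj)), np.zeros((ni, nj))
s, t = np.linspace(0, 1, nj), np.linspace(0, 1, ni)
x[0, :], y[0, :] = s, 0.3 * np.sin(np.pi * s)   # bottom boundary
x[-1, :], y[-1, :] = s, 1.0                     # top boundary
x[:, 0], y[:, 0] = 0.0, t                       # left boundary
x[:, -1], y[:, -1] = 1.0, t                     # right boundary
xg, yg = laplace_grid(x, y)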
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
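For reference, once N̂ and a boundary-strip width are chosen, the grid-based estimator reduces to a one-line computation. A sketch assuming one common effective-area convention (a strip of width equal to the full MMDM around the square grid, with quarter-circle corners); the convention choice and the numbers are illustrative, not those of the study:

import math

def grid_density_per_ha(n_hat, grid_side_m, mmdm_m):
    # D = N / A(W): trapping grid plus a boundary strip of width W = MMDM,
    # corners counted as quarter circles.
    area_m2 = grid_side_m ** 2 + 4 * grid_side_m * mmdm_m \
              + math.pi * mmdm_m ** 2
    return n_hat / (area_m2 / 10_000.0)         # animals per hectare

print(grid_density_per_ha(42, 100.0, 25.0))     # hypothetical numbers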
1994-04-01
...tape and a compass, and each grid node was marked with a wooden stake or fluorescent orange paint. At least one point on the grid was surveyed so the
Patrick L. Zimmerman; Greg C. Liknes
2010-01-01
Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
NASA Astrophysics Data System (ADS)
Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna
2018-02-01
Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed; they differ in their data requirements, the approaches used to derive them, and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and the error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information at the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
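The weighting described — an analytically optimal, sum-to-one linear combination that accounts for error covariance — can be sketched in a few lines. This is a generic version of the idea, not the authors' code; `products` holds each gridded product sampled at the tower locations and `obs` the corresponding FLUXNET observations:

import numpy as np

def optimal_weights(products, obs):
    # Minimize E[(w @ products - obs)^2] subject to sum(w) = 1:
    # w = A^{-1} 1 / (1' A^{-1} 1), with A the error covariance matrix.
    err = products - obs                       # per-product error series
    A = err @ err.T / err.shape[1]
    ones = np.ones(A.shape[0])
    w = np.linalg.solve(A, ones)
    return w / (ones @ w)

rng = np.random.default_rng(1)
truth = rng.normal(size=500)                   # stand-in for site ET
prods = truth + rng.normal(scale=[[0.3], [0.5], [0.8]], size=(3, 500))
w = optimal_weights(prods, truth)
print(w, ((w @ prods - truth) ** 2).mean())    # beats any single product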
Predictors of College Retention and Performance between Regular and Special Admissions
ERIC Educational Resources Information Center
Kim, Johyun
2015-01-01
This predictive correlational research study examined the effect of cognitive, demographic, and socioeconomic variables as predictors of regular and special admission students' first-year GPA and retention among a sample of 7,045 students. Findings indicated high school GPA and ACT scores were the two most effective predictors of regular and…
NASA Astrophysics Data System (ADS)
Bogunović, Igor; Pereira, Paulo; Šeput, Miranda
2016-04-01
Soil organic carbon (SOC), pH, available phosphorus (P), and potassium (K) are among the most important factors in soil fertility. These soil parameters are highly variable in space and time, with implications for crop production. The aim of this work is to study the spatial variability of SOC, pH, P, and K in an organic farm located in the river Rasa valley (Croatia). A regular grid (100 x 100 m) was designed, and 182 samples were collected on a silty clay loam soil. P, K, and SOC showed moderate heterogeneity, with coefficients of variation (CV) of 21.6%, 32.8%, and 51.9%, respectively. Soil pH recorded low spatial variability, with a CV of 1.5%. Soil pH, P, and SOC did not follow a normal distribution; only after a Box-Cox transformation did the data meet the normality requirements. Directional exponential models were the best fits and were used to describe spatial autocorrelation. Soil pH, P, and SOC showed strong spatial dependence, with nugget-to-sill ratios of 13.78%, 0.00%, and 20.29%, respectively; only K recorded moderate spatial dependence. Semivariogram ranges indicate that a future sampling interval of 150-200 m would suffice and would reduce sampling costs. Fourteen interpolation models for mapping soil properties were tested; for each variable, the method with the lowest root mean square error was taken as the most appropriate. The results showed that radial basis function models (spline with tension and completely regularized spline) were the best predictors for P and K, while thin plate spline and inverse distance weighting models were the least accurate. The best interpolator for pH and SOC was the local polynomial with power 1, while thin plate spline was the least accurate. According to the soil nutrient maps, the investigated area has a very rich K supply, while the P supply is insufficient over the largest part of the area. Soil pH maps showed a mostly neutral reaction, while individual patches of alkaline soil indicate possible seawater intrusion and salt accumulation in the soil profile. Future research should focus on the spatial patterns of soil pH, electrical conductivity, and sodium adsorption ratio. Keywords: geostatistics, semivariogram, interpolation models, soil chemical properties
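The semivariogram computation behind these results is generic enough to sketch. The snippet below bins half squared differences by separation distance to form the empirical (isotropic) semivariogram; synthetic coordinates and values stand in for the 182 field samples, and a model fitted to the output would supply the nugget-to-sill ratio used above to classify spatial dependence:

import numpy as np

def empirical_semivariogram(coords, values, bin_width=50.0, max_lag=600.0):
    # gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs separated by ~h
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices_from(d, k=1)
    d, g = d[iu], g[iu]
    lags, gammas = [], []
    for lo in np.arange(0.0, max_lag, bin_width):
        m = (d >= lo) & (d < lo + bin_width)
        if m.any():
            lags.append(d[m].mean())
            gammas.append(g[m].mean())
    return np.array(lags), np.array(gammas)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 1000, size=(182, 2))          # metres
vals = np.sin(coords[:, 0] / 200.0) + rng.normal(0, 0.2, 182)
print(empirical_semivariogram(coords, vals))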
Nonuniform sampling by quantiles
NASA Astrophysics Data System (ADS)
Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David
2018-03-01
A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, however higher dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license.
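The core of quantile-directed scheduling is compact enough to sketch. Assuming a weighting function defined on a 1-D Nyquist grid, divide its cumulative distribution into regions of equal probability and take one sample per region, optionally jittered; this mirrors the description above rather than the QSched program itself:

import numpy as np

def quantile_schedule(weights, n_samples, jitter=0.0, rng=None):
    # One sample per equal-probability region of the normalized weighting
    # function; jitter (as a fraction of a region) disrupts subharmonic
    # tracts for higher-dimensional or unweighted schedules.
    rng = rng or np.random.default_rng()
    cdf = np.cumsum(weights / np.sum(weights))
    probs = (np.arange(n_samples) + 0.5) / n_samples
    if jitter:
        probs = probs + (rng.random(n_samples) - 0.5) * jitter / n_samples
    return np.unique(np.searchsorted(cdf, probs))   # duplicates collapse

w = np.exp(-np.arange(256) / 100.0)     # exponential weighting, 256 grid
print(quantile_schedule(w, 64))         # 64 (or fewer) increments to acquire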
Image Reconstruction from Under sampled Fourier Data Using the Polynomial Annihilation Transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo
Fourier samples are collected in a variety of applications, including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have a sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when total variation (TV) is used for the l1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
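The numerical efficiency of Split Bregman rests on each l1 sub-problem having a closed-form solution through the soft-thresholding (shrinkage) operator, regardless of whether TV or the PA transform supplies the sparsifying term. A minimal sketch of that operator:

import numpy as np

def shrink(v, t):
    # Soft thresholding: the closed-form minimizer of
    # t * |u| + 0.5 * (u - v)^2, applied element-wise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

print(shrink(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))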
Q-space truncation and sampling in diffusion spectrum imaging.
Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L; McNab, Jennifer A
2016-12-01
To characterize the effects of q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). DSI data were acquired using the MGH-USC connectome scanner (Gmax = 300 mT/m) with bmax = 30,000 s/mm² and 17 × 17 × 17, 15 × 15 × 15, and 11 × 11 × 11 grids in ex vivo human brains, and with bmax = 10,000 s/mm² and an 11 × 11 × 11 grid in vivo. An additional in vivo scan using bmax = 7,000 s/mm² and an 11 × 11 × 11 grid was performed with a derated gradient strength of 40 mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from down-sampled data by factors of two and three. Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with their first replicas or their ringing, leading to obscured orientations in ODFs. The minimum required q-space sampling density corresponds to a field of view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11 × 11 × 11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD.
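The truncation effect is easy to reproduce in one dimension. A toy sketch: a slowly decaying Gaussian q-space signal is truncated to a 17-point range, and the inverse DFT of the truncated data develops negative Gibbs lobes, which a cosine-squared (Hanning-like) filter suppresses at the cost of broadening the PDF:

import numpy as np

n = 256
q = np.fft.fftfreq(n) * n                   # integer q indices
signal = np.exp(-(q / 20.0) ** 2)           # slowly decaying Gaussian
trunc = signal * (np.abs(q) <= 8)           # 17-point q-space window
pdf_trunc = np.fft.fftshift(np.fft.ifft(trunc).real)
print(pdf_trunc.min())                      # negative Gibbs lobes
win = np.where(np.abs(q) <= 8, np.cos(np.pi * q / 18.0) ** 2, 0.0)
pdf_filt = np.fft.fftshift(np.fft.ifft(trunc * win).real)
print(pdf_filt.min())                       # ringing largely suppressed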
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Densmore, Jill N.; Fram, Miranda S.; Belitz, Kenneth
2009-01-01
Ground-water quality in the approximately 1,630 square-mile Owens and Indian Wells Valleys study unit (OWENS) was investigated in September-December 2006 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in collaboration with the California State Water Resources Control Board (SWRCB). The Owens and Indian Wells Valleys study was designed to provide a spatially unbiased assessment of raw ground-water quality within the OWENS study unit, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 74 wells in Inyo, Kern, Mono, and San Bernardino Counties. Fifty-three of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 21 wells were selected to evaluate changes in water chemistry in areas of interest (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents [volatile organic compounds (VOCs), pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater-indicator compounds], constituents of special interest [perchlorate, N-nitrosodimethylamine (NDMA), and 1,2,3-trichloropropane (1,2,3-TCP)], naturally occurring inorganic constituents [nutrients, major and minor ions, and trace elements], radioactive constituents, and microbial indicators. Naturally occurring isotopes [tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water] and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. This study evaluated the quality of raw ground water in the aquifer in the OWENS study unit and did not attempt to evaluate the quality of treated water delivered to consumers. Water supplied to consumers typically is treated after withdrawal from the ground, disinfected, and blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and with non-regulatory thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. VOCs and pesticides were detected in samples from less than one-third of the grid wells; all detections were below health-based thresholds, and most were less than one-one hundredth of threshold values. All detections of perchlorate and nutrients in samples from OWENS were below health-based thresholds. Most detections of trace elements in ground-water samples from OWENS wells were below health-based thresholds. In samples from the 53 grid wells, three constituents were detected at concentrations above USEPA maximum contaminant levels: arsenic in 5 samples, uranium in 4 samples, and fluoride in 1 sample.
Two constituents were detected at concentrations above CDPH notification levels (boron in 9 samples and vanadium in 1 sample), and two were above USEPA lifetime health advisory levels (molybdenum in 3 samples and strontium in 1 sample). Most of the samples from OWENS wells had concentrations of major elements, TDS, and trace elements below the non-enforceable standards set for aesthetic concerns. Samples from nine grid wells had concentrations of manganese, iron, or TDS above the SMCL-CAs.
Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.
Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue
2017-06-06
Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
Geographic Gossip: Efficient Averaging for Sensor Networks
NASA Astrophysics Data System (ADS)
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n, respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O((n^1.5/√(log n)) log(1/ε)) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
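To see why standard gossip mixes slowly on such topologies, consider nearest-neighbour pairwise averaging on a ring. A toy simulation of standard gossip only (the geographic variant would instead route each exchange greedily toward a randomly chosen location):

import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.random(n)                            # initial node values on a ring
target = x.mean()
for _ in range(20000):
    i = rng.integers(n)
    j = (i + rng.choice([-1, 1])) % n        # random ring neighbour
    x[i] = x[j] = 0.5 * (x[i] + x[j])        # pairwise average
print(np.max(np.abs(x - target)))            # still far from consensus:
                                             # slow mixing motivates routing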
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fielding, E.J.; Barazangi, M.; Isacks, B.L.
Topography and heterogeneous crustal structure have major effects on the propagation of regional seismic phases. We are collecting topographical, geological, and geophysical datasets for Eurasia into an information system that can be accessed via Internet connections. Now available are digital topography, satellite imagery, and data on sedimentary basins and crustal structure thicknesses. New datasets for Eurasia include maps of depth to Moho beneath Europe and Scandinavia. We have created regularly spaced grids of the crustal thickness values from these maps that can be used to create profiles of crustal structure. These profiles can be compared, by an analyst or an automatic program, with the crustal seismic phases received along the propagation path to better understand and predict the path effects on phase amplitudes, a key to estimating magnitudes and yields, and for understanding variations in travel-time delays for phases such as Pn, important for improving regional event locations. The gridded data could also be used to model propagation of crustal phases in three dimensions. Keywords: digital elevation models, satellite imagery, geographic information systems, Lg propagation, Moho, geology, crustal structure, topographic relief.
Shelton, Jennifer L.; Fram, Miranda S.; Belitz, Kenneth
2013-01-01
Groundwater quality in the 39,000-square-kilometer Cascade Range and Modoc Plateau (CAMP) study unit was investigated by the U.S. Geological Survey (USGS) from July through October 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CAMP study unit is the thirty-second study unit to be sampled as part of the GAMA PBP. The GAMA CAMP study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as that part of the aquifer corresponding to the open or screened intervals of wells listed in the California Department of Public Health (CDPH) database for the CAMP study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifer system; shallow groundwater may be more vulnerable to surficial contamination. In the CAMP study unit, groundwater samples were collected from 90 wells and springs in 6 study areas (Sacramento Valley Eastside, Honey Lake Valley, Cascade Range and Modoc Plateau Low Use Basins, Shasta Valley and Mount Shasta Volcanic Area, Quaternary Volcanic Areas, and Tertiary Volcanic Areas) in Butte, Lassen, Modoc, Plumas, Shasta, Siskiyou, and Tehama Counties. Wells and springs were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Groundwater samples were analyzed for field water-quality indicators, organic constituents, perchlorate, inorganic constituents, radioactive constituents, and microbial indicators. Naturally occurring isotopes and dissolved noble gases also were measured to provide a dataset that will be used to help interpret the sources and ages of the sampled groundwater in subsequent reports. In total, 221 constituents were investigated for this study. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 10 percent of the wells in the CAMP study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 90 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. 
Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 90 grid wells in the CAMP study unit were detected at concentrations less than drinking-water benchmarks. Of the 148 organic constituents analyzed, 27 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and nonregulatory health-based benchmarks, and all were less than 1/10 of benchmark levels. One or more organic constituents were detected in 52 percent of the grid wells in the CAMP study unit: VOCs were detected in 30 percent, and pesticides and pesticide degradates were detected in 31 percent. Trace elements, major ions, nutrients, and radioactive constituents were sampled for at 90 grid wells in the CAMP study unit, and most detected concentrations were less than health-based benchmarks. Exceptions include three detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (µg/L), two detections of boron greater than the CDPH notification level (NL-CA) of 1,000 µg/L, two detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 µg/L, two detections of vanadium greater than the CDPH notification level (NL-CA) of 50 µg/L, one detection of nitrate, as nitrogen, greater than the MCL-US of 10 milligrams per liter (mg/L), two detections of uranium greater than the MCL-US of 30 µg/L and the MCL-CA of 20 picocuries per liter (pCi/L), one detection of radon-222 greater than the proposed MCL-US of 4,000 pCi/L, and two detections of gross alpha particle activity greater than the MCL-US of 15 pCi/L. Results for inorganic constituents with non-regulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 µg/L were detected in four grid wells. Manganese concentrations greater than the SMCL-CA of 50 µg/L were detected in nine grid wells. Chloride and TDS were detected at concentrations greater than the upper SMCL-CA benchmarks of 500 mg/L and 1,000 mg/L, respectively, in one grid well. Microbial indicators (total coliform and Escherichia coli [E. coli]) were detected in 11 percent of the 83 grid wells sampled for these analyses in the CAMP study unit. The presence of total coliform was detected in nine grid wells, and the presence of E. coli was detected in one of these same grid wells.
Predictors of regular cigarette smoking among adolescent females: Does body image matter?
Kaufman, Annette R.; Augustson, Erik M.
2013-01-01
This study examined how factors associated with body image predict regular smoking in adolescent females. Data were from the National Longitudinal Study of Adolescent Health (Add Health), a study of health-related behaviors in a nationally representative sample of adolescents in grades 7 through 12. Females in Waves I and II (n=6,956) were used for this study. Using SUDAAN to adjust for the sampling frame, univariate and multivariate analyses were performed to investigate whether baseline body image factors, including perceived weight, perceived physical development, trying to lose weight, and self-esteem, were predictive of regular smoking status 1 year later. In univariate analyses, perceived weight (p<.01), perceived physical development (p<.0001), trying to lose weight (p<.05), and self-esteem (p<.0001) significantly predicted regular smoking 1 year later. In the logistic regression model, perceived physical development (p<.05) and self-esteem (p<.001) significantly predicted regular smoking. The more developed a female reported being in comparison to other females her age, the more likely she was to be a regular smoker. Lower self-esteem was predictive of regular smoking. Perceived weight and trying to lose weight failed to reach statistical significance in the multivariate model. The current study highlights the importance of perceived physical development and self-esteem when predicting regular smoking in adolescent females. Efforts to promote positive self-esteem in young females may be an important strategy when creating interventions to reduce regular cigarette smoking. PMID:18686177
FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry
NASA Astrophysics Data System (ADS)
Dai, Jisheng; Liu, An; Lau, Vincent K. N.
2018-05-01
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
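To make the "adjustable grid points" idea concrete, here is a minimal Python sketch under strong assumptions: a half-wavelength uniform linear array stands in for the general 2D geometry, plain least-squares gains replace the SBL machinery, and simple gradient steps replace the inexact block MM updates. All function names and step sizes are illustrative, not the paper's algorithm.

```python
import numpy as np

def steering(theta, n_ant):
    # Response of an n_ant-element half-wavelength ULA at angle theta (radians).
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(n_ant)

def offgrid_refine(y, grid, n_ant, iters=50, step=1e-3):
    """Alternate between least-squares path gains on the current angle grid and
    a gradient nudge of each grid angle that shrinks the residual, i.e. the
    grid points themselves are treated as adjustable parameters."""
    theta = np.asarray(grid, dtype=float).copy()
    k = np.arange(n_ant)
    for _ in range(iters):
        A = np.stack([steering(t, n_ant) for t in theta], axis=1)
        x, *_ = np.linalg.lstsq(A, y, rcond=None)    # gains on the current grid
        r = y - A @ x                                # off-grid residual
        for i, t in enumerate(theta):
            d = 1j * np.pi * k * np.cos(t) * steering(t, n_ant)  # da/dtheta
            g = -2.0 * np.real(x[i] * np.vdot(r, d))  # d||r||^2 / dtheta_i
            theta[i] -= step * g                      # refine the grid point
    return theta, x
```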
A guide to the use of the pressure disk rotor model as implemented in INS3D-UP
NASA Technical Reports Server (NTRS)
Chaffin, Mark S.
1995-01-01
This is a guide for the use of the pressure disk rotor model that has been placed in the incompressible Navier-Stokes code INS3D-UP. The pressure disk rotor model approximates a helicopter rotor or propeller in a time averaged manner and is intended to simulate the effect of a rotor in forward flight on the fuselage or the effect of a propeller on other aerodynamic components. The model uses a modified actuator disk that allows the pressure jump across the disk to vary with radius and azimuth. The cyclic and collective blade pitch angles needed to achieve a specified thrust coefficient and zero moment about the hub are predicted. The method has been validated with experimentally measured mean induced inflow velocities as well as surface pressures on a generic fuselage. Overset grids, sometimes referred to as Chimera grids, are used to simplify the grid generation process. The pressure disk model is applied to a cylindrical grid which is embedded in the grid or grids used for the rest of the configuration. This document will outline the development of the method, and present input and results for a sample case.
Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid
NASA Astrophysics Data System (ADS)
Yao, Tong
In recent years, wide bandgap (WBG) devices have enabled power converters with higher power density and higher efficiency. At the same time, smart grid technologies are maturing thanks to new battery and computer technology. In the near future, the two technologies will form the next generation of smart grid enabled by WBG devices. This dissertation deals with two applications: silicon carbide (SiC) devices used for a medium-voltage-level interface (7.2 kV to 240 V) and gallium nitride (GaN) devices used for a low-voltage-level interface (240 V/120 V). A 20 kW solid state transformer (SST) is designed with a 6 kHz switching frequency SiC rectifier. Three robust control design methods are then proposed, one for each of its smart grid operation modes. In grid-connected mode, a new LCL filter design method is proposed considering grid voltage THD, grid current THD, and current regulation loop robust stability with respect to changes in the grid impedance. In grid-islanded mode, μ-synthesis combined with variable structure control is used to design a robust controller for grid voltage regulation. For grid emergency mode, a multivariable controller designed using the H-infinity synthesis method is proposed for accurate power sharing. A controller-hardware-in-the-loop (CHIL) testbed for a 7-SST system is set up with a Real Time Digital Simulator (RTDS). A real TMS320F28335 DSP and Spartan-6 FPGA control board is used to interface a switching-model SST in RTDS, and the proposed control methods are tested on it. For the low-voltage-level application, 3.3 kW smart grid hardware is built with three GaN inverters. The inverters are designed with GaN devices characterized using the proposed multi-function double pulse tester. Each inverter is controlled by an onboard TMS320F28379D dual-core DSP with a 200 kHz sampling frequency and is tested to process 2.2 kW of power with an overall efficiency of 96.5% at room temperature. The smart grid monitor system and fault interrupt devices (FID) based on an Arduino Mega2560 are built and tested. The smart grid cooperates with the GaN inverters through CAN bus communication. Finally, the three-GaN-inverter smart grid achieved smooth transition from grid-connected to islanded mode.
A simple homogeneous model for regular and irregular metallic wire media samples
NASA Astrophysics Data System (ADS)
Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.
2018-02-01
To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simplistic heuristic model of wire media samples shaped as bricks. Our model covers wire media of both regularly and irregularly stretched wires.
Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle
2016-01-01
With the rapid development of the one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that accounts for motion errors in OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling, and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Moreover, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency. PMID:27845757
NASA Astrophysics Data System (ADS)
Yan, Jin; Song, Xiao; Gong, Guanghong
2016-02-01
We describe a metric, the averaged ratio between complementary profiles, to represent the distortion of map projections and the shape regularity of spherical cells derived from map projections or non-map-projection methods. The properties and statistical characteristics of our metric are investigated. Our metric (1) is numerically equivalent to both the scale component and the angular-deformation component of the Tissot indicatrix, and avoids the breakdown of the Tissot indicatrix and its derived differential calculus when evaluating non-map-projection-based tessellations for which mathematical formulae do not exist (e.g., direct spherical subdivisions), (2) exhibits simplicity (neither differential nor integral calculus) and uniformity in the form of calculations, (3) requires low computational cost while maintaining high correlation with the results of differential calculus, (4) is a quasi-invariant under rotations, and (5) reflects the distortions of map projections, the distortion of spherical cells, and the associated distortions of texels. As an indicator for quantitative evaluation, we investigated typical spherical tessellation methods, some variants of tessellation methods, and map projections. The tessellation methods we evaluated are based on map projections or direct spherical subdivisions. The evaluation involves commonly used Platonic polyhedrons, Catalan polyhedrons, etc. Quantitative analyses based on our metric of shape regularity and an essential metric of area uniformity implied that (1) Uniform Spherical Grids and its variant show good qualities in both area uniformity and shape regularity, and (2) Crusta, Unicube map, and a variant of Unicube map exhibit fairly acceptable degrees of area uniformity and shape regularity.
TRMM .25 deg x .25 deg Gridded Precipitation Text Product
NASA Technical Reports Server (NTRS)
Stocker, Erich; Kelley, Owen
2009-01-01
Since the launch of the Tropical Rainfall Measuring Mission (TRMM), the Precipitation Measurement Missions science team has endeavored to provide TRMM precipitation retrievals in a variety of formats that are more easily usable by the broad science community than the standard Hierarchical Data Format (HDF) in which TRMM data is produced and archived. At the request of users, the Precipitation Processing System (PPS) has developed a 0.25° x 0.25° gridded product in an easily used ASCII text format. The entire TRMM mission data record has been made available in this format. The paper provides the details of this new precipitation product, which is designated with the TRMM designator 3G68.25. The format is packaged into daily files. It provides hourly precipitation information from the TRMM Microwave Imager (TMI), Precipitation Radar (PR), and combined TMI/PR rain retrievals. Major advantages of this approach are the inclusion of rain data only, compression when a particular grid cell has no rain from the PR or combined retrievals, and the direct ASCII text format. For those interested only in rain retrievals and whether rain is convective or stratiform, these products provide a huge reduction in the data volume inherent in the standard TRMM products. This paper provides examples of the 3G68 data products and their uses. It also provides information about C tools that can be used to aggregate daily files into larger time samples. In addition, it describes the possibilities inherent in the spatial sampling, which allows resampling onto coarser grids. The paper concludes with information about downloading the gridded text data products.
Monninger, Mitchell K; Nguessan, Chrystal A; Blancett, Candace D; Kuehl, Kathleen A; Rossi, Cynthia A; Olschner, Scott P; Williams, Priscilla L; Goodman, Steven L; Sun, Mei G
2016-12-01
Transmission electron microscopy can be used to observe the ultrastructure of viruses and other microbial pathogens with nanometer resolution. In a transmission electron microscope (TEM), the image is created by passing an electron beam through a specimen, with contrast generated by electron scattering from dense elements in the specimen. Viruses do not normally contain dense elements, so a negative stain that places dense heavy metal salts around the sample is added to create a dark border. To prepare a virus sample for negative stain transmission electron microscopy, a virus suspension is applied to a TEM grid specimen support, which is a 3 mm diameter fragile specimen screen coated with a few nanometers of plastic film. Then, deionized (dI) water rinses and a negative stain solution are applied to the grid. All infectious viruses must be handled in a biosafety cabinet (BSC), and many require a biocontainment laboratory environment. Staining viruses at biosafety levels (BSL) 3 and 4 is especially challenging because the support grids are small, fragile, and easily moved by air currents. In this study we evaluated a new device for negative staining viruses called the mPrep/g capsule, a capsule that holds up to two TEM grids during all processing steps and for storage after staining is complete. This study reports that the mPrep/g capsule method is valid and effective for negative staining virus specimens, especially in high containment laboratory environments. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Scientific data interpolation with low dimensional manifold model
NASA Astrophysics Data System (ADS)
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
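One ingredient of this formulation that admits a compact illustration is the weighted graph Laplacian used to discretize the Laplace-Beltrami operator. The Python sketch below is a strong simplification of the paper's scheme: it builds the graph directly on the pixels of a regular 2D grid (not on the patch manifold), uses assumed Gaussian similarity weights on a 4-neighbour topology, and fills missing samples by harmonic interpolation with the known values held fixed.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def laplacian_interpolate(img, known_mask, sigma=0.5):
    """Fill missing pixels by minimizing x^T L x with the known pixels fixed,
    where L = D - W is a weighted graph Laplacian on the 4-neighbour grid."""
    ny, nx = img.shape
    n = ny * nx
    idx = lambda i, j: i * nx + j
    W = lil_matrix((n, n))
    for i in range(ny):
        for j in range(nx):
            for di, dj in ((0, 1), (1, 0)):          # right and down neighbours
                ii, jj = i + di, j + dj
                if ii < ny and jj < nx:
                    if known_mask[i, j] and known_mask[ii, jj]:
                        w = np.exp(-(img[i, j] - img[ii, jj]) ** 2 / sigma ** 2)
                    else:
                        w = 1.0                       # neutral weight near gaps
                    W[idx(i, j), idx(ii, jj)] = w
                    W[idx(ii, jj), idx(i, j)] = w
    D = lil_matrix((n, n))
    D.setdiag(np.asarray(W.sum(axis=1)).ravel())
    L = (D - W).tocsr()
    known = known_mask.ravel()
    x = img.ravel().astype(float).copy()
    u = ~known
    # harmonic interpolation: solve L_uu x_u = -L_uk x_k for the unknowns
    x[u] = spsolve(L[u][:, u].tocsc(), -L[u][:, known] @ x[known])
    return x.reshape(ny, nx)
```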
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
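A stripped-down flavour of the Ising-type approach can be sketched as a conditional Metropolis simulation on a regular grid: spins at the sampled nodes are clamped to the data while the remaining spins evolve under nearest-neighbour interactions. The cost-function matching of sample correlation energies and the sequential multilevel scheme of the paper are omitted here, and the coupling, temperature, and grid size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_conditional(shape, samples, J=1.0, beta=1.0, sweeps=200):
    """Metropolis simulation of +/-1 'spins' on a regular grid, with spins at
    sampled locations clamped to the data values (conditional simulation)."""
    ny, nx = shape
    spins = rng.choice([-1, 1], size=shape)
    clamped = np.zeros(shape, dtype=bool)
    for (i, j), s in samples.items():
        spins[i, j] = s
        clamped[i, j] = True
    for _ in range(sweeps):
        for i in range(ny):
            for j in range(nx):
                if clamped[i, j]:
                    continue
                nb = 0
                if i > 0:      nb += spins[i - 1, j]
                if i < ny - 1: nb += spins[i + 1, j]
                if j > 0:      nb += spins[i, j - 1]
                if j < nx - 1: nb += spins[i, j + 1]
                dE = 2.0 * J * spins[i, j] * nb   # energy change of a flip
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] *= -1
    return spins

# usage: simulate a 32 x 32 binary field honouring three conditioning points
field = ising_conditional((32, 32), {(5, 5): 1, (20, 8): -1, (16, 25): 1})
```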
Spatial Distribution of Soil Fauna In Long Term No Tillage
NASA Astrophysics Data System (ADS)
Corbo, J. Z. F.; Vieira, S. R.; Siqueira, G. M.
2012-04-01
The soil is a complex system constituted by living beings and organic and mineral particles, whose components define its physical, chemical, and biological properties. Soil fauna plays an important role in the soil and may both reflect and interfere with its functionality. These organisms' populations may be influenced by management practices, fertilization, liming, and porosity, among other factors. Such changes may alter the composition and distribution of the soil fauna community. Thus, this study aimed to determine the spatial variability of soil fauna in a consolidated no-tillage system. The experimental area is located at the Instituto Agronômico in Campinas (São Paulo, Brazil). The sampling was conducted in a Rhodic Eutrudox under a no-tillage system; 302 points distributed over a 3.2-hectare area in a regular grid of 10.00 m x 10.00 m were sampled. The soil fauna was sampled with the "Pitfall Traps" method, and the traps remained in the area for seven days. Data were analyzed using descriptive statistics to determine the main statistical moments (mean, variance, coefficient of variation, standard deviation, skewness, and kurtosis). Geostatistical tools were used to determine the spatial variability of the attributes using the experimental semivariogram. For the biodiversity analysis, the Shannon and Pielou indexes and richness were calculated for each sample. Geostatistics proved to be a great tool for mapping the spatial variability of groups of the soil epigeal fauna. The family Formicidae proved to be the most abundant and dominant in the study area. The descriptive statistics showed that all studied groups of the epigeal soil fauna followed a lognormal frequency distribution. The exponential model was the best suited to the obtained data, both for the groups of epigeal soil fauna (Acari, Araneae, Coleoptera, Formicidae, and Coleoptera larvae) and for the biodiversity indexes. The sampling scheme (10.00 m x 10.00 m) was not sufficient to detect the spatial variability of all groups of soil epigeal fauna found in this study.
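For reference, the experimental semivariogram underlying such an analysis can be computed in a few lines of Python; the classical Matheron estimator below is generic, and the lag spacing matching the 10.00 m x 10.00 m sampling grid is only an illustrative assumption.

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """Matheron estimator: gamma(h) = half the mean squared increment over all
    point pairs whose separation distance falls within the lag bin h +/- tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)     # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in lags:
        m = (d > h - tol) & (d <= h + tol)
        gamma.append(0.5 * sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)

# e.g., lags of one grid spacing up to ~100 m for a 10 m x 10 m design:
# gamma = experimental_semivariogram(xy, counts, np.arange(10, 100, 10), tol=5)
```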
A Catchment-Based Land Surface Model for GCMs and the Framework for its Evaluation
NASA Technical Reports Server (NTRS)
Ducharne, A.; Koster, R. D.; Suarez, M. J.; Kumar, P.
1998-01-01
A new GCM-scale land surface modeling strategy that explicitly accounts for subgrid soil moisture variability and its effects on evaporation and runoff is now being explored. In a break from traditional modeling strategies, the continental surface is disaggregated into a mosaic of hydrological catchments, with boundaries that are not dictated by a regular grid but by topography. Within each catchment, the variability of soil moisture is deduced from TOPMODEL equations with a special treatment of the unsaturated zone. This paper gives an overview of this new approach and presents the general framework for its off-line evaluation over North America.
SAMI: Sydney-AAO Multi-object Integral field spectrograph pipeline
NASA Astrophysics Data System (ADS)
Allen, J. T.; Green, A. W.; Fogarty, L. M. R.; Sharp, R.; Nielsen, J.; Konstantopoulos, I.; Taylor, E. N.; Scott, N.; Cortese, L.; Richards, S. N.; Croom, S.; Owers, M. S.; Bauer, A. E.; Sweet, S. M.; Bryant, J. J.
2014-07-01
The SAMI (Sydney-AAO Multi-object Integral field spectrograph) pipeline reduces data from the Sydney-AAO Multi-object Integral field spectrograph (SAMI) for the SAMI Galaxy Survey. The Python code organizes SAMI data and, along with the AAO 2dfdr package, carries out all steps in the data reduction, from raw data to fully calibrated datacubes. The principal steps are: data management, use of 2dfdr to produce row-stacked spectra, flux calibration, correction for telluric absorption, removal of atmospheric dispersion, alignment of dithered exposures, and drizzling onto a regular output grid. Variance and covariance information is tracked throughout the pipeline. Some quality control routines are also included.
Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values
NASA Astrophysics Data System (ADS)
López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.
2018-02-01
The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering an approximately 30% lower computational cost than direct Monte Carlo integration in our tests.
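The contrast between the two integration strategies is easy to make concrete for a single harmonic mode. The Python sketch below is a toy under stated assumptions (one mode, an invented polynomial observable, an assumed ground-state width): it compares a few-point Gauss-Hermite quadrature grid with direct Monte Carlo sampling of the same vibrational density.

```python
import numpy as np

sigma = 1.0                       # assumed ground-state Gaussian width
O = lambda q: q**2 + 0.1 * q**4   # toy electronic observable along one mode

# Few-point quadrature grid: <O> = (1/sqrt(pi)) * sum_i w_i O(sigma * x_i)
# for the harmonic density |psi0(q)|^2 = exp(-q^2/sigma^2) / (sigma*sqrt(pi)).
x, w = np.polynomial.hermite.hermgauss(5)
expect_quad = (w * O(sigma * x)).sum() / np.sqrt(np.pi)

# Direct Monte Carlo over the same density (Var[q] = sigma^2/2) for comparison.
q = np.random.default_rng(1).normal(0.0, sigma / np.sqrt(2.0), 100_000)
expect_mc = O(q).mean()
print(expect_quad, expect_mc)     # both approximate <O>; exact <q^2> = sigma^2/2
```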
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033
2016-09-15
Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm that consists of representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve the Hamiltonian nature of the system in the best way, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
International Symposium on Grids and Clouds (ISGC) 2014
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC).“Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen the phenomenal growth in the production of data in all forms by all research communities to produce a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities & Social Sciences Application, Virtual Research Environment (including Middleware, tools, services, workflow, ... etc.), Data Management, Big Data, Infrastructure & Operations Management, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC).
I/O Parallelization for the Goddard Earth Observing System Data Assimilation System (GEOS DAS)
NASA Technical Reports Server (NTRS)
Lucchesi, Rob; Sawyer, W.; Takacs, L. L.; Lyster, P.; Zero, J.
1998-01-01
The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center (GSFC) has developed the GEOS DAS, a data assimilation system that provides production support for NASA missions and will support NASA's Earth Observing System (EOS) in the coming years. The GEOS DAS will be used to provide background fields of meteorological quantities to EOS satellite instrument teams for use in their data algorithms as well as providing assimilated data sets for climate studies on decadal time scales. The DAO has been involved in prototyping parallel implementations of the GEOS DAS for a number of years and is now embarking on an effort to convert the production version from shared-memory parallelism to distributed-memory parallelism using the portable Message-Passing Interface (MPI). The GEOS DAS consists of two main components, an atmospheric General Circulation Model (GCM) and a Physical-space Statistical Analysis System (PSAS). The GCM operates on data that are stored on a regular grid while PSAS works with observational data that are scattered irregularly throughout the atmosphere. As a result, the two components have different data decompositions. The GCM is decomposed horizontally as a checkerboard with all vertical levels of each box existing on the same processing element (PE). The dynamical core of the GCM can also operate on a rotated grid, which requires communication-intensive grid transformations during GCM integration. PSAS groups observations on PEs in a more irregular and dynamic fashion.
NASA Astrophysics Data System (ADS)
Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.
What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O(1-30) km near-shore and O(30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve both types of currents well. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a one-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure was applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and with solutions for only the inner region at high resolution with mean-seasonal boundary conditions.
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current source density (CSD) using the inverse Current Source Density (iCSD) method. It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
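A minimal sketch of the spatial ICA step in Python, assuming the CSD has already been reconstructed and flattened to a (grid points x time samples) matrix; the random data, the component count, and the 4 x 5 x 7 grid shape are placeholders for the real recordings.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder for the reconstructed CSD: 4*5*7 = 140 grid points, 1000 samples.
csd = np.random.default_rng(2).standard_normal((140, 1000))

# Spatial ICA: grid points play the role of 'samples', so the recovered
# sources are statistically independent spatial maps; mixing_ holds their
# corresponding time courses.
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(csd)          # (140, 5) independent maps
time_courses = ica.mixing_                     # (1000, 5)
maps_3d = spatial_maps.reshape(4, 5, 7, -1)    # back onto the recording grid
```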
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.
1990-01-01
PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce a variety of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.
NASA Astrophysics Data System (ADS)
Ahmed, Shamim; Miorelli, Roberto; Calmon, Pierre; Anselmi, Nicola; Salucci, Marco
2018-04-01
This paper describes a Learning-By-Examples (LBE) technique for performing quasi-real-time flaw localization and characterization within a conductive tube based on Eddy Current Testing (ECT) signals. Within the framework of LBE, the combination of full-factorial (i.e., GRID) sampling and Partial Least Squares (PLS) feature extraction (i.e., GRID-PLS) techniques is applied for generating a suitable training set in the offline phase. Support Vector Regression (SVR) is utilized for model development and inversion during the offline and online phases, respectively. The performance and robustness of the proposed GRID-PLS/SVR strategy on a noisy test set is evaluated and compared with the standard GRID/SVR approach.
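The offline/online split can be sketched with standard scikit-learn building blocks. Everything below is illustrative: the two flaw parameters, their grid ranges, and the random placeholder signals stand in for the actual ECT solver outputs.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Offline phase: full-factorial (GRID) sampling of a hypothetical 2-parameter
# flaw (depth, length), paired with simulated ECT signals (placeholders here).
depth, length = np.meshgrid(np.linspace(0.1, 1.0, 10), np.linspace(1.0, 5.0, 10))
params = np.column_stack([depth.ravel(), length.ravel()])   # (100, 2)
signals = rng.standard_normal((100, 256))                   # (100, n_samples)

# PLS compresses the raw signals to a few latent features; SVR learns the
# inverse map from features to one flaw parameter (here: depth).
pls = PLSRegression(n_components=8).fit(signals, params[:, 0])
features = pls.transform(signals)
svr = SVR(kernel="rbf", C=10.0).fit(features, params[:, 0])

# Online phase: quasi-real-time inversion of a newly measured signal.
depth_estimate = svr.predict(pls.transform(signals[:1]))
```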
Laser-induced superhydrophobic grid patterns on PDMS for droplet arrays formation
NASA Astrophysics Data System (ADS)
Farshchian, Bahador; Gatabi, Javad R.; Bernick, Steven M.; Park, Sooyeon; Lee, Gwan-Hyoung; Droopad, Ravindranath; Kim, Namwon
2017-02-01
We demonstrate a facile single step laser treatment process to render a polydimethylsiloxane (PDMS) surface superhydrophobic. By synchronizing a pulsed nanosecond laser source with a motorized stage, superhydrophobic grid patterns were written on the surface of PDMS. Hierarchical micro and nanostructures were formed in the irradiated areas while non-irradiated areas were covered by nanostructures due to deposition of ablated particles. Arrays of droplets form spontaneously on the laser-patterned PDMS with superhydrophobic grid pattern when the PDMS sample is simply immersed in and withdrawn from water due to different wetting properties of the irradiated and non-irradiated areas. The effects of withdrawal speed and pitch size of superhydrophobic grid on the size of formed droplets were investigated experimentally. The droplet size increases initially with increasing the withdrawal speed and then does not change significantly beyond certain points. Moreover, larger droplets are formed by increasing the pitch size of the superhydrophobic grid. The droplet arrays formed on the laser-patterned PDMS with wettability contrast can be used potentially for patterning of particles, chemicals, and bio-molecules and also for cell screening applications.
Wildlife monitoring across multiple spatial scales using grid-based sampling
Kevin S. McKelvey; Samuel A. Cushman; Michael K. Schwartz; Leonard F. Ruggiero
2009-01-01
Recently, noninvasive genetic sampling has become the most effective way to reliably sample occurrence of many species. In addition, genetic data provide a rich data source enabling the monitoring of population status. The combination of genetically based animal data collected at known spatial coordinates with vegetation, topography, and other available covariates...
Back surface reflectors for solar cells
NASA Technical Reports Server (NTRS)
Chai, A. T.
1980-01-01
Sample solar cells were fabricated to study the effects of various back surface reflectors on device performance. They are typical 50-micrometer-thick, space-quality silicon solar cells, except for variations of the back contact configuration. The back surfaces of the sample cells are polished to a mirror-like finish and have either conventional full contacts or grid-finger contacts. Measurements and evaluation of various metallic back surface reflectors, as well as of cells with total internal reflection, are presented. Results indicate that back surface reflectors formed using a grid-finger back contact are more effective reflectors than cells with full back metallization, and that Au, Ag, or Cu are better back surface reflector metals than Al.
The Numerical Simulation of Time Dependent Flow Structures Over a Natural Gravel Surface.
NASA Astrophysics Data System (ADS)
Hardy, R. J.; Lane, S. N.; Ferguson, R. I.; Parsons, D. R.
2004-05-01
Research undertaken over the last few years has demonstrated the importance of the structure of gravel river beds for understanding the interaction between fluid flow and sediment transport processes. This includes the observation of periodic high-speed fluid wedges interconnected by low-speed flow regions. Our understanding of these flows has been enhanced significantly through a series of laboratory experiments and supported by field observations. However, the potential of high resolution three-dimensional Computational Fluid Dynamics (CFD) modeling has yet to be fully developed. This is largely the result of the difficulty of designing numerically stable meshes for complex bed topographies and of the fact that Reynolds-averaged turbulence schemes are typically applied. This paper develops two novel techniques for dealing with these issues. The first is the development and validation of a method for representing the complex surface topography of gravel-bed rivers in high resolution three-dimensional computational fluid dynamic models. This is based upon a porosity treatment with a regular structured grid and the application of a porosity modification to the mass conservation equation in which fully blocked cells are assigned a porosity of zero, fully unblocked cells are assigned a porosity of one, and partly blocked cells are assigned a porosity between 0 and 1, according to the percentage of the cell volume that is blocked. The second is the application of Large Eddy Simulation (LES), which enables time-dependent flow structures to be numerically predicted over the complex bed topographies. The regular structured grid with the embedded porosity algorithm maintains a constant grid cell size throughout the domain, implying a constant filter scale for the LES simulation. This enables the prediction of coherent structures (repetitive, quasi-cyclic, large-scale turbulent motions) over the gravel surface which are of a similar magnitude and frequency to those previously observed in both flume and field studies. These structures are formed by topographic forcing within the domain and scale with the flow depth. Finally, this provides the numerical framework for the prediction of sediment transport within a time-dependent framework. The turbulent motions make a significant contribution to the turbulent shear stress and the pressure fluctuations, which significantly affect the forces acting on the bed and potentially control sediment motion.
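The porosity assignment itself is straightforward to sketch. The toy Python function below approximates the blocked portion of each cell by the fraction of the cell height lying below the local bed elevation; the treatment described above uses the blocked cell volume, so this column-wise height fraction is a simplifying assumption.

```python
import numpy as np

def porosity_field(bed_elevation, z_edges):
    """Assign a porosity in [0, 1] to each cell of a regular structured grid:
    0 for cells fully below the gravel surface, 1 for fully open cells, and a
    partial value for cells cut by the bed surface."""
    nz = len(z_edges) - 1
    ny, nx = bed_elevation.shape
    phi = np.empty((nz, ny, nx))
    for k in range(nz):
        z_lo, z_hi = z_edges[k], z_edges[k + 1]
        # fraction of the cell height lying above the local bed elevation
        phi[k] = np.clip((z_hi - bed_elevation) / (z_hi - z_lo), 0.0, 1.0)
    return phi

# usage: phi multiplies the continuity (mass conservation) equation cell-wise,
# e.g. phi = porosity_field(bed_dem, np.linspace(0.0, 0.3, 31))
```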
Kumar, Pawan; Khan, Abdul M.; Inder, Deep; Mehra, Anu
2014-01-01
Background: Job satisfaction is a pleasant emotional state associated with the appreciation of one's work and contributes immensely to performance in an organization. The purpose of this study was to assess comparative job satisfaction among regular and contract staff in Government Primary Urban Health Centers in Delhi, India. Materials and Methods: The study was conducted in 2013 on a sample of 333 health care providers who were selected using a multistage random sampling technique. The sample included medical officers (MOs), auxiliary nurses and midwives (ANMs), pharmacists, and laboratory technicians (LTs)/laboratory assistants (LAs) among regular and contract staff. Analysis was done using SPSS version 18, and appropriate statistical tests were applied. Results: Job satisfaction for all the regular staff, that is, MOs, ANMs, pharmacists, LAs, and LTs, was relatively higher (3.3 ± 0.44) than for the contract staff (2.7 ± 0.45), with a ‘t’ value of 10.54 (P < 0.01). The mean scores for regular and contract MOs were 3.2 ± 0.46 and 2.7 ± 0.56, respectively; the same trend was found between regular and contract ANMs (3.4 ± 0.30 vs. 2.7 ± 0.38) and between regular and contract pharmacists (3.3 ± 0.50 vs. 2.8 ± 0.41). The differences between groups were significant with P < 0.01. Conclusion: Overall job satisfaction was relatively low in both regular and contract staff. The factors contributing to satisfaction level were privileges, interpersonal relations, working environment, patient relationship, the organization's facilities, career development, and the scarcity of human resources (HRs). Therefore, specific recommendations are suggested to policy makers to take cognizance of the scarcity of HRs and the on-going experimentation with different models under the primary health care system. PMID:24987280
Mercury in Slovenian soils: High, medium and low sample density geochemical maps
NASA Astrophysics Data System (ADS)
Gosar, Mateja; Šajn, Robert; Teršič, Tamara
2017-04-01
A regional geochemical survey was conducted over the whole territory of Slovenia (20,273 km2). High, medium, and low sample density surveys were compared. The high sample density survey comprised the regional geochemical data set supplemented by local high-density sampling data (irregular grid, n=2835); medium-density soil sampling was performed on a 5 x 5 km grid (n=817); and the low-density geochemical survey was conducted on a 25 x 25 km sampling grid (n=54). The mercury distribution in Slovenian soils was determined with models built from all three data sets. A distinct Hg anomaly in the western part of Slovenia is evident in all three models. It is a consequence of 500 years of mining and ore processing at the second largest mercury mine in the world, the Idrija mine. The determined mercury concentrations revealed an important difference between the western and eastern parts of the country: in the medium-scale geochemical mapping, the median value for western Slovenia (0.151 mg/kg) is almost 2-fold higher than the median value for eastern Slovenia (0.083 mg/kg). Moreover, the Hg median for the western part of Slovenia exceeds the Hg median for European soil by a factor of 4 (Gosar et al., 2016). Comparing these sample density surveys showed that high sampling density allows the identification and characterization of anthropogenic influences on a local scale, while medium- and low-density sampling reveal general trends in the mercury spatial distribution but are not appropriate for identifying local contamination in industrial regions and urban areas. The resolution of the generated pattern is best when the high-density survey on a regional scale is supplemented with the geochemical data of high-density surveys on a local scale. References: Gosar, M., Šajn, R., Teršič, T. Distribution pattern of mercury in the Slovenian soil: geochemical mapping based on multiple geochemical datasets. Journal of Geochemical Exploration, 2016, 167, 38-48.
NASA Technical Reports Server (NTRS)
Ferlemann, Paul G.; Gollan, Rowan J.
2010-01-01
Computational design and analysis of three-dimensional hypersonic inlets with shape transition has been a significant challenge due to the complex geometry and grid required for three-dimensional viscous flow calculations. Currently, the design process utilizes an inviscid design tool to produce initial inlet shapes by streamline tracing through an axisymmetric compression field. However, the shape is defined by a large number of points rather than a continuous surface and lacks important features such as blunt leading edges. Therefore, a design system has been developed to parametrically construct true CAD geometry and link the topology of a structured grid to the geometry. The Adaptive Modeling Language (AML) constitutes the underlying framework that is used to build the geometry and grid topology. Parameterization of the CAD geometry allows the inlet shapes produced by the inviscid design tool to be generated, but also allows a great deal of flexibility to modify the shape to account for three-dimensional viscous effects. By linking the grid topology to the parametric geometry, the GridPro grid generation software can be used efficiently to produce a smooth hexahedral multiblock grid. To demonstrate the new capability, a matrix of inlets was designed by varying four geometry parameters in the inviscid design tool. The goals of the initial design study were to explore inviscid design tool geometry variations with a three-dimensional analysis approach, demonstrate a solution rate which would enable the use of high-fidelity viscous three-dimensional CFD in future design efforts, process the results for important performance parameters, and perform a sample optimization.
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, both VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to study Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. The improved VAUS makes larger biological systems amenable to study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Yaosuo
The matrix converter solid state transformer (MC-SST), formed from the back-to-back connection of two three-to-single-phase matrix converters, is studied for use in the interconnection of two ac grids. The matrix converter topology provides a light weight and low volume single-stage bidirectional ac-ac power conversion without the need for a dc link. Thus, the lifetime limitations of dc-bus storage capacitors are avoided. However, space vector modulation of this type of MC-SST requires computing vectors for each of the two MCs, which must be carefully coordinated to avoid commutation failure. An additional controller is also required to control power exchange between the two ac grids. In this paper, model predictive control (MPC) is proposed for an MC-SST connecting two different ac power grids. The proposed MPC predicts the circuit variables based on a discrete model of the MC-SST system, and the cost function is formulated so that the optimal switch vector for the next sample period is selected, thereby generating the required grid currents for the SST. Simulation and experimental studies are carried out to demonstrate the effectiveness and simplicity of the proposed MPC for such MC-SST-based grid interfacing systems.
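The flavour of model predictive control described here can be sketched in Python for a hypothetical single-phase grid interface: at every sample period each admissible switch vector is plugged into a one-step-ahead prediction, and the vector minimizing the cost function is applied. The circuit parameters and the three-level voltage set below are invented for illustration and are far simpler than the actual MC-SST switch set.

```python
import numpy as np

L, R, Ts = 5e-3, 0.1, 1e-4          # filter inductance, resistance, sample time
switch_voltages = np.array([-400.0, 0.0, 400.0])   # realizable converter levels

def mpc_step(i_meas, i_ref, v_grid):
    """Finite-control-set MPC: predict i[k+1] for every switch state via
    i[k+1] = i + Ts/L * (v_conv - v_grid - R*i), then pick the state whose
    prediction minimizes the squared current-tracking error (the cost)."""
    i_pred = i_meas + (Ts / L) * (switch_voltages - v_grid - R * i_meas)
    cost = (i_pred - i_ref) ** 2
    return int(np.argmin(cost))

best_state = mpc_step(i_meas=2.0, i_ref=5.0, v_grid=325.0)  # index to apply
```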
Inference and Biogeochemical Response of Vertical Velocities inside a Mode Water Eddy
NASA Astrophysics Data System (ADS)
Barceló-Llull, B.; Pallas Sanz, E.; Sangrà, P.
2016-02-01
With the aim of studying the modulation of biogeochemical fluxes by the ageostrophic secondary circulation in anticyclonic mesoscale eddies, a typical eddy of the Canary Eddy Corridor was surveyed in an interdisciplinary campaign in September 2014 in the framework of the PUMP project. The eddy was elliptically shaped, 4 months old, 110 km in diameter, and 400 m deep. It was an intrathermocline type, often also referred to as a mode water eddy. We inferred the mesoscale vertical velocity field by solving a generalized omega equation from the 3D density and ADCP velocity fields of a five-day sampled CTD-SeaSoar regular grid centred on the eddy. The grid transects were 10 nautical miles apart. Although complex, on average the inferred omega velocity field (hereafter w) shows a dipolar structure with downwelling velocities upstream of the propagation path (west) and upwelling velocities downstream. The w at the eddy center was zero, and maximum values were located at the periphery, attaining ca. 6 m day-1. Coinciding with the occurrence of the vertical velocity cells, a noticeable enhancement of phytoplankton biomass was observed at the eddy periphery with respect to the far field. A corresponding upward diapycnal flux of nutrients was also observed at the periphery. As minimum velocities were reached at the eddy center, the linear Ekman pumping mechanism was discarded. Minimum values of phytoplankton biomass were also observed at the eddy center. The possible mechanisms for such a dipolar w cell are still being investigated, but an analysis of the generalized omega equation forcing terms suggests that it may be a combination of horizontal deformation and advection of vorticity by the ageostrophic current (related to nonlinear Ekman pumping). As expected for the Trades, the wind was rather constant and uniform, with a speed of ca. 5 m s-1. The diagnosed nonlinear Ekman pumping also led to a dipolar cell that mirrors the omega w dipolar cell.
Performance of two updated blood glucose monitoring systems: an evaluation following ISO 15197:2013.
Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Jendrike, Nina; Haug, Cornelia; Freckmann, Guido
2016-05-01
Objective: For patients with diabetes, regular self-monitoring of blood glucose (SMBG) is essential to ensure adequate glycemic control. Therefore, accurate and reliable blood glucose measurements with SMBG systems are necessary. The international standard ISO 15197 describes requirements for SMBG systems, such as limits within which 95% of glucose results have to fall to reach acceptable system accuracy. The 2013 version of this standard sets higher demands, especially regarding system accuracy, than the currently still valid edition. ISO 15197 can be applied by manufacturers to receive a CE mark for their system. Research design and methods: This study was an accuracy evaluation following ISO 15197:2013 section 6.3 of two recently updated SMBG systems (Contour and Contour TS; Bayer Consumer Care AG, Basel, Switzerland) with an improved algorithm to investigate whether the systems fulfill the requirements of the new standard. For this purpose, capillary blood samples of approximately 100 participants were measured with three test strip lots of both systems, and deviations from glucose values obtained with a hexokinase-based comparison method (Cobas Integra 400 plus; Roche Instrument Center, Rotkreuz, Switzerland) were determined. Percentages of values within the acceptance criteria of ISO 15197:2013 were calculated. This study was registered at clinicaltrials.gov (NCT02358408). Main outcome: Both updated systems fulfilled the system accuracy requirements of ISO 15197:2013, as 98.5% to 100% of the results were within the stipulated limits. Furthermore, all results were within the clinically non-critical zones A and B of the consensus error grid for type 1 diabetes. Conclusions: The technical improvement of the systems ensured compliance with ISO 15197 in the hands of healthcare professionals even in its more stringent 2013 version. Alternative presentation of system accuracy results in radar plots provides additional information with certain advantages. In addition, the surveillance error grid offers a modern tool to assess a system's clinical performance.
Fine resolution 3D temperature fields off Kerguelen from instrumented penguins
NASA Astrophysics Data System (ADS)
Charrassin, Jean-Benoît; Park, Young-Hyang; Le Maho, Yvon; Bost, Charles-André
2004-12-01
The use of diving animals as autonomous vectors of oceanographic instruments is rapidly increasing, because this approach yields cost-efficient new information and can be used in previously poorly sampled areas. However, methods for analyzing the collected data are still under development. In particular, difficulties may arise from the heterogeneous data distribution linked to the animals' behavior. Here we show how raw temperature data collected by penguin-borne loggers were transformed into a regular gridded dataset that provided new information on the local circulation off Kerguelen. A total of 16 king penguins (Aptenodytes patagonicus) were equipped with satellite-positioning transmitters and with temperature-time-depth recorders (TTDRs) to record dive depth and sea temperature. The penguins' foraging trips recorded during five summers ranged from 140 to 600 km from the colony, and 11,000 dives >100 m were recorded. Temperature measurements recorded during diving were used to produce detailed 3D temperature fields of the area (0-200 m). The data treatment included dive location, determination of the vertical profile for each dive, averaging and gridding of those profiles onto 0.1°×0.1° cells, and optimal interpolation in both the horizontal and vertical using an objective analysis. Horizontal fields of temperature at the surface and at 100 m are presented, as well as a vertical section along the main foraging direction of the penguins. Compared to conventional temperature databases (Levitus World Ocean Atlas and historical stations available in the area), the 3D temperature fields collected from penguins are resolved about an order of magnitude more finely. Although TTDRs were less accurate than conventional instruments, the high spatial resolution of penguin-derived data provided unprecedented detailed information on the upper-level circulation pattern east of Kerguelen, as well as the iron-enrichment mechanism leading to high primary production over the Kerguelen Plateau.
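The averaging-and-gridding step lends itself to a short sketch: the Python function below bins the per-dive vertical profiles onto 0.1° x 0.1° cells and averages them level by level. The array layout is an assumption, and the subsequent objective-analysis (optimal interpolation) stage is omitted.

```python
import numpy as np

def grid_profiles(lats, lons, temps, lat0, lon0, nlat, nlon, cell=0.1):
    """Average vertical temperature profiles (one row per dive, one column per
    depth level) onto a regular cell x cell lat/lon grid, level by level."""
    nz = temps.shape[1]
    acc = np.zeros((nlat, nlon, nz))
    count = np.zeros((nlat, nlon, nz))
    ii = ((lats - lat0) / cell).astype(int)   # cell row of each dive
    jj = ((lons - lon0) / cell).astype(int)   # cell column of each dive
    for i, j, prof in zip(ii, jj, temps):
        if 0 <= i < nlat and 0 <= j < nlon:
            ok = ~np.isnan(prof)              # skip unsampled depth levels
            acc[i, j, ok] += prof[ok]
            count[i, j, ok] += 1
    mean = np.full((nlat, nlon, nz), np.nan)
    valid = count > 0
    mean[valid] = acc[valid] / count[valid]
    return mean
```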
On the "Optimal" Choice of Trial Functions for Modelling Potential Fields
NASA Astrophysics Data System (ADS)
Michel, Volker
2015-04-01
There are many trial functions (e.g. on the sphere) available which can be used for the modelling of a potential field. Among them are orthogonal polynomials such as spherical harmonics and radial basis functions such as spline or wavelet basis functions. Their pros and cons have been widely discussed in the last decades. We present an algorithm, the Regularized Functional Matching Pursuit (RFMP), which is able to choose trial functions of different kinds in order to combine them to a stable approximation of a potential field. One main advantage of the RFMP is that the constructed approximation inherits the advantages of the different basis systems. By including spherical harmonics, coarse global structures can be represented in a sparse way. However, the additional use of spline basis functions allows a stable handling of scattered data grids. Furthermore, the inclusion of wavelets and scaling functions yields a multiscale analysis of the potential. In addition, ill-posed inverse problems (like a downward continuation or the inverse gravimetric problem) can be regularized with the algorithm. We show some numerical examples to demonstrate the possibilities which the RFMP provides.
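The greedy selection at the heart of the RFMP can be caricatured in a finite-dimensional Euclidean setting. In the Python sketch below, the dictionary matrix mixes trial functions of any kind (its columns could be spherical harmonics, splines, or wavelets evaluated on the data grid), and a plain l2 penalty stands in for the regularization of the real algorithm; all sizes and the penalty weight are assumptions.

```python
import numpy as np

def rfmp_sketch(y, F, lam=0.1, n_iter=50):
    """Greedy regularized matching pursuit: at each step add, to a single
    dictionary column, the weight that most reduces ||y - F x||^2 + lam ||x||^2."""
    n, m = F.shape
    x = np.zeros(m)
    r = y.copy()                         # current data residual
    for _ in range(n_iter):
        num = F.T @ r - lam * x          # numerator of the optimal weight
        den = (F ** 2).sum(axis=0) + lam
        alpha = num / den                # optimal weight per candidate column
        gain = alpha * num               # decrease of the penalized objective
        k = int(np.argmax(gain))
        x[k] += alpha[k]                 # add the best column's contribution
        r -= alpha[k] * F[:, k]
    return x
```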
Making MUSIC: A multiple sampling ionization chamber
NASA Astrophysics Data System (ADS)
Shumard, B.; Henderson, D. J.; Rehm, K. E.; Tang, X. D.
2007-08-01
A multiple sampling ionization chamber (MUSIC) was developed for use in conjunction with the ATLAS scattering chamber (ATSCAT). This chamber was developed to study the (α, p) reaction with stable and radioactive beams. The gas-filled ionization chamber is used as a target and detector for both particles in the outgoing channel (p + beam particles for elastic scattering or p + residual nucleus for (α, p) reactions). The MUSIC detector is followed by a Si array to provide a trigger for anode events. The anode events are gated by a gating grid so that only (α, p) reactions where the proton reaches the Si detector result in an anode event. The MUSIC detector is a segmented ionization chamber. The active length of the chamber is 11.95 in. and is divided into 16 equal anode segments (3.5 in. × 0.70 in., with 0.3 in. spacing between pads). The dead area of the chamber was reduced by the addition of a Delrin snout that extends 0.875 in. into the chamber from the front face, to which a Mylar window is affixed. 0.5 in. above the anode is a Frisch grid that is held at ground potential, and 0.5 in. above the Frisch grid is a gating grid. The gating grid functions as a drift-electron barrier, effectively halting the gathering of signals: setting two sets of alternating wires at differing potentials creates a lateral electric field which traps the drift electrons, stopping the collection of anode signals. The chamber also has a reinforced Mylar exit window separating the Si array from the target gas. This allows protons from the (α, p) reaction to be detected. The detection of these protons opens the gating grid to allow the drift electrons released from the ionizing gas during the (α, p) reaction to reach the anode segment below the reaction.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...
2017-09-28
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
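A minimal sketch of the weighted graph Laplacian used to discretize the Laplace–Beltrami operator, assuming the patches are stacked in an (n, d) array and that Gaussian weights are restricted to the k nearest neighbours of each patch; the parameter names are illustrative, not the paper's.

    import numpy as np

    def weighted_graph_laplacian(patches, sigma=1.0, k=10):
        # L = D - W over the patch set, with Gaussian weights on the k
        # nearest neighbours of each patch; W is symmetrized so that L is
        # a valid (unnormalized) graph Laplacian.
        d2 = np.sum((patches[:, None, :] - patches[None, :, :]) ** 2, axis=-1)
        n = len(patches)
        W = np.zeros((n, n))
        for i in range(n):
            nn = np.argsort(d2[i])[1:k + 1]   # skip the patch itself
            W[i, nn] = np.exp(-d2[i, nn] / sigma ** 2)
        W = np.maximum(W, W.T)
        return np.diag(W.sum(axis=1)) - W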
Lorenzetti, Valentina; Solowij, Nadia; Fornito, Alex; Lubman, Dan Ian; Yucel, Murat
2014-01-01
Cannabis is the most widely used illicit drug worldwide, though it is unclear whether its regular use is associated with persistent alterations in brain morphology. This review examines evidence from human structural neuroimaging investigations of regular cannabis users, focusing on three main objectives: examining whether the literature to date provides evidence that alteration of brain morphology in regular cannabis users i) is apparent compared to non-cannabis-using controls; ii) is associated with patterns of cannabis use; and iii) is associated with measures of psychopathology and neurocognitive performance. The published findings indicate that regular cannabis use is associated with alterations in medial temporal, frontal, and cerebellar brain regions. Greater morphological alterations were evident among samples that used at higher doses for longer periods. However, the evidence for an association between brain morphology and cannabis use parameters was mixed. Further, there is poor evidence for an association between measures of brain morphology and psychopathology symptoms or neurocognitive performance. Overall, numerous methodological issues characterize the literature to date, including small sample sizes, heterogeneity across studies in sample characteristics (e.g., sex, comorbidity) and in the imaging techniques employed, and examination of only a limited number of brain regions. These factors make it difficult to draw firm conclusions from the existing findings. Nevertheless, this review supports the notion that regular cannabis use is associated with alterations of brain morphology, and highlights the need to consider particular methodological issues when planning future cannabis research.
Kumar, Pawan; Mehra, Anu; Inder, Deep; Sharma, Nandini
2016-01-01
Motivated and committed employees deliver better health care, which results in better outcomes and higher patient satisfaction. The aim was to assess the organizational commitment and intrinsic motivation of primary health care providers (HCPs) in New Delhi, India. The study was conducted in 2013 on a sample of 333 HCPs selected using a multistage random sampling technique. The sample includes medical officers, auxiliary nurses and midwives, and pharmacists and laboratory technicians/assistants, among both regular and contractual staff. Data were collected using a pretested structured questionnaire covering organizational commitment (OC), job satisfiers, and intrinsic job motivation. Analysis was done using SPSS version 18, and appropriate statistical tests were applied. The mean OC score was 1.6 ± 0.39 for regular staff and 1.3 ± 0.45 for contractual staff, a statistically significant difference (t = 5.57; P = 0.00). Neither regular nor contractual staff showed high emotional attachment to the organization or felt part of its family. Contractual staff did not feel proud to work in their present organization for the rest of their career. Intrinsic motivation was high in both groups, but the intergroup difference was significant (t = 2.38; P < 0.05). Contractual staff reported more dissatisfiers than regular staff, and the difference was significant (P < 0.01). Organizational commitment and intrinsic motivation of contractual staff are thus lower than those of permanent staff. Appropriate changes are required in the predictors of organizational commitment and in the factors responsible for satisfaction in the organization to keep the contractual human resource motivated and committed.
Bayarri, S; Carbonell, I; Costell, E
2012-12-01
The effect of the two common consumption temperatures, refrigeration temperature (10°C) and room temperature (22°C), on the viscoelasticity, mechanical properties, and perceived texture of commercial cream cheeses was studied. Two samples with different fat contents, regular and low fat, from each of 4 selected commercial brands were analyzed. The selection criteria were based on identifying brands with different percentages of fat content reduction between the regular- and low-fat samples (35, 50, 84, and 98.5%). The fat content of regular-fat samples ranged from 19.8 to 26.0% (wt/wt), and that of low-fat samples from 0.3 to 13.0% (wt/wt). Viscoelasticity was measured in a controlled-stress rheometer using parallel-plate geometry, and the mechanical characteristics of the samples were measured using the spreadability test. Differences in the intensity of thickness, creaminess, and roughness between the regular- and low-fat samples of each commercial brand were evaluated at each of the selected temperatures using the paired comparison test. At 10°C, all samples showed higher viscoelastic modulus values, firmness, and stickiness, and lower spreadability, than at 22°C. Differences in viscoelasticity and mechanical properties between each pair of samples of the same brand were greater at 10°C than at 22°C because of the influence not only of fat content but also of fat state. Ingestion temperature did not modify the sensory differences detected between each pair of samples in terms of creaminess and roughness, but it did modify the differences detected in thickness. The joint consideration of sample composition, fat state, and product behavior during oral processing could explain the differences in perceived thickness detected at the two measurement temperatures. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Sample Introduction Using the Hildebrand Grid Nebulizer for Plasma Spectrometry
1988-01-01
Flow injection analysis (FIA) with ICP-OES detection was evaluated for sample introduction using the Hildebrand grid nebulizer. Detection limits, linear dynamic ranges, precision, and peak widths were determined for elements (Mn, Cd, Zn, Au, Ni) in methanol and acetonitrile solutions. [Remainder of the abstract was garbled in extraction; the residue consisted of log concentration versus log peak area calibration figures.]
Image stretching on a curved surface to improve satellite gridding
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1975-01-01
A method for substantially reducing gridding errors due to satellite roll, pitch, and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image so that visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km, as compared with 25.6 and 34.9 km for two samples of satellite imagery on which image stretching was not performed.
Accurate determination of segmented X-ray detector geometry
Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton
2015-01-01
Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
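A minimal sketch of the refinement idea: if each module's misalignment is approximated by an in-plane rotation and translation, these can be estimated from matched observed/predicted peak positions with a Procrustes fit. This is an illustration under that assumption, not the published procedure, which also refines the module distance and iterates with re-indexing.

    import numpy as np

    def refine_module_pose(observed, predicted):
        # Least-squares 2D rotation R and translation t mapping predicted
        # spot positions onto observed peak positions (both (n, 2) arrays
        # of in-plane detector coordinates for one module).
        po = observed - observed.mean(axis=0)
        pp = predicted - predicted.mean(axis=0)
        U, _, Vt = np.linalg.svd(pp.T @ po)   # 2x2 cross-covariance
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # enforce a proper rotation
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = observed.mean(axis=0) - R @ predicted.mean(axis=0)
        return R, t

Applying R and t to the module's pixel coordinates updates its geometry; repeating the fit after re-indexing lets the solution converge.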
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification; nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.
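The coding-and-classification step can be sketched as follows, with ridge-regularized least squares standing in for the RRC coding (the actual method iteratively reweights residuals). Here meta is a hypothetical (n_genes, n_meta) matrix of meta-samples and labels assigns a class to each meta-sample; both are assumptions for illustration.

    import numpy as np

    def classify_by_coding(x, meta, labels, lam=0.1):
        # Code the test sample x over the meta-sample columns, then assign
        # the class whose meta-samples reconstruct x with smallest residual.
        c = np.linalg.solve(meta.T @ meta + lam * np.eye(meta.shape[1]),
                            meta.T @ x)
        best, best_class = np.inf, None
        for cls in np.unique(labels):
            mask = labels == cls
            resid = np.linalg.norm(x - meta[:, mask] @ c[mask])
            if resid < best:
                best, best_class = resid, cls
        return best_class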
Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.
2017-01-01
Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacing. The therapeutic parameters of these blocks, namely, the therapeutic ratio (TR) and geometrical sparing factor (GSF) were calculated using two different radiobiological models, including the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) is also used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF~0.90). The measured dosimetric characteristics of the optimum Grid blocks including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. PMID:29296035
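For orientation, a toy computation of two quantities named above, assuming circular holes on a square lattice and generic linear-quadratic parameters; both are simplifications, since the paper simulates full dose distributions with Geant4 and uses fitted radiobiological constants.

    import numpy as np

    def rotba(diameter, spacing):
        # Ratio of open to blocked area for circular holes of the given
        # diameter on a square lattice with the given center-to-center
        # spacing (a geometric simplification of the block designs).
        open_frac = np.pi * (diameter / 2.0) ** 2 / spacing ** 2
        return open_frac / (1.0 - open_frac)

    def lq_survival(dose, alpha=0.3, beta=0.03):
        # Linear-quadratic survival fraction for a single dose in Gy;
        # alpha and beta here are generic, not the paper's fitted values.
        return np.exp(-alpha * dose - beta * dose ** 2)

For example, rotba(1.0, 1.7) is about 0.37, i.e. roughly 27% of the field is open, consistent with the hole-diameter/spacing range the optimization favours.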
NASA Astrophysics Data System (ADS)
Zolina, Olga; Simmer, Clemens; Kapala, Alice; Mächel, Hermann; Gulev, Sergey; Groisman, Pavel
2014-05-01
We present new high-resolution daily precipitation grids developed at the Meteorological Institute, University of Bonn, and the German Weather Service (DWD) under the STAMMEX project (Spatial and Temporal Scales and Mechanisms of Extreme Precipitation Events over Central Europe). The grids have been developed from the daily-observing precipitation network of DWD, which runs one of the world's densest rain gauge networks, comprising more than 7500 stations. Several quality-controlled daily gridded products with homogenized sampling were developed, covering the periods 1931 onwards (0.5 degree resolution), 1951 onwards (0.25 and 0.5 degree), and 1971-2000 (0.1 degree). Different methods were tested to select the gridding methodology that minimizes errors of integral grid estimates over hilly terrain. Besides daily precipitation values with uncertainty estimates (which include standard kriging uncertainty as well as error estimates derived by a bootstrapping algorithm), the STAMMEX data sets include a variety of statistics that characterize the temporal and spatial dynamics of the precipitation distribution (quantiles, extremes, wet/dry spells, etc.). Comparisons with existing continental-scale daily precipitation grids (e.g., CRU, ECA E-OBS, GCOS), which include considerably fewer observations than STAMMEX, demonstrate the added value of high-resolution grids for extreme rainfall analyses. These data exhibit spatial variability patterns and trends in precipitation extremes which are missed or incorrectly reproduced over Central Europe by coarser-resolution grids based on sparser networks. The STAMMEX dataset can be used for high-quality climate diagnostics of precipitation variability, as a reference for reanalyses and remotely sensed precipitation products (including the upcoming Global Precipitation Mission products), and as input for regional climate and operational weather forecast models. We will present numerous applications of the STAMMEX grids, spanning from case studies of major Central European floods to long-term changes in different precipitation statistics, including those accounting for the alternation of dry and wet periods and for precipitation intensities associated with prolonged rainy episodes.
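The bootstrap part of the uncertainty estimate can be illustrated with a short sketch: resample the gauges within a grid cell with replacement and take the spread of the resampled means. This is a simplified stand-in under that assumption; the released product also carries kriging uncertainty.

    import numpy as np

    def bootstrap_cell_uncertainty(gauge_values, n_boot=1000, seed=0):
        # Bootstrap standard error of a grid-cell mean computed from the
        # rain gauges falling inside the cell on a given day.
        rng = np.random.default_rng(seed)
        n = len(gauge_values)
        means = [np.mean(rng.choice(gauge_values, size=n, replace=True))
                 for _ in range(n_boot)]
        return np.mean(gauge_values), np.std(means)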
Susong, D.D.; Abbott, M.L.; Krabbenhoft, D.P.
2003-01-01
Snow was sampled and analyzed for total mercury (THg) on the Idaho National Engineering and Environmental Laboratory (INEEL) and surrounding region prior to the start-up of a large (9-11 g/h) gaseous mercury emission source. The objective was to determine the effects of the source on local and regional atmospheric deposition of mercury. Snow samples collected from 48 points on a polar grid near the source had THg concentrations that ranged from 4.71 to 27.26 ng/L; snow collected from regional background sites had THg concentrations that ranged from 0.89 to 16.61 ng/L. Grid samples had higher concentrations than the regional background sites, which was unexpected because the source was not operating yet. Emission of Hg from soils is a possible source of Hg in snow on the INEEL. Evidence from Hg profiles in snow and from unfiltered/filtered split samples supports this hypothesis. Ongoing work on the INEEL is investigating Hg fluxes from soils and snow.
Designing efficient surveys: spatial arrangement of sample points for detection of invasive species
Ludek Berec; John M. Kean; Rebecca Epanchin-Niell; Andrew M. Liebhold; Robert G. Haight
2015-01-01
Effective surveillance is critical to managing biological invasions via early detection and eradication. The efficiency of surveillance systems may be affected by the spatial arrangement of sample locations. We investigate how the spatial arrangement of sample points, ranging from random to fixed grid arrangements, affects the probability of detecting a target...
Remediation of hazardous material spills is often costly and entails cumbersome procedures. The traditional method is to drill core samples in the area where the contaminant is thought to be present and then analyze these samples in a laboratory. The denser the sampling grid, the m...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, K; Lee, M; Kang, S
2014-06-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion-compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable, obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498).
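Under the definitions above, the index itself is a one-liner. The sketch below takes the four fluctuation parameters directly and skips the PCA decorrelation step the authors apply first, so it is an approximation of their procedure rather than a reimplementation.

    import numpy as np

    def regularity_index(period_sd, amp_sd, drift_slope, resid_sd):
        # Overall irregularity delta as the Euclidean norm of the four
        # fluctuation parameters (PCA decorrelation omitted), then
        # rho = ln(1 + 1/delta) / 2; larger rho means steadier breathing.
        delta = np.linalg.norm([period_sd, amp_sd, drift_slope, resid_sd])
        return np.log(1.0 + 1.0 / delta) / 2.0

    # Per the paper's thresholds: rho > 0.7 suggests suitability for
    # respiratory-gated RT, rho < 0.3 flags an irregular breather.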
Li, Chen; Habler, Gerlinde; Baldwin, Lisa C; Abart, Rainer
2018-01-01
The focused ion beam (FIB) sample preparation technique in plan-view geometry allows direct correlation of atomic-structure studies via transmission electron microscopy (TEM) with micrometer-scale property measurements. However, one main technical difficulty is that a large amount of material must be removed underneath the specimen. Furthermore, directly monitoring the milling process is difficult unless very large material volumes surrounding the TEM specimen site are removed. In this paper, a new cutting geometry is introduced for FIB lift-out sample preparation in plan-view geometry. First, an "isolated" cuboid-shaped specimen is cut out, leaving a "bridge" connecting it with the bulk material. Subsequently, the two long sides of the "isolated" cuboid are wedged, forming a triangular prism shape. A micromanipulator needle is used for in-situ transfer of the specimen to a FIB TEM grid, which has been mounted parallel to the specimen surface using a simple custom-made sample slit. Finally, the grid is transferred to the standard FIB grid holder for final thinning with standard procedures. This new cutting geometry provides clear viewing angles for monitoring the milling process, resolving the difficulty of judging whether the specimen has been entirely detached from the bulk material, with the least possible damage to the surrounding material. With an improved success rate and efficiency, this plan-view FIB lift-out specimen preparation technique should find wide application in materials science. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods, called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses but shorter wall time due to the perfect parallelization scheme.
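The cell-by-cell mapping strategy can be sketched with a priority queue: always evaluate the most likely unvisited neighbour, and stop once the best remaining cell falls a fixed number of log-likelihood units below the running peak. This is an illustration of the idea, not the Snake implementation; loglike is assumed to map integer grid-index tuples to log-likelihood values.

    import heapq

    def snake_explore(loglike, start, threshold, dim):
        # Map the likelihood cell by cell in order of decreasing value,
        # expanding axis-aligned neighbours; stop once the best unvisited
        # cell is `threshold` log-units below the running peak.
        visited = {start: loglike(start)}
        frontier = [(-visited[start], start)]
        peak = visited[start]
        while frontier:
            neg_ll, cell = heapq.heappop(frontier)
            if -neg_ll < peak - threshold:
                break                      # everything left is negligible
            for d in range(dim):
                for step in (-1, 1):
                    nb = tuple(c + step * (i == d)
                               for i, c in enumerate(cell))
                    if nb not in visited:
                        visited[nb] = loglike(nb)
                        peak = max(peak, visited[nb])
                        heapq.heappush(frontier, (-visited[nb], nb))
        return visited                     # cell index tuple -> log-like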
Optimizing the Distribution of Tie Points for the Bundle Adjustment of HRSC Image Mosaics
NASA Astrophysics Data System (ADS)
Bostelmann, J.; Breitkopf, U.; Heipke, C.
2017-07-01
For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Thanks to the nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. To avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, combining about 90 strips each. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process, and a simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution, and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of the tie points is preferable; moreover, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. We present experiments with MC-30 half-tile blocks which confirm that a stable and faster bundle adjustment is achieved. The described method is used for the systematic processing of HRSC data.
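The redundancy criterion can be approximated with a simple per-cell quota, as in the sketch below: for every object-space grid cell, keep only the first few points connecting each pair of strips. The real algorithm's stability test is more involved; points, pairs, and the quota parameter are illustrative assumptions.

    import numpy as np
    from collections import defaultdict

    def thin_tie_points(points, pairs, cell_size, quota=1):
        # Keep, per object-space grid cell and per strip pair, only the
        # first `quota` tie points; `points` is (n, 2) object coordinates
        # and `pairs` gives the (hashable) strip pair each point connects.
        kept, counts = [], defaultdict(int)
        cells = np.floor(np.asarray(points) / cell_size).astype(int)
        for idx, (cell, pair) in enumerate(zip(map(tuple, cells), pairs)):
            if counts[cell, pair] < quota:
                counts[cell, pair] += 1
                kept.append(idx)
        return kept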
Bayesian function-on-function regression for multilevel functional data.
Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S
2015-09-01
Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves finely sampled over a grid, are frequently obtained. Moreover, researchers often sample multiple curves per person, resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed-model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Warsi, Saif A.
1989-01-01
A detailed operating manual is presented for a grid generating program that produces 3-D meshes for advanced turboprops. The code uses both algebraic and elliptic partial differential equation methods to generate single-rotation and counterrotation H- or C-type meshes for the z-r planes and H-type meshes for the z-theta planes. The code allows easy specification of geometrical constraints (such as blade angle, location of bounding surfaces, etc.) and mesh control parameters (point distribution near the blades and nacelle, number of grid points desired, etc.), and it has good runtime diagnostics. An overview of the mesh generation procedure is provided, together with a sample input dataset, a detailed explanation of all inputs, and example meshes.
Kalinin, Sergei V.; Balke, Nina; Borisevich, Albina Y.; Jesse, Stephen; Maksymovych, Petro; Kim, Yunseok; Strelcov, Evgheni
2014-06-10
An excitation voltage biases an ionic conducting material sample over a nanoscale grid. The bias sweeps a modulated voltage with increasing maximal amplitudes. A current response is measured at grid locations. Current response reversal curves are mapped over maximal amplitudes of the bias cycles. Reversal curves are averaged over the grid for each bias cycle and mapped over maximal bias amplitudes for each bias cycle. Average reversal curve areas are mapped over maximal amplitudes of the bias cycles. Thresholds are determined for onset and ending of electrochemical activity. A predetermined number of bias sweeps may vary in frequency where each sweep has a constant number of cycles and reversal response curves may indicate ionic diffusion kinetics.
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1980-01-01
A method for generating two dimensional finite difference grids about airfoils and other shapes by the use of the Poisson differential equation is developed. The inhomogeneous terms are automatically chosen such that two important effects are imposed on the grid at both the inner and outer boundaries. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. A FORTRAN computer program has been written to use this method. A description of the program, a discussion of the control parameters, and a set of sample cases are included.
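A minimal sketch of the elliptic smoothing idea, reduced to the Laplace case (all inhomogeneous terms set to zero) and solved by point-Jacobi sweeps on the interior nodes; restoring the source terms P and Q would reinstate the spacing and angle control the abstract describes. Array names and the sweep count are illustrative.

    import numpy as np

    def smooth_grid(x, y, n_sweeps=200):
        # Point-Jacobi relaxation of the Laplace system for the interior
        # nodes of the (x, y) mesh arrays; boundary rows and columns stay
        # fixed, holding the inner (airfoil) and outer boundaries in place.
        for _ in range(n_sweeps):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                    + x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                    + y[1:-1, 2:] + y[1:-1, :-2])
        return x, y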
Xu, Hui Qiu; Huang, Yin Hua; Wu, Zhi Feng; Cheng, Jiong; Li, Cheng
2016-10-01
Based on 641 agricultural topsoil samples (0-20 cm) and the 2005 land use map of Guangzhou, we used single-factor pollution indices, Pearson/Spearman correlations, and partial redundancy analysis to quantify soil contamination with As and Cd and its relationships with landscape heterogeneity at three grid scales (2 km×2 km, 5 km×5 km, and 10 km×10 km), as well as the determinant landscape heterogeneity factors at each scale. 5.3% and 7.2% of soil samples were contaminated with As and Cd, respectively. At all three scales, agricultural soil As and Cd contamination was generally significantly correlated with parent material composition, river/road density, and the landscape patterns of several land use types, indicating that parent materials, sewage irrigation, and human activities (e.g., industrial and traffic activities, and the addition of pesticides and fertilizers) were probably the main input pathways of the trace metals. Three subsets of landscape heterogeneity variables (parent materials, distance-density variables, and landscape patterns) could explain 12.7%-42.9% of the variation in soil contamination with As and Cd; the explanatory power increased with grid scale, and the determinant factors varied with scale. Parent materials contributed more to the variation in soil contamination at the 2 and 10 km grid scales, while the contributions of landscape patterns and distance-density variables generally increased with grid scale. Adjusting the distribution of cropland and optimizing the landscape pattern of land use types are important ways to reduce soil contamination at local scales, to which urban planners and decision makers should pay more attention.
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
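A toy version of the category-based importance sampling: each category is sampled with a prescribed fraction of the points, and its sample mean is weighted by the category's area fraction. The two-category setup and all names are illustrative assumptions; SILHS itself uses eight categories and Latin hypercube draws.

    import numpy as np

    def importance_estimate(area_fracs, sample_fracs, rate_fn, n_total, rng):
        # Stratified/importance estimate of a grid-box average: draw a
        # prescribed fraction of points from each subgrid category and
        # weight each category's sample mean by its area fraction.
        total = 0.0
        for cat, (p_area, p_samp) in enumerate(zip(area_fracs, sample_fracs)):
            n = max(1, round(p_samp * n_total))
            draws = [rate_fn(cat, rng) for _ in range(n)]
            total += p_area * np.mean(draws)
        return total

    # e.g. oversample a rare rainy category (20% of area, 70% of points):
    # importance_estimate([0.2, 0.8], [0.7, 0.3],
    #                     lambda c, r: r.normal(1.0 if c == 0 else 0.1, 0.05),
    #                     100, np.random.default_rng(0))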
Ray, Mary C.; Kulongoski, Justin T.; Belitz, Kenneth
2009-01-01
Ground-water quality in the approximately 620-square-mile San Francisco Bay study unit (SFBAY) was investigated from April through June 2007 as part of the Priority Basin project of the Ground-Water Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of raw ground-water quality, as well as a statistically consistent basis for comparing water quality throughout California. Samples in SFBAY were collected from 79 wells in San Francisco, San Mateo, Santa Clara, Alameda, and Contra Costa Counties. Forty-three of the wells sampled were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Thirty-six wells were sampled to aid in evaluation of specific water-quality issues (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater-indicator compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally occurring inorganic constituents (nutrients, major and minor ions, trace elements, chloride and bromide isotopes, and uranium and strontium isotopes), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14 isotopes, and stable isotopes of hydrogen, oxygen, nitrogen, boron, and carbon), and dissolved noble gases (noble gases were analyzed in collaboration with Lawrence Livermore National Laboratory) also were measured to help identify the source and age of the sampled ground water. Quality-control samples (blank samples, replicate samples, matrix spike samples) were collected for approximately one-third of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control information from the field blanks resulted in applying 'V' codes to approximately 0.1 percent of the data collected for ground-water samples (meaning a constituent was detected in blanks as well as the corresponding environmental data). See the Appendix section 'Quality-Control-Sample Results'. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, and (or) blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is delivered to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. VOCs were detected in about one-half of the grid wells, while pesticides were detected in about one-fifth of the grid wells. Concentrations of all VOCs and pesticides detected in samples from all SFBAY wells were below health-based thresholds. 
No pharmaceutical compounds were detected in any SFBAY well. One potential wastewater-indicator compound, caffeine, was detected in one grid well in SFBAY. Concentrations of most trace elements and nutrients detected in samples from all SFBAY wells were below health-based thresholds. Exceptions include nitrate, detected above the USEPA maximum contaminant level (MCL-US) in 3 samples; arsenic, above the MCL-US in 3 samples; c
Nonlinear refraction and reflection travel time tomography
Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.
1998-01-01
We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first-arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
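One Gauss-Newton step of a Laplacian-regularized travel time inversion might look like the following sketch, which solves the normal equations with conjugate gradients. G (the ray-path sensitivity matrix), the grid dimensions, and the omission of reflector-geometry parameters are all assumptions for illustration, not the authors' implementation.

    import numpy as np
    from scipy.sparse import eye, kron
    from scipy.sparse.linalg import LinearOperator, cg

    def laplacian_2d(nx, nz):
        # Second-difference (Laplacian) operator on an nx-by-nz grid,
        # used to keep the slowness update smooth.
        def d2(n):
            return -2.0 * eye(n) + eye(n, k=1) + eye(n, k=-1)
        return kron(d2(nx), eye(nz)) + kron(eye(nx), d2(nz))

    def regularized_step(G, t_res, m0, lam, nx, nz):
        # One Gauss-Newton update: solve (G^T G + lam L^T L) dm = G^T t_res
        # by conjugate gradients, where G holds ray-path lengths per cell
        # and t_res the current travel time residuals.
        L = laplacian_2d(nx, nz)
        A = LinearOperator((nx * nz, nx * nz),
                           matvec=lambda v: G.T @ (G @ v)
                           + lam * (L.T @ (L @ v)))
        dm, _ = cg(A, G.T @ t_res)
        return m0 + dm.reshape(nx, nz)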
ELIPGRID-PC: A PC program for calculating hot spot probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1994-10-01
ELIPGRID-PC, a new personal computer program, has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability-of-hit versus cost data for graphing with spreadsheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program.
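A brute-force cross-check of the hot-spot hit probability is easy to write down: place an elliptical target at a random position and orientation relative to a square sampling grid and count how often at least one grid node falls inside it. This Monte Carlo sketch is not Singer's analytic algorithm; the semi-axes, spacing, and trial count are illustrative.

    import numpy as np

    def hit_probability(a, b, spacing, n_trials=20000, seed=0):
        # Chance that a square grid with the given spacing hits an
        # elliptical hot spot with semi-axes a, b placed at a random
        # position (uniform within one cell) and orientation.
        rng = np.random.default_rng(seed)
        reach = int(np.ceil(max(a, b) / spacing)) + 1
        nodes = spacing * np.array([(i, j)
                                    for i in range(-reach, reach + 2)
                                    for j in range(-reach, reach + 2)])
        hits = 0
        for _ in range(n_trials):
            cx, cy = rng.uniform(0.0, spacing, size=2)
            th = rng.uniform(0.0, np.pi)
            dx, dy = nodes[:, 0] - cx, nodes[:, 1] - cy
            u = dx * np.cos(th) + dy * np.sin(th)   # ellipse-frame coords
            v = -dx * np.sin(th) + dy * np.cos(th)
            hits += np.any((u / a) ** 2 + (v / b) ** 2 <= 1.0)
        return hits / n_trials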