Ubiquity of, and geostatistics for, nonstationary increment random fields
NASA Astrophysics Data System (ADS)
O'Malley, Daniel; Cushman, John H.
2013-07-01
Nonstationary random fields such as fractional Brownian motion and fractional Lévy motion have been studied extensively in the hydrology literature. On the other hand, random fields that have nonstationary increments have seen little study. A mathematical argument is presented that demonstrates that processes with stationary increments are the exception and that processes with nonstationary increments are far more abundant. The abundance of nonstationary increment processes has important implications, e.g., in kriging, where a translation-invariant variogram implicitly assumes stationarity of the increments. An approach to kriging for processes with nonstationary increments is presented and accompanied by some numerical results.
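A minimal numerical sketch of the distinction the abstract draws, using an illustrative construction (not the paper's model): Brownian motion has stationary increments, so the variance of a fixed-lag increment is the same everywhere, while multiplying the same increments by a location-dependent scale produces a process whose increment variance depends on location.

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian motion: Var[B(t+h) - B(t)] depends only on the lag h.
# Scaling its increments by a location-dependent sigma(t) yields a process
# with nonstationary increments -- the generic case the paper argues for.
n, n_paths = 1000, 2000
t = np.linspace(0.0, 1.0, n)
dW = rng.standard_normal((n_paths, n - 1)) * np.sqrt(1.0 / (n - 1))

sigma = 1.0 + 4.0 * t[:-1]                  # scale grows with location
X_stat = np.cumsum(dW, axis=1)              # stationary increments
X_nonstat = np.cumsum(sigma * dW, axis=1)   # nonstationary increments

h = 100  # lag, in grid steps

def inc_var(X, i):
    # empirical variance of the lag-h increment starting at grid index i
    return np.var(X[:, i + h] - X[:, i])

# Ratio of increment variances near t = 0.05 vs near t = 0.85:
v_stat = inc_var(X_stat, 50) / inc_var(X_stat, 850)
v_nonstat = inc_var(X_nonstat, 50) / inc_var(X_nonstat, 850)
print(v_stat, v_nonstat)
```

For the stationary-increment process the ratio is close to 1; for the scaled process it is far below 1, which is exactly the structure a translation-invariant variogram cannot represent.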
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast Fourier Transform (FFT) methods. The QLGA framework stabilizes the Gauss-Newton iterates for a large number of unknown model parameters and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
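To make "randomized least-squares solvers" concrete, here is a sketch-and-solve example in the flavour of randomized matrix algebra the abstract refers to: a tall least-squares system is compressed with a random Gaussian sketch and the small problem is solved instead. The matrix sizes and sketch dimension are illustrative and are not taken from MADS or the QLGA implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tall, overdetermined system: min_x ||A x - b||
m, n = 20000, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: S has k << m rows, so the solve is much cheaper.
k = 400
S = rng.standard_normal((k, m)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)  # the sketched solution is close to the exact one
```

The sketch preserves the least-squares geometry with high probability, which is why such solvers can be dropped into matrix-free inverse frameworks.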
A comparative study of Gaussian geostatistical models and Gaussian Markov random field models
Song, Hae-Ryoung; Fuentes, Montserrat; Ghosh, Sujit
2008-01-01
Gaussian geostatistical models (GGMs) and Gaussian Markov random fields (GMRFs) are two distinct approaches commonly used in spatial models for modeling point referenced and areal data, respectively. In this paper, the relations between GGMs and GMRFs are explored based on approximations of GMRFs by GGMs, and approximations of GGMs by GMRFs. Two new metrics of approximation are proposed: (i) the Kullback-Leibler discrepancy of spectral densities and (ii) the chi-squared distance between spectral densities. The distances between the spectral density functions of GGMs and GMRFs measured by these metrics are minimized to obtain the approximations of GGMs and GMRFs. The proposed methodologies are validated through several empirical studies. We compare the performance of our approach to other methods based on covariance functions, in terms of the average mean squared prediction error and also the computational time. A spatial analysis of a dataset on PM2.5 collected in California is presented to illustrate the proposed method. PMID:19337581
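A numerical sketch of the first proposed metric, a Kullback-Leibler-type discrepancy between two spectral densities evaluated on a frequency grid. The two toy spectra below (a Matern-like geostatistical spectrum and a GMRF-like lattice spectrum) are made-up stand-ins, not the paper's fitted models.

```python
import numpy as np

w = np.linspace(0.01, np.pi, 2000)          # frequency grid

def f_ggm(w, phi=1.0, alpha=1.5):
    # toy geostatistical (Matern-like) spectral density
    return 1.0 / (phi + w**2) ** alpha

def f_gmrf(w, kappa=1.0):
    # toy Gaussian Markov random field (lattice) spectral density
    return 1.0 / (kappa + 2.0 - 2.0 * np.cos(w)) ** 2

def kl_discrepancy(f1, f2, w):
    # KL-type divergence for Gaussian spectra: mean of r - log r - 1, r = f1/f2
    r = f1(w) / f2(w)
    return np.mean(r - np.log(r) - 1.0)

d_self = kl_discrepancy(f_ggm, f_ggm, w)    # identical spectra -> 0
d_cross = kl_discrepancy(f_ggm, f_gmrf, w)  # different spectra -> positive
print(d_self, d_cross)
```

Minimizing such a discrepancy over the parameters of one family gives its best approximation to a member of the other family, which is the paper's fitting strategy.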
A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco
2016-04-01
We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random fields, and explore them on one- and two-dimensional synthetic test cases.
GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES
This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...
Universal Components of Random Nodal Sets
NASA Astrophysics Data System (ADS)
Gayet, Damien; Welschinger, Jean-Yves
2016-11-01
We give, as L grows to infinity, an explicit lower bound of order L^{n/m} for the expected Betti numbers of the vanishing locus of a random linear combination of eigenvectors of P with eigenvalues below L. Here, P denotes an elliptic self-adjoint pseudo-differential operator of order m > 0, bounded from below and acting on the sections of a Riemannian line bundle over a smooth closed n-dimensional manifold M equipped with some Lebesgue measure. In fact, for every closed hypersurface Σ of R^n, we prove that there exists a positive constant p_Σ depending only on Σ, such that for every large enough L and every x in M, a component diffeomorphic to Σ appears with probability at least p_Σ in the vanishing locus of a random section and in the ball of radius L^{-1/m} centered at x. These results apply in particular to Laplace-Beltrami and Dirichlet-to-Neumann operators.
Randomization methods in emergency setting trials: a descriptive review
Moe‐Byrne, Thirimon; Oddie, Sam; McGuire, William
2015-01-01
Background Quasi‐randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic indicators between treatment groups in trials using true randomization versus trials using quasi‐randomization. Results Seven reviews contained 16 trials that used true randomization and 11 that used quasi‐randomization. Baseline group imbalance was identified in four trials using true randomization (25%) and in two quasi‐randomized trials (18%). Of the four truly randomized trials with imbalance, three concealed treatment allocation adequately. Clinical heterogeneity and poor reporting limited the assessment of trial recruitment outcomes. Conclusions We did not find strong or consistent evidence that quasi‐randomization is associated with selection bias more often than true randomization. High risk of bias judgements for quasi‐randomized emergency studies should therefore not be assumed in systematic reviews. Clinical heterogeneity across trials within reviews, coupled with limited availability of relevant trial accrual data, meant it was not possible to adequately explore the possibility that true randomization might result in slower trial recruitment rates, or the recruitment of less representative populations. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26333419
Imprecise (fuzzy) information in geostatistics
Bardossy, A.; Bogardi, I.; Kelly, W.E.
1988-05-01
A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been the insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.
Randomization Methods in Emergency Setting Trials: A Descriptive Review
ERIC Educational Resources Information Center
Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William
2016-01-01
Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…
Fractal and geostatistical methods for modeling of a fracture network
Chiles, J.P.
1988-08-01
The modeling of fracture networks is useful for fluid flow and rock mechanics studies. About 6600 fracture traces were recorded on drifts of a uranium mine in a granite massif. The traces have an extension of 0.20-20 m. The network was studied by fractal and by geostatistical methods but can be considered neither as a fractal with a constant dimension nor as a set of purely randomly located fractures. Two kinds of generalization of conventional models can still provide more flexibility for the characterization of the network: (a) a nonscaling fractal model with variable similarity dimension (for a 2-D network of traces, the dimension varying from 2 at the 10-m scale to 1 at the centimeter scale); (b) a parent-daughter model with a regionalized density. The geostatistical study allows a 3-D model to be established in which fractures are assumed to be discs, fractures are grouped in clusters or swarms, and fracture density is regionalized (with two ranges at about 30 and 300 m). The fractal model is easy to fit and to simulate along a line, but 2-D and 3-D simulations are more difficult. The geostatistical model is more complex, but easy to simulate, even in 3-D.
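A short sketch of the parent-daughter idea in two dimensions, in the style of a Neyman-Scott cluster process: parents form a Poisson process and each parent spawns a Poisson number of daughter fracture centres scattered around it, so fracture density is clustered rather than purely random. All parameter values are illustrative, not those fitted to the mine data.

```python
import numpy as np

rng = np.random.default_rng(1)

side = 100.0               # domain side (m)
lam_parent = 0.02          # parent intensity (parents per m^2)
mean_daughters = 8         # mean daughters per parent
cluster_sd = 3.0           # scatter of daughters around a parent (m)

# Parents: homogeneous Poisson process on the square
n_parents = rng.poisson(lam_parent * side * side)
parents = rng.uniform(0.0, side, size=(n_parents, 2))

# Daughters: Poisson count per parent, Gaussian scatter around it
fractures = []
for p in parents:
    nd = rng.poisson(mean_daughters)
    fractures.append(p + cluster_sd * rng.standard_normal((nd, 2)))
fractures = np.vstack(fractures)

print(n_parents, len(fractures))   # clustered set of fracture centres
```

A regionalized-density version would additionally let `lam_parent` vary smoothly in space, which is what the geostatistical study in the abstract estimates.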
Geostatistics and spatial analysis in biological anthropology.
Relethford, John H
2008-05-01
A variety of methods have been used to make evolutionary inferences based on the spatial distribution of biological data, including reconstructing population history and detection of the geographic pattern of natural selection. This article provides an examination of geostatistical analysis, a method used widely in geology but which has not often been applied in biological anthropology. Geostatistical analysis begins with the examination of a variogram, a plot showing the relationship between a biological distance measure and the geographic distance between data points and which provides information on the extent and pattern of spatial correlation. The results of variogram analysis are used for interpolating values of unknown data points in order to construct a contour map, a process known as kriging. The methods of geostatistical analysis and discussion of potential problems are applied to a large data set of anthropometric measures for 197 populations in Ireland. The geostatistical analysis reveals two major sources of spatial variation. One pattern, seen for overall body and craniofacial size, shows an east-west cline most likely reflecting the combined effects of past population dispersal and settlement. The second pattern is seen for craniofacial height and shows an isolation by distance pattern reflecting rapid spatial changes in the midlands region of Ireland, perhaps attributable to the genetic impact of the Vikings. The correspondence of these results with other analyses of these data and the additional insights generated from variogram analysis and kriging illustrate the potential utility of geostatistical analysis in biological anthropology.
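The variogram described above can be sketched in a few lines: it is half the average squared difference between sample values, binned by the geographic distance separating the samples. The synthetic east-west cline below is a made-up stand-in for the Irish anthropometric data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: a smooth east-west cline plus measurement noise
n = 300
xy = rng.uniform(0.0, 10.0, size=(n, 2))
z = np.sin(xy[:, 0] / 2.0) + 0.1 * rng.standard_normal(n)

def empirical_variogram(xy, z, bins):
    # semivariance: half the mean squared difference, binned by distance
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)      # each pair once
    d, sq = d[iu], sq[iu]
    which = np.digitize(d, bins)
    return np.array([sq[which == i].mean() for i in range(1, len(bins))])

gamma = empirical_variogram(xy, z, bins=np.linspace(0.0, 6.0, 13))
print(gamma)   # semivariance rises with lag: spatial correlation is present
</n```

A rising, then flattening, variogram is the signature of spatial correlation that makes the subsequent kriging step worthwhile.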
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software on the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose them. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. The radius of a circle/sphere and/or the radii of a standard or rotated ellipse/ellipsoid are the decision variables to be optimized by the GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that defining search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics compared with definitions based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
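The core idea, choosing the search-neighborhood radius from the data by minimizing leave-one-out cross-validation error with an evolutionary loop, can be sketched compactly. To keep the sketch short, a local-mean estimator stands in for ordinary kriging, and a single-parent mutate-and-keep-best loop stands in for a full genetic algorithm; both substitutions are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic spatial data
n = 200
xy = rng.uniform(0.0, 10.0, size=(n, 2))
z = np.cos(xy[:, 0]) * np.sin(xy[:, 1]) + 0.05 * rng.standard_normal(n)
D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)

def loo_error(radius):
    # leave-one-out squared error of a local-mean estimate within `radius`
    errs = []
    for i in range(n):
        nb = (D[i] <= radius) & (np.arange(n) != i)
        if nb.sum() == 0:
            return np.inf
        errs.append((z[nb].mean() - z[i]) ** 2)
    return float(np.mean(errs))

# Evolutionary search: mutate the current radius, keep improvements.
best_r = 5.0
best_e = loo_error(best_r)
for _ in range(40):
    cand = abs(best_r + rng.normal(0.0, 0.5))
    e = loo_error(cand)
    if e < best_e:
        best_r, best_e = cand, e

print(best_r, best_e)   # a data-chosen neighbourhood radius
```

The same loop generalizes directly to more decision variables (ellipse radii and orientation) and to a population-based GA.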
Robust geostatistical analysis of spatial data
NASA Astrophysics Data System (ADS)
Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.
2013-04-01
Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter.
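A small demonstration of why robustness matters for variogram estimation: the classical Matheron estimator at a given lag is wrecked by a handful of gross outliers, while the robust Cressie-Hawkins estimator stays close to the clean value. This illustrates the motivation only; the paper's robust REML method is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated lag-h increments z(x+h) - z(x) with unit variance,
# then 1% gross outliers added
n = 2000
d = rng.standard_normal(n)
d_cont = d.copy()
d_cont[:20] += 50.0

def matheron(diffs):
    # classical estimator: half the mean squared increment
    return 0.5 * np.mean(diffs ** 2)

def cressie_hawkins(diffs):
    # robust estimator based on fourth power of mean root-absolute increment
    m = np.mean(np.sqrt(np.abs(diffs))) ** 4
    return 0.5 * m / (0.457 + 0.494 / len(diffs))

g_clean = matheron(d)
g_classical = matheron(d_cont)       # blown up by the outliers
g_robust = cressie_hawkins(d_cont)   # close to the clean value
print(g_clean, g_classical, g_robust)
```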
Robust geostatistical analysis of spatial data
NASA Astrophysics Data System (ADS)
Papritz, A.; Künsch, H. R.; Schwierz, C.; Stahel, W. A.
2012-04-01
Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outlying observations may result from errors (e.g. in data transcription) or from local perturbations in the processes that are responsible for a given pattern of spatial variation. As an example, the spatial distribution of some trace metal in the soils of a region may be distorted by emissions of local anthropogenic sources. Outliers affect the modelling of the large-scale spatial variation, the so-called external drift or trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) [2] proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) [1] for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation. Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations.
Multiple Extended Target Tracking With Labeled Random Finite Sets
NASA Astrophysics Data System (ADS)
Beard, Michael; Reuter, Stephan; Granstrom, Karl; Vo, Ba-Tuong; Vo, Ba-Ngu; Scheel, Alexander
2016-04-01
Targets that generate multiple measurements at a given instant in time are commonly known as extended targets. These present a challenge for many tracking algorithms, as they violate one of the key assumptions of the standard measurement model. In this paper, a new algorithm is proposed for tracking multiple extended targets in clutter, that is capable of estimating the number of targets, as well the trajectories of their states, comprising the kinematics, measurement rates and extents. The proposed technique is based on modelling the multi-target state as a generalised labelled multi-Bernoulli (GLMB) random finite set (RFS), within which the extended targets are modelled using gamma Gaussian inverse Wishart (GGIW) distributions. A cheaper variant of the algorithm is also proposed, based on the labelled multi-Bernoulli (LMB) filter. The proposed GLMB/LMB-based algorithms are compared with an extended target version of the cardinalised probability hypothesis density (CPHD) filter, and simulation results show that the (G)LMB has improved estimation and tracking performance.
Determining the Significance of Item Order in Randomized Problem Sets
ERIC Educational Resources Information Center
Pardos, Zachary A.; Heffernan, Neil T.
2009-01-01
Researchers who make tutoring systems would like to know which sequences of educational content lead to the most effective learning by their students. The majority of data collected in many ITS systems consist of answers to a group of questions of a given skill often presented in a random sequence. Following work that identifies which items…
Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.
Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale
2016-08-01
Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. PMID:27566774
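A one-dimensional toy version of the problem the abstract addresses: when sampling locations are more likely where the spatial process is high, the naive sample mean overstates the field mean. The field and intensities below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# "True" surface on a fine grid; its mean is 1.0
m = 10000
x = np.linspace(0.0, 1.0, m)
field = np.sin(2 * np.pi * x) + 1.0

# Non-preferential design: uniform locations.
idx_unif = rng.choice(m, size=200)

# Preferential design: sampling intensity proportional to the field itself,
# so point process and measurement process are dependent.
p = field / field.sum()
idx_pref = rng.choice(m, size=200, p=p)

mean_unif = field[idx_unif].mean()
mean_pref = field[idx_pref].mean()
print(mean_unif, mean_pref)   # the preferential sample mean is biased upward
```

A shared-random-component model, as in the abstract, corrects this by modelling the sampling intensity and the measured surface jointly.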
A Practical Primer on Geostatistics
Olea, Ricardo A.
2009-01-01
THE CHALLENGE Most geological phenomena are extraordinarily complex in their interrelationships and vast in their geographical extension. Ordinarily, engineers and geoscientists are faced with corporate or scientific requirements to properly prepare geological models with measurements involving a small fraction of the entire area or volume of interest. Exact description of a system such as an oil reservoir is neither feasible nor economically possible. The results are necessarily uncertain. Note that the uncertainty is not an intrinsic property of the systems; it is the result of incomplete knowledge by the observer.
THE AIM OF GEOSTATISTICS The main objective of geostatistics is the characterization of spatial systems that are incompletely known, systems that are common in geology. A key difference from classical statistics is that geostatistics uses the sampling location of every measurement. Unless the measurements show spatial correlation, the application of geostatistics is pointless. Ordinarily the need for additional knowledge goes beyond a few points, which explains the display of results graphically as fishnet plots, block diagrams, and maps.
GEOSTATISTICAL METHODS Geostatistics is a collection of numerical techniques for the characterization of spatial attributes using primarily two tools: probabilistic models, which are used for spatial data in a manner similar to the way in which time-series analysis characterizes temporal data, or pattern recognition techniques. The probabilistic models are used as a way to handle uncertainty in results away from sampling locations, making a radical departure from alternative approaches like inverse distance estimation methods.
DIFFERENCES WITH TIME SERIES On dealing with time-series analysis, users frequently concentrate their attention on extrapolations for making forecasts. Although users of geostatistics may be interested in extrapolation, the methods work at their best interpolating. This simple difference has
Model Selection for Geostatistical Models
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
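The AIC comparison described above can be illustrated numerically: for Gaussian models, AIC = -2 log-likelihood + 2 (number of parameters), and on data simulated with spatial correlation a model that carries an exponential covariance beats one that ignores the correlation. The setup below is a toy example, not the lizard-abundance analysis.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate spatially correlated data with an exponential covariance
n = 150
xy = rng.uniform(0.0, 10.0, size=(n, 2))
D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
Sigma_true = np.exp(-D / 2.0) + 1e-8 * np.eye(n)
z = np.linalg.cholesky(Sigma_true) @ rng.standard_normal(n)

def gauss_aic(z, Sigma, n_params):
    # AIC from the multivariate Gaussian log-likelihood under covariance Sigma
    n = len(z)
    sign, logdet = np.linalg.slogdet(Sigma)
    quad = z @ np.linalg.solve(Sigma, z)
    loglik = -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
    return -2.0 * loglik + 2.0 * n_params

# Independence model (one variance parameter) vs spatial model (two parameters)
aic_indep = gauss_aic(z, np.var(z) * np.eye(n), n_params=1)
aic_spatial = gauss_aic(z, np.exp(-D / 2.0) + 1e-8 * np.eye(n), n_params=2)
print(aic_indep, aic_spatial)
```

Ignoring the spatial correlation inflates the AIC here, which is the paper's point about covariate selection under dependence.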
Self-similar continuous cascades supported by random Cantor sets: Application to rainfall data.
Muzy, Jean-François; Baïle, Rachel
2016-05-01
We introduce a variant of continuous random cascade models that extends former constructions introduced by Barral-Mandelbrot and Bacry-Muzy in the sense that they can be supported by sets of arbitrary fractal dimension. The so-introduced sets are exactly self-similar stationary versions of random Cantor sets formerly introduced by Mandelbrot as "random cutouts." We discuss the main mathematical properties of our construction and compute its scaling properties. We then illustrate our purpose on several numerical examples and we consider a possible application to rainfall data. We notably show that our model allows us to reproduce remarkably the distribution of dry period durations. PMID:27300908
Distribution of Modelling Spatial Processes Using Geostatistical Analysis
NASA Astrophysics Data System (ADS)
Grynyshyna-Poliuga, Oksana; Stanislawska, Iwona; Swiatek, Anna
The Geostatistical Analyst uses sample points taken at different locations in a landscape and creates (interpolates) a continuous surface. The Geostatistical Analyst provides two groups of interpolation techniques: deterministic and geostatistical. All methods rely on the similarity of nearby sample points to create the surface. Deterministic techniques use mathematical functions for interpolation. Geostatistics relies on both statistical and mathematical methods, which can be used to create surfaces and assess the uncertainty of the predictions. The first step in geostatistical analysis is variography: computing and modelling a semivariogram. A semivariogram is one of the significant functions for indicating spatial correlation in observations measured at sample locations. It is commonly represented as a graph that shows the variance of the measure with distance between all pairs of sampled locations. Such a graph is helpful for building a mathematical model that describes the variability of the measure with location. Modelling the relationship among sample locations to indicate the variability of the measure with separation distance is called semivariogram modelling. It is applied in applications that involve estimating the value of a measure at a new location. Our work presents the analysis of the data in the following steps: identification of data set periods, construction and modelling of the empirical semivariogram for a single location, and use of the kriging mapping function to model TEC maps at mid-latitudes during disturbed and quiet days. Based on the semivariogram, weights for the kriging interpolation are estimated. Additional observations do not, in general, provide relevant extra information for the interpolation, because the spatial correlation is well described by the semivariogram. Keywords: Semivariogram, Kriging, modelling, Geostatistics, TEC
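The step "weights for the kriging interpolation are estimated from the semivariogram" corresponds to solving the ordinary kriging system: semivariances between samples on the left, semivariances to the target on the right, plus a Lagrange multiplier enforcing that the weights sum to one. The spherical model and the four-point layout below are illustrative, not the TEC-map configuration.

```python
import numpy as np

def spherical(h, nugget=0.1, sill=1.0, rng_=5.0):
    # spherical semivariogram model; gamma(0) = 0 by convention
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, np.where(h == 0.0, 0.0, g))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
target = np.array([0.5, 0.5])

# Ordinary kriging system: [Gamma 1; 1' 0] [w; mu] = [gamma0; 1]
n = len(pts)
G = spherical(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
A = np.zeros((n + 1, n + 1))
A[:n, :n] = G
A[:n, n] = 1.0
A[n, :n] = 1.0
b = np.append(spherical(np.linalg.norm(pts - target, axis=1)), 1.0)

sol = np.linalg.solve(A, b)
weights, mu = sol[:n], sol[n]
print(weights, mu)   # weights sum to 1; nearer samples carry more weight
```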
Hanks, Ephraim M.; Schliep, Erin M.; Hooten, Mevin B.; Hoeting, Jennifer A.
2015-01-01
In spatial generalized linear mixed models (SGLMMs), covariates that are spatially smooth are often collinear with spatially smooth random effects. This phenomenon is known as spatial confounding and has been studied primarily in the case where the spatial support of the process being studied is discrete (e.g., areal spatial data). In this case, the most common approach suggested is restricted spatial regression (RSR) in which the spatial random effects are constrained to be orthogonal to the fixed effects. We consider spatial confounding and RSR in the geostatistical (continuous spatial support) setting. We show that RSR provides computational benefits relative to the confounded SGLMM, but that Bayesian credible intervals under RSR can be inappropriately narrow under model misspecification. We propose a posterior predictive approach to alleviating this potential problem and discuss the appropriateness of RSR in a variety of situations. We illustrate RSR and SGLMM approaches through simulation studies and an analysis of malaria frequencies in The Gambia, Africa.
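The core of restricted spatial regression is one projection: replace the spatial random-effect basis by its image under the orthogonal complement of the fixed-effect design, so the two can no longer be collinear. The design matrix and spatial basis below are toy stand-ins; the paper embeds this step in a full SGLMM.

```python
import numpy as np

rng = np.random.default_rng(13)

n, p = 100, 3
# Fixed-effect design (intercept plus two covariates)
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
# Basis for the spatial random effect (toy, unstructured)
W = rng.standard_normal((n, 10))

# Projector onto the orthogonal complement of col(X): I - X (X'X)^{-1} X'
P_perp = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
W_rsr = P_perp @ W          # restricted (deconfounded) spatial basis

check = np.max(np.abs(X.T @ W_rsr))
print(check)   # ~0: restricted random effects are orthogonal to X
```

This is also where the computational benefit comes from: the restricted effects live in a lower-dimensional space that cannot compete with the fixed effects.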
ERIC Educational Resources Information Center
Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel
2012-01-01
A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
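The false discovery rate machinery the abstract relies on is typically the Benjamini-Hochberg step-up procedure, sketched below. The p-values are made up; in the MCAR application they would come from the group-difference tests on the set of variables.

```python
def benjamini_hochberg(pvals, q=0.05):
    # Step-up FDR control: find the largest rank k with p_(k) <= q*k/m,
    # then reject the k hypotheses with the smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])   # indices of rejected hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals, q=0.05)
print(rejected)
```

Rejections among the group-difference tests then flag variables for which the MCAR implication fails.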
NASA Technical Reports Server (NTRS)
Lee, L. C.
1974-01-01
The propagation of waves in a random medium is studied in the 'quasi-optics' and the 'Markov random process' approximations. Under these assumptions, a Fokker-Planck equation satisfied by the characteristic functional of the random wave field is derived. A complete set of moment equations with different transverse coordinates and different wave numbers is then obtained from the Fokker-Planck equation of the characteristic functional. The application of those results to the pulse smearing of the pulsar signal and the frequency correlation function of the wave intensity in interstellar scintillation is briefly discussed.
Geostatistics for high resolution geomorphometry: from spatial continuity to surface texture
NASA Astrophysics Data System (ADS)
Trevisani, Sebastiano
2015-04-01
This presentation introduces the use of geostatistics in the context of high-resolution geomorphometry. Applying geostatistics to geomorphometry permits a shift in perspective, moving our attention away from inferring a spatial continuity model and toward describing spatial continuity itself. This change in perspective opens interesting directions for geostatistical methods in geomorphometry. Geostatistical methodologies have been extensively applied and adapted in remote sensing, leading to many applications aimed at analyzing the complex patterns that characterize imagery, notably the analysis of image texture. Indeed, the analysis of image texture becomes the analysis of surface texture when the analyzed image is a raster representation of a digital terrain model. The main idea is to use spatial-continuity indices as multiscale and directional descriptors of surface texture, including the important aspect of surface roughness. In this context we introduce some examples of applying geostatistics to image analysis and surface texture characterization. We also show that, in the presence of complex morphological settings, alternative indices of spatial continuity are needed that are less sensitive to hotspots and to the non-stationarity that often characterizes surface morphology. This introduction is mainly dedicated to univariate geostatistics; however, the same concepts can be exploited by means of multivariate as well as multipoint geostatistics.
Local Geostatistical Models and Big Data in Hydrological and Ecological Applications
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2015-04-01
The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution and over longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the curse of dimensionality associated with storing and inverting large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths, and the strength of the correlations by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation. This property
Geostatistical enhancement of european hydrological predictions
NASA Astrophysics Data System (ADS)
Pugliese, Alessio; Castellarin, Attilio; Parajka, Juraj; Arheimer, Berit; Bagli, Stefano; Mazzoli, Paolo; Montanari, Alberto; Blöschl, Günter
2016-04-01
Geostatistical Enhancement of European Hydrological Prediction (GEEHP) is a research experiment developed within the EU-funded SWITCH-ON project, which proposes to conduct comparative experiments in a virtual laboratory in order to share water-related information and tackle changes in the hydrosphere for operational needs (http://www.water-switch-on.eu). The main objective of GEEHP is the prediction of streamflow indices and signatures in ungauged basins at different spatial scales. In particular, among several possible hydrological signatures, our experiment focuses on the prediction of flow-duration curves (FDCs) along the stream network, which have attracted increasing scientific attention in recent decades due to the large number of practical and technical applications of the curves (e.g. hydropower potential estimation, riverine habitat suitability and ecological assessments, etc.). We apply a geostatistical procedure based on Top-kriging, which has recently been shown to be a particularly reliable and easy-to-use regionalization approach, employing two different types of streamflow data: pan-European E-HYPE simulations (http://hypeweb.smhi.se/europehype) and observed daily streamflow series collected in two pilot study regions, i.e. Tyrol (merging data from the Austrian and Italian stream gauging networks) and Sweden. The merger of the two study regions results in a rather large area (~450000 km2) and might be considered a proxy for a pan-European application of the approach. In a first phase, we implement a bidirectional validation, i.e. E-HYPE catchments are set as training sites to predict FDCs at the sites where observed data are available, and vice versa. Such a validation procedure reveals (1) the usability of the proposed approach for predicting FDCs over the entire river network of interest using alternatively observed data and E-HYPE simulations and (2) the accuracy of E-HYPE-based predictions of FDCs at ungauged sites. In a
Variation analysis of lake extension in space and time from MODIS images using random sets
NASA Astrophysics Data System (ADS)
Zhao, Xi; Stein, Alfred; Zhang, Xiang; Feng, Lian; Chen, Xiaoling
2014-08-01
Understanding inundation in wetlands may benefit from a joint variation analysis of changes in the size, shape, position and extent of water bodies. In this study, we modeled wetland inundation as a random spread process and used random sets to characterize the stochastic properties of water body extents. Periodicity, trend and random components were captured by monthly and yearly random sets derived from multitemporal images. The Covering-Distance matrix and related operators summarized and visualized the spatial pattern and quantified the similarity of different inundation stages. The study was carried out on the Poyang Lake wetland area in China, using MODIS images for a period of eleven years. Results revealed a substantial seasonal dynamic pattern in the inundation and a subtle interannual change in its extension from 2000 to 2010. Various spatial properties, including size, shape, position and extent, are visible: areas of high flooding risk are very elongated and located along the water channel; a few inundation areas tend to be more circular and spread extensively; and the majority of inundation areas vary in extent and size across months and years. Large differences in the spatial distribution of inundation extents were shown to exist between months from different seasons, and a distinctive spatial pattern occurred in months when dramatic flooding or recession took place. Yearly random sets gave detailed information on the spatial distribution of inundation frequency and showed a shrinking trend from 2000 to 2009; 2003 marks the turning point in the declining trend, while 2010 breaks the trend as an abnormal year. In addition, probability bounds were derived from the model for a region affected by flooding; this ability to support decision making is shown in a simple management scenario. We conclude that a random sets analysis is a valuable addition to a frequency analysis that quantifies inundation variation in space and
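The inundation-frequency surfaces described above can be illustrated, in a much-simplified form, by stacking binary water masks. The toy example below uses random masks in place of MODIS-derived ones and is not the authors' covering-distance machinery; the Jaccard ratio stands in as a crude extent-similarity measure:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stack of 12 monthly binary water masks (1 = inundated) on a 20x20 grid
masks = rng.random((12, 20, 20)) < 0.4

# Per-pixel inundation frequency across the year
freq = masks.mean(axis=0)

# Pixels inundated in at least half of the months: a crude "core water body" extent
core = freq >= 0.5

# Jaccard similarity between two months as a simple extent-similarity measure
a, b = masks[0], masks[1]
jaccard = (a & b).sum() / (a | b).sum()
print(freq.shape, core.sum(), round(float(jaccard), 2))
```

A trend analysis like the one in the abstract would then compare such frequency surfaces year by year rather than month by month.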
Importance of stationarity for geostatistical assessment of environmental contamination
Dagdelen, K.; Turner, A.K.
1996-12-31
This paper describes a geostatistical case study to assess TCE contamination from multiple point sources migrating through geologically complex conditions with several aquifers. The paper highlights the importance of the stationarity assumption by demonstrating how biased assessments of TCE contamination result when ordinary kriging is applied to data that violate stationarity assumptions. Dividing the data set into more homogeneous geologic and hydrologic zones improved the accuracy of the estimates. Indicator kriging offers an alternative method for providing a stochastic model that is more appropriate for the data. Further improvement in the estimates results when indicator kriging is applied to individual subregional data sets defined on geological considerations; this further enhances data homogeneity and makes the use of a stationary model more appropriate. By combining geological and geostatistical evaluations, more realistic maps may be produced that reflect the hydrogeological environment and provide a sound basis for future investigations and remediation.
Building Damage-Resilient Dominating Sets in Complex Networks against Random and Targeted Attacks
NASA Astrophysics Data System (ADS)
Molnár, F.; Derzsy, N.; Szymanski, B. K.; Korniss, G.
2015-02-01
We study the vulnerability of dominating sets against random and targeted node removals in complex networks. While small, cost-efficient dominating sets play a significant role in controllability and observability of these networks, a fixed and intact network structure is always implicitly assumed. We find that cost-efficiency of dominating sets optimized for small size alone comes at a price of being vulnerable to damage; domination in the remaining network can be severely disrupted, even if a small fraction of dominator nodes are lost. We develop two new methods for finding flexible dominating sets, allowing either adjustable overall resilience, or dominating set size, while maximizing the dominated fraction of the remaining network after the attack. We analyze the efficiency of each method on synthetic scale-free networks, as well as real complex networks.
Factor-based Geostatistics for Groundwater Modeling
NASA Astrophysics Data System (ADS)
Savelyeva, E.; Pavlova, M.
2012-04-01
Analysis of groundwater levels is an important stage preceding the modeling of filtration and migration processes in the hydro-geological environment. The boundary conditions are set by a pressure field, which strongly depends on groundwater levels and their spatial and temporal variability. Hydro-physical measurements are usually performed at a set of inhomogeneously distributed wells according to some temporal scheme, yielding an irregular spatio-temporal data set with a whole range of problems concerning the organization of a spatio-temporal metric; these problems also affect the modeling of a spatio-temporal correlation structure. There are different ways to overcome these problems and obtain a reasonable model of the spatio-temporal correlation structure, but all of these approaches remain limited in their forecasting capabilities. This work proposes an alternative: factor-based space-time geostatistics. This method opens up several possibilities for modeling the future: the use of additional information to represent different future scenarios, the characterization of uncertainty, and the probabilistic description of critical events. The basic idea is to replace a system of spatially correlated wells by a set of independent factors that compress the data, with the possibility of back-transformation at a prescribed level of accuracy. Factors can be obtained by principal component analysis, independent sources, or an artificial neural network with a "bottleneck"; the choice of method depends on the features of the initial data and the process under study. All factors are time series, regardless of how they were obtained, and a set of factors contains the main features of the groundwater level patterns. Groundwater level modeling and forecasting is performed through modeling of these time series. This work considers three different stochastic approaches for modeling and forecasting time series of hydrological origin: stochastic process with a deterministic
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
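The minimum-MSE property of the eigenvector representation can be checked numerically. In the sketch below (synthetic data, numpy; not the report's derivation), the mean squared error of a rank-k principal component reconstruction equals the sum of the discarded covariance eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 2
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))  # correlated data vectors
Xc = X - X.mean(axis=0)                                # center the data

# Eigenvectors of the (biased) sample covariance give the optimal orthonormal basis
C = Xc.T @ Xc / n
evals, evecs = np.linalg.eigh(C)  # eigenvalues in ascending order
U = evecs[:, ::-1][:, :k]         # top-k principal directions

# Rank-k representation and its mean squared reconstruction error
Xk = Xc @ U @ U.T
mse = np.mean(np.sum((Xc - Xk) ** 2, axis=1))

# The minimized error equals the sum of the d - k discarded eigenvalues
print(np.isclose(mse, evals[:-k].sum()))  # True
```

The same eigendecomposition underlies the Karhunen-Loève expansion, principal component analysis, and empirical orthogonal functions mentioned in the abstract; they differ mainly in how the covariance operator is defined.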
Multisource passive acoustic tracking: an application of random finite set data fusion
NASA Astrophysics Data System (ADS)
Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung
2010-04-01
Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful estimates. Random finite sets provide a probabilistic model suitable for analysis and for the synthesis of optimal estimation algorithms. The proposed algorithm was verified using a simulation and a controlled test experiment.
Stability of Dominating Sets in Complex Networks against Random and Targeted Attacks
NASA Astrophysics Data System (ADS)
Molnar, F.; Derzsy, N.; Szymanski, B. K.; Korniss, G.
2014-03-01
Minimum dominating sets (MDS) are involved in efficiently controlling and monitoring many social and technological networks. However, MDS influence over the entire network may be significantly reduced when some MDS nodes are disabled due to random breakdowns or targeted attacks against nodes in the network. We investigate the stability of domination in scale-free networks in such scenarios. We define stability as the fraction of nodes in the network that are still dominated after some nodes have been removed, either randomly, or by targeting the highest-degree nodes. We find that although the MDS is the most cost-efficient solution (requiring the least number of nodes) for reaching every node in an undamaged network, it is also very sensitive to damage. Further, we investigate alternative methods for finding dominating sets that are less efficient (more costly) than MDS but provide better stability. Finally we construct an algorithm based on greedy node selection that allows us to precisely control the balance between domination stability and cost, to achieve any desired stability at minimum cost, or the best possible stability at any given cost. Analysis of our method shows moderate improvement of domination cost efficiency against random breakdowns, but substantial improvements against targeted attacks. Supported by DARPA, DTRA, ARL NS-CTA, ARO, and ONR.
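The greedy node-selection idea mentioned above can be sketched with the textbook greedy dominating-set heuristic: repeatedly add the node whose closed neighborhood covers the most not-yet-dominated nodes. This is a generic illustration on a toy graph, not the authors' stability-controlled algorithm:

```python
def greedy_dominating_set(adj):
    """Greedy heuristic: pick the node dominating the most undominated nodes."""
    undominated = set(adj)
    ds = set()
    while undominated:
        # closed neighborhood = the node itself plus its neighbors
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        ds.add(best)
        undominated -= adj[best] | {best}
    return ds

# Small test graph: a star (center 0) with a pendant path 3-4
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
ds = greedy_dominating_set(adj)

# Every node is either in the dominating set or adjacent to a member of it
assert all(v in ds or adj[v] & ds for v in adj)
print(sorted(ds))  # → [0, 3]
```

Trading size against stability, as in the abstract, amounts to continuing this loop past full domination so that nodes end up covered by more than one dominator.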
Spin-glass phase transitions and minimum energy of the random feedback vertex set problem
NASA Astrophysics Data System (ADS)
Qin, Shao-Meng; Zeng, Ying; Zhou, Hai-Jun
2016-08-01
A feedback vertex set (FVS) of an undirected graph contains vertices from every cycle of this graph. Constructing a FVS of sufficiently small cardinality is very difficult in the worst cases, but for random graphs this problem can be efficiently solved by converting it into an appropriate spin-glass model [H.-J. Zhou, Eur. Phys. J. B 86, 455 (2013), 10.1140/epjb/e2013-40690-1]. In the present work we study the spin-glass phase transitions and the minimum energy density of the random FVS problem by the first-step replica-symmetry-breaking (1RSB) mean-field theory. For both regular random graphs and Erdös-Rényi graphs, we determine the inverse temperature βl at which the replica-symmetric mean-field theory loses its local stability, the inverse temperature βd of the dynamical (clustering) phase transition, and the inverse temperature βs of the static (condensation) phase transition. These critical inverse temperatures all change with the mean vertex degree in a nonmonotonic way, and βd is distinct from βs for regular random graphs of vertex degrees K >60 , while βd are identical to βs for Erdös-Rényi graphs at least up to mean vertex degree c =512 . We then derive the zero-temperature limit of the 1RSB theory and use it to compute the minimum FVS cardinality.
Random set tracking and entropy based control applied to distributed sensor networks
NASA Astrophysics Data System (ADS)
Stein, David; Witkoskie, James; Theophanis, Stephen; Kuklinski, Walter
2007-04-01
This paper describes an integrated approach to sensor fusion and resource management applicable to sensor networks. The sensor fusion and tracking algorithm is based on the theory of random sets. Tracking is herein considered to be the estimation of parameters in a state space such that, for a given target, certain components, e.g., position and velocity, are time varying while other components, e.g., identifying features, are stationary. The fusion algorithm provides at each time step the posterior probability density function on the state space, known as the global density, and the control algorithm identifies the set of sensors that should be used at the next time step in order to minimize, subject to constraints, an approximation of the expected entropy of the global density. The random set approach to target tracking models association ambiguity by statistically weighting all possible hypotheses and associations. Computational complexity is managed by approximating the posterior global density with a Gaussian mixture density and using an approach based on the Kullback-Leibler metric to limit the number of components in the mixture representation. A closed-form approximation of the expected entropy of the global density, expressed as a Gaussian mixture, at the next time step for a given set of proposed measurements is developed. Optimal sensor selection involves a search over subsets of sensors, whose computational complexity is managed by employing the Möbius transformation. Field and simulated data from a sensor network comprised of multiple range radars and acoustic arrays that measure angle of arrival are used to demonstrate the approach to sensor fusion and resource management.
Blatti, Charles; Sinha, Saurabh
2016-01-01
Motivation: Analysis of co-expressed gene sets typically involves testing for enrichment of different annotations or ‘properties’ such as biological processes, pathways, transcription factor binding sites, etc., one property at a time. This common approach ignores any known relationships among the properties or the genes themselves. It is believed that known biological relationships among genes and their many properties may be exploited to more accurately reveal commonalities of a gene set. Previous work has sought to achieve this by building biological networks that combine multiple types of gene–gene or gene–property relationships, and performing network analysis to identify other genes and properties most relevant to a given gene set. Most existing network-based approaches for recognizing genes or annotations relevant to a given gene set collapse information about different properties to simplify (homogenize) the networks. Results: We present a network-based method for ranking genes or properties related to a given gene set. Such related genes or properties are identified from among the nodes of a large, heterogeneous network of biological information. Our method involves a random walk with restarts, performed on an initial network with multiple node and edge types that preserve more of the original, specific property information than current methods that operate on homogeneous networks. In this first stage of our algorithm, we find the properties that are the most relevant to the given gene set and extract a subnetwork of the original network, comprising only these relevant properties. We then re-rank genes by their similarity to the given gene set, based on a second random walk with restarts, performed on the above subnetwork. We demonstrate the effectiveness of this algorithm for ranking genes related to Drosophila embryonic development and aggressive responses in the brains of social animals. Availability and Implementation: DRaWR was implemented as
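The core computation, a random walk with restarts, can be sketched as a simple power iteration on a small hypothetical graph (the actual DRaWR method operates on large heterogeneous networks with typed nodes and edges):

```python
import numpy as np

def rwr(A, restart, alpha=0.5, tol=1e-10):
    """Random walk with restarts: iterate p = (1 - alpha) W p + alpha r,
    where W is the column-normalized adjacency matrix."""
    W = A / A.sum(axis=0, keepdims=True)
    p = restart.copy()
    while True:
        p_new = (1 - alpha) * W @ p + alpha * restart
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# Toy 4-node graph; all restart mass on node 0 (the "query gene set")
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
r = np.array([1.0, 0.0, 0.0, 0.0])
p = rwr(A, r, alpha=0.5)
print(np.round(p, 3))  # a probability vector concentrated near the query node
```

The stationary vector p ranks every node by proximity to the restart set, which is exactly how genes and properties are scored relative to the query gene set in the abstract.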
Reducing spatial uncertainty in climatic maps through geostatistical analysis
NASA Astrophysics Data System (ADS)
Pesquer, Lluís; Ninyerola, Miquel; Pons, Xavier
2014-05-01
), applying different interpolation methods/parameters are shown: RMS (mm) error values obtained from the independent test set (20% of the samples) follow, in the order IDW (exponent = 1.5, 2, 2.5, 3) / SPT (tension = 100, 125, 150, 175, 200) / OK:
LOOCV: 92.5; 80.2; 74.2; 72.3 / 181.6; 90.6; 75.7; 71.1; 69.4; 68.8
RS: 101.2; 89.6; 83.9; 81.9 / 115.1; 92.4; 84.0; 81.4; 80.9; 81.1 / 81.1
EU: 57.4; 51.3; 53.1; 55.5 / 59.1; 57.1; 55.9; 55.0; 54.3 / 51.8
A3D: 48.3; 49.8; 52.5; 62.2 / 57.1; 54.4; 52.5; 51.2; 50.2 / 49.7
To study these results, a geostatistical analysis of uncertainty has been done. Main results: variogram analysis of the error (using the test set) shows that the total sill is reduced (50% EU, 60% A3D) when using the two new approaches, while the spatialized standard deviation model calculated from the OK shows significantly lower values when compared to the RS. In conclusion, A3D and EU highly improve LOOCV and RS, whereas A3D slightly improves EU. Also, LOOCV shows only slightly better results than RS, suggesting that a non-random split increases the power of both fitting-test steps.
* Ninyerola, Pons, Roure. A methodological approach of climatological modelling of air temperature and precipitation through GIS techniques. IJC, 2000; 20:1823-1841.
Random sets technique for information fusion applied to estimation of brain functional images
NASA Astrophysics Data System (ADS)
Smith, Therese M.; Kelly, Patrick A.
1999-05-01
A new mathematical technique for information fusion based on random sets, developed and described by Goodman, Mahler and Nguyen (The Mathematics of Data Fusion, Kluwer, 1997), can be useful for the estimation of functional brain images. Many image estimation algorithms employ prior models that incorporate general knowledge about the sizes, shapes and locations of brain regions. Recently, algorithms have been proposed that use specific prior knowledge obtained from other imaging modalities (for example, Bowsher et al., IEEE Trans. Medical Imaging, 1996). However, there is more relevant information than is presently used. A technique that permits the use of additional prior information about activity levels would improve the quality of prior models and, hence, of the resulting image estimates. The use of random sets provides this capability because it allows seemingly non-statistical (or ambiguous) information, such as that contained in inference rules, to be represented and combined with observations in a single statistical model corresponding to a global joint density. This paper illustrates the approach by constructing an example global joint density function for brain functional activity from measurements of functional activity, anatomical information, clinical observations and inference rules. The estimation procedure is tested on a data phantom with Poisson noise.
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F. P. J.; Rubin, Y.
2010-03-01
Geostatistical optimal design optimizes subsurface exploration for maximum information toward task-specific prediction goals. Until recently, most geostatistical design studies have assumed that the geostatistical description (i.e., the mean, trends, covariance models and their parameters) is given a priori. This contradicts, as emphasized by Rubin and Dagan (1987a), the fact that only few or even no data at all offer support for such assumptions prior to the bulk of exploration effort. We believe that geostatistical design should (1) avoid unjustified a priori assumptions on the geostatistical description, (2) instead reduce geostatistical model uncertainty as secondary design objective, (3) rate this secondary objective optimal for the overall prediction goal, and (4) be robust even under inaccurate geostatistical assumptions. Bayesian Geostatistical Design follows these guidelines by considering uncertain covariance model parameters. We transfer this concept from kriging-like applications to geostatistical inverse problems. We also deem it inappropriate to consider parametric uncertainty only within a single covariance model. The Matérn family of covariance functions has an additional shape parameter. Controlling model shape by a parameter converts covariance model selection to parameter identification and resembles Bayesian model averaging over a continuous spectrum of covariance models. This is appealing since it generalizes Bayesian model averaging from a finite number to an infinite number of models. We illustrate how our approach fulfills the above four guidelines in a series of synthetic test cases. The underlying scenarios are to minimize the prediction variance of (1) contaminant concentration or (2) arrival time at an ecologically sensitive location by optimal placement of hydraulic head and log conductivity measurements. Results highlight how both the impact of geostatistical model uncertainty and the sampling network design vary according to the
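The Matérn family with its extra shape parameter, central to the covariance-model averaging described above, can be evaluated directly. The sketch below (numpy/scipy) checks the well-known special case ν = 1/2, which reduces to the exponential covariance model:

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern(h, sigma2=1.0, ell=1.0, nu=0.5):
    """Matern covariance; the shape parameter nu controls field smoothness
    (nu = 0.5 gives the exponential model, nu -> inf the Gaussian model)."""
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)  # C(0) = sigma^2
    nz = h > 0
    x = np.sqrt(2 * nu) * h[nz] / ell
    c[nz] = sigma2 * (2 ** (1 - nu) / gamma(nu)) * x ** nu * kv(nu, x)
    return c

h = np.linspace(0.0, 3.0, 4)
# nu = 0.5 reduces to the exponential covariance exp(-h / ell)
print(np.allclose(matern(h, nu=0.5), np.exp(-h)))  # True
```

Treating ν as one more uncertain parameter, as the abstract proposes, then turns covariance-model selection into ordinary parameter inference over this continuum.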
Generalized index for spatial data sets as a measure of complete spatial randomness
NASA Astrophysics Data System (ADS)
Hackett-Jones, Emily J.; Davies, Kale J.; Binder, Benjamin J.; Landman, Kerry A.
2012-06-01
Spatial data sets generated from a wide range of physical systems can be analyzed by counting the number of objects in a set of bins. Previous work has been limited to equal-sized bins, which are inappropriate for some domains (e.g., circular). We consider a nonequal-size bin configuration whereby overlapping or nonoverlapping bins cover the domain. A generalized index, defined in terms of a variance between bin counts, is developed to indicate whether or not a spatial data set, generated from exclusion or nonexclusion processes, is at the complete spatial randomness (CSR) state. Limiting values of the index are determined. Using examples, we investigate trends in the generalized index as a function of density and compare the results with those using equal-sized bins. The smallest bin size must be much larger than the mean size of the objects. We can determine whether a spatial data set is at the CSR state by comparing the values of the generalized index for different bin configurations: the values will be approximately the same if the data are at the CSR state and will differ otherwise. In general, the generalized index is lower than its limiting value, since objects do not have access to the entire region due to blocking by other objects. These methods are illustrated with two applications: (i) spatial data sets generated from a cellular automata model of cell aggregation in the enteric nervous system and (ii) a known plant data distribution.
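A simple special case of variance-based CSR testing with equal bins is the classical index of dispersion (variance/mean of bin counts), which is approximately 1 for a Poisson/CSR point pattern; the generalized index discussed above extends this idea to unequal and overlapping bins. A minimal sketch of the equal-bin case:

```python
import numpy as np

rng = np.random.default_rng(42)
# CSR: points placed uniformly at random in the unit square
pts = rng.random((20000, 2))

# Count points in a 10x10 grid of equal bins
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=10)

# Index of dispersion: ~1 under CSR (Poisson-like counts),
# < 1 for regular patterns, > 1 for clustered patterns
dispersion = counts.var() / counts.mean()
print(round(float(dispersion), 2))
```

Comparing such a statistic across several bin configurations, as the abstract describes, is what distinguishes genuine CSR from patterns that merely look random at one bin size.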
Model-Based Geostatistical Mapping of the Prevalence of Onchocerca volvulus in West Africa
O’Hanlon, Simon J.; Slater, Hannah C.; Cheke, Robert A.; Boatin, Boakye A.; Coffeng, Luc E.; Pion, Sébastien D. S.; Boussinesq, Michel; Zouré, Honorat G. M.; Stolk, Wilma A.; Basáñez, María-Gloria
2016-01-01
Background The initial endemicity (pre-control prevalence) of onchocerciasis has been shown to be an important determinant of the feasibility of elimination by mass ivermectin distribution. We present the first geostatistical map of microfilarial prevalence in the former Onchocerciasis Control Programme in West Africa (OCP) before commencement of antivectorial and antiparasitic interventions. Methods and Findings Pre-control microfilarial prevalence data from 737 villages across the 11 constituent countries in the OCP epidemiological database were used as ground-truth data. These 737 data points, plus a set of statistically selected environmental covariates, were used in a Bayesian model-based geostatistical (B-MBG) approach to generate a continuous surface (at pixel resolution of 5 km × 5 km) of microfilarial prevalence in West Africa prior to the commencement of the OCP. Uncertainty in model predictions was measured using a suite of validation statistics, performed on bootstrap samples of held-out validation data. The mean Pearson’s correlation between observed and estimated prevalence at validation locations was 0.693; the mean prediction error (average difference between observed and estimated values) was 0.77%, and the mean absolute prediction error (average magnitude of difference between observed and estimated values) was 12.2%. Within OCP boundaries, 17.8 million people were deemed to have been at risk, 7.55 million to have been infected, and mean microfilarial prevalence to have been 45% (range: 2–90%) in 1975. Conclusions and Significance This is the first map of initial onchocerciasis prevalence in West Africa using B-MBG. Important environmental predictors of infection prevalence were identified and used in a model outperforming those without spatial random effects or environmental covariates. Results may be compared with recent epidemiological mapping efforts to find areas of persisting transmission. These methods may be extended to areas where
Book Review Geostatistical Analysis of Compositional Data
Carle, S F
2007-03-26
Compositional data are represented as vector variables with individual vector components ranging between zero and a positive maximum value representing a constant sum constraint, usually unity (or 100 percent). The earth sciences are flooded with spatial distributions of compositional data, such as concentrations of major ion constituents in natural waters (e.g. mole, mass, or volume fractions), mineral percentages, ore grades, or proportions of mutually exclusive categories (e.g. a water-oil-rock system). While geostatistical techniques have become popular in earth science applications since the 1970s, very little attention has been paid to the unique mathematical properties of geostatistical formulations involving compositional variables. The book 'Geostatistical Analysis of Compositional Data' by Vera Pawlowsky-Glahn and Ricardo Olea (Oxford University Press, 2004), unlike any previous book on geostatistics, directly confronts the mathematical difficulties inherent to applying geostatistics to compositional variables. The book rightly justifies itself with prodigious referencing to previous work addressing nonsensical ranges of estimated values and error, spurious correlation, and singular cross-covariance matrices.
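A small illustration of the log-ratio ideas the book builds on. This is a sketch, not material from the book itself: the centred log-ratio (clr) transform shown here is one of several transforms treated in the compositional-data literature for working around the constant-sum constraint.

```python
import math

def clr(composition):
    """Centred log-ratio (clr) transform of a composition whose parts are
    positive and sum to a constant (assumed 1 here). Analysing clr coordinates
    avoids the spurious correlation induced by the constant-sum constraint."""
    n = len(composition)
    log_g = sum(math.log(p) for p in composition) / n  # log of geometric mean
    return [math.log(p) - log_g for p in composition]

# clr coordinates always sum to zero, reflecting the lost degree of freedom
coords = clr([0.2, 0.3, 0.5])
```

The zero-sum property of the output is exactly the singularity of cross-covariance matrices that the review mentions: one coordinate is always redundant.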
ERIC Educational Resources Information Center
Snyder, Patricia
2011-01-01
In this commentary, developments related to conducting randomized controlled trials in authentic preschool settings that include young children with disabilities are discussed in relation to the Strain and Bovey study.
ERIC Educational Resources Information Center
Fives, Allyn; Russell, Daniel W.; Canavan, John; Lyons, Rena; Eaton, Patricia; Devaney, Carmel; Kearns, Norean; O'Brien, Aoife
2015-01-01
In a randomized controlled trial (RCT), treatments are assigned randomly and treatments are withheld from participants. Is it ethically permissible to conduct an RCT in a social setting? This paper addresses two conditions for justifying RCTs: that there should be a state of equipoise and that the trial should be scientifically promising.…
Random finite set multi-target trackers: stochastic geometry for space situational awareness
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong
2015-05-01
This paper describes recent developments in the random finite set (RFS) paradigm for multi-target tracking. Over the last decade the Probability Hypothesis Density (PHD) filter has become synonymous with the RFS approach. As a result, the PHD filter is often wrongly used as a performance benchmark for the RFS approach. Since there is a suite of RFS-based multi-target tracking algorithms, benchmarking the tracking performance of the RFS approach with the PHD filter, the cheapest of these, is misleading. Such benchmarking should be performed with more sophisticated RFS algorithms. In this paper we outline high-performance RFS-based multi-target trackers, such as the Generalized Labeled Multi-Bernoulli filter and a number of efficient approximations, and discuss extensions and applications of these filters. Applications to space situational awareness are discussed.
A Particle Multi-Target Tracker for Superpositional Measurements Using Labeled Random Finite Sets
NASA Astrophysics Data System (ADS)
Papi, Francesco; Kim, Du Yong
2015-08-01
In this paper we present a general solution for multi-target tracking with superpositional measurements. Measurements that are functions of the sum of the contributions of the targets present in the surveillance area are called superpositional measurements. We base our modelling on Labeled Random Finite Set (RFS) in order to jointly estimate the number of targets and their trajectories. This modelling leads to a labeled version of Mahler's multi-target Bayes filter. However, a straightforward implementation of this tracker using Sequential Monte Carlo (SMC) methods is not feasible due to the difficulties of sampling in high dimensional spaces. We propose an efficient multi-target sampling strategy based on Superpositional Approximate CPHD (SA-CPHD) filter and the recently introduced Labeled Multi-Bernoulli (LMB) and Vo-Vo densities. The applicability of the proposed approach is verified through simulation in a challenging radar application with closely spaced targets and low signal-to-noise ratio.
A randomized algorithm for two-cluster partition of a set of vectors
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Khandeev, V. I.
2015-02-01
A randomized algorithm is substantiated for the strongly NP-hard problem of partitioning a finite set of vectors of Euclidean space into two clusters of given sizes according to the minimum-sum-of-squared-distances criterion. It is assumed that the centroid of one of the clusters is to be optimized and is determined as the mean value over all vectors in this cluster. The centroid of the other cluster is fixed at the origin. For an established parameter value, the algorithm finds an approximate solution of the problem in time that is linear in the space dimension and the input size of the problem for given values of the relative error and failure probability. The conditions are established under which the algorithm is asymptotically exact and runs in time that is linear in the space dimension and quadratic in the input size of the problem.
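The criterion can be made concrete with a toy sketch. This is illustrative only: it evaluates the objective (one cluster scored against its own mean, the other against the origin) and searches by sampling random partitions, which is not the paper's algorithm and carries none of its complexity or accuracy guarantees.

```python
import random
import numpy as np

def criterion(X, idx):
    """Sum of squared distances: cluster 1 to its (optimized) centroid, i.e.
    its mean; cluster 2 to the fixed centroid at the origin."""
    X = np.asarray(X, dtype=float)
    mask = np.zeros(len(X), dtype=bool)
    mask[list(idx)] = True
    c1, c2 = X[mask], X[~mask]
    return ((c1 - c1.mean(axis=0)) ** 2).sum() + (c2 ** 2).sum()

def randomized_partition(X, m, trials=2000, seed=0):
    """Sample random size-m index sets for cluster 1 and keep the best."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(trials):
        idx = rng.sample(range(len(X)), m)
        val = criterion(X, idx)
        if val < best_val:
            best, best_val = sorted(idx), val
    return best, best_val

# three vectors near (5, 5) and three near the origin; cluster sizes 3 and 3
X = [[5.0, 5.0], [5.1, 4.9], [4.9, 5.1], [0.1, 0.0], [0.0, -0.1], [-0.1, 0.1]]
best, best_val = randomized_partition(X, m=3)
```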
NASA Astrophysics Data System (ADS)
Sparaciari, Carlo; Paris, Matteo G. A.
2013-01-01
We address measurement schemes where certain observables X_k are chosen at random within a set of nondegenerate isospectral observables and then measured on repeated preparations of a physical system. Each observable has a probability z_k to be measured, with Σ_k z_k = 1, and the statistics of this generalized measurement is described by a positive operator-valued measure (POVM). This kind of scheme is referred to as a quantum roulette, since each observable X_k is chosen at random, e.g., according to the fluctuating value of an external parameter. Here we focus on quantum roulettes for qubits involving the measurements of Pauli matrices, and we explicitly evaluate their canonical Naimark extensions, i.e., their implementation as indirect measurements involving an interaction scheme with a probe system. We thus provide a concrete model to realize the roulette without destroying the signal state, which can be measured again after the measurement or can be transmitted. Finally, we apply our results to the description of Stern-Gerlach-like experiments on a two-level system.
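A minimal numerical sketch of the POVM description (not the Naimark extension itself): for a qubit roulette that measures Pauli X or Z with equal probability, the POVM elements are convex combinations of the projective-measurement elements.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
I2 = np.eye(2, dtype=complex)

def roulette_povm(paulis, z):
    """POVM elements for outcomes +1/-1 when Pauli P_k is measured with
    probability z_k: Pi_plus/minus = sum_k z_k (I +/- P_k) / 2."""
    Pi_plus = sum(zk * (I2 + P) / 2 for zk, P in zip(z, paulis))
    Pi_minus = sum(zk * (I2 - P) / 2 for zk, P in zip(z, paulis))
    return Pi_plus, Pi_minus

# outcome probabilities for the state |0><0|: measuring Z gives +1 with
# certainty, measuring X gives +1 with probability 1/2, so p(+1) = 3/4
rho = np.array([[1, 0], [0, 0]], dtype=complex)
Pp, Pm = roulette_povm([sx, sz], [0.5, 0.5])
p_plus = np.trace(rho @ Pp).real
```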
Improving practice in community-based settings: a randomized trial of supervision – study protocol
2013-01-01
Background Evidence-based treatments for child mental health problems are not consistently available in public mental health settings. Expanding availability requires workforce training. However, research has demonstrated that training alone is not sufficient for changing provider behavior, suggesting that ongoing intervention-specific supervision or consultation is required. Supervision is notably under-investigated, particularly as provided in public mental health. The degree to which supervision in this setting includes ‘gold standard’ supervision elements from efficacy trials (e.g., session review, model fidelity, outcome monitoring, skill-building) is unknown. The current federally-funded investigation leverages the Washington State Trauma-focused Cognitive Behavioral Therapy Initiative to describe usual supervision practices and test the impact of systematic implementation of gold standard supervision strategies on treatment fidelity and clinical outcomes. Methods/Design The study has two phases. We will conduct an initial descriptive study (Phase I) of supervision practices within public mental health in Washington State followed by a randomized controlled trial of gold standard supervision strategies (Phase II), with randomization at the clinician level (i.e., supervisors provide both conditions). Study participants will be 35 supervisors and 130 clinicians in community mental health centers. We will enroll one child per clinician in Phase I (N = 130) and three children per clinician in Phase II (N = 390). We use a multi-level mixed within- and between-subjects longitudinal design. Audio recordings of supervision and therapy sessions will be collected and coded throughout both phases. Child outcome data will be collected at the beginning of treatment and at three and six months into treatment. Discussion This study will provide insight into how supervisors can optimally support clinicians delivering evidence-based treatments. Phase I will
GEOPACK, a comprehensive user-friendly geostatistical software system, was developed to help in the analysis of spatially correlated data. The software system was developed to be used by scientists, engineers, regulators, etc., with little experience in geostatistical techniques...
A Randomized Clinical Trial of Smoking Cessation Treatments Provided in HIV Clinical Care Settings
2013-01-01
Introduction: Identifying successful smoking treatment interventions and methods of delivery is critical given the smoking rates among HIV-positive populations and the medical implications of smoking in this population. This study compared the efficacy of 3 smoking cessation interventions provided in HIV clinical treatment settings. Methods: Following a baseline assessment, 209 HIV-positive smokers were randomly assigned to 1 of 3 conditions in a parallel group design. Treatment conditions were individual counseling plus nicotine replacement treatment (NRT), a computer-based Internet smoking treatment plus NRT, and self-help plus NRT. Smoking status was determined at follow-up assessments completed at 12, 24, 36, and 52 weeks following treatment initiation. Results: Cessation rates ranged from 15% to 29%; however, no statistically significant differences in abstinence were found among the treatment conditions over time. Those employed, those who reported a greater desire to quit, or those with lower mood disturbance scores were more likely to achieve abstinence (p < .01). The number of cigarettes participants reported smoking in the 24 hours prior to each assessment significantly declined over time (p < .001). Conclusions: Although we found no differences in abstinence rates across groups, the results indicate that integration of smoking cessation interventions is feasible in HIV clinical treatment settings, and cessation results are promising. The overall abstinence rates we report are comparable to those found in similar treatment studies across multiple populations. Further research is warranted. PMID:23430708
H-Area/ITP Geostatistical Assessment of In-Situ and Engineering Properties
Wyatt, D.; Bartlett, S.F.; Rouhani, S.; Lin, Y.
1995-09-05
This report describes the results of a geostatistical investigation conducted in response to tasks and requirements set out in the SRS Request for Proposal Number 93044EQ, dated January 4, 1994. The project tasks were defined in the Statement of Scope (SOS) Number 93044, Revision 0, dated December 1, 1993, developed by the Savannah River Site Geotechnical Services Department.
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram γ(h), is appropriate for each framework. Although C(h) and γ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and γ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Lévy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, it is clear that for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
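For a regularly sampled 1-D series, the two classical sample measures and the linear (stationary) identity γ(h) = C(0) − C(h) can be sketched as follows. This is illustrative only; real geostatistical practice bins irregular lag distances, and the identity holds only up to edge effects for the sample versions.

```python
import numpy as np

def sample_covariance(z, h):
    """Sample covariance C(h) using a single (stationary) mean estimate."""
    z = np.asarray(z, dtype=float)
    m = z.mean()
    a, b = (z, z) if h == 0 else (z[:-h], z[h:])
    return np.mean((a - m) * (b - m))

def sample_variogram(z, h):
    """Sample (semi)variogram gamma(h): half the mean squared lag-h increment.
    No mean estimate is needed, since increments cancel a constant mean."""
    z = np.asarray(z, dtype=float)
    return 0.0 if h == 0 else 0.5 * np.mean((z[h:] - z[:-h]) ** 2)

# synthetic stationary AR(1) series: the classical identity
# gamma(h) = C(0) - C(h) should hold up to edge effects
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
z = np.empty_like(e)
z[0] = e[0]
for t in range(1, len(e)):
    z[t] = 0.7 * z[t - 1] + e[t]
gap = sample_variogram(z, 5) - (sample_covariance(z, 0) - sample_covariance(z, 5))
```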
Integration of geologic interpretation into geostatistical simulation
Carle, S.F.
1997-06-01
Embedded Markov chain analysis has been used to quantify geologic interpretation of juxtapositional tendencies of geologic facies. Such interpretations can also be translated into continuous-lag Markov chain models of spatial variability for use in geostatistical simulation of facies architecture.
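A minimal sketch of the idea: estimate a first-order facies transition matrix from a borehole log, then draw a synthetic facies sequence from it. The facies codes and log are invented for illustration; the continuous-lag Markov chain models of the abstract are considerably more elaborate than this discrete-step version.

```python
import random

def transition_matrix(log, states):
    """Estimate upward facies-to-facies transition probabilities by counting
    adjacent pairs in a borehole log (embedded Markov chain style)."""
    counts = {s: {t: 0 for t in states} for s in states}
    for a, b in zip(log, log[1:]):
        counts[a][b] += 1
    P = {}
    for s in states:
        row_total = sum(counts[s].values())  # assumed > 0 for every state here
        P[s] = {t: counts[s][t] / row_total for t in states}
    return P

def simulate_facies(P, start, n, seed=0):
    """Draw a facies sequence of length n from the estimated chain."""
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < n:
        r, acc = rng.random(), 0.0
        for t, p in P[seq[-1]].items():
            acc += p
            if r < acc:
                seq.append(t)
                break
        else:                 # guard against floating-point round-off
            seq.append(t)
    return seq

log = list("AABBBCCAAB")      # hypothetical facies log, bottom to top
P = transition_matrix(log, ["A", "B", "C"])
```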
Ice Surface Classification using Geo-Statistical Texture-Parameters
NASA Astrophysics Data System (ADS)
Wallin, B. F.; Herzfeld, U. C.
2009-12-01
The morphological features of ice surfaces contain valuable information on the state of morphogenetic, environmental, and dynamic processes that the ice has experienced. To effectively use this information, however, the scale and rapid evolution of earth's cryospheric systems necessitate intermediate processing and abstraction as vital tools. We demonstrate methods for automated detection of ice surface classes using robust geostatistical texture parameters and several clustering/classification techniques. By measuring texture parameters as aggregates of regional spatial point distributions, a great variety of surface types can be characterized and detected, at the expense of computational complexity. Unsupervised clustering algorithms such as OPTICS identify numerically distinct sets of values which then seed segmentation algorithms. The results of the methods applied to several datasets are presented, including RADARSAT, ATM, and ICESat observations over both land and sea ice. The surface roughness characteristics are analyzed at the various scales covered by the datasets.
Nguyen, Hung T; Kreinovich, Vladik
2014-01-01
To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding "true" and "false" values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store the expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such random-set representation. We then show how the random-set representation can be extended to the general ("fuzzy") case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not. PMID:25386045
The Effect of Distributed Practice in Undergraduate Statistics Homework Sets: A Randomized Trial
ERIC Educational Resources Information Center
Crissinger, Bryan R.
2015-01-01
Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…
Mixed-point geostatistical simulation: A combination of two- and multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Cordua, Knud Skou; Hansen, Thomas Mejer; Gulbrandsen, Mats Lundh; Barnes, Christophe; Mosegaard, Klaus
2016-09-01
Multiple-point-based geostatistical methods are used to model complex geological structures. However, a training image containing the characteristic patterns of the Earth model has to be provided. If no training image is available, two-point (i.e., covariance-based) geostatistical methods are typically applied instead because these methods provide fewer constraints on the Earth model. This study is motivated by the case where 1-D vertical training images are available through borehole logs, whereas little or no information about horizontal dependencies exists. This problem is solved by developing theory that makes it possible to combine information from multiple- and two-point geostatistics for different directions, leading to a mixed-point geostatistical model. An example of combining information from the multiple-point-based single normal equation simulation algorithm and two-point-based sequential indicator simulation algorithm is provided. The mixed-point geostatistical model is used for conditional sequential simulation based on vertical training images from five borehole logs and a range parameter describing the horizontal dependencies.
ERIC Educational Resources Information Center
Vaden-Kiernan, Michael; Jones, Debra Hughes; Rudo, Zena
2008-01-01
SEDL is providing analytic and technical support to three large-scale randomized controlled trials assessing the efficacy of promising literacy curriculum in afterschool settings on student academic achievement. In the field of educational research, competition among research organizations and researchers can often impede collaborative efforts in…
Gomes, Anderson S L; Raposo, Ernesto P; Moura, André L; Fewo, Serge I; Pincheira, Pablo I R; Jerez, Vladimir; Maia, Lauro J Q; de Araújo, Cid B
2016-01-01
Random lasers have been recently exploited as a photonic platform for studies of complex systems. This cross-disciplinary approach opened up new important avenues for the understanding of random-laser behavior, including Lévy-type distributions of strong intensity fluctuations and phase transitions to a photonic spin-glass phase. In this work, we employ the Nd:YBO random laser system to unveil, from a single set of measurements, the physical origin of the complex correspondence between the Lévy fluctuation regime and the replica-symmetry-breaking transition to the spin-glass phase. A novel unexpected finding is also reported: the trend to suppress the spin-glass behavior for high excitation pulse energies. The present description from first principles of this correspondence unfolds new possibilities to characterize other random lasers, such as random fiber lasers, nanolasers and small lasers, which include plasmonic-based, photonic-crystal and bio-derived nanodevices. The statistical nature of the emission provided by random lasers can also impact on their prominent use as sources for speckle-free laser imaging, which nowadays represents one of the most promising applications of random lasers, with expected progress even in cancer research. PMID:27292095
Multivariate geostatistical simulation by minimising spatial cross-correlation
NASA Astrophysics Data System (ADS)
Sohrabian, Babak; Tercan, Abdullah Erhan
2014-03-01
Joint simulation of attributes in multivariate geostatistics can be achieved by transforming spatially correlated variables into independent factors. In this study, a new approach for this transformation, the Minimum Spatial Cross-correlation (MSC) method, is suggested. The method is based on minimising the sum of squares of cross-variograms at different distances. In this approach, the problem in the N-dimensional space is reduced to N(N − 1)/2 problems in two-dimensional space, and the reduced problems are solved iteratively using a gradient descent algorithm. The method is applied to the joint simulation of a set of multivariate data from a marble quarry and the results are compared with the Minimum/Maximum Autocorrelation Factors (MAF) method.
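For two variables the transformation reduces to a single rotation angle, which makes the objective easy to sketch. The code below follows the abstract's idea (minimise the sum of squared sample cross-variograms over several lags), but uses a coarse grid search as a robust stand-in for the authors' gradient descent; data and parameters are invented for illustration.

```python
import numpy as np

def cross_variogram(a, b, h):
    """Sample cross-variogram of two collocated 1-D series at lag h."""
    return 0.5 * np.mean((a[h:] - a[:-h]) * (b[h:] - b[:-h]))

def msc_objective(theta, x1, x2, lags):
    """Sum of squared cross-variograms of the pair rotated by theta."""
    c, s = np.cos(theta), np.sin(theta)
    a, b = c * x1 + s * x2, -s * x1 + c * x2
    return sum(cross_variogram(a, b, h) ** 2 for h in lags)

def msc_angle(x1, x2, lags, n_grid=181):
    """Angle in [0, pi/2] minimising the objective (grid-search stand-in)."""
    grid = np.linspace(0.0, np.pi / 2, n_grid)
    return min(grid, key=lambda t: msc_objective(t, x1, x2, lags))

# two independent series mixed by a known rotation; minimising the MSC
# objective should approximately unmix them
rng = np.random.default_rng(3)
f1, f2 = rng.standard_normal(4000), 2.0 * rng.standard_normal(4000)
phi = 0.4
x1 = np.cos(phi) * f1 - np.sin(phi) * f2
x2 = np.sin(phi) * f1 + np.cos(phi) * f2
theta = msc_angle(x1, x2, lags=range(1, 6))
```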
NASA Astrophysics Data System (ADS)
Lin, Kun-Hsiang; Tseng, Hung-Wei; Kuo, Chen-Min; Yang, Tao-Chang; Yu, Pao-Shan
2016-04-01
Typhoons with heavy rainfall and strong wind often cause severe floods and losses in Taiwan, which motivates the development of rainfall forecasting models as part of an early warning system. This study aims to develop rainfall forecasting models based on two machine learning methods, support vector machines (SVMs) and random forests (RFs), and to investigate their performance with different predictor sets in order to identify the optimal set for forecasting. Four predictor sets were used to construct models for 1- to 6-hour-ahead rainfall forecasting: (1) antecedent rainfalls; (2) antecedent rainfalls and typhoon characteristics; (3) antecedent rainfalls and meteorological factors; and (4) antecedent rainfalls, typhoon characteristics, and meteorological factors. An application to three rainfall stations in the Yilan River basin, northeastern Taiwan, was conducted. First, the performance of the SVMs-based forecasting model with predictor set #1 was analyzed. The results show that the accuracy of the models for 2- to 6-hour-ahead forecasting decreases rapidly compared with the accuracy of the 1-hour-ahead model, which is acceptable. To improve model performance, each predictor set was further examined in the SVMs-based forecasting model. The results reveal that the SVMs-based model using predictor set #4 as input performs better than the other sets, with a significant improvement found especially for long lead-time forecasting. Lastly, the SVMs-based model using predictor set #4 was compared with the RFs-based model using the same predictor set. The RFs-based model is found to be superior to the SVMs-based model in hourly typhoon rainfall forecasting. Keywords: hourly typhoon rainfall forecasting, predictor selection, support vector machines, random forests
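The four predictor sets differ only in which covariate series are appended to the antecedent rainfalls. A sketch of how such feature/target pairs might be assembled is given below; the lag count, lead time, and toy data are illustrative assumptions, and fitting the actual SVM or RF model with a machine learning library is omitted.

```python
def make_samples(rain, covariates=(), n_lags=3, lead=1):
    """Build (features, target) pairs for lead-hour-ahead rainfall forecasting.
    Features are the last n_lags hourly rainfalls (predictor set 1), plus any
    extra covariate series (typhoon characteristics and/or meteorological
    factors, as in sets 2-4) sampled at the current hour."""
    X, y = [], []
    for t in range(n_lags - 1, len(rain) - lead):
        feats = list(rain[t - n_lags + 1 : t + 1])       # antecedent rainfalls
        feats += [series[t] for series in covariates]    # optional covariates
        X.append(feats)
        y.append(rain[t + lead])
    return X, y

rain = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   # toy hourly rainfall record
X, y = make_samples(rain, n_lags=3, lead=2)
```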
Carle, S F; Zavarin, M; Pawloski, G A
2002-11-01
LLNL hydrologic source term modeling at the Cambric site (Pawloski et al., 2000) showed that retardation of radionuclide transport is sensitive to the distribution and amount of radionuclide sorbing minerals. While all mineralogic information available near the Cambric site was used in these early simulations (11 mineral abundance analyses from UE-5n and 9 from RNM-1), these older data sets were qualitative in nature, with detection limits too high to accurately measure many of the important radionuclide sorbing minerals (e.g. iron oxide). Also, the sparse nature of the mineral abundance data permitted only a hypothetical description of the spatial distribution of radionuclide sorbing minerals. Yet, the modeling results predicted that the spatial distribution of sorbing minerals would strongly affect radionuclide transport. Clearly, additional data are needed to improve understanding of mineral abundances and their spatial distributions if model predictions in Frenchman Flat are to be defensible. This report evaluates new high-resolution quantitative X-Ray Diffraction (XRD) data on mineral distributions and their abundances from core samples recently collected from drill hole ER-5-4. A total of 94 samples from ER-5-4 were collected at various spacings to enable evaluation of spatial variability at a variety of spatial scales as small as 0.3 meters and up to hundreds of meters. Additional XRD analyses obtained from drill holes UE-5n, ER-5-3, and U-11g-1 are used to augment evaluation of vertical spatial variability and permit some evaluation of lateral spatial variability. A total of 163 samples are evaluated. The overall goal of this study is to understand and characterize the spatial variation of sorbing minerals in Frenchman Flat alluvium using geostatistical techniques, with consideration for the potential impact on reactive transport of radionuclides. To achieve this goal requires an effort to ensure that plausible geostatistical models are used to
NASA Astrophysics Data System (ADS)
March, N. H.; Moreno, A. J.
2016-06-01
The critical exponent ν for randomly branched polymers with dimensionality d equal to 3 is known exactly to be 1/2. Here, we invoke an already available string theory model to predict the remaining static critical exponents. Utilizing results of Hsu et al. (Comput Phys Commun. 2005;169:114-116), we also add results for d = 8. Experiment plus simulation would now be important to confirm, or if necessary refine, the proposed values.
Set statistics in conductive bridge random access memory device with Cu/HfO2/Pt structure
Zhang, Meiyun; Long, Shibing; Wang, Guoming; Xu, Xiaoxin; Li, Yang; Liu, Qi; Lv, Hangbing; Liu, Ming; Lian, Xiaojuan; Miranda, Enrique; Suñé, Jordi
2014-11-10
The switching parameter variation of resistive switching memory is one of the most important challenges for its application. In this letter, we have studied the set statistics of conductive bridge random access memory with a Cu/HfO2/Pt structure. The experimental distributions of the set parameters in several off-resistance ranges are shown to nicely fit a Weibull model. The Weibull slopes of the set voltage and current increase and decrease logarithmically with off resistance, respectively. This experimental behavior is perfectly captured by a Monte Carlo simulator based on the cell-based set voltage statistics model and the Quantum Point Contact electron transport model. Our work provides indications for the improvement of the switching uniformity.
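A Weibull slope of the kind discussed here can be read off a probability plot by linear regression. The sketch below is a generic illustration on synthetic data, not the paper's Monte Carlo simulator; median-rank plotting positions are one common (assumed) choice.

```python
import math
import random

def weibull_slope(samples):
    """Estimate the Weibull shape (slope) beta from positive samples via the
    linearized plot ln(-ln(1 - F)) = beta * ln(v) - beta * ln(alpha)."""
    v = sorted(samples)
    n = len(v)
    x = [math.log(val) for val in v]
    y = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))  # median ranks
         for i in range(1, n + 1)]
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# synthetic set-voltage-like data drawn from a Weibull with slope beta = 2
# via inverse-transform sampling
rng = random.Random(0)
beta_true = 2.0
data = [(-math.log(1.0 - rng.random())) ** (1.0 / beta_true)
        for _ in range(2000)]
beta_hat = weibull_slope(data)
```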
Geostatistical analysis of Palmerton soil survey data.
Starks, T H; Sparks, A R; Brown, K W
1987-11-01
This paper describes statistical and geostatistical analyses of data from a soil sampling survey. Soil sampling was performed, in October and November of 1985, to obtain information on the level, extent, and spatial structure of metal pollution of the soil in and around the Palmerton, Pennsylvania, NPL Superfund site. Measurements of the concentrations of cadmium, copper, lead, and zinc in the soil samples were obtained. An appropriate variance stabilizing transformation was determined. Estimation of variance components was performed. Generalized covariance functions for log-transformed concentrations were estimated for each metal. Block kriging was employed using the estimated spatial structure models to obtain estimated metal concentration distributions over the central part of Palmerton.
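Block kriging averages point-kriging estimates over a block; the underlying point (ordinary) kriging system can be sketched as follows. This is a generic illustration, not the Palmerton analysis: the exponential covariance model and all parameter values are assumptions.

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, length=50.0):
    """Ordinary kriging estimate and kriging variance at x0, assuming the
    covariance model C(h) = sill * exp(-h / length). A Lagrange multiplier
    (the last unknown of the linear system) enforces that weights sum to 1."""
    xy, z, x0 = np.asarray(xy, float), np.asarray(z, float), np.asarray(x0, float)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-d / length)                                 # data-data
    c0 = sill * np.exp(-np.linalg.norm(xy - x0, axis=1) / length)  # data-target
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C
    A[n, n] = 0.0
    w = np.linalg.solve(A, np.append(c0, 1.0))
    estimate = w[:n] @ z
    variance = sill - w[:n] @ c0 - w[n]
    return estimate, variance

# four samples on a 10 x 10 square; predict at the centre and at a data point
xy = [[0, 0], [10, 0], [0, 10], [10, 10]]
z = [1.0, 2.0, 0.5, 1.5]
est_center, var_center = ordinary_kriging(xy, z, [5, 5])
est_data, var_data = ordinary_kriging(xy, z, [0, 0])
```

Kriging is an exact interpolator: at a data location the estimate reproduces the datum with zero kriging variance, while away from the data the variance is positive.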
Erlandson, Kristine M; Taejaroenkul, Sineenart; Smeaton, Laura; Gupta, Amita; Singini, Isaac L; Lama, Javier R; Mngqibisa, Rosie; Firnhaber, Cynthia; Cardoso, Sandra Wagner; Kanyama, Cecilia; Machado da Silva, Andre L; Hakim, James G; Kumarasamy, Nagalingeswaran; Campbell, Thomas B; Hughes, Michael D
2015-09-01
Background. Existing data on anthropometric changes in resource-limited settings primarily come from observational or cross-sectional studies. Data from randomized clinical trials are needed to inform treatment decisions in these areas of the world. Methods. The AIDS Clinical Trials Group Prospective Evaluation of Antiretrovirals in Resource-Limited Settings (PEARLS) study was a prospective, randomized evaluation of the efficacy of emtricitabine/tenofovir + efavirenz (FTC/TDF + EFV) vs lamivudine/zidovudine + efavirenz (3TC/ZDV + EFV) for the initial treatment of human immunodeficiency virus (HIV)-1-infected individuals from resource-diverse settings. Changes in anthropometric measures were analyzed using mixed-effect models for repeated measurements, using all available measurements at weeks 48, 96, and 144. Intent-to-treat results are presented; as-treated results were similar. Results. Five hundred twenty-six participants were randomized to FTC/TDF + EFV, and 519 participants were randomized to 3TC/ZDV + EFV. Significantly greater increases from baseline to week 144 were seen among those randomized to FTC/TDF + EFV vs 3TC/ZDV + EFV in all measures except waist-to-hip ratio, with the following mean changes: weight, 4.8 vs 3.0 kg; body mass index, 1.8 vs 1.1 kg/m2; mid-arm, 1.7 vs 0.7 cm; waist, 5.2 vs 4.3 cm; hip, 3.8 vs 1.4 cm; and mid-thigh circumference, 3.1 vs 0.9 cm. There were 7 clinical diagnoses of lipoatrophy in the 3TC/ZDV + EFV arm compared with none in the FTC/TDF + EFV arm. The proportion of overweight or obese participants increased from 25% (week 0) to 42% (week 144) for FTC/TDF + EFV and from 26% to 38% for 3TC/ZDV + EFV. Conclusions. Our findings support first-line use of FTC/TDF + EFV in resource-limited settings and emphasize the need for interventions to limit weight gain among overweight or obese HIV-infected participants in all settings. PMID:26213694
A comprehensive, user-friendly geostatistical software system called GEOPACk has been developed. The purpose of this software is to make available the programs necessary to undertake a geostatistical analysis of spatially correlated data. The programs were written so that they ...
Kitchener, Betty A; Jorm, Anthony F
2004-01-01
Background The Mental Health First Aid training course was favorably evaluated in an uncontrolled trial in 2002 showing improvements in participants' mental health literacy, including knowledge, stigmatizing attitudes, confidence and help provided to others. This article reports the first randomized controlled trial of this course. Methods Data are reported on 301 participants randomized to either participate immediately in a course or to be wait-listed for 5 months before undertaking the training. The participants were employees in two large government departments in Canberra, Australia, where the courses were conducted during participants' work time. Data were analyzed according to an intention-to-treat approach. Results The trial found a number of benefits from this training course, including greater confidence in providing help to others, greater likelihood of advising people to seek professional help, improved concordance with health professionals about treatments, and decreased stigmatizing attitudes. An additional unexpected but exciting finding was an improvement in the mental health of the participants themselves. Conclusions The Mental Health First Aid training has shown itself to be not only an effective way to improve participants' mental health literacy but also to improve their own mental health. It is a course that has high applicability across the community. PMID:15310395
ERIC Educational Resources Information Center
Sawchuk, Craig N.; Russo, Joan E.; Charles, Steve; Goldberg, Jack; Forquera, Ralph; Roy-Byrne, Peter; Buchwald, Dedra
2011-01-01
We examined if step-count goal setting resulted in increases in physical activity and walking compared to only monitoring step counts with pedometers among American Indian/Alaska Native elders. Outcomes included step counts, self-reported physical activity and well-being, and performance on the 6-minute walk test. Although no significant between-group differences were found, within-group analyses indicated that elders significantly improved on the majority of step count, physical activity, health-related quality of life, and 6-minute walk outcomes.
SubPatch: random kd-tree on a sub-sampled patch set for nearest neighbor field estimation
NASA Astrophysics Data System (ADS)
Pedersoli, Fabrizio; Benini, Sergio; Adami, Nicola; Okuda, Masahiro; Leonardi, Riccardo
2015-02-01
We propose a new method to compute the approximate nearest-neighbor field (ANNF) between image pairs using a random kd-tree and patch set sub-sampling. By exploiting image coherence we demonstrate that it is possible to reduce the number of patches on which we compute the ANNF, while maintaining high overall accuracy on the final result. Information on missing patches is then recovered by interpolation and propagation of good matches. The introduction of the sub-sampling factor on patch sets also allows for setting the desired trade-off between accuracy and speed, providing a flexibility that is lacking in state-of-the-art methods. Tests conducted on a public database prove that our algorithm achieves superior performance with respect to the PatchMatch (PM) and Coherency Sensitive Hashing (CSH) algorithms in a comparable computational time.
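The sub-sampled ANNF idea can be sketched in a few lines: index only every `stride`-th patch of the target image in a kd-tree, then query every source patch against that reduced index. This is a minimal illustration, not the SubPatch implementation; the function name, its parameters, and the random-rotation trick (which randomizes the kd-tree's axis-aligned splits without changing exact-query results, since orthogonal maps preserve distances) are all assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def annf_subsampled(src, dst, patch=3, stride=2, seed=0):
    """Approximate NN field from src patches to a sub-sampled set of dst
    patches (illustrative sketch; not the SubPatch paper's API)."""
    rng = np.random.default_rng(seed)
    p = patch
    h, w = dst.shape
    # Index only every `stride`-th dst patch: the sub-sampling factor
    # trades accuracy for speed, as described in the abstract.
    coords = [(y, x) for y in range(0, h - p + 1, stride)
                     for x in range(0, w - p + 1, stride)]
    feats = np.array([dst[y:y + p, x:x + p].ravel() for y, x in coords])
    # Random orthogonal rotation of patch space: randomizes the kd-tree's
    # split axes (the "random kd-tree" idea) while preserving distances.
    R = np.linalg.qr(rng.standard_normal((p * p, p * p)))[0]
    tree = cKDTree(feats @ R)
    sh, sw = src.shape
    nnf = {}
    for y in range(sh - p + 1):
        for x in range(sw - p + 1):
            _, i = tree.query(src[y:y + p, x:x + p].ravel() @ R)
            nnf[(y, x)] = coords[i]   # corner of the best-matching dst patch
    return nnf
```

With `src == dst` and `stride=1` every patch matches itself exactly; larger strides return nearby approximate matches that propagation and interpolation would then refine.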
Dassau, Eyal; Brown, Sue A.; Basu, Ananda; Pinsker, Jordan E.; Kudva, Yogish C.; Gondhalekar, Ravi; Patek, Steve; Lv, Dayu; Schiavon, Michele; Lee, Joon Bok; Dalla Man, Chiara; Hinshaw, Ling; Castorino, Kristin; Mallad, Ashwini; Dadlani, Vikash; McCrady-Spitzer, Shelly K.; McElwee-Malloy, Molly; Wakeman, Christian A.; Bevier, Wendy C.; Bradley, Paige K.; Kovatchev, Boris; Cobelli, Claudio; Zisser, Howard C.
2015-01-01
Context: Closed-loop control (CLC) relies on an individual's open-loop insulin pump settings to initialize the system. Optimizing open-loop settings before using CLC usually requires significant time and effort. Objective: The objective was to investigate the effects of a one-time algorithmic adjustment of basal rate and insulin-to-carbohydrate ratio open-loop settings on the performance of CLC. Design: This study reports a multicenter, outpatient, randomized, crossover clinical trial. Patients: Thirty-seven adults with type 1 diabetes were enrolled at three clinical sites. Interventions: Each subject's insulin pump settings were subject to a one-time algorithmic adjustment based on 1 week of open-loop (i.e., home care) data collection. Subjects then underwent two 27-hour periods of CLC in random order with either unchanged (control) or algorithmically adjusted basal rate and carbohydrate ratio settings (adjusted) used to initialize the zone-model predictive control artificial pancreas controller. Subjects followed their usual meal plan and had an unannounced exercise session. Main Outcomes and Measures: Time in the glucose range 80–140 mg/dL was compared between both arms. Results: Thirty-two subjects completed the protocol. Median time in CLC was 25.3 hours. The median time in the 80–140 mg/dL range was similar in both groups (39.7% control, 44.2% adjusted). Subjects in both arms of CLC showed minimal time spent below 70 mg/dL (median 1.34% and 1.37%, respectively). There were no significant differences in time spent above 140 mg/dL. Conclusions: A one-time algorithmic adjustment of open-loop settings did not alter glucose control in a relatively short duration outpatient closed-loop study. The CLC system proved very robust and adaptable, with minimal (<2%) time spent in the hypoglycemic range in either arm. PMID:26204135
Prediction intervals for integrals of Gaussian random fields
De Oliveira, Victor; Kone, Bazoumana
2014-01-01
Methodology is proposed for the construction of prediction intervals for integrals of Gaussian random fields over bounded regions (called block averages in the geostatistical literature) based on observations at a finite set of sampling locations. Two bootstrap calibration algorithms are proposed, termed indirect and direct, aimed at improving upon plug-in prediction intervals in terms of coverage probability. A simulation study is carried out that illustrates the effectiveness of both procedures, and these procedures are applied to estimate block averages of chromium traces in a potentially contaminated region in Switzerland. PMID:25431507
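As a concrete reference point, the plug-in interval that the paper's two bootstrap calibrations aim to improve can be sketched for a 1-D zero-mean Gaussian field with known covariance. The function names and covariance parameters below are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def gauss_cov(d, sill=1.0, corr_len=0.5):
    # Gaussian (squared-exponential) covariance; parameters are illustrative.
    return sill * np.exp(-(d / corr_len) ** 2)

def block_average_pi(x_obs, z_obs, block_pts, alpha=0.05):
    """Plug-in (1 - alpha) prediction interval for the block average of a
    zero-mean 1-D Gaussian random field with known covariance -- the
    uncalibrated interval that bootstrap calibration would improve on."""
    C = gauss_cov(np.abs(x_obs[:, None] - x_obs[None, :]))
    C += 1e-10 * np.eye(len(x_obs))  # jitter for numerical stability
    # Average covariance between each observation and the block points,
    # and the average within-block covariance.
    c = gauss_cov(np.abs(x_obs[:, None] - block_pts[None, :])).mean(axis=1)
    C_bb = gauss_cov(np.abs(block_pts[:, None] - block_pts[None, :])).mean()
    w = np.linalg.solve(C, c)              # simple-kriging weights
    pred = w @ z_obs                       # predicted block average
    var = max(C_bb - c @ w, 0.0)           # prediction variance
    q = norm.ppf(1 - alpha / 2)
    return pred - q * np.sqrt(var), pred + q * np.sqrt(var)
```

The coverage of this interval treats the covariance as known; the bootstrap calibration in the paper addresses the under-coverage that appears once the covariance is estimated from the same data.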
Grisotto, Laura; Consonni, Dario; Cecconi, Lorenzo; Catelan, Dolores; Lagazio, Corrado; Bertazzi, Pier Alberto; Baccini, Michela; Biggeri, Annibale
2016-01-01
In this paper the focus is on environmental statistics, with the aim of estimating the concentration surface and related uncertainty of an air pollutant. We used air quality data recorded by a network of monitoring stations within a Bayesian framework to overcome difficulties in accounting for prediction uncertainty and to integrate information provided by deterministic models based on emissions, meteorology, and chemico-physical characteristics of the atmosphere. Several authors have proposed such integration, but all the proposed approaches rely on the representativeness and completeness of existing air pollution monitoring networks. We considered the situation in which the spatial process of interest and the sampling locations are not independent. This is known in the literature as the preferential sampling problem, which, if ignored in the analysis, can bias geostatistical inferences. We developed a Bayesian geostatistical model to account for preferential sampling, with the main interest in statistical integration and uncertainty. We used PM10 data arising from the air quality network of the Environmental Protection Agency of Lombardy Region (Italy) and numerical outputs from the deterministic model. We specified an inhomogeneous Poisson process for the sampling location intensities and a shared spatial random component model for the dependence between the spatial location of monitors and the pollution surface. We found greater predicted standard deviation differences in areas not properly covered by the air quality network. In conclusion, in this context inferences on prediction uncertainty may be misleading when geostatistical modelling does not take preferential sampling into account. PMID:27087040
2014-01-01
Background Few studies have designed and tested the use of continuous quality improvement approaches in community based substance use treatment settings. Little is known about the feasibility, costs, efficacy, and sustainment of such approaches in these settings. Methods/Design A group-randomized trial using a modified stepped wedge design is being used. In the first phase of the study, eight programs, stratified by modality (residential, outpatient) are being randomly assigned to the intervention or control condition. In the second phase, the initially assigned control programs are receiving the intervention to gain additional information about feasibility while sustainment is being studied among the programs initially assigned to the intervention. Discussion By using this design in a pilot study, we help inform the field about the feasibility, costs, efficacy and sustainment of the intervention. Determining information at the pilot stage about costs and sustainment provides value for designing future studies and implementation strategies with the goal to reduce the time between intervention development and translation to real world practice settings. PMID:24467770
Escoffery, Cam; Elliott, Tom; Nehl, Eric J.
2015-01-01
Objectives. We compared 2 strategies for disseminating an evidence-based skin cancer prevention program. Methods. We evaluated the effects of 2 strategies (basic vs enhanced) for dissemination of the Pool Cool skin cancer prevention program in outdoor swimming pools on (1) program implementation, maintenance, and sustainability and (2) improvements in organizational and environmental supports for sun protection. The trial used a cluster-randomized design with pools as the unit of intervention and outcome. The enhanced group received extra incentives, reinforcement, feedback, and skill-building guidance. Surveys were collected in successive years (2003–2006) from managers of 435 pools in 33 metropolitan areas across the United States participating in the Pool Cool Diffusion Trial. Results. Both treatment groups improved their implementation of the program, but pools in the enhanced condition had significantly greater overall maintenance of the program over 3 summers of participation. Furthermore, pools in the enhanced condition established and maintained significantly greater sun-safety policies and supportive environments over time. Conclusions. This study found that more intensive, theory-driven dissemination strategies can significantly enhance program implementation and maintenance of health-promoting environmental and policy changes. Future research is warranted through longitudinal follow-up to examine sustainability. PMID:25521872
Ouwehand, Arthur C; DongLian, Cai; Weijian, Xu; Stewart, Morgan; Ni, Jiayi; Stewart, Tad; Miller, Larry E
2014-01-16
Probiotics are known to reduce antibiotic associated diarrhea (AAD) and Clostridium difficile associated diarrhea (CDAD) risk in a strain-specific manner. The aim of this study was to determine the dose-response effect of a four strain probiotic combination (HOWARU® Restore) on the incidence of AAD and CDAD and severity of gastrointestinal symptoms in adult in-patients requiring antibiotic therapy. Patients (n=503) were randomized among three study groups: HOWARU® Restore probiotic 1.70×10¹⁰ CFU (high-dose, n=168), HOWARU® Restore probiotic 4.17×10⁹ CFU (low-dose, n=168), or placebo (n=167). Subjects were stratified by gender, age, and duration of antibiotic treatment. Study products were administered daily up to 7 days after the final antibiotic dose. The primary endpoint of the study was the incidence of AAD. Secondary endpoints included incidence of CDAD, diarrhea duration, stools per day, bloody stools, fever, abdominal cramping, and bloating. A significant dose-response effect on AAD was observed with incidences of 12.5, 19.6, and 24.6% with high-dose, low-dose, and placebo, respectively (p=0.02). CDAD was the same in both probiotic groups (1.8%) but different from the placebo group (4.8%; p=0.04). Incidences of fever, abdominal pain, and bloating were lower with increasing probiotic dose. The number of daily liquid stools and average duration of diarrhea decreased with higher probiotic dosage. The tested four strain probiotic combination appears to lower the risk of AAD, CDAD, and gastrointestinal symptoms in a dose-dependent manner in adult in-patients.
Relevant feature set estimation with a knock-out strategy and random forests.
Ganz, Melanie; Greve, Douglas N; Fischl, Bruce; Konukoglu, Ender
2015-11-15
Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data's multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods' user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a "knock-out" strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset
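The knock-out idea can be illustrated with a toy loop: fit a Random Forest, remove ("knock out") the most important feature, and repeat until cross-validated accuracy falls toward chance; everything knocked out is declared relevant. This is a simplified sketch of the strategy, not the authors' algorithm; the thresholds and parameter values are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def knock_out_features(X, y, max_rounds=10, min_score=0.6, seed=0):
    """Iteratively knock out the most important feature and refit,
    collecting relevant features until predictive power is exhausted.
    (Illustrative sketch; thresholds are assumptions, not the paper's.)"""
    remaining = list(range(X.shape[1]))
    relevant = []
    for _ in range(max_rounds):
        if not remaining:
            break
        rf = RandomForestClassifier(n_estimators=100, random_state=seed)
        score = cross_val_score(rf, X[:, remaining], y, cv=3).mean()
        if score < min_score:   # no detectable signal left among remaining features
            break
        rf.fit(X[:, remaining], y)
        top = int(np.argmax(rf.feature_importances_))
        relevant.append(remaining.pop(top))  # knock out the strongest feature
    return sorted(relevant)
```

The stopping rule is the key design choice: unlike compact-model selectors, the loop keeps removing features as long as *any* predictive signal remains, which is what lets it recover all relevant variations rather than a minimal subset.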
Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series
NASA Astrophysics Data System (ADS)
Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina
2014-05-01
The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which included feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist on the homogenisation of climate data, while minimising user interaction. Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results
Mejia, Anilena; Calam, Rachel; Sanders, Matthew R
2015-07-01
The aim of this study was to determine whether an intervention from the Triple P Positive Parenting Program system was effective in reducing parental reports of child behavioral difficulties in urban low-income settings in Panama City. A pilot parallel-group randomized controlled trial was carried out. A total of 108 parents of children 3 to 12 years old with some level of parent-rated behavioral difficulties were randomly assigned to a discussion group on "dealing with disobedience" or to a no intervention control. Blinded assessments were carried out prior to the intervention, 2 weeks, 3 months, and 6 months later. Results indicated that parental reports of child behavioral difficulties changed over time and decreased more steeply in the intervention than in the control group. The effects of the intervention on parental reports of behavioral difficulties were moderate at post-intervention and 3-month follow-up, and large at 6-month follow-up. Parents who participated in the discussion group reported fewer behavioral difficulties in their children after the intervention than those in the control condition. They also reported reduced parental stress and less use of dysfunctional parenting practices. There is a limited amount of evidence on the efficacy of parenting interventions in low-resource settings. This pilot trial was carried out using a small convenience sample living in low-income urban communities in Panama City, and therefore, the findings are of reduced generalizability to other settings. However, the methodology employed in this trial represents an example for future work in other low-resource settings. PMID:25703382
Mann, Devin M; Palmisano, Joseph; Lin, Jenny J
2016-12-01
Lifestyle behavior changes can prevent progression of prediabetes to diabetes, but providers often are not able to effectively counsel about preventive lifestyle changes. We developed and pilot tested the Avoiding Diabetes Thru Action Plan Targeting (ADAPT) program to enhance primary care providers' counseling about behavior change for patients with prediabetes. Primary care providers in two urban academic practices and their patients with prediabetes were recruited to participate in the ADAPT study, an unblinded randomized pragmatic trial to test the effectiveness of the ADAPT program, including a streamlined electronic medical record-based goal-setting tool. Providers were randomized to intervention or control arms; eligible patients whose providers were in the intervention arm received the ADAPT program. Physical activity (the primary outcome) was measured using pedometers, and data were gathered about patients' diet, weight and glycemic control. A total of 54 patients were randomized and analyzed as part of the 6-month ADAPT study (2010-2012, New York, NY). Those in the intervention group showed an increase in total daily steps compared to those in the control group (+1418 vs −598, p = 0.007) at 6 months. There was also a trend towards weight loss in the intervention group compared to the control group (−1.0 vs +3.0 lbs, p = 0.11), although no change in glycemic control. The ADAPT study is among the first to use standard electronic medical record tools to embed goal setting into realistic primary care workflows and to demonstrate a significant improvement in prediabetes patients' physical activity.
Doctor, P.G.; Oberlander, P.L.; Rice, W.A.; Devary, J.L.; Nelson, R.W.; Tucker, P.E.
1982-09-01
The Office of Nuclear Waste Isolation (ONWI) requested Pacific Northwest Laboratory (PNL) to: (1) use geostatistical analyses to evaluate the adequacy of hydrologic data from three salt regions, each of which contains a potential nuclear waste repository site; and (2) demonstrate a methodology that allows quantification of the value of additional data collection. The three regions examined are the Paradox Basin in Utah, the Permian Basin in Texas, and the Mississippi Study Area. Additional and new data became available to ONWI during and following these analyses; therefore, this report must be considered a demonstration of methodology: the approach demonstrated here would apply as illustrated had the complete data sets been available. A combination of geostatistical and hydrologic analyses was used for this demonstration. Geostatistical analyses provided an optimal estimate of the potentiometric surface from the available data, a measure of the uncertainty of that estimate, and a means for selecting and evaluating the location of future data. The hydrologic analyses included the calculation of transmissivities, flow paths, travel times, and ground-water flow rates from hypothetical repository sites. Simulation techniques were used to evaluate the effect of optimally located future data on the potentiometric surface, flow lines, travel times, and flow rates. Data availability, quality, quantity, and conformance with model assumptions differed in each of the salt areas. Report highlights for the three locations are given.
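The "optimal estimate of the potentiometric surface plus a measure of its uncertainty" described above is what ordinary kriging delivers. A minimal sketch follows, with an assumed (unfitted) exponential covariance; the function names and parameter values are illustrative, not taken from the report.

```python
import numpy as np

def exp_cov(d, sill=1.0, corr_len=300.0):
    # Exponential covariance; sill and correlation length are illustrative.
    return sill * np.exp(-d / corr_len)

def ordinary_kriging(xy_obs, z_obs, xy_new):
    """Ordinary kriging estimate and kriging variance at new locations."""
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Augmented system: the extra row/column enforces unbiasedness
    # (kriging weights sum to 1) via a Lagrange multiplier.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0
    est, var = [], []
    for p in xy_new:
        c = exp_cov(np.linalg.norm(xy_obs - p, axis=1))
        sol = np.linalg.solve(A, np.append(c, 1.0))
        w, mu = sol[:n], sol[n]
        est.append(w @ z_obs)
        var.append(exp_cov(0.0) - w @ c - mu)  # kriging variance
    return np.array(est), np.array(var)
```

The variance map is what makes the "value of additional data" question tractable: candidate well locations can be ranked by how much they would shrink the kriging variance, before any water level is actually measured there.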
NASA Astrophysics Data System (ADS)
Jha, Sanjeev Kumar; Mariethoz, Gregoire; Evans, Jason; McCabe, Matthew F.; Sharma, Ashish
2015-08-01
A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 and 10 km resolution for a 20 year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference data set indicates that both the seasonal average climate response together with the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool to be used for statistical downscaling to obtain local-scale estimates of precipitation and temperature from General Circulation Models.
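In spirit, the MPS prediction step resembles analog resampling: for each coarse value to downscale, borrow the fine-scale pattern of the most similar training-image day. The function below is a drastically simplified single-variable stand-in for the multivariate MPS sampler, for illustration only.

```python
import numpy as np

def analog_downscale(coarse_train, fine_train, coarse_new):
    """Analog-based downscaling: for each new coarse value, copy the
    fine-scale pattern of the closest training day. A simplified
    stand-in for the multiple-point (MPS) sampler, not the paper's method."""
    out = []
    for v in coarse_new:
        i = int(np.argmin(np.abs(coarse_train - v)))  # closest training day
        out.append(fine_train[i])                     # paste its fine pattern
    return np.array(out)
```

The full MPS approach generalizes this in two ways the sketch omits: the match is on multivariate spatial neighborhoods (precipitation and temperature jointly) rather than a single scalar, and optionally on the previous day's state to carry temporal dependence.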
Unsupervised classification of multivariate geostatistical data: Two algorithms
NASA Astrophysics Data System (ADS)
Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques
2015-12-01
With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, include a growing number of variables, and cover ever wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on, e.g., Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performances of both algorithms are assessed on toy examples and a mining dataset.
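The proximity-graph merge condition of the first algorithm maps directly onto connectivity-constrained agglomerative clustering: build a neighbor graph in coordinate space and allow merges only along its edges while clustering on the variable values. The sketch below uses scikit-learn's off-the-shelf version as a related approximation, not the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

def spatial_clusters(coords, values, n_clusters=2, n_neighbors=8):
    """Agglomerative clustering of irregularly spaced samples where merges
    are restricted to spatial neighbors via a k-NN graph on coordinates.
    Parameter values are illustrative defaults."""
    # Graph in coordinate space: only spatially adjacent samples may merge,
    # which enforces spatially coherent classes.
    graph = kneighbors_graph(coords, n_neighbors=n_neighbors,
                             include_self=False)
    model = AgglomerativeClustering(n_clusters=n_clusters,
                                    connectivity=graph, linkage="ward")
    # Clustering distances are computed on the variable values,
    # not on the coordinates.
    return model.fit_predict(values)
```

Note the division of labor: spatial coordinates only define which merges are admissible, while the multivariate values drive the merge order, matching the paper's separation of proximity condition and homogeneity criterion.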
Roth, Holger R; Lu, Le; Seff, Ari; Cherry, Kevin M; Hoffman, Joanne; Wang, Shijun; Liu, Jiamin; Turkbey, Evrim; Summers, Ronald M
2014-01-01
Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D Haar features. In this paper, we first operate a preliminary candidate generation stage, towards ~100% sensitivity at the cost of high FP levels (~40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.
The role of geostatistics in medical geology
NASA Astrophysics Data System (ADS)
Goovaerts, Pierre
2014-05-01
Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences, to agriculture, soil science, remote sensing, and more recently environmental exposure assessment. In the last few years, these tools have been tailored to the field of medical geography or spatial epidemiology, which is concerned with the study of spatial patterns of disease incidence and mortality and the identification of potential 'causes' of disease, such as environmental exposure, diet and unhealthy behaviors, economic or socio-demographic factors. On the other hand, medical geology is an emerging interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. This paper provides an introduction to the field of medical geology with an overview of geostatistical methods available for the analysis of geological and health data. Key concepts are illustrated using the mapping of groundwater arsenic concentrations across eleven Michigan counties and the exploration of its relationship to the incidence of prostate cancer at the township level. Arsenic in drinking water is a major problem and has received much attention because of the large human population exposed and the extremely high concentrations (e.g. 600 to 700 μg/L) recorded in many instances. Few studies have, however, assessed the risks associated with exposure to low levels of arsenic (say < 50 μg/L) most commonly found in drinking water in the United States. In the Michigan thumb region, arsenopyrite (up to 7% As by weight) has been identified in the bedrock of the Marshall Sandstone aquifer, one of the region's most productive aquifers. Epidemiologic studies have suggested a possible association between exposure to inorganic arsenic and prostate cancer mortality, including a study of populations residing in Utah. The information available for the
Lahlou, Saadi; Boesen-Mariani, Sabine; Franks, Bradley; Guelinckx, Isabelle
2015-01-01
On average, children and adults in developed countries consume too little water, which can lead to negative health consequences. In a one-year longitudinal field experiment in Poland, we compared the impact of three home-based interventions in helping children and their parents/caregivers to develop sustainable increased plain water consumption habits. The fluid consumption of 334 children and their caregivers was recorded over one year using an online fluid-specific dietary record. They were initially randomly allocated to one of the three following conditions: Control, Information (child and carer received information on the health benefits of water), or Placement (in addition to information, free small bottles of still water were delivered at home for a limited time period). After three months, half of the non-controls were randomly assigned to Community (child and caregiver engaged in an online community forum providing support on water consumption). All conditions significantly increased the water consumption of children (by 21.9-56.7%) and of adults (by 22-89%). Placement + Community generated the largest effects. Community enhanced the impact of Placement for children and parents, as well as the impact of Information for parents but not children. The results suggest that the family setting offers considerable scope for successful installation of interventions encouraging children and caregivers to develop healthier consumption habits, in mutually reinforcing ways. Combining information, affordances, and social influence gives the best, and most sustainable, results. PMID:26088044
2014-01-01
Background In high-resource settings, obstetric ultrasound is a standard component of prenatal care used to identify pregnancy complications and to establish an accurate gestational age in order to improve obstetric care. Whether or not ultrasound use will improve care and ultimately pregnancy outcomes in low-resource settings is unknown. Methods/Design This multi-country cluster randomized trial will assess the impact of antenatal ultrasound screening performed by health care staff on a composite outcome consisting of maternal mortality and maternal near-miss, stillbirth and neonatal mortality in low-resource community settings. The trial will utilize an existing research infrastructure, the Global Network for Women’s and Children’s Health Research with sites in Pakistan, Kenya, Zambia, Democratic Republic of Congo and Guatemala. A maternal and newborn health registry in defined geographic areas which documents all pregnancies and their outcomes to 6 weeks post-delivery will provide population-based rates of maternal mortality and morbidity, stillbirth, neonatal mortality and morbidity, and health care utilization for study clusters. A total of 58 study clusters each with a health center and about 500 births per year will be randomized (29 intervention and 29 control). The intervention includes training of health workers (e.g., nurses, midwives, clinical officers) to perform ultrasound examinations during antenatal care, generally at 18–22 and at 32–36 weeks for each subject. Women who are identified as having a complication of pregnancy will be referred to a hospital for appropriate care. Finally, the intervention includes community sensitization activities to inform women and their families of the availability of ultrasound at the antenatal care clinic and training in emergency obstetric and neonatal care at referral facilities. Discussion In summary, our trial will evaluate whether introduction of ultrasound during antenatal care improves pregnancy
Goovaerts, Pierre; Trinh, Hoa T; Demond, Avery H; Towey, Timothy; Chang, Shu-Chi; Gwinn, Danielle; Hong, Biling; Franzblau, Alfred; Garabrant, David; Gillespie, Brenda W; Lepkowski, James; Adriaens, Peter
2008-05-15
A key component in any investigation of cause-effect relationships between point source pollution, such as an incinerator, and human health is the availability of measurements and/or accurate models of exposure at the same scale or geography as the health data. Geostatistics allows one to simulate the spatial distribution of pollutant concentrations over various spatial supports while incorporating both field data and predictions of deterministic dispersion models. This methodology was used in a companion paper to identify the census blocks that have a high probability of exceeding a given level of dioxin TEQ (toxic equivalents) around an incinerator in Midland, MI. This geostatistical model, along with population data, provided guidance for the collection of 51 new soil samples, permitting verification of the geostatistical predictions and calibration of the model. Each new soil measurement was compared to the set of 100 TEQ values simulated at the closest grid node. The correlation between the measured concentration and the averaged simulated value is moderate (0.44), and the actual concentrations are clearly overestimated in the vicinity of the plant property line. Nevertheless, probability intervals computed from simulated TEQ values provide an accurate model of uncertainty: the proportion of observations that fall within these intervals exceeds what is expected from the model. Simulation-based probability intervals are also narrower than the intervals derived from the global histogram of the data, which demonstrates the greater precision of the geostatistical approach. Log-normal ordinary kriging provided fairly similar estimation results for the small and well-sampled area used in this validation study; however, the model of uncertainty was not always accurate. The regression analysis and geostatistical simulation were then conducted using the combined set of 53 original and 51 new soil samples, leading to an updated model for the spatial distribution of
NASA Astrophysics Data System (ADS)
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
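The pruning objective above can be sketched compactly. The snippet below is a hypothetical illustration, not the study's code: it substitutes simple inverse-distance weighting for Ordinary Kriging with the Spartan variogram, and a greedy search for the integer genetic algorithm, but it keeps the same 2-norm criterion against the full-network map.

```python
import numpy as np

def map_levels(wells, levels, grid):
    """Interpolate groundwater levels onto grid nodes. Inverse-distance
    weighting is a lightweight stand-in for the Ordinary Kriging /
    Spartan-variogram interpolator used in the study."""
    d = np.linalg.norm(grid[:, None, :] - wells[None, :, :], axis=-1) + 1e-9
    w = 1.0 / d ** 2
    return (w @ levels) / w.sum(axis=1)

def greedy_prune(wells, levels, grid, n_remove):
    """Greedy stand-in for the genetic-algorithm search: repeatedly drop the
    well whose removal least perturbs the map, measured (as in the study)
    by the 2-norm of the difference from the full-network mapping."""
    ref = map_levels(wells, levels, grid)
    keep = list(range(len(wells)))
    for _ in range(n_remove):
        def err(i):
            k = [j for j in keep if j != i]
            return np.linalg.norm(ref - map_levels(wells[k], levels[k], grid))
        keep.remove(min(keep, key=err))
    return keep
```

A genetic algorithm explores subsets globally rather than one removal at a time, so the greedy result is only a baseline; the objective function is the part shared with the paper.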
High Performance Geostatistical Modeling of Biospheric Resources
NASA Astrophysics Data System (ADS)
Pedelty, J. A.; Morisette, J. T.; Smith, J. A.; Schnase, J. L.; Crosier, C. S.; Stohlgren, T. J.
2004-12-01
We are using parallel geostatistical codes to study spatial relationships among biospheric resources in several study areas. For example, spatial statistical models based on large- and small-scale variability have been used to predict species richness of both native and exotic plants (hot spots of diversity) and patterns of exotic plant invasion. However, broader use of geostatistics in natural resource modeling, especially at regional and national scales, has been limited due to the large computing requirements of these applications. To address this problem, we implemented parallel versions of the kriging spatial interpolation algorithm. The first uses the Message Passing Interface (MPI) in a master/slave paradigm on an open source Linux Beowulf cluster, while the second is implemented with the new proprietary Xgrid distributed processing system on an Xserve G5 cluster from Apple Computer, Inc. These techniques are proving effective and provide the basis for a national decision support capability for invasive species management that is being jointly developed by NASA and the US Geological Survey.
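The master/slave decomposition described above can be mimicked in a few lines. The sketch below is illustrative only: it uses a Python thread pool in place of MPI ranks and inverse-distance weighting in place of the kriging solve, but the scatter/gather pattern is the same.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def interpolate_chunk(chunk, X, z):
    """Worker ('slave') task: interpolate one chunk of prediction points
    from the observations (X, z). Inverse-distance weighting stands in
    for the per-point kriging solve."""
    d = np.linalg.norm(chunk[:, None, :] - X[None, :, :], axis=-1) + 1e-9
    w = 1.0 / d ** 2
    return (w @ z) / w.sum(axis=1)

def parallel_interpolate(grid, X, z, n_workers=4):
    """'Master': scatter grid chunks to the workers and gather the results
    in order. With MPI, the chunks would instead be sent to slave ranks."""
    chunks = np.array_split(grid, n_workers)
    with ThreadPoolExecutor(n_workers) as pool:
        parts = pool.map(lambda c: interpolate_chunk(c, X, z), chunks)
    return np.concatenate(list(parts))
```

Because each prediction point is independent of the others, this decomposition is embarrassingly parallel, which is why a master/slave layout scales well on clusters.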
NASA Astrophysics Data System (ADS)
Hellies, Matteo; Deidda, Roberto; Langousis, Andreas
2016-04-01
We study the extreme rainfall regime of the Island of Sardinia in Italy, based on annual maxima of daily precipitation. The statistical analysis is conducted using 229 daily rainfall records with at least 50 complete years of observations, collected at different sites by the Hydrological Survey of the Sardinia Region. Preliminary analysis, and the L-skewness and L-kurtosis diagrams, show that the Generalized Extreme Value (GEV) distribution model performs best in describing daily rainfall extremes. The GEV distribution parameters are estimated using the method of Probability Weighted Moments (PWM). To obtain extreme rainfall estimates at ungauged sites, while minimizing uncertainties due to sampling variability, a regional and a geostatistical approach are compared. The regional approach merges information from different gauged sites, within homogeneous regions, to obtain GEV parameter estimates at ungauged locations. The geostatistical approach infers the parameters of the GEV distribution model at locations where measurements are available, and then spatially interpolates them over the study region. In both approaches we use local rainfall means as index-rainfall. In the regional approach we define homogeneous regions by applying a hierarchical cluster analysis based on Ward's method, with L-moment ratios (i.e. L-CV and L-Skewness) as metrics. The analysis results in four contiguous regions, which satisfy the Hosking and Wallis (1997) homogeneity tests. The latter have been conducted using a Monte-Carlo approach based on a 4-parameter Kappa distribution model, fitted to each station cluster. Note that the 4-parameter Kappa model includes the GEV distribution as a sub-case, when the fourth parameter h is set to 0. In the geostatistical approach we apply kriging for uncertain data (KUD), which accounts for the error variance in local parameter estimation and, therefore, may serve as a useful tool for spatial interpolation of metrics affected by high uncertainty. In
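The PWM estimation step for the GEV distribution can be written out directly from the Hosking, Wallis and Wood (1985) estimators. The sketch below is not the authors' code and the variable names are illustrative; it fits the three GEV parameters (Hosking sign convention for the shape) from a sample of annual maxima.

```python
import numpy as np
from math import gamma, log

def gev_pwm_fit(x):
    """Estimate GEV parameters (location xi, scale alpha, shape k; Hosking
    sign convention, k > 0 means a bounded upper tail) by probability-
    weighted moments, following Hosking, Wallis and Wood (1985)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    # unbiased sample PWMs b0, b1, b2
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0  # first L-moments
    t3 = l3 / l2                                        # L-skewness
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2                    # Hosking's approximation
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 + alpha * (gamma(1 + k) - 1.0) / k
    return xi, alpha, k

# sanity check against synthetic maxima drawn from a known GEV via its
# quantile function x(u) = xi + alpha * (1 - (-ln u)^k) / k
rng = np.random.default_rng(7)
u = rng.random(20000)
sample = 10.0 + 2.0 * (1.0 - (-np.log(u)) ** 0.1) / 0.1
xi, alpha, k = gev_pwm_fit(sample)
```

The rational approximation for k is accurate to about 0.0009 over the range of L-skewness met in practice, which is why PWM fitting is popular for short hydrological records.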
Shargie, Estifanos Biru; Mørkve, Odd; Lindtjørn, Bernt
2006-01-01
OBJECTIVE: To ascertain whether case-finding through community outreach in a rural setting has an effect on case-notification rate, symptom duration, and treatment outcome of smear-positive tuberculosis (TB). METHODS: We randomly allocated 32 rural communities to intervention or control groups. In intervention communities, health workers from seven health centres held monthly diagnostic outreach clinics at which they obtained sputum samples for sputum microscopy from symptomatic TB suspects. In addition, trained community promoters distributed leaflets and discussed symptoms of TB during house visits and at popular gatherings. Symptomatic individuals were encouraged to visit the outreach team or a nearby health facility. In control communities, cases were detected through passive case-finding among symptomatic suspects reporting to health facilities. Smear-positive TB patients from the intervention and control communities diagnosed during the study period were prospectively enrolled. FINDINGS: In the 1-year study period, 159 and 221 cases of smear-positive TB were detected in the intervention and control groups, respectively. Case-notification rates in all age groups were 124.6/10⁵ and 98.1/10⁵ person-years, respectively (P = 0.12). The corresponding rates in adults older than 14 years were 207/10⁵ and 158/10⁵ person-years, respectively (P = 0.09). The proportion of patients with >3 months' symptom duration was 41% in the intervention group compared with 63% in the control group (P<0.001). Pre-treatment symptom duration in the intervention group fell by 55-60% compared with 3-20% in the control group. In the intervention and control groups, 81% and 75%, respectively of patients successfully completed treatment (P = 0.12). CONCLUSION: The intervention was effective in improving the speed but not the extent of case finding for smear-positive TB in this setting. Both groups had comparable treatment outcomes. PMID:16501728
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields, and can hence become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
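The contrast between SR and LH sampling of a lognormal variable can be demonstrated in one dimension. In this illustrative sketch (function names are ours, not the paper's), each LH batch places exactly one uniform draw in each of n equal-probability strata, which stabilises batch statistics relative to SR sampling.

```python
import numpy as np
from statistics import NormalDist

def lognormal_sample(n, mu=0.0, sigma=1.0, rng=None, latin=True):
    """Draw n lognormal conductivity values by Latin hypercube sampling
    (one uniform draw per equal-probability stratum, in random order)
    or by simple random sampling of the underlying uniform variate."""
    rng = rng or np.random.default_rng()
    if latin:
        u = (rng.permutation(n) + rng.random(n)) / n   # stratified uniforms
    else:
        u = rng.random(n)                              # plain uniforms
    z = np.array([NormalDist().inv_cdf(v) for v in u]) # Gaussian quantiles
    return np.exp(mu + sigma * z)

# the mean of each batch fluctuates far less under LH than under SR sampling,
# i.e. fewer realizations are needed for a representative ensemble
rng = np.random.default_rng(1)
sr_std = np.std([lognormal_sample(50, rng=rng, latin=False).mean()
                 for _ in range(200)])
lh_std = np.std([lognormal_sample(50, rng=rng, latin=True).mean()
                 for _ in range(200)])
```

In the multi-dimensional random-field setting of the paper, the stratification is applied to the field realizations rather than a single variate, but the variance-reduction principle is the same.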
Geostatistical methods for hazard assessment and site characterization in mining
Riefenberg, J.
1996-12-01
Ground control hazards, coal quality, ore reserve estimation, and pollution modeling seem unrelated topics from most mining perspectives. However, geostatistical methods can be used to characterize each of these topics, and more. Exploratory drill core data, and continued drilling and field measurements, can provide a wealth of information related to each of the above areas and are often severely underutilized. Recent studies have led to the development of the Multiple Parameter Mapping (MPM) technology, which utilizes geostatistics and other numerical modeling methods to generate a "hazard index" map, often from exploratory drill core data. This mapping has been presented for ground control hazards relating to roof quality, floor quality, numerically modelled stresses due to mining geometry, and geologic features. A review of the MPM method, future directions with the MPM, and a discussion of using these and other geostatistical methods to quantify coal quality, ore reserve estimation, and pollutant modeling are presented in this paper.
Martin-Diener, Eva; Braun-Fahrländer, Charlotte; Bauer, Georg; Martin, Brian W
2009-01-01
Background Effective interventions are needed to reduce the chronic disease epidemic. The Internet has the potential to provide large populations with individual advice at relatively low cost. Objective The focus of the study was the Web-based tailored physical activity intervention Active-online. The main research questions were (1) How effective is Active-online, compared to a nontailored website, in increasing self-reported and objectively measured physical activity levels in the general population when delivered in a real-life setting? (2) Do respondents recruited for the randomized study differ from spontaneous users of Active-online, and how does effectiveness differ between these groups? (3) What is the impact of frequency and duration of use of Active-online on changes in physical activity behavior? Methods Volunteers recruited via different media channels completed a Web-based baseline survey and were randomized to Active-online (intervention group) or a nontailored website (control group). In addition, spontaneous users were recruited directly from the Active-online website. In a subgroup of participants, physical activity was measured objectively using accelerometers. Follow-up assessments took place 6 weeks (FU1), 6 months (FU2), and 13 months (FU3) after baseline. Results A total of 1531 respondents completed the baseline questionnaire (intervention group n = 681, control group n = 688, spontaneous users n = 162); 133 individuals had valid accelerometer data at baseline. Mean age of the total sample was 43.7 years, and 1146 (74.9%) were women. Mixed linear models (adjusted for sex, age, BMI category, and stage of change) showed a significant increase in self-reported mean minutes spent in moderate- and vigorous-intensity activity from baseline to FU1 (coefficient = 0.14, P = .001) and to FU3 (coefficient = 0.19, P < .001) in all participants with no significant differences between groups. A significant increase in the proportion of individuals meeting
GsTL: the geostatistical template library in C++
NASA Astrophysics Data System (ADS)
Remy, Nicolas; Shtuka, Arben; Levy, Bruno; Caers, Jef
2002-10-01
The development of geostatistics has been mostly accomplished by application-oriented engineers in the past 20 years. The focus on concrete applications gave birth to many algorithms and computer programs designed to address different issues, such as estimating or simulating a variable while possibly accounting for secondary information such as seismic data, or integrating geological and geometrical data. At the core of any geostatistical data integration methodology is a well-designed algorithm. Yet, despite their obvious differences, all these algorithms share many commonalities on which a geostatistics programming library should build; otherwise the resulting library is poorly reusable and difficult to expand. Building on this observation, we design a comprehensive, yet flexible and easily reusable library of geostatistics algorithms in C++. The recent advent of the generic programming paradigm allows us to elegantly express the commonalities of the geostatistical algorithms in computer code. Generic programming, also referred to as "programming with concepts", provides a high level of abstraction without loss of efficiency. This last point is a major gain over object-oriented programming, which often trades efficiency for abstraction. It is not enough for a numerical library to be reusable; it also has to be fast. Because generic programming is "programming with concepts", the essential step in the library design is the careful identification and thorough definition of the concepts shared by most geostatistical algorithms. Building on these definitions, a generic and expandable code can be developed. To show the advantages of such a generic library, we use GsTL to build two sequential simulation programs working on two different types of grids, a surface with faults and an unstructured grid, without requiring any change to the GsTL code.
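A rough Python analogue of "programming with concepts" can convey the design idea, with duck typing playing the role of C++ templates. The grid classes and the smoothing pass below are invented for illustration and are not part of GsTL; the point is that one algorithm serves structured and unstructured grids unchanged.

```python
import numpy as np

class StructuredGrid:
    """A regular 2-D grid; neighbours are the 4-connected cells."""
    def __init__(self, nx, ny):
        self.nx, self.ny = nx, ny
    def nodes(self):
        return range(self.nx * self.ny)
    def neighbours(self, n):
        x, y = n % self.nx, n // self.nx
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < self.nx and 0 <= y + dy < self.ny:
                yield (y + dy) * self.nx + (x + dx)

class UnstructuredGrid:
    """Arbitrary node adjacency, e.g. a faulted surface mesh."""
    def __init__(self, adjacency):
        self.adj = adjacency
    def nodes(self):
        return self.adj.keys()
    def neighbours(self, n):
        return self.adj[n]

def neighbourhood_mean(grid, values):
    """A grid-generic pass: average each node with its neighbours. Any
    object modelling the 'grid concept' (nodes() + neighbours()) works,
    exactly as a C++ template argument satisfying the concept would."""
    return {n: np.mean([values[m] for m in grid.neighbours(n)] + [values[n]])
            for n in grid.nodes()}
```

In C++ the same decoupling is resolved at compile time, so the abstraction carries no runtime cost, which is the efficiency argument the abstract makes against object-oriented dispatch.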
Geostatistical Study of Precipitation on the Island of Crete
NASA Astrophysics Data System (ADS)
Agou, Vasiliki D.; Varouchakis, Emmanouil A.; Hristopulos, Dionissios T.
2015-04-01
precipitation which are fitted locally to a three-parameter probability distribution, based on which a normalized index is derived. We use the Spartan variogram function to model space-time correlations, because it is more flexible than classical models [3]. The performance of the variogram model is tested by means of leave-one-out cross validation. The variogram model is then used in connection with ordinary kriging to generate precipitation maps for the entire island. In the future, we will explore the joint spatiotemporal evolution of precipitation patterns on Crete. References [1] P. Goovaerts. Geostatistical approaches for incorporating elevation into the spatial interpolation of precipitation. Journal of Hydrology, 228(1):113-129, 2000. [2] N. B. Guttman. Accepting the standardized precipitation index: a calculation algorithm. American Water Resource Association, 35(2):311-322, 1999. [3] D. T Hristopulos. Spartan Gibbs random field models for geostatistical applications. SIAM Journal on Scientific Computing, 24(6):2125-2162, 2003. [4] A.G. Koutroulis, A.-E.K. Vrohidou, and I.K. Tsanis. Spatiotemporal characteristics of meteorological drought for the island of Crete. Journal of Hydrometeorology, 12(2):206-226, 2011. [5] T. B. McKee, N. J. Doesken, and J. Kleist. The relationship of drought frequency and duration to time scales. In Proceedings of the 8th Conference on Applied Climatology, page 179-184, Anaheim, California, 1993.
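Leave-one-out cross-validation of a kriging model, as used above to test the variogram, can be sketched as follows. For brevity this hypothetical example uses an exponential covariance in place of the Spartan model; the validation loop is the same regardless of the covariance chosen.

```python
import numpy as np

def ok_predict(X, z, x0, sill=1.0, a=10.0):
    """Ordinary kriging prediction at x0 from data (X, z), using an
    exponential covariance model as a simple stand-in for Spartan."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(z)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = sill * np.exp(-d / a)
    K[n, n] = 0.0                      # Lagrange row/column: sum(w) = 1
    c0 = np.append(sill * np.exp(-np.linalg.norm(X - x0, axis=1) / a), 1.0)
    w = np.linalg.solve(K, c0)
    return float(w[:n] @ z)

def loocv_rmse(X, z, **kw):
    """Leave-one-out cross-validation: re-predict each datum from the
    remaining ones and report the root-mean-square error."""
    errs = [ok_predict(np.delete(X, i, axis=0), np.delete(z, i), X[i], **kw) - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))
```

Comparing `loocv_rmse` across candidate variogram models (and their fitted parameters) is the standard way to rank them, as done in the study.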
Semi-automated mapping of landforms using multiple point geostatistics
NASA Astrophysics Data System (ADS)
Vannametee, E.; Babel, L. V.; Hendriks, M. R.; Schuur, J.; de Jong, S. M.; Bierkens, M. F. P.; Karssenberg, D.
2014-09-01
This study presents an application of multiple point geostatistics (MPS) to map landforms. MPS uses information at multiple cell locations, including morphometric attributes at a target mapping cell, i.e. digital elevation model (DEM) derivatives, and non-morphometric attributes, i.e. landforms at the neighboring cells, to determine the landform. The technique requires a training data set, consisting of a field map of landforms and a DEM. Mapping landforms proceeds in two main steps. First, the number of cells per landform class, associated with a set of observed attributes discretized into classes (e.g. slope class), is retrieved from the training image and stored in a frequency tree, which is a hierarchical database. Second, the algorithm visits the non-mapped cells and assigns to each a realization of a landform class, based on the probability distribution of landforms conditioned to the observed attributes as retrieved from the frequency tree. The approach was tested using a data set for the Buëch catchment in the French Alps. We used four morphometric attributes extracted from a 37.5-m resolution DEM as well as two non-morphometric attributes observed in the neighborhood. The training data set was taken from multiple locations, covering 10% of the total area. The mapping was performed in a stochastic framework, in which 35 map realizations were generated and used to derive the probabilistic map of landforms. Based on this configuration, the technique yielded a map with 51.2% of cells correctly classified, evaluated against the field map of landforms. The mapping accuracy is relatively high at high elevations compared to the mid-slope and low-lying areas. Debris slope was mapped with the highest accuracy, while MPS shows a low capability in mapping hogback and glacis. The mapping accuracy is highest for training areas with a size of 7.5-10% of the total area. Reducing the size of the training images resulted in a decreased mapping quality, as the frequency database only
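The two-step MPS procedure can be illustrated with a flat dictionary standing in for the frequency tree; the function names and toy landform classes below are ours, not the paper's, and a real implementation would also need a fallback for attribute patterns absent from the training data.

```python
import numpy as np
from collections import Counter, defaultdict

def build_frequency_db(attrs, landforms):
    """Step 1: count landform occurrences per combination of discretised
    attribute classes (a flat stand-in for the hierarchical frequency tree)."""
    db = defaultdict(Counter)
    for a, lf in zip(map(tuple, attrs), landforms):
        db[a][lf] += 1
    return db

def draw_landform(db, a, rng):
    """Step 2: sample a landform class from the conditional probability
    distribution observed in the training data for attribute pattern `a`.
    (Unseen patterns would need a fallback, omitted here.)"""
    counts = db[tuple(a)]
    labels = list(counts)
    n = np.array(list(counts.values()), dtype=float)
    return labels[rng.choice(len(labels), p=n / n.sum())]
```

Running `draw_landform` over all unmapped cells once yields one realization; repeating it (35 times in the study) yields the ensemble from which the probabilistic landform map is derived.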
The extension of geostatistical spatial analysis model and its application to datum land appraisal
NASA Astrophysics Data System (ADS)
Fu, Feihong; Li, Xuefei; Zou, Rong
2007-06-01
Geostatistical methods can quantitatively describe the spatial distribution characteristics of a variable and, through a range of theoretical models, quantify the uncertainty of an attribute when data are scarce. As a comparatively young discipline, however, geostatistics still leaves room for extension. The extension of ordinary geostatistics discussed here covers three main aspects: the treatment of outliers in geostatistical spatial data, the fitting of the variogram, and the selection of the kriging estimation neighborhood. After analyzing its feasibility, the basic approach of applying the geostatistical spatial analysis model to datum land price appraisal is introduced.
NASA Astrophysics Data System (ADS)
Hodgetts, David; Burnham, Brian; Head, William; Jonathan, Atunima; Rarity, Franklin; Seers, Thomas; Spence, Guy
2013-04-01
In the hydrocarbon industry stochastic approaches are the main method by which reservoirs are modelled. These stochastic modelling approaches require geostatistical information on the geometry and distribution of the geological elements of the reservoir. As the reservoir itself cannot be viewed directly (only indirectly via seismic and/or well log data) this leads to a great deal of uncertainty in the geostatistics used; therefore outcrop analogues are characterised to help obtain the geostatistical information required to model the reservoir. Lidar-derived Digital Outcrop Models (DOMs) provide the ability to collect large quantities of statistical information on the geological architecture of the outcrop, far more than is possible by field work alone, as the DOM allows accurate measurements to be made in normally inaccessible parts of the exposure. This increases the size of the measured statistical dataset, which in turn results in an increase in statistical significance. There are, however, many problems and biases in the data which cannot be overcome by sample size alone. These biases, for example, may relate to the orientation, size and quality of exposure, as well as the resolution of the DOM itself. Stochastic modelling used in the hydrocarbon industry falls mainly into four generic approaches: 1) Object Modelling, where the geology is defined by a set of simplistic shapes (such as channels), and parameters such as width, height and orientation, among others, can be defined. 2) Sequential Indicator Simulation, where geological shapes are less well defined and the size and distribution are defined using variograms. 3) Multipoint statistics, where training images are used to define shapes and relationships between geological elements, and 4) Discrete Fracture Networks for fractured reservoirs, where information on fracture size and distribution is required. Examples of using DOMs to assist with each of these modelling approaches are presented, highlighting the
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ASCII and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
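The starting point of any variogram-modelling session is the sample semivariogram. The following stand-alone sketch shows the computation such a program performs for an isotropic sample semivariogram; it is an illustration in Python, not gstat's actual code.

```python
import numpy as np

def empirical_variogram(X, z, bins):
    """Isotropic empirical semivariogram: for each distance bin,
    gamma(h) = 0.5 * mean of squared value differences over all point
    pairs whose separation falls in that bin."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma = np.full(len(bins) - 1, np.nan)
    for i in range(len(bins) - 1):
        m = (d >= bins[i]) & (d < bins[i + 1])
        if m.any():
            gamma[i] = sq[m].mean()
    return gamma
```

A parametric model (spherical, exponential, ...) is then fitted to these binned values, which is the interactive step gstat supports through its gnuplot interface.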
High resolution sequence stratigraphic concepts applied to geostatistical modeling
Desaubliaux, G.; De Lestang, A.P.; Eschard, R.
1995-08-01
Lithofacies simulations using a high resolution 3D grid make it possible to capture the geometries of the internal heterogeneities of reservoirs. In this study the series simulated were the Ness formation, part of the Brent reservoir in the Dunbar field located in the Viking graben of the North Sea. Simulation results have been used to build the reservoir layering supporting the 3D grid used for reservoir engineering, and also used as a frame to study the effects of secondary diagenetic processes on petrophysical properties. The method used is based on a geostatistical study and integrates the following data: a geological model using sequence stratigraphic concepts to define lithofacies sequences and associated bounding surfaces; well data (cores and logs), used as the database for geostatistical analysis and simulations; seismic data: a 3D seismic survey has been used to define the internal surfaces bounding the units; outcrop data: the Mesa Verde formation (Colorado, USA) has been used as an outcrop analog to calibrate geostatistical parameters for the simulations (horizontal range of the variograms). This study illustrates the capacity of high resolution sequence stratigraphic concepts to improve the simulation of reservoirs when the lack of subsurface information reduces the accuracy of geostatistical analysis.
Reducing uncertainty in geostatistical description with well testing pressure data
Reynolds, A.C.; He, Nanqun; Oliver, D.S.
1997-08-01
Geostatistics has proven to be an effective tool for generating realizations of reservoir properties conditioned to static data, e.g., core and log data and geologic knowledge. Due to the lack of closely spaced data in the lateral directions, there will be significant variability in reservoir descriptions generated by geostatistical simulation, i.e., significant uncertainty in the reservoir descriptions. In past work, we have presented procedures based on inverse problem theory for generating reservoir descriptions (rock property fields) conditioned to pressure data and geostatistical information represented as prior means for log-permeability and porosity and variograms. Although we have shown that the incorporation of pressure data reduces the uncertainty below the level contained in the geostatistical model based only on static information (the prior model), our previous results did not explicitly account for uncertainties in the prior means and the parameters defining the variogram model. In this work, we investigate how pressure data can help detect errors in the prior means. If errors in the prior means are large and are not taken into account, realizations conditioned to pressure data represent incorrect samples of the a posteriori probability density function for the rock property fields, whereas, if the uncertainty in the prior mean is incorporated properly into the model, one obtains realistic realizations of the rock property fields.
Geostatistical Modeling of Evolving Landscapes by Means of Image Quilting
NASA Astrophysics Data System (ADS)
Mendes, J. H.; Caers, J.; Scheidt, C.
2015-12-01
Realistic geological representation of subsurface heterogeneity remains an important outstanding challenge. While many geostatistical methods exist for representing sedimentary systems, such as multiple-point geostatistics, rule-based methods or Boolean methods, the question of what the prior uncertainty on the parameters (or training images) of such algorithms is remains outstanding. In this initial work, we investigate the use of flume experiments to better constrain such prior uncertainty and to start understanding what information should be provided to geostatistical algorithms. In particular, we study the use of image quilting as a novel multiple-point method for generating fast geostatistical realizations once a training image is provided. Image quilting is a method emanating from computer graphics in which patterns are extracted from training images and then stochastically quilted along a raster path to create stochastic variations of the training image. In this initial study, we use a flume experiment and extract 10 training images as representative of the variability of the evolving landscape over a period of 136 minutes. The training images consist of wet/dry regions obtained from overhead shots taken of the flume experiment. To investigate whether image quilting reproduces the same variability of the evolving landscape in terms of wet/dry regions, we generate multiple realizations with all 10 training images and compare that variability with the variability seen in the entire flume experiment. By proper tuning of the quilting parameters we find generally reasonable agreement with the flume experiment.
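The quilting idea described above can be sketched in a few lines. The following is a toy illustration, not the authors' implementation: it omits the min-cut seam of full image quilting and simply scores candidate patches by squared error on the overlap with already-placed tiles, pasting the best of `k` random candidates along a raster path. All function and parameter names are invented for this example.

```python
import numpy as np

def image_quilt(train, out_h, out_w, p=8, ov=2, k=10, rng=None):
    """Minimal raster-path image quilting (no min-cut seam).

    Tiles of size p x p are sampled from the training image; each of k
    random candidates is scored by squared error on the ov-wide overlap
    with tiles already placed above and to the left.
    """
    rng = np.random.default_rng(rng)
    H, W = train.shape
    step = p - ov
    out = np.zeros((out_h, out_w))
    for i in range(0, out_h - p + 1, step):
        for j in range(0, out_w - p + 1, step):
            best, best_err = None, np.inf
            for _ in range(k):
                r = rng.integers(0, H - p)
                c = rng.integers(0, W - p)
                cand = train[r:r + p, c:c + p]
                err = 0.0
                if j > 0:  # left overlap with previously placed tile
                    err += ((cand[:, :ov] - out[i:i + p, j:j + ov]) ** 2).sum()
                if i > 0:  # top overlap with the tile row above
                    err += ((cand[:ov, :] - out[i:i + ov, j:j + p]) ** 2).sum()
                if err < best_err:
                    best, best_err = cand, err
            out[i:i + p, j:j + p] = best
    return out
```

Because patches are copied verbatim, a binary (wet/dry) training image yields a binary realization, which is the property compared against the flume variability in the study.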
NASA Astrophysics Data System (ADS)
Appelhans, Tim; Mwangomo, Ephraim; Otte, Insa; Detsch, Florian; Nauss, Thomas; Hemp, Andreas; Ndyamkama, Jimmy
2015-04-01
This study introduces the set-up and characteristics of a meteorological station network on the southern slopes of Mt. Kilimanjaro, Tanzania. The set-up follows a hierarchical approach covering an elevational as well as a land-use disturbance gradient. The network consists of 52 basic stations measuring ambient air temperature and above-ground air humidity and 11 precipitation measurement sites. We provide in-depth descriptions of the various machine learning and classical geostatistical methods used to fill observation gaps and extend the spatial coverage of the network to a total of 60 research sites. Performance statistics for these methods indicate that the presented data sets provide reliable measurements of the meteorological reality at Mt. Kilimanjaro. These data provide an excellent basis for ecological studies and are also of great value for regional atmospheric numerical modelling studies, for which such comprehensive in-situ validation observations are rare, especially in tropical regions of complex terrain.
Geostatistical analysis of soil properties at field scale using standardized data
NASA Astrophysics Data System (ADS)
Millan, H.; Tarquis, A. M.; Pérez, L. D.; Matos, J.; González-Posada, M.
2012-04-01
Identifying areas with physical degradation is a crucial step toward ameliorating the effects of soil erosion. The quantification and interpretation of spatial variability is a key issue for site-specific soil management. Geostatistics has been the main methodological tool for implementing precision agriculture using field data collected at different spatial resolutions. Even though many works have made significant contributions to the body of knowledge on spatial statistics and its applications, some other key points need to be addressed for conducting precise comparisons between soil properties using geostatistical parameters. The objectives of the present work were (i) to quantify the spatial structure of different physical properties collected from a Vertisol, (ii) to search for potential correlations between different spatial patterns and (iii) to identify relevant components through multivariate spatial analysis. The study was conducted on a Vertisol (Typic Hapludert) dedicated to sugarcane (Saccharum officinarum L.) production during the last sixty years. We used six soil properties collected on a square grid (225 points): penetrometer resistance (PR), total porosity, fragmentation dimension (Df), vertical electrical conductivity (ECv), horizontal electrical conductivity (ECh) and soil water content (WC). All the original data sets were z-transformed before geostatistical analysis. Three different types of semivariogram models were necessary to fit the individual experimental semivariograms, which suggests the different natures of the spatial variability patterns. Soil water content rendered the largest nugget effect (C0 = 0.933) while soil total porosity showed the largest range of spatial correlation (A = 43.92 m). The bivariate geostatistical analysis also rendered significant cross-semivariance between different paired soil properties. However, four different semivariogram models were required in that case. This indicates an underlying co
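The preprocessing described above (z-transforming each property so their variograms are comparable) can be sketched in a few lines. The snippet below standardizes a 1-D data set and computes a classical (Matheron-type) experimental semivariogram; the function names and the simple lag-binning scheme are assumptions of this illustration, not the authors' code.

```python
import numpy as np

def z_transform(x):
    """Standardize a data set to zero mean and unit variance, as is done
    before comparing semivariograms across different soil properties."""
    return (x - x.mean()) / x.std()

def experimental_semivariogram(coords, values, lags, tol):
    """Classical estimator: gamma(h) = half the mean squared increment
    over all point pairs whose separation falls within tol of lag h."""
    d = np.abs(coords[:, None] - coords[None, :])       # pairwise distances (1D)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2  # half squared increments
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

After standardization, the sill of each experimental semivariogram is close to one, so nugget effects and ranges (such as the C0 = 0.933 and A = 43.92 m values above) can be compared directly across properties.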
ERIC Educational Resources Information Center
Jones, Debra Hughes; Vaden-Kiernan, Michael; Rudo, Zena; Fitzgerald, Robert; Hartry, Ardice; Chambers, Bette; Smith, Dewi; Muller, Patricia; Moss, Marcey A.
2008-01-01
Under the larger scope of the National Partnership for Quality Afterschool Learning, SEDL funded three awardees to carry out large-scale randomized controlled trials (RCT) assessing the efficacy of promising literacy curricula in afterschool settings on student academic achievement. SEDL provided analytic and technical support to the RCT studies…
Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia
2015-04-01
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, with the distance to the diffuser, the west-east positioning, and the south-north positioning used as covariates. Sample variograms were fitted with Matérn models using both weighted least squares and maximum likelihood estimation, as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the more appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results suggest some guidelines for sewage monitoring when a geostatistical analysis of the data is intended: it is important to treat the existence of anomalous values properly and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion. PMID:25345922
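A minimal sketch of the weighted-least-squares variogram fitting favoured in this study. For brevity it assumes an exponential rather than a Matérn model, fixes nugget and sill, and grid-searches only the range parameter with pair-count weights (a common Cressie-style weighting); all names are illustrative, not the authors' code.

```python
import numpy as np

def exp_variogram(h, nugget, sill, rng_param):
    """Exponential variogram with practical range rng_param."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_param))

def wls_fit_range(lags, gamma_hat, npairs, nugget, sill, candidate_ranges):
    """Weighted-least-squares fit of the range: each lag's squared
    residual is weighted by its number of contributing pairs."""
    best_r, best_obj = None, np.inf
    for r in candidate_ranges:
        resid = gamma_hat - exp_variogram(lags, nugget, sill, r)
        obj = np.sum(npairs * resid ** 2)
        if obj < best_obj:
            best_r, best_obj = r, obj
    return best_r
```

Comparing the range fitted this way against the maximum-likelihood estimate is one simple way to spot the kind of discrepancy (very low ML ranges) the study reports.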
Singdahlsen, D.S. )
1991-06-01
The East Painter structure is a doubly plunging, asymmetric anticline formed on the hanging wall of a back-thrust imbricate near the leading edge of the Absaroka Thrust. The Jurassic Nugget Sandstone is the productive horizon in the East Painter structure. The approximately 900-ft-thick Nugget is a stratigraphically complex and heterogeneous unit deposited by eolian processes in a complex erg setting. The high degree of heterogeneity within the Nugget results from variations in grain size, sorting, mineralogy, and the degree and distribution of lamination. The Nugget comprises dune, transitional toeset, and interdune facies, each exhibiting different porosity and permeability distributions. Facies architecture results in both vertical and horizontal stratification of the reservoir. Adequate representation of reservoir heterogeneity is the key to successful modeling of past and future production performance. In addition, a detailed geologic model, based on depositional environment, must be integrated into the simulation to ensure realistic results. Geostatistics provides a method for modeling the spatial reservoir property distribution while honoring all data values at their sample locations. Conditional simulation is a geostatistical technique that generates several equally probable realizations that observe the data and the spatial constraints imposed upon them while including fractal variability. Flow simulations of multiple reservoir realizations can provide a probability distribution of reservoir performance that can be used to evaluate the project risk caused by incomplete sampling of the reservoir property distribution.
Optimal design of hydraulic head monitoring networks using space-time geostatistics
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Júnez-Ferreira, H. E.
2013-05-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, at each step, selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
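The sequential selection step can be illustrated schematically. Assuming a known (space-time) covariance matrix, the sketch below greedily picks the monitoring point whose Kalman-style measurement update most reduces the total estimation variance; it is a simplified stand-in for the paper's static-Kalman-filter procedure, with invented names and an illustrative noise level.

```python
import numpy as np

def greedy_monitoring_design(C, n_select, noise=1e-2):
    """Sequentially choose n_select monitoring points.

    At each step, every remaining candidate k is scored by the trace of
    the posterior covariance after a scalar measurement at k (rank-one
    Kalman/Schur update); the candidate with the smallest trace wins.
    """
    C = C.copy()
    chosen = []
    for _ in range(n_select):
        best_k, best_trace = None, np.inf
        for k in range(C.shape[0]):
            if k in chosen:
                continue
            c = C[:, k]
            trace_after = np.trace(C - np.outer(c, c) / (C[k, k] + noise))
            if trace_after < best_trace:
                best_k, best_trace = k, trace_after
        c = C[:, best_k]
        C = C - np.outer(c, c) / (C[best_k, best_k] + noise)  # commit update
        chosen.append(best_k)
    return chosen, C
```

Points whose measurement barely lowers the trace are the "redundant" ones in the sense of the study: the filter already predicts them well from the rest of the network.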
Sakkinen, P A; Severson, R K; Ross, J A; Robison, L L
1995-01-01
The purpose of this analysis was to evaluate the degree of matching in 95 individually matched pairs from a case-control study of childhood leukemia that used random-digit dialing to select control subjects. Both geographic proximity (of each case subject to his or her matched control subject) and differences in socioeconomic status were evaluated. The median distance between matched pairs was 3.2 km. There were no significant differences in distance between matched pairs by urban/rural status and geographic location. For studies of childhood cancer drawn from pediatric referral centers, random-digit dialing appears to provide a suitable control group. PMID:7702122
Geostatistical joint inversion of seismic and potential field methods
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Giroux, Bernard
2016-04-01
Interpretation of geophysical data needs to integrate different types of information to make the proposed model geologically realistic. Multiple data sets can reduce the uncertainty and non-uniqueness present in separate geophysical data inversions. Seismic data can play an important role in mineral exploration; however, processing and interpretation of seismic data are difficult due to the complexity of hard-rock geology. On the other hand, the model recovered from potential field methods is affected by an inherent non-uniqueness caused by the nature of the physics and by the underdetermination of the problem. Joint inversion of seismic and potential field data can mitigate the weaknesses of separate inversions of these methods. A stochastic joint inversion method based on geostatistical techniques is applied to estimate density and velocity distributions from gravity and traveltime data. The method fully integrates the physical relations between density and gravity, on one hand, and slowness and traveltime, on the other. As a consequence, when the data are considered noise-free, the responses from the inverted slowness and density data exactly reproduce the observed data. The required density and velocity auto- and cross-covariances are assumed to follow a linear model of coregionalization (LCM); recent developments in nonlinear models of coregionalization could also be applied if needed. The kernel function for the gravity method is obtained in closed form. For ray tracing, we use the shortest-path method (SPM) to calculate the operator matrix. The joint inversion is performed on a structured grid; however, it is possible to extend it to unstructured grids. The method is tested on two synthetic models: a model consisting of two objects buried in a homogeneous background, and a model with a stochastic distribution of parameters. The results illustrate the capability of the method to improve the inverted model compared to the separate inverted models with either gravity
Assessing the resolution-dependent utility of tomograms for geostatistics
Day-Lewis, F. D.; Lane, J.W.
2004-01-01
Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.
Hydrogeologic Unit Flow Characterization Using Transition Probability Geostatistics
Jones, N L; Walker, J R; Carle, S F
2003-11-21
This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has several advantages over traditional indicator kriging methods, including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining-upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow (HUF) package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids. An application of the technique involving probabilistic capture zone delineation for the Aberjona Aquifer in Woburn, MA, is included.
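A one-dimensional caricature of transition-probability simulation: given a facies transition matrix whose rows sum to one, a vertical facies column is drawn as a Markov chain. This shows only the core idea behind the method, not the full T-PROGS/MODFLOW workflow; the names and example matrix are illustrative.

```python
import numpy as np

def simulate_facies_column(T, n_cells, start, rng=None):
    """Draw a vertical facies sequence from a transition-probability
    matrix T (T[i, j] = probability of facies j directly above facies i)."""
    rng = np.random.default_rng(rng)
    states = [start]
    for _ in range(n_cells - 1):
        states.append(rng.choice(len(T), p=T[states[-1]]))
    return np.array(states)
```

Juxtapositional tendencies such as fining-upward sequences are encoded directly in T: making sand-over-gravel more probable than gravel-over-sand is a one-entry change, which is the intuitive advantage over indicator kriging mentioned above.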
Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios T.
2015-12-01
Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
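The precision-matrix prediction idea can be illustrated with a small Gaussian Markov random field. Assuming a known sparse precision (inverse covariance) matrix Q, the conditional mean at unsampled nodes solves a linear system in Q alone, with no covariance matrix inversion. This is a generic GMRF sketch of the semi-analytical interpolation property described above, not the SLI model's actual energy functional; all names are illustrative.

```python
import numpy as np

def precision_predict(Q, sample_idx, sample_vals):
    """Conditional-mean interpolation from a precision matrix.

    For a zero-mean Gaussian field with precision Q, the unsampled
    block satisfies Q_uu x_u = -Q_us x_s, a sparse solve when Q is sparse.
    """
    n = Q.shape[0]
    u = np.setdiff1d(np.arange(n), sample_idx)       # unsampled nodes
    Quu = Q[np.ix_(u, u)]
    Qus = Q[np.ix_(u, sample_idx)]
    x = np.empty(n)
    x[sample_idx] = sample_vals
    x[u] = np.linalg.solve(Quu, -Qus @ sample_vals)  # conditional mean
    return x
```

For a chain-graph precision (second-difference matrix), the prediction between two fixed nodes is exactly linear interpolation, which makes the local-interaction character of such models easy to see.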
Optimization of Pilot Point Locations: an efficient and geostatistical perspective
NASA Astrophysics Data System (ADS)
Mehne, J.; Nowak, W.
2012-04-01
The pilot point method is a widespread method for calibrating ensembles of heterogeneous aquifer models on available field data such as hydraulic heads. The pilot points are virtual measurements of conductivity, introduced as localized carriers of information in the inverse procedure. For each heterogeneous aquifer realization, the pilot point values are calibrated until all calibration data are honored. Adequate placement and numbers of pilot points are crucial both for an accurate representation of heterogeneity and for keeping the computational cost of calibration at an acceptable level. Current placement methods for pilot points either rely solely on the expertise of the modeler, or they involve computationally costly sensitivity analyses. None of the existing placement methods directly addresses the geostatistical character of the placement and calibration problem. This study presents a new method for the optimal selection of pilot point locations. We combine ideas from Ensemble Kalman Filtering and geostatistical optimal design with straightforward optimization. In a first step, we emulate the pilot point method with a modified Ensemble Kalman Filter for parameter estimation at drastically reduced computational cost. This avoids the costly evaluation of sensitivity coefficients often used for the optimal placement of pilot points. Second, we define task-driven objective functions for the optimal placement of pilot points, based on ideas from geostatistical optimal design of experiments. These objective functions can be evaluated quickly, without carrying out the actual calibration process, requiring nothing but the ensemble covariances already available from step one. By formal optimization, we can find pilot point placement schemes that are optimal in representing the data for the task at hand with minimal numbers of pilot points. In small synthetic test applications, we demonstrate the promising computational performance and the geostatistically logical choice of
Simulation of Soil Macropore Networks Using Multi-Point Geostatistics
NASA Astrophysics Data System (ADS)
Luo, L.; Lin, H.; Singha, K.
2006-12-01
Two-point correlation functions have been used to quantitatively evaluate the spatial variability of soil structure but cannot characterize its specific geometry. Multi-point (MP) geostatistics can be used to consider both spatial correlation and the specific shape of objects. The SNESIM algorithm, developed by Strebelle (2002), was used to simulate complex geological patterns reflecting different scales of variability and types of heterogeneity. Soil macropores, especially earthworm burrows and root channels, are critical to preferential flow and transport in soils. Accurate simulation of soil macropore networks allows us to better simulate soil hydraulic properties and to address scaling issues of structured soils. However, little work has been done on simulating soil macropore networks using MP geostatistics. The 3D soil macropore network of an agricultural soil in Pennsylvania, a Hagerstown silt loam, was extracted from spatially exhaustive data collected with micro computed tomography. The 3D training image was scanned with a 3D template to obtain the 3D patterns. Permeability of the simulated macropore network was calculated using the lattice Boltzmann method. SNESIM was able to simulate different types of macropores, such as earthworm burrows and inter-aggregate pores. However, SNESIM requires that training images have a stationary character, which may limit its ability to simulate soil macropore networks, and it remains computationally demanding for 3D structure simulation. While computationally expensive, MP geostatistics has great potential for simulating 3D soil macropore networks, which is useful for understanding and predicting the hydraulic behavior of structured soils.
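The template-scanning step of multi-point geostatistics can be sketched as a pattern tally over a training image. The snippet below counts square-template patterns in a 2D binary image, the database-building step behind SNESIM-style simulation (the study scans a 3D template over a 3D image, but the idea is identical); names and the 2x2 default are illustrative.

```python
from collections import Counter
import numpy as np

def extract_patterns(img, size=2):
    """Scan a binary training image with a size x size template and
    tally the frequency of every pattern encountered."""
    counts = Counter()
    n, m = img.shape
    for i in range(n - size + 1):
        for j in range(m - size + 1):
            # flatten the template window into a hashable key
            counts[tuple(img[i:i + size, j:j + size].ravel())] += 1
    return counts
```

During simulation, these frequencies serve as conditional probabilities: given the already-simulated neighbours of a node, a facies value is drawn in proportion to how often matching patterns occur in the training image. The tally also makes the stationarity requirement concrete: the counts are pooled over the whole image, so they are only meaningful if pattern statistics do not drift across it.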
Assessment of Geostatistical Methods in Drought Monitoring Systems
NASA Astrophysics Data System (ADS)
Shahabfar, A.; Eitzinger, J.
2009-09-01
One of the essential components of drought risk management is drought monitoring, and drought has become a recurrent phenomenon in Iran over the last few decades. With the aim of constructing a drought monitoring system for Iran, and building on the authors' earlier results, three drought indices with high performance in detecting and measuring drought intensity, the China-Z index (CZI), the modified CZI (MCZI), and the Z-Score, were calculated for 180 weather stations located in 10 separate agro-climatic zones of Iran. To find, evaluate, and refine an appropriate interpolation method, several geostatistical methods, including ordinary kriging (spherical, circular, exponential, Gaussian, and linear models), Inverse Distance Weighting (IDW), and splines, were applied, and all of the calculated drought indices were interpolated over the 10 agro-climatic zones. The performance of the seven methods was evaluated and compared using monthly data and cross-validation. The comparison criteria were the Mean Absolute Error (MAE) and the Mean Bias Error (MBE). The results indicate that although ordinary kriging is the most accurate method overall, IDW and splines give reasonable, and in several agro-climatic zones more accurate, results, and can serve as high-performance geostatistical tools for interpolating different drought indices in Iran. Key words: drought monitoring, drought indices, geostatistical methods, interpolation.
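As a hedged sketch of the comparison methodology, the code below implements inverse distance weighting and the leave-one-out cross-validation errors (MAE and MBE) used above to rank interpolators. It is a generic illustration with invented names, not the study's workflow.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting: prediction is a weighted average of
    observations, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ z_obs) / w.sum(axis=1)

def loo_mae_mbe(xy, z, power=2.0):
    """Leave-one-out cross-validation: predict each station from all the
    others, then report Mean Absolute Error and Mean Bias Error."""
    preds = np.array([
        idw(np.delete(xy, i, 0), np.delete(z, i), xy[i:i + 1], power)[0]
        for i in range(len(z))
    ])
    err = preds - z
    return np.abs(err).mean(), err.mean()  # (MAE, MBE)
```

Running the same leave-one-out loop with a kriging or spline predictor in place of `idw` yields directly comparable MAE/MBE values, which is how the seven methods can be ranked per agro-climatic zone.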
Addressing uncertainty in rock properties through geostatistical simulation
McKenna, S.A.; Rautman, A.; Cromer, M.V.; Zelinski, W.P.
1996-09-01
Fracture and matrix properties in a sequence of unsaturated, welded tuffs at Yucca Mountain, Nevada, are modeled in two-dimensional cross sections through geostatistical simulation. In the absence of large amounts of sample data, an interpretive, deterministic, stratigraphic model is coupled with a Gaussian simulation algorithm to constrain realizations of both matrix porosity and fracture frequency. Use of the deterministic stratigraphic model imposes scientific judgment, in the form of a conceptual geologic model, onto the property realizations. Linear coregionalization and a regression relationship between matrix porosity and matrix hydraulic conductivity are used to generate realizations of matrix hydraulic conductivity. Fracture-frequency simulations conditioned on the stratigraphic model represent one class of fractures (cooling fractures) in the conceptual model of the geology. A second class of fractures (tectonic fractures) is conceptualized as fractures that cut vertically across strata and includes discrete features such as fault zones. Indicator geostatistical simulation provides the locations of this second class of fractures. The indicator realizations are combined with the realizations of fracture spacing to create realizations of fracture frequency that combine both classes of fractures. Evaluations of the resulting realizations include comparing vertical profiles of rock properties within the model to those observed in boreholes and checking intra-unit property distributions against collected data. Geostatistical simulation provides an efficient means of addressing spatial uncertainty in dual-continuum rock properties.
2014-01-01
Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
Yadav, Bechu K V; Nandy, S
2015-05-01
Mapping forest biomass is fundamental for estimating CO₂ emissions, and planning and monitoring of forests and ecosystem productivity. The present study attempted to map aboveground woody biomass (AGWB) integrating forest inventory, remote sensing and geostatistical techniques, viz., direct radiometric relationships (DRR), k-nearest neighbours (k-NN) and cokriging (CoK), and to evaluate their accuracy. A part of the Timli Forest Range of Kalsi Soil and Water Conservation Division, Uttarakhand, India was selected for the present study. Stratified random sampling was used to collect biophysical data from 36 sample plots of 0.1 ha (31.62 m × 31.62 m) size. Species-specific volumetric equations were used for calculating volume, which was multiplied by specific gravity to obtain biomass. Three forest-type density classes, viz. 10-40, 40-70 and >70% of Shorea robusta forest, and four non-forest classes were delineated using on-screen visual interpretation of IRS P6 LISS-III data of December 2012. The volume in different strata of forest-type density ranged from 189.84 to 484.36 m(3) ha(-1). The total growing stock of the forest was found to be 2,024,652.88 m(3). The AGWB ranged from 143 to 421 Mgha(-1). Spectral bands and vegetation indices were used as independent variables and biomass as the dependent variable for DRR, k-NN and CoK. After validation and comparison, the k-NN method with Mahalanobis distance (root mean square error (RMSE) = 42.25 Mgha(-1)) was found to be the best method, followed by fuzzy distance and Euclidean distance with RMSE of 44.23 and 45.13 Mgha(-1) respectively. DRR was found to be the least accurate method, with an RMSE of 67.17 Mgha(-1). The study highlighted the potential of integrating forest inventory, remote sensing and geostatistical techniques for forest biomass mapping.
La Torre, G; Rossini, G; Saulle, R; Mannocci, A; Di Thiene, D; Mauro, V; Barbato, E
2013-01-01
Many studies have shown that bad lifestyles are among the major factors thought to influence susceptibility to many diseases in our society, and these habits often begin during adolescence. The aim of the study was to evaluate the effect of a health promotion intervention in a sample of orthodontic adolescents, in particular: deterring adolescents from smoking; discouraging the use and abuse of alcoholic beverages; and encouraging adherence to a Mediterranean-style diet. A blinded randomized controlled trial will be performed. The participants will be adolescents aged 10 to 14 years who will receive a medical examination in the Complex Unit of Orthodontics. The sample will be followed for three years. The collected evidence would provide scientific support for decisions in public health, in order to improve the health of the younger generations.
Field, Craig; Walters, Scott; Marti, C. Nathan; Jun, Jina; Foreman, Michael; Brown, Carlos
2014-01-01
Objective Determine the efficacy of three brief intervention strategies that address heavy drinking among injured patients. Summary of Background Data The content or structure of brief interventions most effective at reducing alcohol misuse following traumatic injury is not known. Methods Injured patients from three trauma centers were screened for heavy drinking and randomly assigned to brief advice or BA (n=200), brief motivational intervention or BMI (n=203), or brief motivational intervention plus a telephone booster using personalized feedback or BMI+B (n=193). Among those randomly assigned, 57% met criteria for moderate to severe alcohol problems. The primary drinking outcomes were assessed at 3, 6, and 12 months. Results Compared with BA and BMI, BMI+B showed significant reductions in the number of standard drinks consumed per week at 3 (Δ adjusted means=-1.22, 95% C.I.=-.99 ∼ -1.49, p=.01) and 6 months (Δ adjusted means=-1.42, 95% C.I.=-1.14 ∼ -1.76, p=.02), percent days of heavy drinking at 6 months (Δ adjusted means=-5.90, 95% C.I.=-11.40 ∼ -0.40, p=.04), maximum number of standard drinks consumed in one day at 3 (Δ adjusted means=-1.38, 95% C.I.=-1.18 ∼ -1.62, p=.003) and 12 months (Δ adjusted means=-1.71, 95% C.I.=-1.47 ∼ -1.99, p=.02), and number of standard drinks consumed per drinking day at 3 (Δ adjusted means=-1.49, 95% C.I.=-1.35 ∼ -1.65, p=.002) and 6 months (Δ adjusted means=-1.28, 95% C.I.=-1.17 ∼ -1.40, p=.01). Conclusion Brief interventions based on motivational interviewing with a telephone booster using personalized feedback were most effective at achieving reductions in alcohol intake across the three trauma centers. PMID:24263324
Lee, K.H.
1997-09-01
Numerical and geostatistical analyses show that the artificial smoothing effect of kriging removes high-permeability flow paths from hydrogeologic data sets, reducing simulated contaminant transport rates in heterogeneous vadose zone systems. Therefore, kriging alone is not recommended for estimating the spatial distribution of soil hydraulic properties for contaminant transport analysis at vadose zone sites. Vadose zone transport is modeled more effectively by combining kriging with stochastic simulation to better represent the high degree of spatial variability usually found in the hydraulic properties of field soils. However, kriging is a viable technique for estimating the initial mass distribution of contaminants in the subsurface.
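The smoothing effect noted above is easy to demonstrate with a minimal simple-kriging sketch (exponential covariance, known mean; all parameter values and names are illustrative): the kriged surface honours the data exactly, yet its variance over a grid is lower than the variance of the data themselves, which is precisely why extreme high-permeability values get damped.

```python
import numpy as np

def simple_kriging(x_obs, z_obs, x_new, sill=1.0, corr_len=10.0, mean=0.0):
    """1-D simple kriging with an exponential covariance model.

    Weights solve C w = c0, where C is the data-to-data covariance and
    c0 the data-to-prediction covariance; far from data the prediction
    relaxes to the known mean.
    """
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    C = cov(x_obs[:, None] - x_obs[None, :])
    c0 = cov(x_new[:, None] - x_obs[None, :])
    w = np.linalg.solve(C, c0.T).T
    return mean + w @ (z_obs - mean)
```

Comparing the variance of kriged values on a dense grid with the variance of the observations shows the reduction; stochastic simulation, by contrast, is designed to reproduce the full variance, which motivates the combined approach described above.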
Apfel, Christian C; Souza, Kimberly; Portillo, Juan; Dalal, Poorvi; Bergese, Sergio D
2015-01-01
Intravenous (IV) acetaminophen has been shown to reduce postoperative pain and opioid consumption, which may lead to increased patient satisfaction. To determine the effect IV acetaminophen has on patient satisfaction, a pooled analysis from methodologically homogenous studies was conducted. We obtained patient-level data from five randomized, placebo-controlled studies in adults undergoing elective surgery in which patient satisfaction was measured using a 4-point categorical rating scale. The primary endpoint was "excellent" satisfaction and the secondary endpoint was "good" or "excellent" satisfaction at 24 hr after first study drug administration. Bivariate analyses were conducted using the chi-square test and Student's t-test and multivariable analyses were conducted using logistic regression analysis. Patients receiving IV acetaminophen were more than twice as likely as those who received placebo to report "excellent" patient satisfaction ratings (32.3% vs. 15.9%, respectively). Of all variables that remained statistically significant in the multivariable analysis (i.e., type of surgery, duration of anesthesia, last pain rating, and opioid consumption), IV acetaminophen had the strongest positive effect on "excellent" patient satisfaction with an odds ratio of 2.76 (95% CI 1.81-4.23). Results for "excellent" or "good" satisfaction were similar. When given as part of a perioperative analgesic regimen, IV acetaminophen was associated with significantly improved patient satisfaction.
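From the percentages quoted above one can recover the crude (unadjusted) odds ratio; note the abstract's 2.76 is the multivariable-adjusted estimate, so the crude value is expected to differ.

```python
# Crude odds ratio for "excellent" satisfaction: 32.3% with IV
# acetaminophen vs. 15.9% with placebo, as quoted in the abstract.
p_treat, p_placebo = 0.323, 0.159
crude_or = (p_treat / (1 - p_treat)) / (p_placebo / (1 - p_placebo))
print(round(crude_or, 2))   # crude ratio, about 2.5
```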
Constraining geostatistical models with hydrological data to improve prediction realism
NASA Astrophysics Data System (ADS)
Demyanov, V.; Rojas, T.; Christie, M.; Arnold, D.
2012-04-01
Geostatistical models reproduce spatial correlation based on the available on site data and more general concepts about the modelled patters, e.g. training images. One of the problem of modelling natural systems with geostatistics is in maintaining realism spatial features and so they agree with the physical processes in nature. Tuning the model parameters to the data may lead to geostatistical realisations with unrealistic spatial patterns, which would still honour the data. Such model would result in poor predictions, even though although fit the available data well. Conditioning the model to a wider range of relevant data provide a remedy that avoid producing unrealistic features in spatial models. For instance, there are vast amounts of information about the geometries of river channels that can be used in describing fluvial environment. Relations between the geometrical channel characteristics (width, depth, wave length, amplitude, etc.) are complex and non-parametric and are exhibit a great deal of uncertainty, which is important to propagate rigorously into the predictive model. These relations can be described within a Bayesian approach as multi-dimensional prior probability distributions. We propose a way to constrain multi-point statistics models with intelligent priors obtained from analysing a vast collection of contemporary river patterns based on previously published works. We applied machine learning techniques, namely neural networks and support vector machines, to extract multivariate non-parametric relations between geometrical characteristics of fluvial channels from the available data. An example demonstrates how ensuring geological realism helps to deliver more reliable prediction of a subsurface oil reservoir in a fluvial depositional environment.
Geospatial Interpolation and Mapping of Tropospheric Ozone Pollution Using Geostatistics
Kethireddy, Swatantra R.; Tchounwou, Paul B.; Ahmad, Hafiz A.; Yerramilli, Anjaneyulu; Young, John H.
2014-01-01
Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and confirmed by cross-validation results. The mean prediction error was close to 0, the root mean-standardized-prediction error was close to 1, and the root mean square and average standard errors were small. O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels. PMID:24434594
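The cross-validation diagnostics quoted above (mean error near 0, root-mean-standardized error near 1) can be computed from leave-one-out results as sketched here; the observation, prediction, and kriging-standard-error arrays are illustrative stand-ins, not the study's data.

```python
import numpy as np

# Illustrative leave-one-out cross-validation output: obs are hypothetical
# hourly O3 values (ppb), pred the kriged estimates at held-out points,
# sigma the kriging standard errors reported alongside each estimate.
rng = np.random.default_rng(1)
obs = rng.normal(40.0, 8.0, size=50)
sigma = np.full(50, 3.0)
pred = obs + rng.normal(0.0, 3.0, size=50)   # unbiased, well-calibrated

mean_error = np.mean(pred - obs)                      # want: close to 0
rmse = np.sqrt(np.mean((pred - obs) ** 2))            # want: small
rmss = np.sqrt(np.mean(((pred - obs) / sigma) ** 2))  # want: close to 1

print(f"ME={mean_error:.2f}  RMSE={rmse:.2f}  RMSS={rmss:.2f}")
```

A mean error far from 0 would indicate bias; an RMSS well below or above 1 would indicate over- or under-estimated kriging variances.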
NASA Astrophysics Data System (ADS)
Tadeu Pereira, Gener; Ribeiro de Oliveira, Ismênia; De Bortoli Teixeira, Daniel; Arantes Camargo, Livia; Rodrigo Panosso, Alan; Marques, José, Jr.
2015-04-01
Phosphorus is one of the limiting nutrients for sugarcane development in Brazilian soils. The spatial variability of this nutrient is great, defined by the properties that control its adsorption and desorption reactions. Spatial estimates to characterize this variability are based on geostatistical interpolation. Thus, assessing the uncertainty of estimates associated with the spatial distribution of available P (Plabile) is decisive for optimizing the use of phosphate fertilizers. The purpose of this study was to evaluate the performance of sequential Gaussian simulation (sGs) and ordinary kriging (OK) in modeling the uncertainty of available P estimates. A sampling grid with 626 points was established in a 200-ha experimental sugarcane field in Tabapuã, São Paulo State, Brazil. The soil was sampled at the crossover points of a regular grid with intervals of 50 m. From these observations, 63 points (approximately 10% of the sampled points) were randomly selected before the geostatistical modeling to compose a data set used for validation, while the remaining 563 points were used to predict the variable at unsampled locations. The sGs generated 200 realizations. From the realizations generated, different measures of estimation and uncertainty were obtained. The standard deviation, calculated point by point across all simulated maps, provided the map of deviation used to assess local uncertainty. Visual analysis of the E-type and OK maps showed that the spatial patterns produced by both methods were similar; however, it was possible to observe the characteristic smoothing effect of OK, especially in regions with extreme values. The standardized variograms of selected sGs realizations showed both range and model similar to the variogram of the observed Plabile data. The OK variogram showed a structure distinct from the observed data, underestimating the variability over short distances and presenting parabolic behavior near
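The local-uncertainty workflow described above (200 sGs realizations summarized point by point) can be sketched as follows; the realizations here are synthetic lognormal stand-ins for actual sGs output, since the simulation itself is not reproduced.

```python
import numpy as np

# Synthetic stand-ins for 200 sGs realizations over 1000 grid cells.
rng = np.random.default_rng(42)
n_real, n_cells = 200, 1000
realizations = rng.lognormal(mean=2.0, sigma=0.5, size=(n_real, n_cells))

e_type = realizations.mean(axis=0)    # E-type (expected-value) map
local_sd = realizations.std(axis=0)   # point-by-point deviation map

# Cells whose simulated values spread widely are where the available-P
# estimate is least certain, and where extra sampling pays off most.
most_uncertain = np.argsort(local_sd)[-10:]
print(e_type.shape, local_sd.shape, most_uncertain.size)
```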
Kumar, Sathish; Shewade, Hemant Deepak; Vasudevan, Kavita; Durairaju, Kathamuthu; Santhi, V S; Sunderamurthy, Bhuvaneswary; Krishnakumari, Velavane; Panigrahi, Krishna Chandra
2015-01-01
Objective. We wanted to study whether mobile reminders increased follow-up for definitive tests, resulting in a higher screening yield during opportunistic screening for diabetes. Methods. This was a facility-based parallel randomized controlled trial during routine outpatient department hours in a primary health care setting in Puducherry, India (2014). We offered random blood glucose testing to non-pregnant, non-diabetic adults aged >30 years (667 total, 390 consented); eligible outpatients (random blood glucose ≥ 6.1 mmol/l, n = 268) were requested to follow up for definitive tests (fasting and postprandial blood glucose). Eligible outpatients either received (intervention arm, n = 133) or did not receive (control arm, n = 135) a mobile reminder to follow up for definitive tests. We measured capillary blood glucose using a glucometer to make an epidemiological diagnosis of diabetes. The trial was registered with the Clinical Trial Registry of India (CTRI/2014/10/005138). Results. 85.7% of outpatients in the intervention arm returned for the definitive test, compared to 53.3% in the control arm [Relative Risk = 1.61 (95% Confidence Interval: 1.35, 1.91)]. Screening yield in the intervention and control arms was 18.6% and 10.2%, respectively. The etiologic fraction was 45.2% and the number needed to screen was 11.9. Conclusion. In countries like India, which is emerging as the diabetes capital of the world, considering the widely prevalent use of mobile phones and the real-life, resource-limited settings in which this study was carried out, mobile reminders during opportunistic screening in a primary health care setting improve the screening yield of diabetes.
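The headline figures above follow directly from the reported proportions, as this short check shows.

```python
# Reproducing the summary arithmetic from the results quoted above
# (intervention arm n=133, control arm n=135).
followed_up_int = 0.857   # 85.7% returned for definitive test, intervention
followed_up_ctl = 0.533   # 53.3% returned, control

relative_risk = followed_up_int / followed_up_ctl

yield_int, yield_ctl = 0.186, 0.102   # screening yield per arm
number_needed_to_screen = 1.0 / (yield_int - yield_ctl)

# Matches the reported 1.61 and 11.9.
print(round(relative_risk, 2), round(number_needed_to_screen, 1))
```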
Mercury emissions from coal combustion in Silesia, analysis using geostatistics
NASA Astrophysics Data System (ADS)
Zasina, Damian; Zawadzki, Jaroslaw
2015-04-01
Data provided by the UNEP report on mercury [1] show that solid fuel combustion is a significant source of mercury emission to air. Silesia, located in southwestern Poland, is notably affected by mercury emission because it is one of the most industrialized Polish regions: a place of coal mining, metal production, stone mining, mineral quarrying and chemical industry. Moreover, Silesia is a region with high population density. People are exposed to severe risk from mercury emitted by both industrial and domestic sources (i.e. small household furnaces). Small sources contribute significantly to the total emission of mercury. Official and statistical analyses, including those prepared for international purposes [2], do not provide data on the spatial distribution of mercury emitted to air, although a number of analyses of the Polish public power and energy sector have been prepared so far [3; 4]. The distribution of locations exposed to mercury emission from small domestic sources is an interesting matter, merging information from various sources: statistical, economic and environmental. This paper presents a geostatistical approach to the distribution of mercury emission from coal combustion. The analysed data are organized on two independent levels: an individual, bottom-up level derived from the national emission reporting system [5; 6], and a top-down level of regional data calculated from official statistics [7]. The analysis presented will compare the spatial distributions of mercury emission using data derived from the sources mentioned above. The investigation covers three voivodeships of Poland: Lower Silesian, Opole and Silesian, using selected geostatistical methodologies including ordinary kriging [8]. References [1] UNEP. Global Mercury Assessment 2013: Sources, Emissions, Releases and Environmental Transport. UNEP Chemicals Branch, Geneva, Switzerland, 2013. [2] NCEM. Poland's Informative Inventory Report 2014. NCEM at the IEP-NRI, 2014. http
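A top-down estimate of the kind used for the regional level above reduces to activity times emission factor; both numbers below are hypothetical, not Silesian statistics.

```python
# Top-down regional emission estimate: emission = activity x emission factor.
coal_burned_mg_per_year = 1_500_000      # Mg (tonnes) of coal burned; hypothetical
ef_g_hg_per_mg_coal = 0.1                # g of Hg emitted per Mg of coal; hypothetical
hg_emission_kg = coal_burned_mg_per_year * ef_g_hg_per_mg_coal / 1000.0
print(hg_emission_kg)
```

The bottom-up level instead sums source-by-source reports, and the geostatistical comparison asks whether the two spatial distributions agree.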
Examples of improved reservoir modeling through geostatistical data integration
Bashore, W.M.; Araktingi, U.G.
1994-12-31
Results from four case studies are presented to demonstrate improvements in reservoir modeling and subsequent flow predictions through various uses of geostatistical integration methods. Specifically, these cases highlight improvements gained from (1) better understanding of reservoir geometries through 3D visualization, (2) forward modeling to assess the value of new data prior to acquisition and integration, (3) assessment of reduced uncertainty in porosity prediction through integration of seismic acoustic impedance, and (4) integration of crosswell tomographic and reflection data. The intent of each of these examples is to quantify the added value of geological and geophysical data integration in engineering terms such as fluid-flow results and reservoir property predictions.
ON THE GEOSTATISTICAL APPROACH TO THE INVERSE PROBLEM. (R825689C037)
The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statist...
Shahvaran, Zahra; Kazemi, Kamran; Helfroush, Mohammad Sadegh; Jafarian, Nassim; Noorizadeh, Negar
2012-08-15
Noise and intensity non-uniformity cause major difficulties in magnetic resonance (MR) image segmentation. This paper introduces a variational level set approach for simultaneous MR image segmentation and intensity non-uniformity correction. The proposed energy functional is based on local Gaussian intensity fitting with local means and variances. Furthermore, the proposed model utilizes Markov random fields to model the spatial correlation between neighboring pixels/voxels. The improvements achieved with our method are demonstrated by brain segmentation experiments with simulated and real magnetic resonance images with different noise and bias levels. In particular, it is superior in terms of accuracy to the LGDF and FSL-FAST methods.
Terrain data conflation using an improved pattern-based multiple-point geostatistical approach
NASA Astrophysics Data System (ADS)
Tang, Yunwei; Zhang, Jingxiong; Li, Hui; Ding, Haifeng; Jing, Linhai
2014-11-01
The aim of data conflation is to synergise geospatial information from different sources into a common framework, which can be realised using multivariate geostatistics. Recently, multiple-point geostatistics (MPG) has been proposed for data conflation. Instead of the variogram, MPG borrows structures from the training image, so the spatial correlation is characterised by multiple-point statistics. In pattern-based MPG, two sets of data can be integrated by utilising the secondary data as a locally varying mean (LVM). The training image provides a spatial correlation model and is incorporated to facilitate reproduction of similar local patterns in the predicted image. However, current pattern-based MPG gathers similar patterns based on a prototype class, which extracts spatial structures in an arbitrary way. In this paper, we propose an improved pattern-based MPG for conflation of digital elevation models (DEMs). In this approach, a new strategy for forming the prototype class is applied, based on the residual surface, vector ruggedness measure (VRM) and ridge valley class (RVC) of the terrain data. The method was tested on SRTM and GMTED2010 data. SRTM data at a spatial resolution of 3 arc-seconds were simulated by conflating sparse elevation point data and GMTED2010 data at a coarser spatial resolution of 7.5 arc-seconds. The proposed MPG method was compared with the traditional pattern-based MPG simulation. Several kriging predictors were applied to provide LVMs for the MPG simulation. The results show that the new method achieves more precise prediction and retains more spatial detail than the benchmarks.
Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to pattern's specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on the graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
Geostatistical three-dimensional modeling of oolite shoals, St. Louis Limestone, southwest Kansas
Qi, L.; Carr, T.R.; Goldstein, R.H.
2007-01-01
In the Hugoton embayment of southwestern Kansas, reservoirs composed of relatively thin (<4 m; <13.1 ft) oolitic deposits within the St. Louis Limestone have produced more than 300 million bbl of oil. The geometry and distribution of oolitic deposits control the heterogeneity of the reservoirs, resulting in exploration challenges and relatively low recovery. Geostatistical three-dimensional (3-D) models were constructed to quantify the geometry and spatial distribution of oolitic reservoirs, and the continuity of flow units within Big Bow and Sand Arroyo Creek fields. Lithofacies in uncored wells were predicted from digital logs using a neural network. The tilting effect from the Laramide orogeny was removed to construct restored structural surfaces at the time of deposition. Well data and structural maps were integrated to build 3-D models of oolitic reservoirs using stochastic simulations with geometry data. Three-dimensional models provide insights into the distribution, the external and internal geometry of oolitic deposits, and the sedimentologic processes that generated reservoir intervals. The structural highs and general structural trend had a significant impact on the distribution and orientation of the oolitic complexes. The depositional pattern and connectivity analysis suggest an overall aggradation of shallow-marine deposits during pulses of relative sea level rise followed by deepening near the top of the St. Louis Limestone. Cemented oolitic deposits were modeled as barriers and baffles and tend to concentrate at the edge of oolitic complexes. Spatial distribution of porous oolitic deposits controls the internal geometry of rock properties. Integrated geostatistical modeling methods can be applicable to other complex carbonate or siliciclastic reservoirs in shallow-marine settings. Copyright © 2007. The American Association of Petroleum Geologists. All rights reserved.
Lewicki, J.L.; Bergfeld, D.; Cardellini, C.; Chiodini, G.; Granieri, D.; Varley, N.; Werner, C.
2005-01-01
We present a comparative study of soil CO2 flux (FCO2) measured by five groups (Groups 1-5) at the IAVCEI-CCVG Eighth Workshop on Volcanic Gases on Masaya volcano, Nicaragua. Groups 1-5 measured FCO2 using the accumulation chamber method at 5-m spacing within a 900 m2 grid during a morning (AM) period. These measurements were repeated by Groups 1-3 during an afternoon (PM) period. Measured FCO2 ranged from 218 to 14,719 g m-2 day-1. The variability of the five measurements made at each grid point ranged from ±5 to 167%. However, the arithmetic means of fluxes measured over the entire grid and associated total CO2 emission rate estimates varied between groups by only ±22%. All three groups that made PM measurements reported an 8-19% increase in total emissions over the AM results. Based on a comparison of measurements made during AM and PM times, we argue that this change is due in large part to natural temporal variability of gas flow, rather than to measurement error. In order to estimate the mean and associated CO2 emission rate of one data set and to map the spatial FCO2 distribution, we compared six geostatistical methods: arithmetic and minimum variance unbiased estimator means of uninterpolated data, and arithmetic means of data interpolated by the multiquadric radial basis function, ordinary kriging, multi-Gaussian kriging, and sequential Gaussian simulation methods. While the total CO2 emission rates estimated using the different techniques varied by only ±4.4%, the FCO2 maps showed important differences. We suggest that the sequential Gaussian simulation method yields the most realistic representation of the spatial distribution of FCO2, but a variety of geostatistical methods are appropriate to estimate the total CO2 emission rate from a study area, which is a primary goal in volcano monitoring research. © Springer-Verlag 2005.
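The simplest of the six estimators compared above, the arithmetic mean of uninterpolated fluxes scaled to the grid area, can be sketched as follows; the flux values are synthetic draws spanning the reported range, not the workshop measurements.

```python
import numpy as np

# Synthetic accumulation-chamber fluxes at 5-m spacing over the 900 m2 grid,
# spanning the reported range of 218 to 14,719 g m-2 day-1.
rng = np.random.default_rng(7)
area_m2 = 900.0
fco2 = rng.uniform(218.0, 14719.0, size=49)       # one flux per grid point

mean_flux = fco2.mean()                           # g m-2 day-1
total_emission_t_day = mean_flux * area_m2 / 1e6  # tonnes CO2 per day

print(round(total_emission_t_day, 2))
```

The interpolation-based estimators (radial basis functions, the kriging variants, sequential Gaussian simulation) replace the raw mean with the mean of an interpolated field, which mainly changes the map rather than the grid total, consistent with the ±4.4% spread reported above.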
Schur, Nadine; Hürlimann, Eveline; Stensgaard, Anna-Sofie; Chimfwembe, Kingford; Mushinge, Gabriel; Simoonga, Christopher; Kabatereine, Narcis B; Kristensen, Thomas K; Utzinger, Jürg; Vounatsou, Penelope
2013-11-01
Schistosomiasis remains one of the most prevalent parasitic diseases in the tropics and subtropics, but current statistics are outdated due to demographic and ecological transformations and ongoing control efforts. Reliable risk estimates are important to plan and evaluate interventions in a spatially explicit and cost-effective manner. We analysed a large ensemble of georeferenced survey data derived from an open-access neglected tropical diseases database to create smooth empirical prevalence maps for Schistosoma mansoni and Schistosoma haematobium for a total of 13 countries of eastern Africa. Bayesian geostatistical models based on climatic and other environmental data were used to account for potential spatial clustering in spatially structured exposures. Geostatistical variable selection was employed to reduce the set of covariates. Alignment factors were implemented to combine surveys on different age-groups and to acquire separate estimates for individuals aged ≤20 years and entire communities. Prevalence estimates were combined with population statistics to obtain country-specific numbers of Schistosoma infections. We estimate that 122 million individuals in eastern Africa are currently infected with either S. mansoni, or S. haematobium, or both species concurrently. Country-specific population-adjusted prevalence estimates range between 12.9% (Uganda) and 34.5% (Mozambique) for S. mansoni and between 11.9% (Djibouti) and 40.9% (Mozambique) for S. haematobium. Our models revealed that infection risk in Burundi, Eritrea, Ethiopia, Kenya, Rwanda, Somalia and Sudan might be considerably higher than previously reported, while in Mozambique and Tanzania, the risk might be lower than current estimates suggest. Our empirical, large-scale, high-resolution infection risk estimates for S. mansoni and S. haematobium in eastern Africa can guide future control interventions and provide a benchmark for subsequent monitoring and evaluation activities. PMID:22019933
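The last step described above, turning population-adjusted prevalence into country-level infection counts, is a straight multiplication; the populations below are rough illustrative figures, not the study's demographic inputs.

```python
# Country-level S. mansoni infection counts = prevalence x population.
# Prevalences are the values quoted above; populations are illustrative.
populations = {"Uganda": 34_000_000, "Mozambique": 24_000_000}
prevalence_s_mansoni = {"Uganda": 0.129, "Mozambique": 0.345}

infections = {c: populations[c] * prevalence_s_mansoni[c] for c in populations}
print({c: f"{v / 1e6:.1f} million" for c, v in infections.items()})
```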
NASA Astrophysics Data System (ADS)
Rocca, Dario
2014-05-01
A new ab initio approach is introduced to compute the correlation energy within the adiabatic connection fluctuation dissipation theorem in the random phase approximation. First, an optimally small basis set to represent the response functions is obtained by diagonalizing an approximate dielectric matrix containing the kinetic energy contribution only. Then, the Lanczos algorithm is used to compute the full dynamical dielectric matrix and the correlation energy. The convergence issues with respect to the number of empty states or the dimension of the basis set are avoided and the dynamical effects are easily kept into account. To demonstrate the accuracy and efficiency of this approach the binding curves for three different configurations of the benzene dimer are computed: T-shaped, sandwich, and slipped parallel.
Kim, Jeong In; Hwang, Tae In; Aguilar, Ludwig Erik; Park, Chan Hee; Kim, Cheol Sang
2016-01-01
Scaffolds made of aligned nanofibers are favorable for nerve regeneration due to their superior nerve cell attachment and proliferation. However, it is challenging not only to produce a neat mat or a conduit form with aligned nanofibers but also to use these for surgical applications as a nerve guide conduit due to their insufficient mechanical strength. Furthermore, no studies have been reported on the fabrication of aligned nanofibers and randomly-oriented nanofibers on the same mat. In this study, we have successfully produced a mat with both aligned and randomly-oriented nanofibers by using a novel electrospinning set up. A new conduit with a highly-aligned electrospun mat is produced with this modified electrospinning method, and this proposed conduit with favorable features, such as selective permeability, hydrophilicity and nerve growth directional steering, were fabricated as nerve guide conduits (NGCs). The inner surface of the nerve conduit is covered with highly aligned electrospun nanofibers and is able to enhance the proliferation of neural cells. The central part of the tube is double-coated with randomly-oriented nanofibers over the aligned nanofibers, strengthening the weak mechanical strength of the aligned nanofibers. PMID:27021221
Validating spatial structure in canopy water content using geostatistics
NASA Technical Reports Server (NTRS)
Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.
1995-01-01
Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average the reflectance produced by complex absorption and scattering interactions between biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m(exp 2) or less (e.g., field spectra, foliage and soil samples, etc.). As image analysis becomes more sophisticated, such as in the detection of canopy chemistry, better validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics has been used extensively in geological and hydrogeological studies but has received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients like soil moisture, nutrient availability, or topography.
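The spatial correlation that kriging requires is usually checked with an empirical semivariogram of the point measurements, which can be sketched as below; the sample locations and water-content-like values are illustrative, not field data.

```python
import numpy as np

# Empirical semivariogram: gamma(h) = 0.5 * mean squared difference of
# point pairs whose separation falls near lag h.
rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 100.0, size=(80, 2))            # sample locations (m)
z = np.sin(xy[:, 0] / 20.0) + rng.normal(0, 0.1, 80)  # spatially correlated

d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
sqdiff = (z[:, None] - z[None, :]) ** 2
upper = np.triu(np.ones_like(d, dtype=bool), 1)       # each pair once

lags = np.arange(5.0, 50.0, 5.0)
gamma = []
for h in lags:
    mask = (d > h - 2.5) & (d <= h + 2.5) & upper
    gamma.append(0.5 * sqdiff[mask].mean())

# Spatial correlation shows up as semivariance rising with lag distance.
print(np.round(gamma, 3))
```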
Probabilistic assessment of ground-water contamination. 1: Geostatistical framework
Rautman, C.A.; Istok, J.D.
1996-09-01
Characterizing the extent and severity of ground-water contamination at waste sites is expensive and time-consuming. A probabilistic approach, based on the acceptance of uncertainty and a finite probability of making classification errors (contaminated relative to a regulatory threshold vs. uncontaminated), is presented as an alternative to traditional site characterization methodology. The approach utilizes geostatistical techniques to identify and model the spatial continuity of contamination at a site (variography) and to develop alternate plausible simulations of contamination fields (conditional simulation). Probabilistic summaries of many simulations provide tools for (a) estimating the range of plausible contaminant concentrations at unsampled locations, (b) identifying the locations of boundaries between contaminated and uncontaminated portions of the site and the degree of certainty in those locations, and (c) estimating the range of plausible values for total contaminant mass. The first paper in the series presents the geostatistical framework and illustrates the approach using synthetic data for a hypothetical site. The second paper presents an application of the proposed methodology to the probabilistic assessment of ground-water contamination at a site involving ground-water contamination by nitrate and herbicide in a shallow, unconfined alluvial aquifer in an agricultural area in eastern Oregon.
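The probabilistic summaries listed in (a)-(c) above can be sketched over an ensemble of conditional simulations; the simulated fields here are synthetic lognormal stand-ins, not output of an actual conditional simulator.

```python
import numpy as np

# Synthetic ensemble standing in for conditionally simulated
# contamination fields on a 20 x 20 grid.
rng = np.random.default_rng(11)
n_sims, nx, ny = 500, 20, 20
threshold = 10.0                                  # regulatory threshold
fields = rng.lognormal(mean=2.0, sigma=0.6, size=(n_sims, nx, ny))

# (a) plausible concentration range at each unsampled cell
lo, hi = np.percentile(fields, [5, 95], axis=0)

# (b) per-cell exceedance probability, used to classify cells as
# contaminated, uncontaminated, or needing further sampling
p_exceed = (fields > threshold).mean(axis=0)
contaminated = p_exceed > 0.8
uncertain = (p_exceed >= 0.2) & (p_exceed <= 0.8)

# (c) plausible range of total contaminant mass, one total per field
totals = fields.sum(axis=(1, 2))
print(p_exceed.shape, round(float(totals.min()), 1), round(float(totals.max()), 1))
```

The classification thresholds (0.2 and 0.8 here) encode the tolerated probability of a classification error, which is the trade-off the abstract makes explicit.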
Regional flow duration curves: Geostatistical techniques versus multivariate regression
Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.
2016-01-01
A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods which are capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method, Top-kriging, employing a linear weighted average of dimensionless empirical FDCs, standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we termed total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly across flow regimes, with average Nash-Sutcliffe Efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. Differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e. duration) above 0.98.
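The empirical FDC and the total negative deviation (TND) similarity metric can be sketched as below. The exact standardisation and reference streamflow used by the authors may differ; treat the choices here (Weibull plotting position, mean flow as reference) as illustrative assumptions:

```python
import numpy as np

def empirical_fdc(q):
    """Empirical flow duration curve: flows sorted in descending order
    against exceedance probability (Weibull plotting position)."""
    q = np.sort(q)[::-1]
    p = np.arange(1, len(q) + 1) / (len(q) + 1)
    return p, q

def total_negative_deviation(q, q_ref):
    """TND-style metric: sum of negative deviations of the dimensionless
    FDC (flows standardised by a reference value q_ref) from 1."""
    _, q_std = empirical_fdc(q / q_ref)
    dev = q_std - 1.0
    return -dev[dev < 0].sum()

rng = np.random.default_rng(2)
q = rng.lognormal(2.0, 1.0, size=365)      # one synthetic year of daily flows
tnd = total_negative_deviation(q, q_ref=np.mean(q))
```

Catchments with similar TND values would be treated as hydrologically similar when forming the Top-kriging weighted average of dimensionless FDCs.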
2010-01-01
Objectives Primary: To compare the effectiveness of intensive group and individual interventions for smoking cessation in a primary health care setting; secondary: to identify the variables associated with smoking cessation. Methods Three-pronged clinical trial with randomisation at the individual level. We performed the following: an intensive individual intervention (III), an intensive group intervention (IGI) and a minimal intervention (MI). Included in the study were smokers who were prepared to quit smoking. Excluded from the study were individuals aged less than 18 years or with severe mental conditions or terminal illnesses. The outcome measure was continued abstinence at 12 months confirmed through CO-oximetry (CO). The analysis was based on intention to treat. Results In total, 287 smokers were recruited: 81 in the III, 111 in the IGI, and 95 in the MI. Continued abstinence at 12 months confirmed through CO was 7.4% in the III, 5.4% in the IGI, and 1% in the MI. No significant differences were noted between III and MI on the one hand, and between IGI and MI on the other [RR 7.04 (0.9-7.2) and RR 5.1 (0.6-41.9), respectively]. No differences were noted between IGI and III [RR 0.7 (0.2-2.2)]. In multivariate analysis, only overall visit length showed a statistically significant association with smoking cessation. Conclusions The effectiveness of intensive smoking interventions in this study was lower than expected. No statistically significant differences were found between the results of individual and group interventions. Trial registration number ISRCTN32323770 PMID:20178617
NASA Astrophysics Data System (ADS)
Erdin, R.; Frei, C.; Sideris, I.; Kuensch, H.-R.
2010-09-01
There is an increasing demand for accurate mapping of precipitation at a spatial resolution of kilometers. Radar and rain gauges - the two main precipitation measurement systems - exhibit complementary strengths and weaknesses. Radar offers high spatial and temporal resolution but lacks accuracy of absolute values, whereas rain gauges provide accurate values at their specific point location but suffer from poor spatial representativeness. Methods of geostatistical mapping have been proposed to combine radar and rain gauge data for quantitative precipitation estimation (QPE). The aim is to combine the respective strengths and compensate for the respective weaknesses of the two observation platforms. Several studies have demonstrated the potential of these methods over topography of moderate complexity, but their performance remains unclear for high-mountain regions where rainfall patterns are complex, the representativeness of rain gauge measurements is limited and radar observations are obstructed. In this study we examine the potential and limitations of two frequently used geostatistical mapping methods for the territory of Switzerland, where the mountain chain of the Alps poses particular challenges to QPE. The two geostatistical methods explored are kriging with external drift (KED) using radar as drift variable and ordinary kriging of radar errors (OKRE). The radar data is a composite from three C-band radars using a constant Z-R relationship, advanced correction processing for visibility, ground clutter and beam shielding, and a climatological bias adjustment. The rain gauge data originates from an automatic network with a typical inter-station distance of 25 km. Both combination methods are applied to a set of case examples representing typical rainfall situations in the Alps with their inherent challenges at daily and hourly time resolution. The quality of precipitation estimates is assessed by several skill scores calculated from cross-validation errors at
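Kriging with external drift (KED) augments the ordinary-kriging system with an unbiasedness constraint on the drift variable, here the radar field. A minimal sketch with a synthetic gauge network and an assumed exponential variogram (the variogram model, ranges, and data are illustrative, not the study's configuration):

```python
import numpy as np

def ked_weights(coords, drift, x0, d0, gamma):
    """Kriging-with-external-drift weights for one target point x0 with
    drift value d0; gamma(h) is an assumed variogram model. The block
    system enforces sum(w) = 1 and sum(w * drift) = d0."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    G = gamma(d)                                  # gauge-gauge semivariances
    F = np.column_stack([np.ones(n), drift])      # unbiasedness + drift rows
    A = np.block([[G, F], [F.T, np.zeros((2, 2))]])
    g0 = gamma(np.linalg.norm(coords - x0, axis=1))
    b = np.concatenate([g0, [1.0, d0]])
    return np.linalg.solve(A, b)[:n]

gamma = lambda h: 1.0 - np.exp(-h / 10.0)         # assumed exponential model
rng = np.random.default_rng(3)
coords = rng.uniform(0, 50, size=(15, 2))         # rain-gauge locations
radar = rng.uniform(0, 10, size=15)               # radar value at each gauge
gauge = 0.8 * radar + 0.3 * rng.standard_normal(15)
w = ked_weights(coords, radar, x0=np.array([25.0, 25.0]), d0=5.0, gamma=gamma)
est = w @ gauge                                   # KED precipitation estimate
```

The drift constraint is what lets the radar field shape the interpolated surface between gauges while the gauges anchor its absolute values.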
Viswanathan, Hari S; Robinson, Bruce A; Gable, Carl W; Carey, James W
2003-01-01
Retardation of certain radionuclides due to sorption to zeolitic minerals is considered one of the major barriers to contaminant transport in the unsaturated zone of Yucca Mountain. However, zeolitically altered areas are lower in permeability than unaltered regions, which raises the possibility that contaminants might bypass the sorptive zeolites. The relationship between hydrologic and chemical properties must be understood to predict the transport of radionuclides through zeolitically altered areas. In this study, we incorporate mineralogical information into an unsaturated zone transport model using geostatistical techniques to correlate zeolitic abundance to hydrologic and chemical properties. Geostatistical methods are used to develop variograms, kriging maps, and conditional simulations of zeolitic abundance. We then investigate, using flow and transport modeling on a heterogeneous field, the relationship between percent zeolitic alteration, permeability changes due to alteration, sorption due to alteration, and their overall effect on radionuclide transport. We compare these geostatistical simulations to a simplified threshold method in which each spatial location in the model is assigned either zeolitic or vitric properties based on the zeolitic abundance at that location. A key conclusion is that retardation due to sorption predicted by using the continuous distribution is larger than the retardation predicted by the threshold method. The reason for larger retardation when using the continuous distribution is a small but significant sorption at locations with low zeolitic abundance. If, for practical reasons, models with homogeneous properties within each layer are used, we recommend setting nonzero K(d)s in the vitric tuffs to mimic the more rigorous continuous distribution simulations. Regions with high zeolitic abundance may not be as effective in retarding radionuclides such as Neptunium since these rocks are lower in permeability and contaminants can
2012-01-01
Background Lack of physical activity (PA) is a known risk factor for many health conditions. The workplace is a setting often used to promote activity and health. We investigated the effectiveness of an intervention on PA and productivity-related outcomes in an occupational setting. Methods We conducted a randomized controlled trial of 12 months duration with two 1:1 allocated parallel groups of insurance company employees. Eligibility criteria included permanent employment and absence of any condition that risked the participant’s health during PA. Subjects in the intervention group monitored their daily PA with an accelerometer, set goals, had access to an online service to help them track their activity levels, and received counseling via telephone or web messages for 12 months. The control group received the results of a fitness test and an information leaflet on PA at the beginning of the study. The intervention’s aim was to increase PA, improve work productivity, and decrease sickness absence. Primary outcomes were PA (measured as MET minutes per week), work productivity (quantity and quality of work; QQ index), and sickness absence (SA) days at 12 months. Participants were assigned to groups using block randomization with a computer-generated scheme. The study was not blinded. Results There were 544 randomized participants, of which 521 were included in the analysis (64% female, mean age 43 years). At 12 months, there was no significant difference in physical activity levels between the intervention group (n = 264) and the control group (n = 257). The adjusted mean difference was −206 MET min/week [95% Bayesian credible interval −540 to 128; negative values favor control group]. There was also no significant difference in the QQ index (−0.5 [−4.4 to 3.3]) or SA days (0.0 [−1.2 to 0.9]). Of secondary outcomes, body weight (0.5 kg [0.0 to 1.0]) and percentage of body fat (0.6% [0.2% to 1.1%]) were slightly higher in the
Bautista-Molano, Wilson; Navarro-Compán, Victoria; Landewé, Robert B M; Boers, Maarten; Kirkham, Jamie J; van der Heijde, Désirée
2014-09-01
This study aims to investigate how well the Assessment of SpondyloArthritis international Society (ASAS)/Outcome Measures in Rheumatology Clinical Trials (OMERACT) core set and response criteria for ankylosing spondylitis (AS) have been implemented in randomized controlled trials (RCTs) testing pharmacological and non-pharmacological interventions. A systematic literature search was performed up to June 2013 looking for RCTs in patients with axial spondyloarthritis (SpA) (AS and non-radiographic axial SpA). The assessed domains and instruments belonging to the core sets for disease-controlling anti-rheumatic therapy (DC-ART) and symptom-modifying anti-rheumatic drugs (SMARDs) were extracted. Results were reported separately for those trials published until 2 years after the publication of the core set (1 April 2001; 'control trials') and those trials published at least 2 years after the publication date ('implementation trials'). One hundred twenty-three articles from 99 RCTs were included in the analysis, comparing 48 'control trials' and 51 'implementation trials'. Regarding the DC-ART core set, the following domains were significantly more frequently assessed in the 'implementation group' in comparison to the 'control group': 'physical function' (100 vs 41.7 %; p ≤ 0.001), 'peripheral joints/entheses' (100 vs 33.3 %; p ≤ 0.001) and 'fatigue' (100 vs 0 %; p ≤ 0.001). Three instruments were significantly more used in the 'implementation group': Bath Ankylosing Spondylitis Functional Index (BASFI) (100 vs 8.3 %; p ≤ 0.001), CRP (92.3 vs 58.3 %; p = 0.01) and Bath Ankylosing Spondylitis Metrology Index (BASMI) (53.8 vs 0 %; p = 0.001). Regarding SMARD core set domains, physical function (92 vs 23 %; p ≤ 0.001) and fatigue (84 vs 17 %; p ≤ 0.001), as well as the instruments BASFI (88 vs 14 %; p ≤ 0.001) and BASMI (52 vs 0 %; p ≤ 0.001), increased significantly in the 'implementation group'. Twenty per cent of
Bayesian geostatistics in health cartography: the perspective of malaria
Patil, Anand P.; Gething, Peter W.; Piel, Frédéric B.; Hay, Simon I.
2011-01-01
Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision. PMID:21420361
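The conversion of a sample of plausible maps into a prediction of a map feature, such as a regional average with predictive precision, can be sketched as follows. The posterior sample here is a random stand-in; in Bayesian geostatistics it would be drawn by MCMC conditioned on survey data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in posterior sample: 1000 plausible prevalence maps on a
# 30 x 30 grid (values in [0, 1], as parasite prevalences are).
maps = rng.beta(2, 8, size=(1000, 30, 30))

region = np.zeros((30, 30), bool)
region[:15, :] = True                 # hypothetical administrative region

# Each sampled map yields one value of the regional average; the spread
# across the sample is the predictive uncertainty for that average.
avgs = maps[:, region].mean(axis=1)
point = avgs.mean()                   # point prediction
lo, hi = np.percentile(avgs, [2.5, 97.5])   # 95% credible interval
```

Because every map in the sample contributes, features that the data constrain tightly get narrow intervals and poorly constrained features get wide ones, which is the "appropriate level of predictive precision" the abstract describes.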
Indoor radon variations in central Iran and its geostatistical map
NASA Astrophysics Data System (ADS)
Hadad, Kamal; Mokhtari, Javad
2015-02-01
We present the results of a 2 year indoor radon survey in 10 cities of Yazd province in Central Iran (covering an area of 80,000 km2). We used passive diffusive samplers with LATEX polycarbonate films as Solid State Nuclear Track Detectors (SSNTD). The study was carried out in central Iran, where there are major mineral and uranium mines. Our results indicate that, despite a few extraordinarily high concentrations, average annual concentrations of indoor radon are within ICRP guidelines. When the geostatistical spatial distribution of radon was mapped onto the geographical features of the province, it was observed that the risk of high radon concentrations increases near the cities of Saqand, Bafq, Harat and Abarkooh, depending on elevation and proximity to the ores and mines.
Geostatistical analyses reveal nutrient-vegetation relationships in savanna soils
NASA Astrophysics Data System (ADS)
Mladenov, N.; Okin, G. S.; Cassel, D.; Caylor, K. C.; Clausen, K. M.
2005-12-01
The uniform Kalahari sands that underlie the Kalahari Transect (KT) are a unique test bed for examining soil nutrient and vegetation cover relationships in a savanna ecosystem. Transects (300 m × 100 m) were sampled and mapped for each of four sites located along an aridity gradient from Mongu, Zambia, in the north (1600 mm/yr precipitation), to Tshane, Botswana, in the south (250 mm/yr precipitation). Geostatistical analyses of soil chemistry and tree, shrub, and grass cover at four sites along the moisture gradient of the KT indicated an accumulation of nutrients below tree/shrub cover. In general, grass cover between canopies was negatively correlated with soil nutrients at the drier sites. Here we examine effects of vegetation cover on soil productivity and differences in soil nutrients along the moisture gradient of the KT.
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach addresses weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknowns. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not only much smaller than m but can even be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
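One generic way to be matrix-free with respect to the Jacobian is to approximate Jacobian-vector products by finite differences, so each product costs a single extra forward-model run and the Jacobian is never formed or stored. This is a sketch of the idea, not the authors' specific implementation:

```python
import numpy as np

def jacobian_vector_product(forward, m, v, eps=1e-6):
    """Approximate J @ v with one extra forward-model run:
    J v ≈ (f(m + eps*v) - f(m)) / eps.  The full m x n Jacobian is
    never formed, so memory stays O(m + n)."""
    return (forward(m + eps * v) - forward(m)) / eps

# Toy nonlinear "forward model": 5 observations of 100 unknowns
A = np.random.default_rng(5).standard_normal((5, 100))
forward = lambda m: np.tanh(A @ m)

m = np.zeros(100)                    # current parameter estimate
v = np.ones(100)                     # direction supplied by the solver
Jv = jacobian_vector_product(forward, m, v)
```

A Gauss-Newton iteration built on such products only ever asks for the Jacobian's action on K well-chosen directions, which is how the per-iteration cost drops to K forward runs.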
Modeling fine-scale soil surface structure using geostatistics
NASA Astrophysics Data System (ADS)
Croft, H.; Anderson, K.; Brazier, R. E.; Kuhn, N. J.
2013-04-01
There is widespread recognition that spatially distributed information on soil surface roughness (SSR) is required for hydrological and geomorphological applications. Such information is necessary to describe variability in soil structure, which is highly heterogeneous in time and space, to parameterize hydrology and erosion models and to understand the temporal evolution of the soil surface in response to rainfall. This paper demonstrates how results from semivariogram analysis can quantify key elements of SSR for such applications. Three soil types (silt, silt loam, and silty clay) were used to show how different types of structural variance in SSR evolve during simulated rainfall events. All three soil types were progressively degraded using artificial rainfall to produce a series of roughness states. A calibrated laser profiling instrument was used to measure SSR over a 10 cm × 10 cm spatial extent, at a 2 mm resolution. These data were geostatistically analyzed in the context of aggregate breakdown and soil crusting. The results show that such processes are represented by a quantifiable decrease in sill variance, from 7.81 (control) to 0.94 (after 60 min of rainfall). Soil surface features such as soil cracks, tillage lines and erosional areas were quantified by local maxima in semivariance at a given length scale. This research demonstrates that semivariogram analysis can retrieve spatiotemporal variations in soil surface condition, providing information on hydrological pathways. Consequently, geostatistically derived SSR shows strong potential for inclusion as spatial information in hydrology and erosion models to represent complex surface processes at different soil structural scales.
NASA Astrophysics Data System (ADS)
Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.
2011-07-01
We analyze theoretically the ability of model quality (sometimes termed information or discrimination) criteria such as the negative log likelihood NLL, the Bayesian criteria BIC and KIC, and the information theoretic criteria AIC, AICc, and HIC to estimate (1) the parameter vector of the variogram of hydraulic log conductivity (Y = ln K), and (2) statistical parameters proportional to the head and log conductivity measurement error variances, respectively, in the context of geostatistical groundwater flow inversion. Our analysis extends the work of Hernandez et al. (2003, 2006) and Riva et al. (2009), who developed nonlinear stochastic inverse algorithms that allow conditioning estimates of steady state and transient hydraulic heads, fluxes and their associated uncertainty on information about conductivity and head data collected in a randomly heterogeneous confined aquifer. Their algorithms are based on recursive numerical approximations of exact nonlocal conditional equations describing the mean and (co)variance of groundwater flow. Log conductivity is parameterized geostatistically based on measured values at discrete locations and unknown values at discrete "pilot points." Optionally, the maximum likelihood function on which the inverse estimation of Y at pilot points is based may include a regularization term reflecting prior information about Y. The relative weight assigned to this term and its components, as well as the error-variance parameters, are evaluated separately from other model parameters to avoid bias and instability. This evaluation is done on the basis of criteria such as NLL, KIC, BIC, HIC, AIC, and AICc. We demonstrate theoretically that, whereas all six criteria make it possible to estimate the variogram parameters, KIC alone allows one to validly estimate the error-variance parameters (and thus the regularization weight). We illustrate this discriminatory power of KIC numerically by using a differential evolution genetic search algorithm to minimize it in the context of a two-dimensional steady state groundwater flow
Thompson, L.M.; Van Manen, F.T.; King, T.L.
2005-01-01
Highways are one of the leading causes of wildlife habitat fragmentation and may particularly affect wide-ranging species, such as American black bears (Ursus americanus). We initiated a research project in 2000 to determine potential effects of a 4-lane highway on black bear ecology in Washington County, North Carolina. The research design included a treatment area (highway construction) and a control area and a pre- and post-construction phase. We used data from the pre-construction phase to determine whether we could detect scale dependency or directionality among allele occurrence patterns using geostatistics. Detection of such patterns could provide a powerful tool to measure the effects of landscape fragmentation on gene flow. We sampled DNA from roots of black bear hair at 70 hair-sampling sites on each study area for 7 weeks during fall of 2000. We used microsatellite analysis based on 10 loci to determine unique multi-locus genotypes. We examined all alleles sampled at ≥25 sites on each study area and mapped their presence or absence at each hair-sample site. We calculated semivariograms, which measure the strength of statistical correlation as a function of distance, and adjusted them for anisotropy to determine the maximum direction of spatial continuity. We then calculated the mean direction of spatial continuity for all examined alleles. The mean direction of allele frequency variation was 118.3° (SE = 8.5) on the treatment area and 172.3° (SE = 6.0) on the control area. Rayleigh's tests showed that these directions differed from random distributions (P = 0.028 and P < 0.001, respectively), indicating consistent directional patterns for the alleles we examined in each area. Despite the small spatial scale of our study (approximately 11,000 ha for each study area), we observed distinct and consistent patterns of allele occurrence, suggesting different directions of gene flow between the study areas. These directions seemed to coincide with the
Guimarães, Ricardo J P S; Freitas, Corina C; Dutra, Luciano V; Felgueiras, Carlos A; Moura, Ana C M; Amaral, Ronaldo S; Drummond, Sandra C; Scholte, Ronaldo G C; Oliveira, Guilherme; Carvalho, Omar S
2009-03-01
Geostatistics is used in this work to make inferences about the presence of the species of Biomphalaria (B. glabrata, B. tenagophila and/or B. straminea), intermediate hosts of Schistosoma mansoni, at the São Francisco River Basin, in Minas Gerais, Brazil. One of these geostatistical procedures, known as indicator kriging, allows the classification of categorical data, in areas where the data are not available, using a punctual sample set. The result is a map of species and risk area definition. More than a single map of the categorical attribute, the procedure also permits the association of uncertainties of the stochastic model, which can be used to qualify the inferences. In order to validate the estimated data of the risk map, a fieldwork in five municipalities was carried out. The obtained results showed that indicator kriging is a rather robust tool since it presented a very good agreement with the field findings. The obtained risk map can be thought as an auxiliary tool to formulate proper public health strategies, and to guide other fieldwork, considering the places with higher occurrence probability of the most important snail species. Also, the risk map will enable better resource distribution and adequate policies for the mollusk control. This methodology will be applied to other river basins to generate a predictive map for Biomphalaria species distribution for the entire state of Minas Gerais. PMID:19046937
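The indicator-coding step that underlies indicator kriging can be sketched as below. For brevity the kriging weights are replaced by inverse-distance weights (an assumption for illustration only), but the core point stands: a weighted average of 0/1 indicators at an unsampled location is an estimate of the occurrence probability of the category there:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical malacological survey: presence/absence of one
# Biomphalaria species at 50 sampled sites
coords = rng.uniform(0, 100, size=(50, 2))
present = rng.uniform(size=50) < 0.4      # species found at ~40% of sites

# Indicator coding: 1 where the category is present, 0 otherwise
indicator = present.astype(float)

def idw_probability(x0, coords, indicator, power=2.0):
    """Inverse-distance stand-in for the kriging step: a convex
    combination of 0/1 indicators lies in [0, 1] and estimates the
    probability of presence at x0."""
    d = np.linalg.norm(coords - x0, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(w @ indicator / w.sum())

p = idw_probability(np.array([50.0, 50.0]), coords, indicator)
```

Indicator kriging additionally supplies an uncertainty measure at each location, which is what lets the resulting risk map qualify its own inferences as the abstract describes.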
NASA Astrophysics Data System (ADS)
Kumari, Madhuri; Singh, Chander Kumar; Bakimchandra, Oinam; Basistha, Ashoke
2016-07-01
In mountainous regions with heterogeneous topography, geostatistical modeling of rainfall using a global data set may not conform to the intrinsic hypothesis of stationarity. This study was focused on improving the precision of interpolated rainfall maps by spatial stratification in complex terrain. Predictions of the normal annual rainfall data were carried out by ordinary kriging, universal kriging, and co-kriging, using 80 point observations in the Indian Himalayas extending over an area of 53,484 km2. A two-step spatial clustering approach is proposed. In the first step, the study area was delineated into two regions, namely lowland and upland, based on the elevation derived from the digital elevation model. The delineation was based on the natural break classification method. In the next step, the rainfall data was clustered into two groups based on its spatial location in lowland or upland. The terrain ruggedness index (TRI) was incorporated as a co-variable in the co-kriging interpolation algorithm. The precision of the kriged and co-kriged maps was assessed by two accuracy measures, root mean square error and Chatfield's percent better. It was observed that the stratification of rainfall data resulted in a 5-20 % increase in the performance efficiency of the interpolation methods. Co-kriging outperformed the kriging models at annual and seasonal scale. The result illustrates that stratification of the study area improves the stationarity characteristic of the point data, thus enhancing the precision of the interpolated rainfall maps derived using geostatistical methods.
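The TRI co-variable and the elevation-based stratification can be sketched as follows; the natural-break threshold is approximated here by the median, and the DEM is synthetic, both assumptions for illustration:

```python
import numpy as np

def terrain_ruggedness_index(dem):
    """TRI in the Riley et al. style: for each interior cell, the square
    root of the summed squared elevation differences to its 8 neighbours
    (edge cells are skipped for simplicity)."""
    c = dem[1:-1, 1:-1]
    sq = np.zeros_like(c)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            sq += (dem[1 + di:dem.shape[0] - 1 + di,
                       1 + dj:dem.shape[1] - 1 + dj] - c) ** 2
    return np.sqrt(sq)

rng = np.random.default_rng(7)
dem = np.cumsum(rng.standard_normal((20, 20)), axis=0)   # toy elevations
tri = terrain_ruggedness_index(dem)

# Stratify into lowland/upland by an elevation break (natural-break
# classification approximated here by the median elevation)
upland = dem > np.median(dem)
```

Rain-gauge observations would then be assigned to the lowland or upland stratum by their cell, and TRI supplied as the secondary variable in co-kriging.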
de Jonge, Eric; Cloes, Edith; Op de Beeck, Lode; Adriaens, Bérengère; Lousbergh, Daniel; Orye, Guy G; Buntinx, Frank
2008-06-01
The objective of the study was to assess the effect of an invitation letter on the level of participation in a setting of mainly opportunistic screening for cervical cancer and to do a cost analysis of this intervention. We designed a quasi-randomized trial in which a sample of women between the ages of 25 and 64 years and residing in the province of Limburg, Belgium, who had no Pap smear taken in the past 30 months according to LIKAR (Limburg Cancer Registry), were assigned to an intervention group or to a control group. A written invitation was sent to 43 523 women in the intervention group. Baseline participation in cervical screening was recorded in the year before the intervention to determine its effect. Differences in cumulative incidence between the intervention and the control group were used to report the effect. The net effect of a written invitation resulted in 3355 more women undergoing a Pap smear, which is an increase of 6.4% (95% confidence interval: 5.9-6.9). The cost per additional Pap smear taken amounted to €29.8. Within an opportunistic cervical cancer screening setting, the effect of a registry-based invitational programme to nonattenders increases the participation further, and at no extra cost compared with an invitational programme to all screen-eligible women irrespective of their screening status.
2013-01-01
Background Exposure to biomass fuel smoke is one of the leading risk factors for disease burden worldwide. International campaigns are currently promoting the widespread adoption of improved cookstoves in resource-limited settings, yet little is known about the cultural and social barriers to successful improved cookstove adoption and how these barriers affect environmental exposures and health outcomes. Design We plan to conduct a one-year crossover, feasibility intervention trial in three resource-limited settings (Kenya, Nepal and Peru). We will enroll 40 to 46 female primary cooks aged 20 to 49 years in each site (total 120 to 138). Methods At baseline, we will collect information on sociodemographic characteristics and cooking practices, and measure respiratory health and blood pressure for all participating women. An initial observational period of four months while households use their traditional, open-fire design cookstoves will take place prior to randomization. All participants will then be randomized to receive one of two types of improved, ventilated cookstoves with a chimney: a commercially-constructed cookstove (Envirofit G3300/G3355) or a locally-constructed cookstove. After four months of observation, participants will crossover and receive the other improved cookstove design and be followed for another four months. During each of the three four-month study periods, we will collect monthly information on self-reported respiratory symptoms, cooking practices, compliance with cookstove use (intervention periods only), and measure peak expiratory flow, forced expiratory volume at 1 second, exhaled carbon monoxide and blood pressure. We will also perform pulmonary function testing in the women participants and measure 24-hour kitchen particulate matter and carbon monoxide levels at least once per period. Discussion Findings from this study will help us better understand the behavioral, biological, and environmental changes that occur with a cookstove
Murray, Christopher J.; Bott, Yi-Ju
2008-12-30
This report provides an updated estimate of the inventory of carbon tetrachloride (CTET) in the unconfined aquifer in the 200 West Area of the Hanford Site. The contaminant plumes of interest extend within the 200-ZP-1 and 200-UP-1 operable units. CH2M HILL Plateau Remediation Company (CHPRC) currently is preparing a plan identifying locations for groundwater extraction wells, injection wells, transfer stations, and one or more treatment facilities to address contaminants of concern identified in the 200-ZP-1 CERCLA Record of Decision. To accomplish this, a current understanding of the inventory of CTET is needed throughout the unconfined aquifer in the 200 West Area. Pacific Northwest National Laboratory (PNNL) previously developed an estimate of the CTET inventory in the area using a Monte Carlo approach based on geostatistical simulation of the three-dimensional (3D) distribution of CTET and chloroform in the aquifer. Fluor Hanford, Inc. (FH) (the previous site contractor) requested PNNL to update that inventory estimate using as input a set of geostatistical realizations of CTET and chloroform recently created for a related but separate project, referred to as the mapping project. The scope of work for the inventory revision complemented the scope of work for the mapping project, performed for FH by PNNL. This report briefly describes the spatial and univariate distribution of the CTET and chloroform data, along with the results of the geostatistical analysis and simulation performed for the mapping project.
Efficient Geostatistical Inversion under Transient Flow Conditions in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2014-05-01
a reasonable range. Transient inversion, however, requires time series of measurements and therefore typically leads to a large number of observations, and under these circumstances the existing methods become unfeasible. We present an extension of the existing inversion methods to transient flow regimes. Our approach uses a Conjugate Gradients scheme preconditioned with the prior covariance matrix Q_YY to avoid both multiplications with Q_YY^-1 and the explicit assembly of the sensitivity matrix H_z. Instead, one combined adjoint model run is used for all observations at once. As the computing time of our approach is largely independent of the number of measurements used for inversion, the presented method can be applied to large data sets. This facilitates the treatment of applications with variable boundary conditions (nearby rivers, precipitation). We integrate the geostatistical inversion method into the software framework DUNE, enabling the use of high-performance-computing techniques and full parallelization. Feasibility of our approach is demonstrated through the joint inversion of several synthetic data sets in two and three dimensions, e.g. estimation of hydraulic conductivity using hydraulic head values and tracer concentrations, and scalability of the new method is analyzed. A comparison of the new method with existing geostatistical inversion approaches highlights its advantages and drawbacks and demonstrates scenarios in which our scheme can be beneficial.
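The core trick above, Conjugate Gradients preconditioned with a prior covariance so that only products with the covariance are ever formed (never its inverse), can be sketched on a generic symmetric positive-definite system. The matrices below are random stand-ins, not the paper's flow operators.

```python
import numpy as np

def pcg(A, b, M, tol=1e-10, maxit=200):
    """Preconditioned Conjugate Gradients: M plays the role of a prior
    covariance; only matrix-vector products with A and M are needed."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M @ r                     # preconditioning step: one product with M
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
n = 30
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # hypothetical SPD system
Q = np.linalg.inv(A) + 0.01 * np.eye(n)     # stand-in "prior covariance" preconditioner
b = rng.standard_normal(n)
x = pcg(A, b, Q)
```

The per-iteration cost is independent of how many observations contributed to b, which is the property that makes the approach attractive for long measurement time series.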
Saito, Hirotaka; McKenna, Sean Andrew; Coburn, Timothy C.
2004-07-01
Geostatistical and non-geostatistical noise filtering methodologies, factorial kriging and a low-pass filter, and a region growing method are applied to analytic signal magnetometer images at two UXO contaminated sites to delineate UXO target areas. Overall delineation performance is improved by removing background noise. Factorial kriging slightly outperforms the low-pass filter but there is no distinct difference between them in terms of finding anomalies of interest.
Building on crossvalidation for increasing the quality of geostatistical modeling
Olea, R.A.
2012-01-01
The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have critical influence on the results. The importance of finding ways to compare the methods and setting parameters to obtain results that better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide sensitivity to the semivariogram lacking in crossvalidation of kriging errors and are more sensitive to conditional bias than analyses of errors. In terms of stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the applications of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of mean square errors between reasonable starting models and the solutions selected according to the new criteria. © 2011 US Government.
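The crossvalidation loop this abstract builds on can be sketched as leave-one-out re-estimation: each sample is removed, predicted from the remaining ones, and the errors are summarized. Here inverse-distance weighting is used as a stand-in for a kriging predictor, and the data and power parameters are hypothetical.

```python
import math

def idw_predict(samples, target, power):
    """Inverse-distance-weighted estimate (stand-in for a kriging predictor)."""
    num = den = 0.0
    for (x, y, z) in samples:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return z
        w = d ** (-power)
        num += w * z
        den += w
    return num / den

def loo_rmse(samples, power):
    """Leave-one-out crossvalidation: re-estimate each sample from the others."""
    errs = []
    for i, (x, y, z) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errs.append(idw_predict(rest, (x, y), power) - z)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Hypothetical (x, y, value) observations.
data = [(0, 0, 1.0), (1, 0, 1.2), (0, 1, 0.9), (1, 1, 1.1), (2, 2, 1.8)]
rmse_by_power = {p: loo_rmse(data, p) for p in (1.0, 2.0, 4.0)}
```

Comparing the RMSE across candidate parameter values is the classical use of crossvalidation; the paper's contribution is to go beyond this single error summary toward a function sensitive to conditional bias.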
NASA Astrophysics Data System (ADS)
Illman, Walter A.; Berg, Steven J.; Zhao, Zhanfeng
2015-05-01
The robust performance of hydraulic tomography (HT) based on geostatistics has been demonstrated through numerous synthetic, laboratory, and field studies. While geostatistical inverse methods offer many advantages, one key disadvantage is their highly parameterized nature, which renders them computationally intensive for large-scale problems. Another issue is that geostatistics-based HT may produce overly smooth images of subsurface heterogeneity when there are few monitoring interval data. Therefore, some may question the utility of the geostatistical inversion approach in certain situations and seek alternative approaches. To investigate these issues, we simultaneously calibrated different groundwater models with varying subsurface conceptualizations and parameter resolutions using a laboratory sandbox aquifer. The compared models included: (1) isotropic and anisotropic effective parameter models; (2) a heterogeneous model that faithfully represents the geological features; and (3) a heterogeneous model based on geostatistical inverse modeling. The performance of these models was assessed by quantitatively examining the results from model calibration and validation. Calibration data consisted of steady state drawdown data from eight pumping tests and validation data consisted of data from 16 separate pumping tests not used in the calibration effort. Results revealed that the geostatistical inversion approach performed the best among the approaches compared, although the geological model that faithfully represented stratigraphy came a close second. In addition, when the number of pumping tests available for inverse modeling was small, the geological modeling approach yielded more robust validation results. This suggests that better knowledge of stratigraphy obtained via geophysics or other means may contribute to improved results for HT.
Pop-Eleches, Cristian; Thirumurthy, Harsha; Habyarimana, James P.; Zivin, Joshua G.; Goldstein, Markus P.; de Walque, Damien; MacKeen, Leslie; Haberer, Jessica; Kimaiyo, Sylvester; Sidle, John; Ngare, Duncan; Bangsberg, David R.
2013-01-01
Objective There is limited evidence on whether growing mobile phone availability in sub-Saharan Africa can be used to promote high adherence to antiretroviral therapy (ART). This study tested the efficacy of short message service (SMS) reminders on adherence to ART among patients attending a rural clinic in Kenya. Design A randomized controlled trial of four SMS reminder interventions with 48 weeks of follow-up. Methods Four hundred and thirty-one adult patients who had initiated ART within 3 months were enrolled and randomly assigned to a control group or one of the four intervention groups. Participants in the intervention groups received SMS reminders that were either short or long and sent at a daily or weekly frequency. Adherence was measured using the medication event monitoring system. The primary outcome was whether adherence exceeded 90% during each 12-week period of analysis and the 48-week study period. The secondary outcome was whether there were treatment interruptions lasting at least 48 h. Results In intention-to-treat analysis, 53% of participants receiving weekly SMS reminders achieved adherence of at least 90% during the 48 weeks of the study, compared with 40% of participants in the control group (P=0.03). Participants in groups receiving weekly reminders were also significantly less likely to experience treatment interruptions exceeding 48 h during the 48-week follow-up period than participants in the control group (81 vs. 90%, P = 0.03). Conclusion These results suggest that SMS reminders may be an important tool to achieve optimal treatment response in resource-limited settings. PMID:21252632
Lambert, Isora; Montada, Domingo; Baly, Alberto; Van der Stuyft, Patrick
2015-01-01
Objective & Methodology The current study evaluated the effectiveness and cost-effectiveness of Insecticide Treated Curtain (ITC) deployment for reducing dengue vector infestation levels in the Cuban context with intensive routine control activities. A cluster randomized controlled trial took place in Guantanamo city, east Cuba. Twelve neighborhoods (about 500 households each) were selected among the ones with the highest Aedes infestation levels in the previous two years, and were randomly allocated to the intervention and control arms. Long lasting ITC (PermaNet) were distributed in the intervention clusters in March 2009. Routine control activities were continued in the whole study area. In both study arms, we monitored monthly pre- and post-intervention House Index (HI, number of houses with at least 1 container with Aedes immature stages/100 houses inspected), during 12 and 18 months respectively. We evaluated the effect of ITC deployment on HI by fitting a generalized linear regression model with a negative binomial link function to these data. Principal Findings At distribution, the ITC coverage (% of households using ≥1 ITC) reached 98.4%, with a median of 3 ITC distributed/household. After 18 months, the coverage remained 97.4%. The local Aedes species was susceptible to deltamethrin (mosquito mortality rate of 99.7%) and the residual deltamethrin activity in the ITC was within acceptable levels (mosquito mortality rate of 73.1%) after one year of curtain use. Over the 18 month observation period after ITC distribution, the adjusted HI rate ratio, intervention versus control clusters, was 1.15 (95% CI 0.57 to 2.34). The annualized cost per household of ITC implementation was 3.8 USD, against 16.8 USD for all routine ACP activities. Conclusion Deployment of ITC in a setting with already intensive routine Aedes control actions does not lead to reductions in Aedes infestation levels. PMID:25794192
Chen, DI-WEN
2001-11-21
Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information
NASA Astrophysics Data System (ADS)
Painter, S. L.; Jiang, Y.; Woodbury, A. D.
2002-12-01
The Edwards Aquifer, a highly heterogeneous karst aquifer located in south central Texas, is the sole source of drinking water for more than one million people. Hydraulic conductivity (K) measurements in the Edwards Aquifer are sparse, highly variable (log-K variance of 6.4), and are mostly from single-well drawdown tests that are appropriate for the spatial scale of a few meters. To support ongoing efforts to develop a groundwater management (MODFLOW) model of the San Antonio segment of the Edwards Aquifer, a multistep procedure was developed to assign hydraulic parameters to the 402 m x 402 m computational cells intended for the management model. The approach used a combination of nonparametric geostatistical analysis, stochastic simulation, numerical upscaling, and automatic model calibration based on Bayesian updating [1,2]. Indicator correlograms reveal a nested spatial structure in the well-test K of the confined zone, with practical correlation ranges of 3,600 and 15,000 meters and a large nugget effect. The fitted geostatistical model was used in unconditional stochastic simulations by the sequential indicator simulation method. The resulting realizations of K, defined at the scale of the well tests, were then numerically upscaled to the block scale. A new geostatistical model was fitted to the upscaled values. The upscaled model was then used to cokrige the block-scale K based on the well-test K. The resulting K map was then converted to transmissivity (T) using deterministically mapped aquifer thickness. When tested in a forward groundwater model, the upscaled T reproduced hydraulic heads better than a simple kriging of the well-test values (mean error of -3.9 meter and mean-absolute-error of 12 meters, as compared with -13 and 17 meters for the simple kriging). As the final step in the study, the upscaled T map was used as the prior distribution in an inverse procedure based on Bayesian updating [1,2]. When input to the forward groundwater model, the
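The upscaling step in the workflow above, aggregating well-test-scale K realizations to the 402 m computational blocks, might be sketched with a geometric-mean rule. The geometric mean is one common aggregation rule for conductivity, but it is an assumption here; the study performed numerical upscaling, and the values below are hypothetical.

```python
import math

def upscale_geometric(cell_values, block_size):
    """Aggregate fine-scale K values to block scale with a geometric mean
    (an assumed stand-in for the study's numerical upscaling)."""
    blocks = []
    for i in range(0, len(cell_values), block_size):
        chunk = cell_values[i:i + block_size]
        # Geometric mean = exp of the mean of the logs.
        blocks.append(math.exp(sum(math.log(v) for v in chunk) / len(chunk)))
    return blocks

# Hypothetical well-test-scale K values (m/s) along one row of cells.
k_cells = [1e-5, 3e-4, 8e-6, 5e-5, 2e-4, 7e-5, 1e-4, 9e-6]
k_blocks = upscale_geometric(k_cells, 4)
```

Each block value lies between the minimum and maximum of the cells it aggregates, which is why a new (smoother) geostatistical model must be fitted to the upscaled field before cokriging, as the abstract describes.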
Visser, A; Moran, J E; Hillegonds, Darren; Singleton, M J; Kulongoski, Justin T; Belitz, Kenneth; Esser, B K
2016-03-15
Key characteristics of California groundwater systems related to aquifer vulnerability, sustainability, recharge locations and mechanisms, and anthropogenic impact on recharge are revealed in a spatial geostatistical analysis of a unique data set of tritium, noble gases and other isotopic analyses unprecedented in size at nearly 4000 samples. The correlation length of key groundwater residence time parameters varies from tens of kilometers (³H; age) to the order of a hundred kilometers (⁴He_ter; ¹⁴C; ³He_trit). The correlation length of parameters related to climate, topography and atmospheric processes is on the order of several hundred kilometers (recharge temperature; δ¹⁸O). Young groundwater ages that highlight regional recharge areas are located in the eastern San Joaquin Valley, in the southern Santa Clara Valley Basin, in the upper LA basin and along unlined canals carrying Colorado River water, showing that much of the recent recharge in central and southern California is dominated by river recharge and managed aquifer recharge. Modern groundwater is found in wells with the top open intervals below 60 m depth in the southeastern San Joaquin Valley, Santa Clara Valley and Los Angeles basin, as the result of intensive pumping and/or managed aquifer recharge operations.
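The correlation lengths quoted above are typically read off an empirical semivariogram: semivariance grows with separation distance until it plateaus near the range. A minimal sketch, with synthetic data (a smooth regional trend plus noise standing in for a mapped parameter such as recharge temperature):

```python
import math
import random

def empirical_semivariogram(points, values, lags, tol):
    """Classical estimator: average 0.5*(z_i - z_j)^2 over all pairs whose
    separation distance falls within tol of each lag."""
    gamma = []
    for lag in lags:
        acc, cnt = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                if abs(math.dist(points[i], points[j]) - lag) <= tol:
                    acc += 0.5 * (values[i] - values[j]) ** 2
                    cnt += 1
        gamma.append(acc / cnt if cnt else float("nan"))
    return gamma

random.seed(0)
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(80)]
# Hypothetical smooth regional trend plus measurement noise.
vals = [x / 50.0 + 0.05 * random.gauss(0, 1) for x, _ in pts]
gam = empirical_semivariogram(pts, vals, lags=[10.0, 30.0, 60.0], tol=5.0)
```

For a spatially correlated field the semivariance at short lags is smaller than at long lags; the lag at which it stops growing is the correlation length reported in studies like this one.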
Welhan, J.A.; Reed, M.F.
1997-01-01
The regional spatial correlation structure of bulk horizontal hydraulic conductivity (Kb) estimated from published transmissivity data from 79 open boreholes in the fractured basalt aquifer of the eastern Snake River Plain was analyzed with geostatistical methods. The two-dimensional spatial correlation structure of ln Kb shows a pronounced 4:1 range anisotropy, with a maximum correlation range in the north-northwest-south-southeast direction of about 6 km. The maximum variogram range of ln Kb is similar to the mean length of flow groups exposed at the surface. The ln Kb range anisotropy is similar to the mean width/length ratio of late Quaternary and Holocene basalt lava flows and the orientations of the major volcanic structural features on the eastern Snake River Plain. The similarity between ln Kb correlation scales and basalt flow dimensions and between basalt flow orientations and correlation range anisotropy suggests that the spatial distribution of zones of high hydraulic conductivity may be controlled by the lateral dimensions, spatial distribution, and interconnection between highly permeable zones which are known to occur between lava flows within flow groups. If hydraulic conductivity and lithology are eventually shown to be cross correlative in this geologic setting, it may be possible to stochastically simulate hydraulic conductivity distributions, which are conditional on a knowledge of volcanic stratigraphy.
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-01-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotope (¹⁸O/¹⁶O) ratios, and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained. Copyright 2009 by the American Geophysical Union.
Caeiro, Sandra; Goovaerts, Pierre; Painho, Marco; Costa, M Helena
2003-09-15
The Sado Estuary is a coastal zone located in the south of Portugal where conflicts between conservation and development exist because of its location near industrialized urban zones and its designation as a natural reserve. The aim of this paper is to evaluate a set of multivariate geostatistical approaches to delineate spatially contiguous regions of sediment structure for Sado Estuary. These areas will be the supporting infrastructure of an environmental management system for this estuary. The boundaries of each homogeneous area were derived from three sediment characterization attributes through three different approaches: (1) cluster analysis of dissimilarity matrix function of geographical separation followed by indicator kriging of the cluster data, (2) discriminant analysis of kriged values of the three sediment attributes, and (3) a combination of methods 1 and 2. Final maximum likelihood classification was integrated into a geographical information system. All methods generated fairly spatially contiguous management areas that reproduce well the environment of the estuary. Map comparison techniques based on kappa statistics showed that the resultant three maps are similar, supporting the choice of any of the methods as appropriate for management of the Sado Estuary. However, the results of method 1 seem to be in better agreement with estuary behavior, assessment of contamination sources, and previous work conducted at this site. PMID:14524435
Wallace, C.S.A.; Marsh, S.E.
2005-01-01
Our study used geostatistics to extract measures that characterize the spatial structure of vegetated landscapes from satellite imagery for mapping endangered Sonoran pronghorn habitat. Fine spatial resolution IKONOS data provided information at the scale of individual trees or shrubs that permitted analysis of vegetation structure and pattern. We derived images of landscape structure by calculating local estimates of the nugget, sill, and range variogram parameters within 25 × 25-m image windows. These variogram parameters, which describe the spatial autocorrelation of the 1-m image pixels, are shown in previous studies to discriminate between different species-specific vegetation associations. We constructed two independent models of pronghorn landscape preference by coupling the derived measures with Sonoran pronghorn sighting data: a distribution-based model and a cluster-based model. The distribution-based model used the descriptive statistics for variogram measures at pronghorn sightings, whereas the cluster-based model used the distribution of pronghorn sightings within clusters of an unsupervised classification of derived images. Both models define similar landscapes, and validation results confirm they effectively predict the locations of an independent set of pronghorn sightings. Such information, although not a substitute for field-based knowledge of the landscape and associated ecological processes, can provide valuable reconnaissance information to guide natural resource management efforts. © 2005 Taylor & Francis Group Ltd.
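The moving-window computation described above can be sketched as follows. As a simplification, the lag-1 semivariance and the window variance serve as crude stand-ins for a fitted nugget and sill; the study fitted full variogram models, and the image here is random test data.

```python
import numpy as np

def local_variogram_stats(img, win=5):
    """Per-window lag-1 semivariance and variance: rough proxies for the
    nugget and sill of a locally fitted variogram (illustration only)."""
    rows, cols = img.shape
    out = np.full((rows - win + 1, cols - win + 1, 2), np.nan)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            w = img[r:r + win, c:c + win]
            dx = w[:, 1:] - w[:, :-1]        # horizontal lag-1 differences
            dy = w[1:, :] - w[:-1, :]        # vertical lag-1 differences
            gamma1 = 0.5 * np.mean(np.concatenate([dx.ravel(), dy.ravel()]) ** 2)
            out[r, c] = (gamma1, w.var())
    return out

rng = np.random.default_rng(1)
image = rng.random((20, 20))                 # hypothetical 1-m-pixel image
stats = local_variogram_stats(image, win=5)
```

Stacking such per-window statistics produces the derived "images of landscape structure" that the habitat models are then built on.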
Geostatistical investigations for suitable mapping of the water table: the Bordeaux case (France)
NASA Astrophysics Data System (ADS)
Guekie simo, Aubin Thibaut; Marache, Antoine; Lastennet, Roland; Breysse, Denys
2016-02-01
Methodologies have been developed to establish realistic water-table maps using geostatistical methods: ordinary kriging (OK), cokriging (CoK), collocated cokriging (CoCoK), and kriging with external drift (KED). In fact, in a hilly terrain, when piezometric data are sparsely distributed over large areas, the water-table maps obtained by these methods provide exact water levels at monitoring wells but fail to represent the groundwater flow system, manifested through an interpolated water table above the topography. A methodology is developed in order to rebuild water-table maps for urban areas at the city scale. The interpolation methodology is presented and applied in a case study where water levels are monitored at a set of 47 points for a partly urban domain covering 25.6 km² close to Bordeaux city, France. To select the best method, a geographic information system was used to visualize surfaces reconstructed with each method. A cross-validation was carried out to evaluate the predictive performances of each kriging method. KED proves to be the most accurate and yields a better description of the local fluctuations induced by the topography (natural occurrence of ridges and valleys).
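The idea behind KED, letting topography act as a drift so the interpolated water table follows ridges and valleys, can be sketched with its close relative, regression kriging: fit a linear drift of head on elevation, then interpolate the residuals. Inverse-distance weighting stands in for kriging of the residuals, and all well data below are hypothetical.

```python
import numpy as np

# Hypothetical monitoring wells: (x, y, ground elevation, water level).
wells = np.array([
    [0, 0, 50.0, 48.0], [10, 0, 60.0, 57.5], [0, 10, 55.0, 53.0],
    [10, 10, 70.0, 67.0], [5, 5, 58.0, 56.0],
])

def drift_then_residual_idw(wells, target_xy, target_elev, power=2.0):
    """Regression-kriging flavour of KED: linear drift on topography,
    then inverse-distance interpolation of the residuals (sketch only)."""
    elev, head = wells[:, 2], wells[:, 3]
    a, b = np.polyfit(elev, head, 1)           # drift: head ~ a*elev + b
    resid = head - (a * elev + b)
    d = np.hypot(wells[:, 0] - target_xy[0], wells[:, 1] - target_xy[1])
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return a * target_elev + b + float(w @ resid / w.sum())

h = drift_then_residual_idw(wells, (3.0, 4.0), 56.0)
```

Because the drift term ties the estimate to local ground elevation, the reconstructed surface cannot drift arbitrarily far above the topography the way a purely stationary interpolator can.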
2013-01-01
Background Audit and feedback to physicians is a commonly used quality improvement strategy, but its optimal design is unknown. This trial tested the effects of a theory-informed worksheet to facilitate goal setting and action planning, appended to feedback reports on chronic disease management, compared to feedback reports provided without these worksheets. Methods A two-arm pragmatic cluster randomized trial was conducted, with allocation at the level of primary care clinics. Participants were family physicians who contributed data from their electronic medical records. The ‘usual feedback’ arm received feedback every six months for two years regarding the proportion of their patients meeting quality targets for diabetes and/or ischemic heart disease. The intervention arm received these same reports plus a worksheet designed to facilitate goal setting and action plan development in response to the feedback reports. Blood pressure (BP) and low-density lipoprotein cholesterol (LDL) values were compared after two years as the primary outcomes. Process outcomes measured the proportion of guideline-recommended actions (e.g., testing and prescribing) conducted within the appropriate timeframe. Intention-to-treat analysis was performed. Results Outcomes were similar across groups at baseline. Final analysis included 20 physicians from seven clinics and 1,832 patients in the intervention arm (15% loss to follow up) and 29 physicians from seven clinics and 2,223 patients in the usual feedback arm (10% loss to follow up). Ten of 20 physicians completed the worksheet at least once during the study. Mean BP was 128/72 in the feedback plus worksheet arm and 128/73 in the feedback alone arm, while LDL was 2.1 and 2.0, respectively. Thus, no significant differences were observed across groups in the primary outcomes, but mean haemoglobin A1c was lower in the feedback plus worksheet arm (7.2% versus 7.4%, p<0.001). Improvements in both arms were noted over time for one
Kriging in the Shadows: Geostatistical Interpolation for Remote Sensing
NASA Technical Reports Server (NTRS)
Rossi, Richard E.; Dungan, Jennifer L.; Beck, Louisa R.
1994-01-01
It is often useful to estimate obscured or missing remotely sensed data. Traditional interpolation methods, such as nearest-neighbor or bilinear resampling, do not take full advantage of the spatial information in the image. An alternative method, a geostatistical technique known as indicator kriging, is described and demonstrated using a Landsat Thematic Mapper image in southern Chiapas, Mexico. The image was first classified into pasture and nonpasture land cover. For each pixel that was obscured by cloud or cloud shadow, the probability that it was pasture was assigned by the algorithm. An exponential omnidirectional variogram model was used to characterize the spatial continuity of the image for use in the kriging algorithm. Assuming a cutoff probability level of 50%, the error was shown to be 17% with no obvious spatial bias but with some tendency to categorize nonpasture as pasture (overestimation). While this is a promising result, the method's practical application in other missing data problems for remotely sensed images will depend on the amount and spatial pattern of the unobscured pixels and missing pixels and the success of the spatial continuity model used.
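The final classification step described above, labeling each obscured pixel by thresholding its kriged probability and scoring the result against known cover, can be sketched as follows; the pixel probabilities and ground truth are hypothetical.

```python
def classify(prob_map, cutoff=0.5):
    """Label each obscured pixel pasture if its kriged probability
    meets the cutoff (50% in the study)."""
    return {pix: (p >= cutoff) for pix, p in prob_map.items()}

def error_rate(pred, truth):
    """Fraction of pixels whose predicted class disagrees with ground truth."""
    wrong = sum(pred[k] != truth[k] for k in truth)
    return wrong / len(truth)

# Hypothetical kriged pasture probabilities for cloud-obscured pixels.
probs = {(0, 0): 0.9, (0, 1): 0.4, (1, 0): 0.55, (1, 1): 0.2}
truth = {(0, 0): True, (0, 1): False, (1, 0): False, (1, 1): False}
pred = classify(probs)
err = error_rate(pred, truth)
```

Here the one misclassified pixel, (1, 0), is a nonpasture pixel labeled pasture, the same direction of bias (overestimation of pasture) reported in the abstract.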
An interactive Bayesian geostatistical inverse protocol for hydraulic tomography
NASA Astrophysics Data System (ADS)
Fienen, Michael N.; Clemo, Tom; Kitanidis, Peter K.
2008-12-01
Hydraulic tomography is a powerful technique for characterizing heterogeneous hydrogeologic parameters. An explicit trade-off between characterization based on measurement misfit and subjective characterization using prior information is presented. We apply a Bayesian geostatistical inverse approach that is well suited to accommodate a flexible model with the level of complexity driven by the data and explicitly considering uncertainty. Prior information is incorporated through the selection of a parameter covariance model characterizing continuity and providing stability. Often, discontinuities in the parameter field, typically caused by geologic contacts between contrasting lithologic units, necessitate subdivision into zones across which there is no correlation among hydraulic parameters. We propose an interactive protocol in which zonation candidates are implied from the data and are evaluated using cross validation and expert knowledge. Uncertainty introduced by limited knowledge of dynamic regional conditions is mitigated by using drawdown rather than native head values. An adjoint state formulation of MODFLOW-2000 is used to calculate sensitivities which are used both for the solution to the inverse problem and to guide protocol decisions. The protocol is tested using synthetic two-dimensional steady state examples in which the wells are located at the edge of the region of interest.
Bayesian Geostatistical Modeling of Leishmaniasis Incidence in Brazil
Karagiannis-Voules, Dimitrios-Alexios; Scholte, Ronaldo G. C.; Guimarães, Luiz H.; Utzinger, Jürg; Vounatsou, Penelope
2013-01-01
Background Leishmaniasis is endemic in 98 countries with an estimated 350 million people at risk and approximately 2 million cases annually. Brazil is one of the most severely affected countries. Methodology We applied Bayesian geostatistical negative binomial models to analyze reported incidence data of cutaneous and visceral leishmaniasis in Brazil covering a 10-year period (2001–2010). Particular emphasis was placed on spatial and temporal patterns. The models were fitted using integrated nested Laplace approximations to perform fast approximate Bayesian inference. Bayesian variable selection was employed to determine the most important climatic, environmental, and socioeconomic predictors of cutaneous and visceral leishmaniasis. Principal Findings For both types of leishmaniasis, precipitation and socioeconomic proxies were identified as important risk factors. The predicted number of cases in 2010 was 30,189 (standard deviation [SD]: 7,676) for cutaneous leishmaniasis and 4,889 (SD: 288) for visceral leishmaniasis. Our risk maps predicted the highest numbers of infected people in the states of Minas Gerais and Pará for visceral and cutaneous leishmaniasis, respectively. Conclusions/Significance Our spatially explicit, high-resolution incidence maps identified priority areas where leishmaniasis control efforts should be targeted with the ultimate goal to reduce disease incidence. PMID:23675545
Soil Organic Carbon Mapping by Geostatistics in Europe Scale
NASA Astrophysics Data System (ADS)
Aksoy, E.; Panagos, P.; Montanarella, L.
2013-12-01
Accuracy in assessing the distribution of soil organic carbon (SOC) is an important issue because SOC is a key soil component that plays central roles in the functions of both natural ecosystems and agricultural systems. The SOC content varies from place to place and is strongly related to climate variables (temperature and rainfall), terrain features, soil texture, parent material, vegetation, land-use types, and human management (management and degradation) at different spatial scales. Geostatistical techniques allow for the prediction of soil properties using soil information and environmental covariates. In this study, the SOC distribution was predicted with the regression-kriging method at the European scale. The prediction used a combination of soil samples collected in the LUCAS (European Land Use/Cover Area frame statistical Survey) and BioSoil projects, local soil data collected from six different CZOs in Europe, and ten spatial predictors (slope, aspect, elevation, CTI, CORINE land-cover classification, parent material, texture, WRB soil classification, and annual average temperature and precipitation). Significant correlation between the covariates and the organic carbon dependent variable was found. Moreover, investigating the contribution of the local, watershed-scale dataset to the regional, European-scale dataset was an important challenge.
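The regression-kriging idea (a trend fitted on covariates, plus spatial interpolation of the residuals) can be sketched as below. All data here are synthetic, and a simple inverse-distance interpolation stands in for the kriging of the residuals to keep the example short:

```python
import numpy as np

# Hypothetical sample data: covariates X (e.g. elevation, temperature,
# precipitation) and measured SOC at sampled locations.
rs = np.random.default_rng(0)
coords = rs.uniform(0, 10, size=(50, 2))
X = rs.normal(size=(50, 3))
soc = 2.0 + X @ np.array([0.5, -0.3, 0.8]) + rs.normal(scale=0.2, size=50)

# Step 1: regression trend (ordinary least squares on the covariates).
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, soc, rcond=None)
residuals = soc - A @ beta

# Step 2: interpolate the residuals spatially; inverse-distance weighting
# is used here as a stand-in for the kriging step.
def idw(coords, vals, target, power=2.0):
    d = np.linalg.norm(coords - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(w @ vals / w.sum())

# Regression-kriging prediction = trend + interpolated residual.
x_new = np.array([0.1, 0.2, -0.3])      # covariates at the new site
target = np.array([5.0, 5.0])
pred = np.array([1.0, *x_new]) @ beta + idw(coords, residuals, target)
```

In a full implementation the residuals would be kriged with a fitted variogram, so the prediction also carries a kriging variance.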
Eastern gas shales geostatistical lineament analysis. Final report
Pratt, S.; Robey, E.; Wojewodka, R.
1986-12-01
A geostatistical analysis was conducted to determine if proximity to lineaments affects the production of Devonian shale gas wells. Production data, including open flow and 10-year cumulative production, were gathered for 847 gas wells in three areas of different tectonic stress: southwestern West Virginia, eastern Meigs County, Ohio, and Floyd County, Kentucky. Three experienced mappers were contracted to independently identify lineaments in each study area for the purpose of comparing the mappers and their techniques. Data pertaining to the proximity and density of lineaments near each well were collected from maps provided by each contractor. Gas production was found not to be consistently associated with either proximity to lineaments or lineament intersections. The lack of correlation may be attributed to the fact that the corresponding maps provided by each contractor were vastly different. These differences may account for the inconsistent findings of past lineament studies. The lack of observable relationships between lineaments and production, coupled with the degree of disparity in the lineaments seen by three experienced mappers, indicated many viable avenues for future research. The analysis clearly shows a need for more accurate and systematic methods of mapping lineaments before they become useful for gas exploration. 26 refs., 11 figs., 5 tabs.
Increasing confidence in mass discharge estimates using geostatistical methods.
Cai, Zuansi; Wilson, Ryan D; Cardiff, Michael A; Kitanidis, Peter K
2011-01-01
Mass discharge is one metric rapidly gaining acceptance for assessing the performance of in situ groundwater remediation systems. Multilevel sampling transects provide the data necessary to make such estimates, often using the Thiessen Polygon method. This method, however, does not provide a direct estimate of uncertainty. We introduce a geostatistical mass discharge estimation approach that involves a rigorous analysis of data spatial variability and selection of an appropriate variogram model. High-resolution interpolation was applied to create a map of measurements across a transect, and the magnitude and uncertainty of mass discharge were quantified by conditional simulation. An important benefit of the approach is quantified uncertainty of the mass discharge estimate. We tested the approach on data from two sites monitored using multilevel transects. We also used the approach to explore the effect of lower spatial monitoring resolution on the accuracy and uncertainty of mass discharge estimates. This process revealed two important findings: (1) appropriate monitoring resolution is that which yielded an estimate comparable with the full dataset value, and (2) high-resolution sampling yields a more representative spatial data structure descriptor, which can then be used via conditional simulation to make subsequent mass discharge estimates from lower resolution sampling of the same transect. The implication of the latter is that a high-resolution multilevel transect needs to be sampled only once to obtain the necessary spatial data descriptor for a contaminant plume exhibiting minor temporal variability, and thereafter less spatially intensely to reduce costs.
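Once an ensemble of conditional realizations of concentration on the transect exists, the mass discharge estimate and its uncertainty follow directly. A sketch with an invented ensemble, flux and cell geometry (stand-ins, not the study's data):

```python
import numpy as np

# Hypothetical ensemble: 200 conditional realizations of concentration
# (g/m^3) on a 20 x 10 transect grid of 0.5 m x 0.5 m cells.
rs = np.random.default_rng(1)
n_real, ny, nz = 200, 20, 10
conc = np.abs(rs.normal(loc=5.0, scale=2.0, size=(n_real, ny, nz)))

q = 0.05                # Darcy flux normal to the transect (m/d), uniform here
cell_area = 0.5 * 0.5   # m^2 per cell

# Mass discharge per realization: sum of C * q * dA over the transect.
md = (conc * q * cell_area).sum(axis=(1, 2))   # g/d

md_mean, md_std = md.mean(), md.std()
# The spread of md across realizations quantifies the estimate's uncertainty.
```

In practice the realizations come from conditional simulation with the fitted variogram, and the flux field may itself be uncertain and simulated jointly.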
Spatiotemporal analysis of olive flowering using geostatistical techniques.
Rojo, Jesús; Pérez-Badia, Rosa
2015-02-01
Analysis of flowering patterns in the olive (Olea europaea L.) is of considerable agricultural and ecological interest, and also provides valuable information for allergy sufferers, enabling identification of the major sources of airborne pollen at any given moment by interpreting the aerobiological data recorded in pollen traps. The present spatiotemporal analysis of olive flowering in central Spain combined geostatistical techniques with the application of a Geographic Information System, and compared results for flowering intensity with airborne pollen records. The results were used to obtain continuous phenological maps that describe the pattern of succession of olive flowering. The results also show that, although the highest airborne olive-pollen counts were recorded during the greatest flowering intensity of the groves closest to the pollen trap, the counts recorded at the start of the pollen season were not linked to local olive groves, which had not yet begun to flower. To detect the remote sources of olive pollen, several episodes of pollen recorded before the local flowering season were analysed using the HYSPLIT trajectory model, and the findings showed that western, southern and southwestern winds transported pollen grains into the study area from earlier-flowering groves located outside the territory.
Statistical Downscaling Based on Spartan Spatial Random Fields
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2010-05-01
Stochastic methods of space-time interpolation and conditional simulation have been used in statistical downscaling approaches to increase the resolution of measured fields. One of the popular interpolation methods in geostatistics is kriging, also known as optimal interpolation in data assimilation. Kriging is a stochastic, linear interpolator which incorporates time/space variability by means of the variogram function. However, estimation of the variogram from data involves various assumptions and simplifications. At the same time, the high numerical complexity of kriging makes it difficult to use for very large data sets. We present a different approach based on the so-called Spartan Spatial Random Fields (SSRFs). SSRFs were motivated by classical field theories of statistical physics [1]. The SSRFs provide a different way of parametrizing spatial dependence based on 'effective interactions,' which can be formulated based on general statistical principles or can even incorporate physical constraints. This framework leads to a broad family of covariance functions [2], and it provides new perspectives in covariance parameter estimation and interpolation [3]. A significant advantage offered by SSRFs is reduced numerical complexity, which can lead to much faster codes for spatial interpolation and conditional simulation. In addition, on grids composed of rectangular cells, the SSRF representation leads to an explicit expression for the precision matrix (the inverse covariance). Therefore SSRFs could provide useful models of error covariance for data assimilation methods. We use simulated and real data to demonstrate SSRF properties and downscaled fields. keywords: interpolation, conditional simulation, precision matrix References [1] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Journal on Scientific Computing, 24, 2125-2162. [2] Hristopulos, D.T., Elogne, S. N. 2007. Analytic properties and covariance
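The computational benefit of an explicit, sparse precision matrix on a grid can be illustrated with a generic lattice precision operator (a simple gradient-penalty construction, not the exact SSRF operator of the cited papers) and a single sparse solve for a smoothed field:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Sparse lattice precision Q = kappa^2 * I + G'G, where G is the discrete
# gradient; this is one simple GMRF-style precision for illustration.
nx = ny = 30
n = nx * ny
kappa = 0.5
Dx = sp.kron(sp.eye(ny), sp.diags([-1, 1], [0, 1], shape=(nx - 1, nx)))
Dy = sp.kron(sp.diags([-1, 1], [0, 1], shape=(ny - 1, ny)), sp.eye(nx))
Q = (kappa ** 2) * sp.eye(n) + Dx.T @ Dx + Dy.T @ Dy

# Interpolate scattered observations y = x[idx] + noise by solving the
# sparse posterior-mean system (H'H / s2 + Q) x = H'y / s2.
rs = np.random.default_rng(2)
idx = rs.choice(n, size=50, replace=False)
y = rs.normal(size=50)
s2 = 0.1
H = sp.csr_matrix((np.ones(50), (np.arange(50), idx)), shape=(50, n))
x = spsolve((H.T @ H / s2 + Q).tocsc(), H.T @ y / s2)
# x is the smoothed 30 x 30 field; every matrix in the solve is sparse.
```

The point of the sketch is that the interpolation never forms or inverts a dense 900 x 900 covariance matrix, which is where the reduced numerical complexity comes from.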
Cronkite-Ratcliff, C.; Phelps, G.A.; Boucher, A.
2012-01-01
This report provides a proof-of-concept to demonstrate the potential application of multiple-point geostatistics for characterizing geologic heterogeneity and its effect on flow and transport simulation. The study presented in this report is the result of collaboration between the U.S. Geological Survey (USGS) and Stanford University. This collaboration focused on improving the characterization of alluvial deposits by incorporating prior knowledge of geologic structure and estimating the uncertainty of the modeled geologic units. In this study, geologic heterogeneity of alluvial units is characterized as a set of stochastic realizations, and uncertainty is indicated by variability in the results of flow and transport simulations for this set of realizations. This approach is tested on a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. Yucca Flat was chosen as a data source for this test case because it includes both complex geologic and hydrologic characteristics and also contains a substantial amount of both surface and subsurface geologic data. Multiple-point geostatistics is used to model geologic heterogeneity in the subsurface. A three-dimensional (3D) model of spatial variability is developed by integrating alluvial units mapped at the surface with vertical drill-hole data. The SNESIM (Single Normal Equation Simulation) algorithm is used to represent geologic heterogeneity stochastically by generating 20 realizations, each of which represents an equally probable geologic scenario. A 3D numerical model is used to simulate groundwater flow and contaminant transport for each realization, producing a distribution of flow and transport responses to the geologic heterogeneity. From this distribution of flow and transport responses, the frequency of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary.
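The exceedance-frequency indicator of plume-boundary uncertainty is straightforward to compute once the ensemble of flow-and-transport results exists. A sketch with a synthetic ensemble standing in for the 20 SNESIM-driven simulations:

```python
import numpy as np

# Hypothetical stand-in for the ensemble of transport runs: each
# realization gives a simulated concentration field on the same grid.
rs = np.random.default_rng(3)
n_real, nx, ny = 20, 50, 50
conc = rs.lognormal(mean=-1.0, sigma=1.0, size=(n_real, nx, ny))

threshold = 1.0
# Per-cell frequency of exceeding the threshold across the ensemble.
# Values near 0 or 1 indicate consensus among realizations; values near
# 0.5 indicate maximum uncertainty about the plume boundary at that cell.
p_exceed = (conc > threshold).mean(axis=0)
```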
Fuller, Clifton D.; Nijkamp, Jasper; Duppen, Joop C.; Rasch, Coen R.N.; Thomas, Charles R.; Wang, Samuel J.; Okunieff, Paul; Jones, William E.; Baseman, Daniel; Patel, Shilpen; Demandante, Carlo G.N.; Harris, Anna M.; Smith, Benjamin D.; Katz, Alan W.; McGann, Camille
2011-02-01
Purpose: Variations in target volume delineation represent a significant hurdle in clinical trials involving conformal radiotherapy. We sought to determine the effect of a consensus guideline-based visual atlas on contouring the target volumes. Methods and Materials: A representative case was contoured (Scan 1) by 14 physician observers and a reference expert with and without target volume delineation instructions derived from a proposed rectal cancer clinical trial involving conformal radiotherapy. The gross tumor volume (GTV), and two clinical target volumes (CTVA, including the internal iliac, presacral, and perirectal nodes, and CTVB, which included the external iliac nodes) were contoured. The observers were randomly assigned to receipt (Group A) or nonreceipt (Group B) of a consensus guideline and atlas for anorectal cancers and then instructed to recontour the same case/images (Scan 2). Observer variation was analyzed volumetrically using the conformation number (CN, where CN = 1 equals total agreement). Results: Of 14 evaluable contour sets (1 expert and 7 Group A and 6 Group B observers), greater agreement was found for the GTV (mean CN, 0.75) than for the CTVs (mean CN, 0.46-0.65). Atlas exposure for Group A led to significantly increased interobserver agreement for CTVA (mean initial CN, 0.68, after atlas use, 0.76; p = .03) and increased agreement with the expert reference (initial mean CN, 0.58; after atlas use, 0.69; p = .02). For the GTV and CTVB, neither the interobserver nor the expert agreement was altered after atlas exposure. Conclusion: Consensus guideline atlas implementation resulted in a detectable difference in interobserver agreement and a greater approximation of expert volumes for the CTVA but not for the GTV or CTVB in the specified case. Visual atlas inclusion should be considered as a feature in future clinical trials incorporating conformal RT.
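The conformation number for a pair of binary volumes can be computed as below, using one common pairwise definition (after van't Riet et al.); the study's multi-observer CN generalization may differ, and the grids here are hypothetical:

```python
import numpy as np

def conformation_number(a, b):
    """Pairwise conformation number for two binary volumes:
    CN = |A intersect B|^2 / (|A| * |B|).
    CN = 1 only when the two volumes coincide exactly."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return inter ** 2 / (a.sum() * b.sum())

# Two hypothetical contours of the same target on a voxel grid.
obs1 = np.zeros((10, 10), bool); obs1[2:7, 2:7] = True   # 25 voxels
obs2 = np.zeros((10, 10), bool); obs2[3:8, 3:8] = True   # 25 voxels
cn = conformation_number(obs1, obs2)   # 16**2 / (25 * 25) = 0.4096
```

Partial overlap is penalized from both directions: the metric drops whether an observer's contour misses part of the reference or spills outside it.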
2011-01-01
Introduction The analysis of flow and pressure waveforms generated by ventilators can be useful in the optimization of patient-ventilator interactions, notably in chronic obstructive pulmonary disease (COPD) patients. To date, however, a real clinical benefit of this approach has not been proven. Methods The aim of the present randomized, multi-centric, controlled study was to compare optimized ventilation, driven by the analysis of flow and pressure waveforms, to standard ventilation (same physician, same initial ventilator setting, same time spent at the bedside while the ventilator screen was obscured with numerical data always available). The primary aim was the rate of pH normalization at two hours, while secondary aims were changes in PaCO2, respiratory rate and the patient's tolerance to ventilation (all parameters evaluated at baseline, 30, 120, 360 minutes and 24 hours after the beginning of ventilation). Seventy patients (35 for each group) with acute exacerbation of COPD were enrolled. Results Optimized ventilation led to a more rapid normalization of pH at two hours (51 vs. 26% of patients), to a significant improvement of the patient's tolerance to ventilation at two hours, and to a higher decrease of PaCO2 at two and six hours. Optimized ventilation induced physicians to use higher levels of external positive end-expiratory pressure, more sensitive inspiratory triggers and a faster speed of pressurization. Conclusions The analysis of the waveforms generated by ventilators has a significant positive effect on physiological and patient-centered outcomes during acute exacerbation of COPD. The acquisition of specific skills in this field should be encouraged. Trial registration ClinicalTrials.gov NCT01291303. PMID:22115190
Krebs, Nancy F; Mazariegos, Manolo; Chomba, Elwyn; Sami, Neelofar; Pasha, Omrana; Tshefu, Antoinette; Carlo, Waldemar A; Goldenberg, Robert L; Bose, Carl L; Wright, Linda L; Koso-Thomas, Marion; Goco, Norman; Kindem, Mark; McClure, Elizabeth M; Westcott, Jamie; Garces, Ana; Lokangaka, Adrien; Manasyan, Albert; Imenda, Edna; Hartwell, Tyler D; Hambidge, K Michael
2012-01-01
Background: Improved complementary feeding is cited as a critical factor for reducing stunting. Consumption of meats has been advocated, but its efficacy in low-resource settings has not been tested. Objective: The objective was to test the hypothesis that daily intake of 30 to 45 g meat from 6 to 18 mo of age would result in greater linear growth velocity and improved micronutrient status in comparison with an equicaloric multimicronutrient-fortified cereal. Design: This was a cluster randomized efficacy trial conducted in the Democratic Republic of Congo, Zambia, Guatemala, and Pakistan. Individual daily portions of study foods and education messages to enhance complementary feeding were delivered to participants. Blood tests were obtained at trial completion. Results: A total of 532 (86.1%) and 530 (85.8%) participants from the meat and cereal arms, respectively, completed the study. Linear growth velocity did not differ between treatment groups: 1.00 (95% CI: 0.99, 1.02) and 1.02 (95% CI: 1.00, 1.04) cm/mo for the meat and cereal groups, respectively (P = 0.39). From baseline to 18 mo, stunting [length-for-age z score (LAZ) <−2.0] rates increased from ∼33% to nearly 50%. Years of maternal education and maternal height were positively associated with linear growth velocity (P = 0.0006 and 0.003, respectively); LAZ at 6 mo was negatively associated (P < 0.0001). Anemia rates did not differ by group; iron deficiency was significantly lower in the cereal group. Conclusion: The high rate of stunting at baseline and the lack of effect of either the meat or multiple micronutrient-fortified cereal intervention to reverse its progression argue for multifaceted interventions beginning in the pre- and early postnatal periods. This trial was registered at clinicaltrials.gov as NCT01084109. PMID:22952176
Cavallari, Ilaria; Mega, Simona; Goffredo, Costanza; Patti, Giuseppe; Chello, Massimo; Di Sciascio, Germano
2015-06-01
Transthoracic echocardiography is not a routine test in the pre-operative cardiac evaluation of patients undergoing non-cardiac surgery but may be considered in those with known heart failure and valvular heart disease or complaining of cardiac symptoms. In this setting, hand-held echocardiography (HHE) could find a potential application as an alternative to standard echocardiography in selected patients; however, its utility in this context has not been investigated. The aim of this pilot study was to evaluate the conclusiveness of HHE compared to standard echocardiography in this subset of patients. 100 patients scheduled for non-cardiac surgery were randomized to receive a standard exam with a Philips Ie33 or a bedside evaluation with a pocket-size imaging device (Opti-Go, Philips Medical System). The primary endpoint was the percentage of satisfactory diagnoses at the end of the examination, referred to as conclusiveness. Secondary endpoints were the mean duration time and the mean waiting time to perform the exams. No significant difference in terms of conclusiveness between HHE and standard echo was found (86 vs 96%; P = 0.08). Mean duration time of the examinations was 6.1 ± 1.2 min with HHE and 13.1 ± 2.6 min with standard echocardiography (P < 0.001). HHE resulted in a considerable saving of waiting time because it was performed on the same day as the clinical evaluation, whereas patients waited 10.1 ± 6.1 days for a standard echocardiography (P < 0.001). This study suggests a potential role of HHE for the pre-operative evaluation of selected patients undergoing non-cardiac surgery, since it provided similar information but was faster and performed earlier than standard echocardiography.
2013-01-01
Background Forensic mental health services have largely ignored examining patients’ views on the nature of the services offered to them. A structured communication approach (DIALOG) has been developed with the aim of placing the patient’s perspective on their care at the heart of the discussions between patients and clinicians. The effectiveness of the structured communication approach in community mental health services has been demonstrated, but no trial has taken place in a secure psychiatric setting. This pilot study is evaluating a 6-month intervention combining DIALOG with principles of solution-focused therapy on quality of life in medium-secure settings. Methods and design A cluster randomized controlled trial design is being employed to conduct a 36-month pilot study. Participants are recruited from six medium-secure inpatient services, with 48 patients in the intervention group and 48 in the control group. The intervention uses a structured communication approach. It comprises six meetings between patient and nurse held monthly over a 6-month period. During each meeting, patients rate their satisfaction with a range of life and treatment domains with responses displayed on a tablet. The rating is followed by a discussion of how to improve the current situation in those domains identified by the patient. Assessments take place prior to the intervention (baseline), at 6 months (postintervention) and at 12 months (follow-up). The primary outcome is the patient’s self-reported quality of life. Discussion This study aims to (1) establish the feasibility of the trial design as the basis for determining the viability of a large full-scale trial, (2) determine the variability of the outcomes of interest (quality of life, levels of satisfaction, disturbance, ward climate and engagement with services), (3) estimate the costs of the intervention and (4) refine the intervention following the outcome of the study based upon the experiences of the nurses and
Massively Parallel Geostatistical Inversion of Coupled Processes in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Ngo, A.; Schwede, R. L.; Li, W.; Bastian, P.; Ippisch, O.; Cirpka, O. A.
2012-04-01
The quasi-linear geostatistical approach is an inversion scheme that can be used to estimate the spatial distribution of a heterogeneous hydraulic conductivity field. The estimated parameter field is considered to be a random variable that varies continuously in space, meets the measurements of dependent quantities (such as the hydraulic head, the concentration of a transported solute or its arrival time) and shows the required spatial correlation (described by certain variogram models). This is a method of conditioning a parameter field to observations. Upon discretization, this results in as many parameters as elements of the computational grid. For a full three-dimensional representation of the heterogeneous subsurface, the resolutions of the model domain that can be achieved on a serial computer (up to one million parameters) are hardly sufficient. The forward problems to be solved within the inversion procedure consist of the elliptic steady-state groundwater flow equation and the formally elliptic but nearly hyperbolic steady-state advection-dominated solute transport equation in a heterogeneous porous medium. Both equations are discretized by Finite Element Methods (FEM) using fully scalable domain decomposition techniques. Whereas standard conforming FEM is sufficient for the flow equation, for the advection-dominated transport equation, which gives rise to well-known numerical difficulties at sharp fronts or boundary layers, we use the streamline-diffusion approach. The arising linear systems are solved using efficient iterative solvers with an AMG (algebraic multigrid) preconditioner. During each iteration step of the inversion scheme, one needs to solve a multitude of forward and adjoint problems in order to calculate the sensitivities of each measurement and the related cross-covariance matrix of the unknown parameters and the observations. In order to reduce interprocess communications and to improve the scalability of the code on larger clusters
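A single linearized update of the quasi-linear geostatistical approach can be sketched for a toy problem. Here a random matrix stands in for the adjoint-derived sensitivity matrix H, the prior mean is taken as zero, and the sizes are arbitrary; in the real scheme each row of H requires a forward/adjoint PDE solve:

```python
import numpy as np

rs = np.random.default_rng(6)
n, m = 200, 15                      # unknowns, measurements (toy sizes)
xgrid = np.linspace(0, 1, n)
# Prior covariance Q: exponential model on a 1-D grid of unknowns.
Q = np.exp(-np.abs(xgrid[:, None] - xgrid[None, :]) / 0.2)
H = rs.normal(size=(m, n)) / np.sqrt(n)   # stand-in sensitivity matrix
R = 0.01 * np.eye(m)                      # measurement-error covariance
y = rs.normal(size=m)                     # observations (synthetic)

# Cokriging-type update with zero prior mean:
#   s_hat = Q H' (H Q H' + R)^{-1} y
s_hat = Q @ H.T @ np.linalg.solve(H @ Q @ H.T + R, y)
```

Note that the solve involves only an m x m system (m = number of measurements), which is why the cross-covariance formulation scales to very large n; the expensive part in practice is assembling H.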
Rhodes, Elena M; Liburd, Oscar E; Grunwald, Sabine
2011-08-01
Flower thrips (Frankliniella spp.) are one of the key pests of southern highbush blueberries (Vaccinium corymbosum L. x V. darrowii Camp), a high-value crop in Florida. Thrips' feeding and oviposition injury to flowers can result in fruit scarring that renders the fruit unmarketable. Flower thrips often form areas of high population, termed "hot spots", in blueberry plantings. The objective of this study was to model thrips spatial distribution patterns with geostatistical techniques. Semivariogram models were used to determine optimum trap spacing and two commonly used interpolation methods, inverse distance weighting (IDW) and ordinary kriging (OK), were compared for their ability to model thrips spatial patterns. The experimental design consisted of a grid of 100 white sticky traps spaced at 15.24-m and 7.61-m intervals in 2008 and 2009, respectively. Thirty additional traps were placed randomly throughout the sampling area to collect information on distances shorter than the grid spacing. The semivariogram analysis indicated that, in most cases, spacing traps at least 28.8 m apart would result in spatially independent samples. Also, the 7.61-m grid spacing captured more of the thrips spatial variability than the 15.24-m grid spacing. IDW and OK produced maps with similar accuracy in both years, which indicates that thrips spatial distribution patterns, including "hot spots," can be modeled using either interpolation method. Future studies can use this information to determine if the formation of "hot spots" can be predicted using flower density, temperature, and other environmental factors. If so, this development would allow growers to spot treat the "hot spots" rather than their entire field.
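The method-of-moments semivariogram underlying the trap-spacing analysis can be sketched as below, with synthetic trap locations and counts standing in for the sticky-trap data:

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Method-of-moments semivariogram: for each lag bin,
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose
    separation distance falls in the bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)       # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = [sq[(d >= lo) & (d < hi)].mean()
             if np.any((d >= lo) & (d < hi)) else np.nan
             for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]
    return np.array(gamma)

# Hypothetical trap counts, mimicking the sticky-trap layout.
rs = np.random.default_rng(4)
coords = rs.uniform(0, 150, size=(100, 2))    # metres
counts = rs.poisson(lam=5.0, size=100).astype(float)
gamma = empirical_semivariogram(coords, counts, np.arange(0, 80, 10))
```

The lag at which gamma levels off (the range) is what sets the minimum spacing, about 28.8 m in the study, beyond which samples are effectively spatially independent.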
NASA Astrophysics Data System (ADS)
Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.
2015-12-01
The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 estimates from chemical transport models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific to only those locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains; error changes regionally and temporally. Because of the complex mix of species that constitutes PM2.5, CMAQ error is also a function of the PM2.5 concentration itself. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to an error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
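A regionalized, non-linear error correction of the kind described can be sketched as follows. The correction here is a simple binned (piecewise) relationship between predicted and observed concentrations, fitted on synthetic paired CMAQ/observation data for one hypothetical region; the actual study's correction model may be more elaborate:

```python
import numpy as np

# Hypothetical paired data for one region: CMAQ-predicted and observed
# PM2.5 at collocated monitored grid cells.
rs = np.random.default_rng(5)
cmaq = rs.uniform(2.0, 30.0, size=500)
obs = 0.8 * cmaq + 2.0 + rs.normal(scale=1.5, size=500)  # synthetic bias

# Non-linear (binned) error model: mean observed value as a function of
# the CMAQ concentration, estimated per concentration bin.
bins = np.linspace(2.0, 30.0, 8)
centers = 0.5 * (bins[:-1] + bins[1:])
which = np.clip(np.digitize(cmaq, bins) - 1, 0, len(centers) - 1)
bin_mean_obs = np.array([obs[which == k].mean()
                         for k in range(len(centers))])

def correct(c):
    """Correct new CMAQ output by interpolating the binned relationship;
    one such curve would be fitted per region and time period."""
    return np.interp(c, centers, bin_mean_obs)

corrected = correct(np.array([5.0, 15.0, 25.0]))
```

The corrected values (with a per-bin error variance) are what would then feed into the BME estimation as soft data alongside the monitor observations.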
Schröder, Winfried
2006-05-01
By the example of environmental monitoring, some applications of geographic information systems (GIS), geostatistics, metadata banking, and Classification and Regression Trees (CART) are presented. These tools are recommended for mapping statistically estimated hot spots of vectors and pathogens. GIS were introduced as tools for spatially modelling the real world. The modelling can be done by mapping objects according to the spatial information content of data. Additionally, this can be supported by geostatistical and multivariate statistical modelling. This is demonstrated by the example of modelling marine habitats of benthic communities and of terrestrial ecoregions. Such ecoregionalisations may be used to predict phenomena based on the statistical relation between measurements of an interesting phenomenon such as, e.g., the incidence of medically relevant species and correlated characteristics of the ecoregions. The combination of meteorological data and data on plant phenology can enhance the spatial resolution of the information on climate change. To this end, meteorological and phenological data have to be correlated. To enable this, both data sets which are from disparate monitoring networks have to be spatially connected by means of geostatistical estimation. This is demonstrated by the example of transformation of site-specific data on plant phenology into surface data. The analysis allows for spatial comparison of the phenology during the two periods 1961-1990 and 1991-2002 covering whole Germany. The changes in both plant phenology and air temperature were proved to be statistically significant. Thus, they can be combined by GIS overlay technique to enhance the spatial resolution of the information on the climate change and use them for the prediction of vector incidences at the regional scale. The localisation of such risk hot spots can be done by geometrically merging surface data on promoting factors. This is demonstrated by the example of the
Geostatistical Inverse Modeling for Natural Attenuation of Hydrocarbons in Groundwater
NASA Astrophysics Data System (ADS)
Hosseini, A. H.; Deutsch, C. V.; Mendoza, C. A.; Biggar, K. W.
2008-12-01
Parameter uncertainty for natural attenuation has previously been studied in the context of characterizing the uncertainty in the field-measured biodegradation rate constant. Natural attenuation response variables (e.g. solute concentrations) should be stated in terms of a number of model parameters in such a way that (1) the most important mechanisms contributing to natural attenuation of petroleum hydrocarbons are simulated, (2) the independent variables (model parameters) and their uncertainty can be estimated using the available observations and prior information, and (3) the model is not over-parameterized. Extensive sensitivity analyses show that the source term, aquifer heterogeneity and biodegradation rate of contaminants are the most important factors affecting the fate of dissolved petroleum hydrocarbon (PHC) contaminants in groundwater. A geostatistical inverse modeling approach is developed to quantify uncertainty in source geometry, source dissolution rate, aquifer heterogeneity and the biodegradation rate constant. Multiple joint realizations of source geometry and aquifer transmissivity are constructed by a distance function (DF) algorithm and a sequential self-calibration (SSC) approach. A gradient-based optimization approach is then adapted to condition the joint realizations to a number of observed concentrations recorded over a specific monitoring period. The conditioned joint realizations are then ranked based on their goodness of fit and used in the subsequent prediction of uncertainty in the response variables, such as downstream concentrations, plume length and contaminant mass loaded into the aquifer. The inverse modeling approach and its associated calculation of sensitivity coefficients show that an extended monitoring period is critically important to the well-posedness of the problem, and that uncertainty in the occurrence of the spill can have a minor impact on the modeling results as long as the observation data are collected while the contaminant
A geostatistical approach to mapping site response spectral amplifications
Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Tanaka, Y.; Tanaka, H.
2010-01-01
If quantitative estimates of the seismic properties do not exist at a location of interest, then the site response spectral amplifications must be estimated from data collected at other locations. Currently, the most common approach employs correlations of site class with maps of surficial geology. Analogously, correlations of site class with topographic slope can be employed where the surficial geology is unknown. Our goal is to identify and validate a method to estimate site response with greater spatial resolution and accuracy for regions where additional effort is warranted. This method consists of three components: region-specific data collection, a spatial model for interpolating seismic properties, and a theoretical method for computing spectral amplifications from the interpolated seismic properties. We consider three spatial interpolation schemes: correlations with surficial geology, termed the geologic trend (GT), ordinary kriging (OK), and kriging with a trend (KT). We estimate the spectral amplifications from seismic properties using the square root of impedance method, thereby linking the frequency-dependent spectral amplifications to the depth-dependent seismic properties. Thus, the range of periods for which this method is applicable is limited by the depth of exploration. A dense survey of near-surface S-wave slowness (Ss) throughout Kobe, Japan shows that the geostatistical methods give more accurate estimates of Ss than the topographic slope and GT methods, and the OK and KT methods perform equally well. We prefer the KT model because it can be seamlessly integrated with geologic maps that cover larger regions. Empirical spectral amplifications show that the region-specific data achieve more accurate estimates of observed median short-period amplifications than the topographic slope method. © 2010 Elsevier B.V.
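The square-root-of-impedance idea above can be sketched in a few lines. This is a simplified illustration (all values and the function name are illustrative, not from the paper): real implementations average slowness over the quarter-wavelength depth for each period, whereas here a single layer is compared with a reference profile.

```python
import numpy as np

def sqrt_impedance_amplification(rho_surface, vs_surface, rho_ref, vs_ref):
    """Amplification as the square root of the impedance ratio between a
    reference (bedrock) profile and the near-surface layer. A minimal
    sketch of the square-root-of-impedance method."""
    return np.sqrt((rho_ref * vs_ref) / (rho_surface * vs_surface))

# Example: soft soil (Vs = 200 m/s, rho = 1800 kg/m3) over a stiffer
# reference (Vs = 800 m/s, rho = 2000 kg/m3) gives roughly 2x amplification.
amp = sqrt_impedance_amplification(1800.0, 200.0, 2000.0, 800.0)
print(round(amp, 3))  # 2.108
```

The impedance ratio, rather than the velocity ratio alone, is what controls the amplitude change as seismic energy passes into slower, less dense material.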
Geostatistical inspired metamodeling and optimization of nanoscale analog circuits
NASA Astrophysics Data System (ADS)
Okobiah, Oghenekarho
The current trend towards miniaturization of modern consumer electronic devices significantly affects their design. The demand for efficient all-in-one appliances leads to smaller, yet more complex and powerful nanoelectronic devices. The increasing complexity in the design of such nanoscale Analog/Mixed-Signal Systems-on-Chip (AMS-SoCs) presents difficult challenges to designers. One promising method used to mitigate the burden of this design effort is the use of metamodeling (surrogate modeling) techniques. Their use significantly reduces the time for computer simulation and design space exploration and optimization. This dissertation addresses several issues of metamodeling-based nanoelectronic AMS design exploration. A surrogate modeling technique which uses geostatistical Kriging prediction methods in creating metamodels is proposed. Kriging prediction techniques take into account the correlation effects between input parameters for performance point prediction. We propose the use of Kriging to utilize this property for the accurate modeling of process variation effects of designs in the deep nanometer region. Different Kriging methods have been explored for this work, such as simple and ordinary Kriging. We also propose another metamodeling technique, the Kriging-Bootstrapped Neural Network, which combines the accuracy and process variation awareness of Kriging with artificial neural network models for ultra-fast and accurate process-aware metamodeling design. The proposed methodologies combine Kriging metamodels with selected algorithms for ultra-fast layout optimization. The selected algorithms explored are: Gravitational Search Algorithm (GSA), Simulated Annealing Optimization (SAO), and Ant Colony Optimization (ACO). Experimental results demonstrate that the proposed Kriging metamodel based methodologies can perform the optimizations with minimal computational burden compared to traditional (SPICE-based) design flows.
Bootstrapped models for intrinsic random functions
Campbell, K.
1988-08-01
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
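Campbell's bootstrap idea can be sketched concretely: resample the data with replacement, re-estimate the (generalized) covariance structure from each resample, and use the spread of the resulting estimates as a picture of the estimation error. The sketch below (synthetic data; the lag values and tolerance are illustrative, not from the paper) bootstraps a simple method-of-moments semivariogram and tracks a crude sill estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

def experimental_variogram(x, z, lags, tol):
    """Method-of-moments semivariogram for 1-D data: half the mean squared
    difference of all pairs whose separation falls within tol of each lag."""
    gamma = []
    for h in lags:
        sq = [(z[i] - z[j]) ** 2
              for i in range(len(x)) for j in range(i + 1, len(x))
              if abs(abs(x[i] - x[j]) - h) <= tol]
        gamma.append(0.5 * np.mean(sq) if sq else np.nan)
    return np.array(gamma)

# Synthetic 1-D data; bootstrap by resampling locations with replacement,
# giving a spread for any functional of the fitted structure (here the
# mean semivariance, a crude sill estimate).
x = np.linspace(0.0, 100.0, 40)
z = rng.standard_normal(40)          # white noise: true sill near 1
lags = np.array([5.0, 10.0, 20.0])
boot_sills = []
for _ in range(100):
    idx = rng.integers(0, 40, 40)    # resample with replacement
    g = experimental_variogram(x[idx], z[idx], lags, tol=2.5)
    boot_sills.append(np.nanmean(g))
print(len(boot_sills), np.std(boot_sills) > 0)
```

The standard deviation of `boot_sills` quantifies the extra uncertainty that imperfect covariance estimation injects into downstream kriging predictions, which is exactly the error source the paper addresses.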
Bootstrapped models for intrinsic random functions
Campbell, K.
1987-01-01
The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the "bootstrap" in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their "kriging variance," provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.
Rose, Olaf; Mennemann, Hugo; John, Carina; Lautenschläger, Marcus; Mertens-Keller, Damaris; Richling, Katharina; Waltering, Isabel; Hamacher, Stefanie; Felsch, Moritz; Herich, Lena; Czarnecki, Kathrin; Schaffert, Corinna; Jaehde, Ulrich; Köberlein-Neu, Juliane
2016-01-01
Background Medication reviews are recognized services to increase quality of therapy and reduce medication risks. The selection of eligible patients with potential to receive a major benefit is based on assumptions rather than on factual data. Acceptance of interprofessional collaboration is crucial to increase the quality of medication therapy. Objective The research question was to identify and prioritize eligible patients for a medication review and to provide evidence-based criteria for patient selection. Acceptance of the prescribing general practitioner to implement pharmaceutical recommendations was measured and factors influencing physicians’ acceptance were explored to obtain an impression on the extent of collaboration in medication review in an ambulatory care setting. Methods Based on data of a cluster-randomized controlled study (WestGem-study), the correlation between patient parameters and the individual performance in a medication review was calculated in a multiple logistic regression model. Physician’s acceptance of the suggested intervention was assessed using feedback forms. Influential factors were analyzed. Results The number of drugs in use (p = 0.001), discrepancies between prescribed and used medicines (p = 0.014), the baseline Medication Appropriateness Index score (p<0.001) and the duration of the intervention (p = 0.006) could be identified as influential factors for a major benefit from a medication review, whereas morbidity (p>0.05) and a low kidney function (p>0.05) do not predetermine the outcome. Longitudinal patient care with repeated reviews showed higher interprofessional acceptance and superior patient benefit. A total of 54.9% of the recommendations in a medication review on drug therapy were accepted for implementation. Conclusions The number of drugs in use and medication reconciliation could be a first rational step in patient selection for a medication review. Most elderly, multimorbid patients with polymedication
TiConverter: A training image converting tool for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael
2016-11-01
TiConverter is a tool developed to ease the application of multiple-point geostatistics, whether with the open-source Stanford Geostatistical Modeling Software (SGeMS) or other available commercial software. TiConverter has a user-friendly interface and allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), geostatistical software library (GSLIB) (.txt), Isatis (.dat), and VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools, including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce TiConverter and to demonstrate its application and advantages with several examples from the literature.
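The core conversion TiConverter performs can be illustrated in a few lines: map each RGB colour in a 2-D training image to an integer facies code, producing a numeric grid that MPS software can consume. This is a minimal sketch of the idea (the function name and palette are illustrative, not the tool's actual API or formats).

```python
import numpy as np

def rgb_to_facies(image, palette):
    """image: (ny, nx, 3) uint8 array; palette: {facies_code: (r, g, b)}.
    Returns an integer grid; -1 marks pixels matching no palette colour."""
    ny, nx, _ = image.shape
    grid = np.full((ny, nx), -1, dtype=int)
    for code, rgb in palette.items():
        mask = np.all(image == np.array(rgb, dtype=np.uint8), axis=-1)
        grid[mask] = code
    return grid

# Two-facies toy training image: black = shale (0), white = sand (1).
img = np.zeros((2, 3, 3), dtype=np.uint8)
img[0, 1] = img[1, 2] = 255
facies = rgb_to_facies(img, {0: (0, 0, 0), 1: (255, 255, 255)})
print(facies.tolist())  # [[0, 1, 0], [0, 0, 1]]
```

Writing the resulting grid out in GSLIB or VTK layout is then a matter of serialization, which is the bookkeeping the tool automates.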
Use of geostatistical modeling to capture complex geology in finite-element analyses
Rautman, C.A.; Longenbaugh, R.S.; Ryder, E.E.
1995-12-01
This paper summarizes a number of transient thermal analyses performed for a representative two-dimensional cross section of volcanic tuffs at Yucca Mountain using the finite-element, nonlinear heat-conduction code COYOTE-II. In addition to conventional design analyses, in which material properties are formulated as a single uniform material and as horizontally layered, internally uniform materials, an attempt was made to increase the resemblance of the thermal property field to the actual geology by creating two fairly complex, geologically realistic models. The first model was created by digitizing an existing two-dimensional geologic cross section of Yucca Mountain. The second model was created using conditional geostatistical simulation. Direct mapping of geostatistically generated material property fields onto finite-element computational meshes was demonstrated to yield temperature fields approximately equivalent to those generated through more conventional procedures. However, the ability to use the geostatistical models offers a means of simplifying the physical-process analyses.
Geostatistical modeling of a heterogeneous site bordering the Venice lagoon, Italy.
Trevisani, Sebastiano; Fabbri, Paolo
2010-01-01
Geostatistical methods are well suited for analyzing the local and spatial uncertainties that accompany the modeling of highly heterogeneous three-dimensional (3D) geological architectures. The spatial modeling of 3D hydrogeological architectures is crucial for polluted site characterization, with regard to both groundwater modeling and planning remediation procedures. From this perspective, the polluted site of Porto Marghera, located on the periphery of the Venice lagoon, represents an interesting example. For this site, the available dense spatial sampling network, with 769 boreholes over an area of 6 km², allows us to evaluate the high geological heterogeneity by means of indicator kriging and sequential indicator simulation. We show that geostatistical methodologies and ad hoc post-processing of geostatistical analysis results allow us to effectively analyze the high hydrogeological heterogeneity of the studied site.
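The indicator approach used above starts from a simple coding step: each categorical borehole observation is turned into a vector of 0/1 indicators, one per lithology; kriging each indicator then estimates the probability of that lithology at unsampled locations. A minimal sketch of the coding step (data values are illustrative):

```python
import numpy as np

def indicator_transform(lithologies, categories):
    """One 0/1 column per category: I[i, k] = 1 if sample i is category k."""
    return np.array([[1 if lith == c else 0 for c in categories]
                     for lith in lithologies])

# Toy borehole log with three lithologies.
logs = ["clay", "sand", "sand", "silt", "clay"]
cats = ["clay", "sand", "silt"]
I = indicator_transform(logs, cats)
print(I.mean(axis=0))  # marginal proportions: [0.4 0.4 0.2]
```

Because each row sums to one, the kriged indicator estimates can be post-processed (order-relation corrections, normalization) into a valid probability distribution per grid node, which is the kind of ad hoc post-processing the abstract refers to.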
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Holliger, Klaus
2015-08-01
We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
NASA Astrophysics Data System (ADS)
Theodoridou, Panagiota G.; Karatzas, George P.; Varouchakis, Emmanouil A.; Corzo Perez, Gerald A.
2015-04-01
Groundwater level is an important input in hydrological modelling. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram model is very important for optimal method performance. This work compares three different criteria, the least squares sum method, the Akaike Information Criterion and Cressie's Indicator, to assess the theoretical variogram that fits the experimental one, and investigates the impact on the prediction results. Moreover, five different distance functions (Euclidean, Minkowski, Manhattan, Canberra, and Bray-Curtis) are applied to calculate the distance between observations, which affects both the variogram calculation and the Kriging estimator. Cross-validation analysis in terms of Ordinary Kriging is applied by sequentially using a different distance metric and the above three variogram fitting criteria. The spatial dependence of the observations in the tested dataset is studied by fitting classical variogram models and the Matérn model. The proposed comparison analysis was performed for a data set of two hundred fifty hydraulic head measurements distributed over an alluvial aquifer that covers an area of 210 km². The study area is located in the Prefecture of Drama, which belongs to the Water District of East Macedonia (Greece). This area was selected in terms of hydro-geological data availability and geological homogeneity. The analysis showed that a combination of the Akaike Information Criterion for the variogram fitting assessment and the Bray-Curtis distance metric provided the most accurate cross-validation results. The power-law variogram model provided the best fit to the experimental data. The aforementioned approach for the specific dataset, in terms of the Ordinary Kriging method, improves the prediction efficiency in comparison to the classical Euclidean distance metric. Therefore, maps of the spatial
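The five distance functions the study compares have short closed forms; a sketch of each, using the definitions as commonly stated (Minkowski shown with order p = 3 as an arbitrary example; the study does not report which order was used):

```python
import numpy as np

def euclidean(a, b):   return float(np.sqrt(np.sum((a - b) ** 2)))
def manhattan(a, b):   return float(np.sum(np.abs(a - b)))
def minkowski(a, b, p=3):
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))
def canberra(a, b):    # term-wise normalized; undefined terms when a=b=0
    return float(np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b))))
def bray_curtis(a, b): # globally normalized; bounded by 1 for positive data
    return float(np.sum(np.abs(a - b)) / np.sum(np.abs(a + b)))

a = np.array([3.0, 4.0])
b = np.array([0.0, 0.0])
print(euclidean(a, b))    # 5.0
print(manhattan(a, b))    # 7.0
print(canberra(a, b))     # 2.0
print(bray_curtis(a, b))  # 1.0
```

Because the variogram is built from squared differences binned by separation distance, swapping the distance metric changes which observation pairs fall in each lag bin, and hence both the fitted variogram and the Kriging weights — which is why the metric choice matters here.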
NASA Astrophysics Data System (ADS)
Green, C. T.; Bekins, B. A.
2007-12-01
Large fluxes of nitrate in agricultural settings threaten water quality worldwide. In groundwater, the transport and fate of anthropogenic nitrate can be influenced by denitrification, which occurs under anaerobic conditions. It is difficult to characterize denitrification because (1) waters from multiple flow paths commingle in the aquifer due to dispersion and (2) a sample typically includes water from multiple lithologies. For a site near Merced, California, geostatistical simulations, flow modeling, and backward random walk particle tracking were applied to simulate the influence of mixing on apparent reaction rates, redox boundaries, and fractionation of stable isotopes of nitrogen. Results show that dispersion has a strong effect on the distribution of redox indicators. When reactions are rapid, mixing can create the appearance of gradual redox transitions that mask the actual reaction rates and the degree of isotopic fractionation.
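The random-walk particle tracking used in the study can be sketched in its forward form: each particle is advected by the local velocity and perturbed by a random dispersive jump each step (running the walk backward from a sampling point recovers the source-area idea the study exploits). All parameter values below are illustrative, not from the Merced site.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D advection-dispersion random walk: dx = v*dt + N(0,1)*sqrt(2*D*dt),
# with longitudinal dispersion D = alpha_L * v.
v, alpha_L, dt, n_steps = 0.5, 0.1, 1.0, 100   # m/d, m, d, steps
n_particles = 1000
x = np.zeros(n_particles)                      # all particles start at x = 0
for _ in range(n_steps):
    x += v * dt + rng.standard_normal(n_particles) * np.sqrt(2 * alpha_L * v * dt)

print(round(float(x.mean()), 1))   # mean displacement near v * t = 50 m
```

The spread of the final particle cloud is the dispersion-driven mixing the abstract describes: particles arriving at one sampling point have traversed different paths and lithologies, which blurs apparent reaction rates and redox boundaries.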
Bayesian Geostatistical Analysis and Prediction of Rhodesian Human African Trypanosomiasis
Wardrop, Nicola A.; Atkinson, Peter M.; Gething, Peter W.; Fèvre, Eric M.; Picozzi, Kim; Kakembo, Abbas S. L.; Welburn, Susan C.
2010-01-01
Background The persistent spread of Rhodesian human African trypanosomiasis (HAT) in Uganda in recent years has increased concerns of a potential overlap with the Gambian form of the disease. Recent research has aimed to increase the evidence base for targeting control measures by focusing on the environmental and climatic factors that control the spatial distribution of the disease. Objectives One recent study used simple logistic regression methods to explore the relationship between prevalence of Rhodesian HAT and several social, environmental and climatic variables in two of the most recently affected districts of Uganda, and suggested the disease had spread into the study area due to the movement of infected, untreated livestock. Here we extend this study to account for spatial autocorrelation, incorporate uncertainty in input data and model parameters, and undertake predictive mapping for risk of high HAT prevalence in future. Materials and Methods We use a spatial analysis in which a generalised linear geostatistical model is embedded in a Bayesian framework to account explicitly for spatial autocorrelation and to incorporate uncertainty in input data and model parameters. This more rigorous analytical approach potentially yields more accurate parameter and significance estimates and increased predictive accuracy, allowing an assessment of the validity of the livestock movement hypothesis given more robust parameter estimation and appropriate assessment of covariate effects. Results Analysis strongly supports the theory that Rhodesian HAT was imported to the study area via the movement of untreated, infected livestock from endemic areas. The confounding effect of health care accessibility on the spatial distribution of Rhodesian HAT and the linkages between the disease's distribution and minimum land surface temperature have also been confirmed via the application of these methods. Conclusions Predictive mapping indicates an
Geostatistical analysis of spatial and temporal variations of groundwater level.
Ahmadi, Seyed Hamid; Sedghamiz, Abbas
2007-06-01
Groundwater and water resources management plays a key role in maintaining sustainable conditions in arid and semi-arid regions. Management tools that can reveal critical and hot-spot conditions are necessary given limitations such as labor and funding. In this study, spatial and temporal analysis of monthly groundwater level fluctuations of 39 piezometric wells monitored over 12 years was carried out. Geostatistics, which has been introduced as a management and decision tool by many researchers, was applied to reveal the spatial and temporal structure of groundwater level fluctuations. Results showed that a strong spatial and temporal structure existed for groundwater level fluctuations due to very low nugget effects. Spatial analysis showed a strong structure of groundwater level drop across the study area, and temporal analysis showed that groundwater level fluctuations have temporal structure. On average, the range of the variograms for the spatial and temporal analyses was about 9.7 km and 7.2 months, respectively. Ordinary and universal kriging methods with cross-validation were applied to assess the accuracy of the chosen variograms in estimating the groundwater level drop and groundwater level fluctuations at spatial and temporal scales, respectively. Results of ordinary and universal kriging revealed that groundwater level drop and groundwater level fluctuations were underestimated by 3% and 6% for the spatial and temporal analyses, respectively, which are very low and acceptable errors and support the unbiasedness hypothesis of kriging. Although our results demonstrated that the spatial structure was somewhat stronger than the temporal structure, estimation of groundwater level drop and groundwater level fluctuations could be performed with low uncertainty at both space and time scales. Moreover, the results showed that kriging is a beneficial and capable tool for detecting those critical regions which need more attention for
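The cross-validation used above is typically leave-one-out: each well is removed in turn, its value kriged from the remaining wells, and the residuals summarized. A minimal 1-D ordinary kriging sketch of that workflow (synthetic data; the spherical variogram parameters are illustrative, not the study's fitted values):

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, variogram):
    """Ordinary kriging estimate at x0 from data (xs, zs): solves the
    standard OK system in semivariogram form, with a Lagrange multiplier
    enforcing that the weights sum to one (unbiasedness)."""
    n = len(xs)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = variogram(abs(xs[i] - xs[j]))
    b = np.append([variogram(abs(x - x0)) for x in xs], 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ zs)

def sph(h, a=10.0, c=1.0):
    """Spherical variogram, range a, sill c, zero nugget."""
    return c if h >= a else c * (1.5 * h / a - 0.5 * (h / a) ** 3)

xs = np.array([0.0, 3.0, 7.0, 12.0, 15.0])   # well positions (toy)
zs = np.array([1.0, 1.2, 0.8, 0.5, 0.6])     # heads (toy)
# Leave-one-out cross-validation residuals:
errors = [zs[i] - ordinary_kriging(np.delete(xs, i), np.delete(zs, i),
                                   xs[i], sph) for i in range(len(xs))]
print(len(errors))
```

A mean residual near zero across wells is what supports the unbiasedness claim in the abstract; the residual spread checks the variogram's implied kriging variance.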
NASA Astrophysics Data System (ADS)
You, Jiong; Pei, Zhiyuan
2015-01-01
With the development of remote sensing technology, its applications in agricultural monitoring systems, crop mapping accuracy, and spatial distribution are increasingly being explored by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information and their propagation into derivative products need to be quantified and handled correctly. Therefore, this study discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature of the remote sensing data source. On this basis, a misclassification probability model is developed to produce a spatially explicit classification error probability surface for the map of a crop, which realizes the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding residual response with spatial variation for the spectral bands of the GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region in the low rolling country of the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with good overall performance, the proposed error modeling framework can be used to quantify the uncertainty in crop mapping, and the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for
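Sequential Gaussian simulation, the engine of the framework above, visits grid nodes in random order, kriges each node from the data and previously simulated nodes, and draws from the resulting conditional Gaussian. For a small unconditional grid, the same multi-Gaussian field can be drawn in one step via a Cholesky factor of the covariance matrix; the sketch below uses that compact stand-in (the exponential covariance and its range are illustrative, not the paper's fitted models).

```python
import numpy as np

rng = np.random.default_rng(7)

# 25 nodes on a transect with an exponential covariance, range ~5 units.
x = np.arange(25, dtype=float)
h = np.abs(x[:, None] - x[None, :])            # pairwise separations
C = np.exp(-h / 5.0)                           # exponential covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(25)) # jitter for stability

# Each column of L @ N(0, I) is one realization with covariance C.
realizations = L @ rng.standard_normal((25, 100))
print(realizations.shape)  # (25, 100)
```

Averaging many such realizations of a thresholded error field is how a per-pixel misclassification probability surface, like the one the paper builds, can be assembled.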
Eschard, R.; Desaubliaux, G.; Eemouzy, P. ); Bacchiana, C.; Parpant, J.; Chautru, J.M.
1993-09-01
Chaunoy field, the largest oil field of the Paris basin, is exploiting heterogeneous reservoirs deposited during the Triassic in a large alluvial fan/lacustrine complex. The construction of a realistic reservoir model is difficult in such a setting because of the highly complex architecture of single reservoir units. Geostatistical simulations therefore have been performed to take into account the reservoir heterogeneities in the fluid flow modeling. A first layering has been determined from sedimentological and sequence stratigraphic analysis. The series was deposited in an alluvial outer fan environment. A lower siliciclastic member shows four heterogeneous sand sheets (7 m thick), which have been correlated across the field. Each of them is made up of stacked single channel sequences. The sand sheets are separated by extensive lacustrine and flood plain mudstone layers acting as permeability barriers. An upper siliciclastic/dolomitic member has been divided into two units with porous conglomeratic channels interfingered with cemented lagoonal dolomites. Proportional curves in lithofacies have confirmed this layering, showing the continuity of the permeability barriers, and the variogram analysis has shown that the well spacing is larger than the channel width. Simulations in lithofacies have been performed with the Heresim software using three different variogram ranges (small, medium, and large values). Because a good correlation exists between the lithofacies and the petrophysical attributes, a transcription of the lithofacies simulations into petrophysical attributes was easy and realistic. Scaling-up techniques have given fluid-flow models corresponding to the three correlation ranges. Comparison of the global results of the fluid flow simulations with the observed production history enabled us to choose the most relevant case. The model using the selected correlation range helped determine optimum well spacing.
NASA Astrophysics Data System (ADS)
Esbrí, José M.; Higueras, Pablo; López-Berdonces, Miguel A.; García-Noguero, Eva M.; González-Corrochano, Beatriz; Fernández-Calderón, Sergio; Martínez-Coronado, Alba
2015-04-01
Puertollano is the biggest industrial city of Castilla-La Mancha, with 48,086 inhabitants. It is located 250 km south of Madrid, on the northern border of the Ojailén River valley. The industrial area includes a big coal open pit (ENCASUR), two power plants (EON and ELCOGAS), a petrochemical complex (REPSOL) and a fertiliser factory (ENFERSA), all located in the proximity of the town. These industries create a complex scenario in terms of metal and metalloid emissions. For instance, mercury emissions declared to the PRTR inventory during 2010 were 210 kg year-1 (REPSOL), 130 kg year-1 (ELCOGAS) and 11.9 kg year-1 (EON). Besides, there possibly remain unaccounted-for diffuse sources of other potentially toxic elements coming from the different industrial sites. Multielemental analyses of soils from two different depths covering the whole valley were carried out by means of XRF with a portable Oxford Instruments device. Geostatistical data treatment was performed using SURFER software, applying block kriging to obtain interpolation maps for the study area. Semivariograms of elemental concentrations make a clear distinction between volatile (Hg, Se) and non-volatile elements (Cu, Ni), with differences in scales and variances between the two soil horizons considered. Semivariograms also show different models for elements emitted by combustion processes (Ni) and for anomalous elements from the geological substrate (Pb, Zn). In addition to differences in the anisotropy of the data, these models reflect different forms of elemental dispersion; despite this, identification of particular sources for the different elements is not possible for this geochemical data set.
Estimating Solar PV Output Using Modern Space/Time Geostatistics (Presentation)
Lee, S. J.; George, R.; Bush, B.
2009-04-29
This presentation describes a project that uses mapping techniques to predict solar output at subhourly resolution at any spatial point, develop a methodology that is applicable to natural resources in general, and demonstrate capability of geostatistical techniques to predict the output of a potential solar plant.
NASA Astrophysics Data System (ADS)
Yan, Hongxiang; Moradkhani, Hamid
2016-08-01
Assimilation of satellite soil moisture and streamflow data into a distributed hydrologic model has received increasing attention over the past few years. This study provides a detailed analysis of the joint and separate assimilation of streamflow and Advanced Scatterometer (ASCAT) surface soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, with the use of the recently developed particle filter-Markov chain Monte Carlo (PF-MCMC) method. Performance is assessed over the Salt River Watershed in Arizona, which is one of the watersheds without anthropogenic effects in the Model Parameter Estimation Experiment (MOPEX). A total of five data assimilation (DA) scenarios are designed, and the effects of the locations of streamflow gauges and of the ASCAT soil moisture on the predictions of soil moisture and streamflow are assessed. In addition, a geostatistical model is introduced to overcome the significant bias and discontinuity issues of the satellite soil moisture. The results indicate that: (1) solely assimilating outlet streamflow can lead to biased soil moisture estimation; (2) when the study area can only be partially covered by the satellite data, the geostatistical approach can estimate the soil moisture for those uncovered grid cells; (3) joint assimilation of streamflow and soil moisture from geostatistical modeling can further improve the surface soil moisture prediction. This study recommends the geostatistical model as a helpful tool to aid the remote sensing technique and the hydrologic DA study.
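The study corrects satellite soil-moisture bias with a geostatistical model; for contrast, a widely used simpler alternative in soil-moisture assimilation is CDF matching, which maps each satellite value onto the corresponding quantile of a reference (model or in-situ) climatology. The sketch below shows that alternative, not the paper's method; all data values are illustrative.

```python
import numpy as np

def cdf_match(satellite, reference):
    """Map satellite values to reference quantiles (empirical CDF matching).
    A minimal sketch; operational versions match seasonally and per pixel."""
    ranks = np.searchsorted(np.sort(satellite), satellite, side="right")
    quantiles = (ranks - 0.5) / len(satellite)
    return np.quantile(reference, quantiles)

sat = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # satellite: biased low
ref = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # reference climatology
corrected = cdf_match(sat, ref)
print(corrected.min() >= ref.min(), corrected.max() <= ref.max())
```

CDF matching removes systematic bias but, unlike the geostatistical model described above, cannot fill spatial gaps where the satellite provides no coverage at all.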
Geostatistical prediction of stream-flow regime in southeastern United States
NASA Astrophysics Data System (ADS)
Pugliese, Alessio; Castellarin, Attilio; Archfield, Stacey; Farmer, William
2015-04-01
similar performances independently of the interpretation of the curves (i.e. period-of-record/annual, or complete/seasonal) or Q* (MAF or MAP*); at-site performances are satisfactory or good (i.e. Nash-Sutcliffe Efficiency NSE ranges from 0.60 to 0.90 for cross-validated FDCs, depending on the model setting), while the overall performance at the regional scale indicates that OK and TK are associated with smaller BIAS and RMSE relative to the six benchmark procedures. Acknowledgements: We thankfully acknowledge Alessia Bononi and Antonio Liguori for their preliminary analyses and Jon O. Skøien and Edzer Pebesma for their helpful assistance with the R packages rtop and gstat. The study is part of the research activities carried out by the working group Anthropogenic and Climatic Controls on WateR AvailabilitY (ACCuRAcY) of Panta Rhei - Everything Flows: Change in Hydrology and Society (IAHS Scientific Decade 2013-2022). References: Pugliese, A., A. Castellarin, and A. Brath (2014): Geostatistical prediction of flow-duration curves in an index-flow framework, Hydrol. Earth Syst. Sci., 18, 3801-3816, doi:10.5194/hess-18-3801-2014. Castiglioni, S., A. Castellarin, and A. Montanari (2009): Prediction of low-flow indices in ungauged basins through physiographical space-based interpolation, Journal of Hydrology, 378, 272-280.
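The Nash-Sutcliffe Efficiency quoted above (0.60 to 0.90 for cross-validated FDCs) has a simple definition worth making explicit: one minus the ratio of the model's squared error to the variance of the observations, so NSE = 1 is a perfect fit and NSE = 0 means the model is no better than predicting the observed mean.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = [10.0, 8.0, 6.0, 4.0, 2.0]
print(nash_sutcliffe(obs, obs))                 # 1.0 (perfect fit)
print(nash_sutcliffe(obs, [np.mean(obs)] * 5))  # 0.0 (mean benchmark)
```

Scores can go arbitrarily negative, so the 0.60-0.90 range reported above sits well inside the "useful model" band.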
ERIC Educational Resources Information Center
Raudenbush, Stephen W.
2009-01-01
Fixed effects models are often useful in longitudinal studies when the goal is to assess the impact of teacher or school characteristics on student learning. In this article, I introduce an alternative procedure: adaptive centering with random effects. I show that this procedure can replicate the fixed effects analysis while offering several…
Introduction to This Special Issue on Geostatistics and Geospatial Techniques in Remote Sensing
NASA Technical Reports Server (NTRS)
Atkinson, Peter; Quattrochi, Dale A.; Goodman, H. Michael (Technical Monitor)
2000-01-01
The germination of this special Computers & Geosciences (C&G) issue began at the Royal Geographical Society (with the Institute of British Geographers) (RGS-IBG) annual meeting in January 1997 held at the University of Exeter, UK. The snow and cold of the English winter were tempered greatly by warm and cordial discussion of how to stimulate and enhance cooperation on geostatistical and geospatial research in remote sensing 'across the big pond' between UK and US researchers. It was decided that one way forward would be to hold parallel sessions in 1998 on geostatistical and geospatial research in remote sensing at appropriate venues in both the UK and the US. Selected papers given at these sessions would be published as special issues of C&G on the UK side and Photogrammetric Engineering and Remote Sensing (PE&RS) on the US side. These issues would highlight the commonality in research on geostatistical and geospatial research in remote sensing on both sides of the Atlantic Ocean. As a consequence, a session on "Geostatistics and Geospatial Techniques for Remote Sensing of Land Surface Processes" was held at the RGS-IBG annual meeting in Guildford, Surrey, UK in January 1998, organized by the Modeling and Advanced Techniques Special Interest Group (MAT SIG) of the Remote Sensing Society (RSS). A similar session was held at the Association of American Geographers (AAG) annual meeting in Boston, Massachusetts in March 1998, sponsored by the AAG's Remote Sensing Specialty Group (RSSG). The 10 papers that make up this issue of C&G comprise 7 papers from the UK and 3 papers from the US. We are both co-editors of each of the journal special issues, with the lead editor of each issue coming from the respective side of the Atlantic. The special issue of PE&RS (vol. 65) that constitutes the other half of this co-edited journal series was published in early 1999, comprising 6 papers by US authors. We are indebted to the International Association for Mathematical
NASA Astrophysics Data System (ADS)
Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Feng, Song-Lin
2009-11-01
A three-dimensional finite element model of phase change random access memory (PCRAM) is established for comprehensive electrical and thermal analysis during the SET operation. The SET behaviours of the heater addition structure (HS) and the ring-type contact in bottom electrode (RIB) structure are compared with each other. There are two ways to reduce the RESET current: applying a high-resistivity interfacial layer, or adopting a new device structure. The simulation results indicate that the SET current varies little between these power-reduction approaches. Taking both the RESET and SET operation currents into consideration, the RIB-structure PCRAM cell is suitable for future high-density devices, owing to its high heating efficiency in the RESET operation.
NASA Astrophysics Data System (ADS)
Lee, J.; Mukerji, T.; Tompkins, M. J.
2012-12-01
Joint integration of seismic and electromagnetic (EM) data has been studied as a way to better characterize hydrocarbon reservoirs, because the two data types are sensitive to different reservoir properties. Most previous studies, however, applied deterministic joint inversion, which provides the best estimate of the spatial distribution of reservoir properties in a least-squares sense. Although integrating the two data sets in this way helps to obtain an improved reservoir model matching both, it yields only a single model, whereas numerous reservoirs can be consistent with the seismic and EM data obtained from field measurements. Therefore, the uncertainty associated with reservoir models should be quantified to reduce the risk of making wrong decisions in reservoir management. We suggest statistical integration with a new upscaling scheme, which simulates the joint probability distribution of field-scale seismic and EM data as well as reservoir properties, such as facies, porosity, and fluid saturation, not only to estimate reservoir properties but also to assess the uncertainty of the estimates. Statistical data integration has been used in geophysics to characterize reservoirs from seismic attributes (e.g., Lucet and Mavko, 1991; Avseth et al., 2001a, 2001b; Mukerji et al., 1998, 2001; Eidsvik et al., 2004). The main issue in applying statistical integration to joint seismic and EM data is the difference in scale between the two data types, because seismic (crosswell or surface seismic) and EM measurements (crosswell EM or CSEM) represent different volumes of a reservoir. In this research, reservoirs geologically analogous to the target reservoir were generated by unconstrained simulation with the multipoint geostatistical algorithm SNESIM (Strebelle, 2000, 2002). Well-log-scale seismic and EM attributes were randomly assigned to the analogous reservoirs using conditional probability distributions of the attributes given facies, obtained from well-log analysis. Forward modeling and inversion of the analogous reservoirs were
Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators
NASA Astrophysics Data System (ADS)
Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.
2015-11-01
A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of the 3D fast Fourier transform (FFT), which does not scale well for RFs larger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RFs on non-regular (non-uniform) meshes and can efficiently produce fields with mesh sizes larger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RFs with the correct statistical behavior and has been tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Xeon Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
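The turning-band idea described above, replacing a 3D FFT with 1D line processes plus a projection step, can be sketched in a few lines. This is a generic illustration, not RAFT itself: the Gaussian covariance model, the parameter values, and the normalization below are all assumptions made for the sketch.

```python
import numpy as np

def random_directions(n_lines, rng):
    """Uniform random unit vectors on the sphere (the 'turning bands')."""
    v = rng.normal(size=(n_lines, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def line_process(n_samples, dx, corr_len, rng):
    """1D Gaussian process via a one-dimensional spectral (FFT) method.
    A Gaussian covariance model is assumed purely for illustration."""
    freqs = np.fft.rfftfreq(n_samples, d=dx)
    spec = np.exp(-(np.pi * corr_len * freqs) ** 2)   # assumed spectral density
    phase = rng.uniform(0.0, 2.0 * np.pi, size=freqs.shape)
    coeffs = np.sqrt(spec) * np.exp(1j * phase)       # random-phase coefficients
    field = np.fft.irfft(coeffs, n=n_samples)
    return field / field.std()                        # normalize to unit variance

def turning_bands(points, n_lines=200, corr_len=5.0, dx=0.5, seed=0):
    """Sum 1D processes evaluated at the projections of the 3D points onto
    random lines; works on arbitrary (non-regular) point sets."""
    rng = np.random.default_rng(seed)
    dirs = random_directions(n_lines, rng)
    proj = points @ dirs.T                            # (n_points, n_lines)
    lo, hi = proj.min(), proj.max()
    n_samples = int(np.ceil((hi - lo) / dx)) + 2
    out = np.zeros(len(points))
    for k in range(n_lines):
        z = line_process(n_samples, dx, corr_len, rng)
        idx = ((proj[:, k] - lo) / dx).astype(int)    # nearest 1D sample
        out += z[idx]
    return out / np.sqrt(n_lines)                     # CLT normalization
```

Because the field is evaluated point by point, the same loop also supports the streaming, out-of-core generation the abstract mentions: batches of points can be processed independently against the same set of line processes.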
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. At each step, the sequential optimization method selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis comprises data from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the 418 space-time monitoring points in the existing monitoring program, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
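The combination of a static Kalman filter with sequential optimization can be sketched as a greedy loop over candidate space-time points. The measurement model (direct, noisy head observations) and the total-variance criterion below are illustrative assumptions, not the authors' exact formulation; the prior covariance matrix would come from the fitted space-time variogram.

```python
import numpy as np

def greedy_network_design(P, n_select, noise_var=0.01):
    """Sequentially pick monitoring points: at each step choose the candidate
    whose (static) Kalman update most reduces the total estimation variance.

    P         : prior space-time covariance matrix among all candidate points
                (e.g. built from a geostatistical variogram model).
    noise_var : assumed measurement-noise variance of a head observation.
    """
    P = P.copy()
    chosen = []
    for _ in range(n_select):
        # Variance reduction from measuring candidate i:
        #   sum_j P[j, i]^2 / (P[i, i] + noise_var)
        gain = (P ** 2).sum(axis=0) / (np.diag(P) + noise_var)
        gain[chosen] = -np.inf                # never re-pick a point
        i = int(np.argmax(gain))
        chosen.append(i)
        # Rank-one Kalman update of the covariance after observing point i.
        P -= np.outer(P[:, i], P[i, :]) / (P[i, i] + noise_var)
    return chosen, P
```

Redundant points are exactly those whose remaining gain is negligible once their neighbours in space and time have been selected, which is how a 418-point program can shrink to 178 informative points.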
Bachanas, Pamela; Kidder, Daniel; Medley, Amy; Pals, Sherri L; Carpenter, Deborah; Howard, Andrea; Antelman, Gretchen; DeLuca, Nicolas; Muhenje, Odylia; Sheriff, Muhsin; Somi, Geoffrey; Katuta, Frieda; Cherutich, Peter; Moore, Janet
2016-09-01
We conducted a group randomized trial to assess the feasibility and effectiveness of a multi-component, clinic-based HIV prevention intervention for HIV-positive patients attending clinical care in Namibia, Kenya, and Tanzania. Eighteen HIV care and treatment clinics (six per country) were randomly assigned to intervention or control arms. Approximately 200 sexually active clients from each clinic were enrolled and interviewed at baseline and 6- and 12-months post-intervention. Mixed model logistic regression with random effects for clinic and participant was used to assess the effectiveness of the intervention. Of 3522 HIV-positive patients enrolled, 3034 (86 %) completed a 12-month follow-up interview. Intervention participants were significantly more likely to report receiving provider-delivered messages on disclosure, partner testing, family planning, alcohol reduction, and consistent condom use compared to participants in comparison clinics. Participants in intervention clinics were less likely to report unprotected sex in the past 2 weeks (OR = 0.56, 95 % CI 0.32, 0.99) compared to participants in comparison clinics. In Tanzania, a higher percentage of participants in intervention clinics (17 %) reported using a highly effective method of contraception compared to participants in comparison clinics (10 %, OR = 2.25, 95 % CI 1.24, 4.10). This effect was not observed in Kenya or Namibia. HIV prevention services are feasible to implement as part of routine care and are associated with a self-reported decrease in unprotected sex. Further operational research is needed to identify strategies to address common operational challenges including staff turnover and large patient volumes. PMID:26995678
Visher, Christy A.; Hiller, Matthew; Belenko, Steven; Pankow, Jennifer; Dembo, Richard; Frisman, Linda K.; Pearson, Frank S.; Swan, Holly; Wiley, Tisha R. A.
2015-01-01
The National Criminal Justice Drug Abuse Treatment Studies research program conducted cluster randomized trials to test an organizational process improvement strategy for implementing evidence-based improvements in HIV services for preventing, detecting, and/or treating HIV for individuals under correctional supervision. Nine research centers conducted cluster randomized trials in which one correctional facility used a modified Network for Improvement of Addiction Treatment (NIATx) change team approach to implementing improved HIV services and the other facility used their own approach to implement the improved HIV services. This paper examines whether the intervention increased the perceived value of HIV services among staff of correctional and community HIV organizations. Baseline and follow-up measures of the perceived acceptability, feasibility, and organizational support for implementing HIV service improvements were collected from correctional, medical, and community HIV treatment staff. Results indicated that the perceived acceptability and feasibility of implementing HIV services improved among staff in the facilities using the modified NIATx change team approach as compared to staff in the comparison facilities. PMID:25299806
A geostatistical approach for describing spatial pattern in stream networks
Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.
2005-01-01
The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. At the same time, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain whether the pattern in the empirical variogram is non-random.
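The core computation, an empirical variogram driven by a precomputed pairwise distance matrix, can be sketched as follows. This is the generic method-of-moments estimator, not the authors' code; the only network-specific ingredient is that `dist` holds in-network (pathway) distances rather than Euclidean ones.

```python
import numpy as np

def empirical_variogram(values, dist, lags):
    """Empirical semivariogram from a precomputed distance matrix.

    values : 1D array of observations (e.g. relative fish abundance per site).
    dist   : symmetric matrix of pairwise distances; for a stream network this
             would be the along-channel pathway distance, not straight-line.
    lags   : monotonically increasing bin edges for the distance classes.
    """
    values = np.asarray(values, dtype=float)
    n = len(values)
    iu = np.triu_indices(n, k=1)                  # each pair once
    d = dist[iu]
    sq = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2   # semivariance per pair
    gamma, counts = [], []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d >= lo) & (d < hi)
        counts.append(int(mask.sum()))
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma), np.array(counts)
```

The pair counts per lag class matter in practice: network distance matrices tend to leave distant lag bins sparsely populated, so bins with few pairs should be interpreted cautiously.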
Edwards, Lloyd A.; Parresol, Bernard
2014-09-01
This report of the geostatistical analysis results for the fire-fuels response variables, custom reaction intensity and total dead fuels, is part of the SRS 2010 vegetation inventory project. For a detailed description of the project, theory, and background, including sample design, methods, and results, please refer to the USDA Forest Service Savannah River Site internal report “SRS 2010 Vegetation Inventory GeoStatistical Mapping Report” (Edwards & Parresol 2013).
Amble, Ingunn; Gude, Tore; Stubdal, Sven; Andersen, Bror Just; Wampold, Bruce E
2015-01-01
It has been claimed that the monitoring of ongoing psychotherapy is of crucial importance for improving the quality of mental health care. This study investigated the effect of using the Norwegian version of the patient feedback system OQ-Analyst with the Outcome Questionnaire-45.2. Patients from six psychiatric clinics in Southern Norway (N = 259) were randomized to feedback (FB) or no feedback (NFB). The main effect of feedback was statistically significant (p = .027), corroborating the hypothesis that feedback would improve the quality of services, although the size of the effect was small to moderate (d = 0.32). The benefits of feedback have to be weighed against the costs of implementation.
Geostatistical description of lithofacies distribution in the aquifer system of Cremona, Italy.
NASA Astrophysics Data System (ADS)
Guadagnini, L.; Riva, M.; Salmaso, M.; Saraceni, F.; Straface, S.; Guadagnini, A.
2009-04-01
We develop two alternative conceptual models to describe the heterogeneous spatial distribution of geomaterials within the groundwater system in the proximity of the city of Cremona, Italy. The key hydrogeologic feature of the region is the occurrence of the Springs Belt, which develops across the entire Lombardia region and provides a major source of fresh water for agricultural needs. In recent years the natural springs of the Cremona aquifer have been increasingly threatened by over-abstraction and contamination by agricultural fertilizers. The area investigated includes the main natural springs in the region and is located between the Adda and Serio rivers, covering a surface of approximately 785 km2. The groundwater system consists of two main productive aquifers, which are separated by a locally discontinuous aquitard. The vertical variability of the geomaterials distribution inferred from available well logs suggests that the system is relatively heterogeneous at the given observation scale. The lithofacies distribution within each identified aquifer is estimated under two alternative conceptual models: (a) a composite medium scheme, and (b) a multiple-continua approach. In the former scenario, the system is conceptualized as composed of disjoint blocks of different materials, the boundaries of which can be uncertain. The latter approach assumes that the porous medium is composed of a set of overlapping continua, whose relative fraction at a given location can be uncertain. We start by classifying the available sedimentological information and grouping the identified lithotypes into five separate clusters. An extension of the indicator-based approach of Guadagnini et al. [2004] is then developed in order to provide a geostatistical characterization of the lithotype distribution when the system is described as a composite medium. A multi-continua description is achieved by means of multiple indicator kriging techniques. With the aid of formal
Geostatistical simulations for radon indoor with a nested model including the housing factor.
Cafaro, C; Giovani, C; Garavaglia, M
2016-01-01
The definition of radon-prone areas is the subject of much research in radioecology: radon is considered a leading cause of lung tumours, and the authorities therefore require support in developing an appropriate health prevention strategy. In this paper, we use geostatistical tools to elaborate a definition that accounts for some of the available information about the dwellings. Co-kriging is the interpolator used in geostatistics to refine predictions by means of external covariates. A priori, co-kriging is not guaranteed to significantly improve the results obtained by common lognormal kriging. Here, however, the multivariate approach reduces the cross-validation residual variance to an extent deemed satisfying. Furthermore, with the application of Monte Carlo simulations, the paradigm provides a more conservative definition of radon-prone areas than the one previously obtained by lognormal kriging. PMID:26547362
Geostatistics for spatial genetic structures: study of wild populations of perennial ryegrass.
Monestiez, P; Goulard, M; Charmet, G
1994-04-01
Methods based on geostatistics were applied to quantitative traits of agricultural interest measured on a collection of 547 wild populations of perennial ryegrass in France. The mathematical background of these methods, which resembles spatial autocorrelation analysis, is briefly described. When a single variable is studied, the spatial structure analysis is similar to spatial autocorrelation analysis, and a spatial prediction method, called "kriging", gives a filtered map of the spatial pattern over all the sampled area. When complex interactions of agronomic traits with different evaluation sites define a multivariate structure for the spatial analysis, geostatistical methods allow the spatial variations to be broken down into two main spatial structures with ranges of 120 km and 300 km, respectively. The predicted maps that corresponded to each range were interpreted as a result of the isolation-by-distance model and as a consequence of selection by environmental factors. Practical collecting methodology for breeders may be derived from such spatial structures. PMID:24185879
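Kriging, used above to produce filtered maps of the spatial pattern, can be illustrated with a minimal 1D ordinary-kriging sketch. The spherical variogram model and its parameters are assumptions for illustration only; the 120 km range merely echoes the shorter of the two ranges reported in the abstract.

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x_new, sill=1.0, rng_param=120.0, nugget=0.0):
    """Ordinary kriging in 1D with an assumed spherical variogram model."""

    def sph(h):
        """Spherical semivariogram: rises to sill at the range, 0 at h = 0."""
        h = np.asarray(h, dtype=float)
        g = nugget + sill * (1.5 * h / rng_param - 0.5 * (h / rng_param) ** 3)
        return np.where(h >= rng_param, nugget + sill,
                        np.where(h == 0, 0.0, g))

    n = len(x_obs)
    d_oo = np.abs(x_obs[:, None] - x_obs[None, :])
    # Kriging system: semivariogram matrix bordered by ones, with a Lagrange
    # multiplier enforcing the unbiasedness constraint (weights sum to 1).
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    A[:n, :n] = sph(d_oo)
    preds = []
    for x0 in np.atleast_1d(x_new):
        b = np.ones(n + 1)
        b[:n] = sph(np.abs(x_obs - x0))
        w = np.linalg.solve(A, b)      # weights + Lagrange multiplier
        preds.append(w[:n] @ z_obs)
    return np.array(preds)
```

With a zero nugget the predictor is exact at the sampled populations and smooths between them, which is what produces the "filtered map" the abstract describes; the two nested ranges (120 km and 300 km) would in practice be handled by a nested variogram model.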
Weiland, E.F.; Connors, R.A.; Robinson, M.L.; Lindemann, J.W.; Meyer, W.T.
1982-01-01
A mineral assessment of the Arkansas Canyon Planning Unit was undertaken by Barringer Resources Inc., under the terms of contract YA-553-CTO-100 with the Bureau of Land Management, Colorado State Office. The study was based on a geochemical-geostatistical survey in which 700 stream sediment samples were collected and analyzed for 25 elements. Geochemical results were interpreted by statistical processing which included factor, discriminant, multiple regression and characteristic analysis. The major deposit types evaluated were massive sulfide-base metal, sedimentary and magmatic uranium, thorium vein, magmatic segregation, and carbonatite related deposits. Results of the single element data and multivariate geostatistical analysis indicate that limited potential exists for base metal mineralization near the Horseshoe, El Plomo, and Green Mountain Mines. Thirty areas are considered to be anomalous with regard to one or more of the geochemical parameters evaluated during this study. The evaluation of carbonatite related mineralization was restricted due to the lack of geochemical data specific to this environment.
Butler, Catherine R; O'Hare, Ann M
2016-01-01
The Study of Heart and Renal Protection (SHARP) found that treatment with ezetimibe and low-dose simvastatin reduced the incidence of major atherosclerotic events in patients with kidney disease. Given the paucity of evidence-based interventions that lower cardiovascular morbidity in this high-risk population, the SHARP trial will likely have a large impact on clinical practice. However, applying the results of clinical trials conducted in select populations to the care of individual patients in real-world settings can be fraught with difficulty. This is especially true when caring for older adults with complex comorbidity and limited life expectancy. These patients are often excluded from clinical trials, frequently have competing health priorities, and may be less likely to benefit and more likely to be harmed by medications. We discuss key considerations in applying the results of the SHARP trial to the care of older adults with CKD in real-world clinical settings using guiding principles set forth by the American Geriatrics Society's Expert Panel on the Care of Older Adults with Multimorbidity. Using this schema, we emphasize the importance of evaluating trial results in the unique context of each patient's goals, values, priorities, and circumstances.
Geostatistical mapping of effluent-affected sediment distribution on the Palos Verdes Shelf
Murray, Christopher J.; Lee, H J.; Hampton, M A.
2001-12-01
Geostatistical techniques were used to study the spatial continuity of the thickness of effluent-affected sediment in the offshore Palos Verdes margin area. The thickness data were measured directly from cores and indirectly from high-frequency subbottom profiles collected over the Palos Verdes Margin. Strong spatial continuity of the sediment thickness data was identified, with a maximum range of correlation in excess of 1.4 km. The spatial correlation showed a marked anisotropy, and was more than twice as continuous in the alongshore direction as in the cross-shelf direction. Sequential indicator simulation employing models fit to the thickness data variograms was used to map the distribution of the sediment, and to quantify the uncertainty in those estimates. A strong correlation between sediment thickness data and measurements of the mass of the contaminant p,p′-DDE per unit area was identified. A calibration based on the bivariate distribution of the thickness and p,p′-DDE data was applied using Markov-Bayes indicator simulation to extend the geostatistical study and map the contamination levels in the sediment. Integrating the map grids produced by the geostatistical study of the two variables indicated that 7.8 million cubic meters of effluent-affected sediment exist in the map area, containing approximately 61 to 72 Mg (metric tons) of p,p′-DDE. Most of the contaminated sediment (about 85% of the sediment and 89% of the p,p′-DDE) occurs in water depths less than 100 m. The geostatistical study also indicated that the samples available for mapping are well distributed and the uncertainty of the estimates of the thickness and contamination level of the sediments is lowest in areas where the contaminated sediment is most prevalent.
NASA Astrophysics Data System (ADS)
Linde, Niklas; Lochbühler, Tobias; Dogan, Mine; Van Dam, Remke L.
2015-12-01
We propose a new framework to compare alternative geostatistical descriptions of a given site. Multiple realizations of each of the considered geostatistical models and their corresponding tomograms (based on inversion of noise-contaminated simulated data) are used as a multivariate training image. The training image is scanned with a direct sampling algorithm to obtain conditional realizations of hydraulic conductivity that are not only in agreement with the geostatistical model, but also honor the spatially varying resolution of the site-specific tomogram. Model comparison is based on the quality of the simulated geophysical data from the ensemble of conditional realizations. The tomogram in this study is obtained by inversion of cross-hole ground-penetrating radar (GPR) first-arrival travel time data acquired at the MAcro-Dispersion Experiment (MADE) site in Mississippi (USA). Various heterogeneity descriptions ranging from multi-Gaussian fields to fields with complex multiple-point statistics inferred from outcrops are considered. Under the assumption that the relationship between porosity and hydraulic conductivity inferred from local measurements is valid, we find that conditioned multi-Gaussian realizations and derivatives thereof can explain the crosshole geophysical data. A training image based on an aquifer analog from Germany was found to be in better agreement with the geophysical data than the one based on the local outcrop, which appears to under-represent high hydraulic conductivity zones. These findings are only based on the information content in a single resolution-limited tomogram and extending the analysis to tracer or higher resolution surface GPR data might lead to different conclusions (e.g., that discrete facies boundaries are necessary). Our framework makes it possible to identify inadequate geostatistical models and petrophysical relationships, effectively narrowing the space of possible heterogeneity representations.
A geostatistical approach to predicting sulfur content in the Pittsburgh coal bed
Watson, W.D.; Ruppert, L.F.; Bragg, L.J.; Tewalt, S.J.
2001-01-01
The US Geological Survey (USGS) is completing a national assessment of coal resources in the five top coal-producing regions in the US. Point-located data provide measurements on coal thickness and sulfur content. The sample data and their geologic interpretation represent the most regionally complete and up-to-date assessment of what is known about top-producing US coal beds. The sample data are analyzed using a combination of geologic and Geographic Information System (GIS) models to estimate tonnages and qualities of the coal beds. Traditionally, GIS practitioners use contouring to represent geographical patterns of "similar" data values. The tonnage and grade of coal resources are then assessed by using the contour lines as references for interpolation. An assessment taken to this point is only indicative of resource quantity and quality. Data users may benefit from a statistical approach that would allow them to better understand the uncertainty and limitations of the sample data. To develop a quantitative approach, geostatistics were applied to the data on coal sulfur content from samples taken in the Pittsburgh coal bed (located in the eastern US, in the southwestern part of the state of Pennsylvania, and in adjoining areas in the states of Ohio and West Virginia). Geostatistical methods that account for regional and local trends were applied to blocks 2.7 mi (4.3 km) on a side. The data and geostatistics support conclusions concerning the average sulfur content and its degree of reliability at regional- and economic-block scale over the large, contiguous part of the Pittsburgh outcrop, but not to a mine scale. To validate the method, a comparison was made with the sulfur contents in sample data taken from 53 coal mines located in the study area. The comparison showed a high degree of similarity between the sulfur content in the mine samples and the sulfur content represented by the geostatistically derived contours.
Hussain, Mohammed; Moussavi, Mohammad; Korya, Daniel; Mehta, Siddhart; Brar, Jaskiran; Chahal, Harina; Qureshi, Ihtesham; Mehta, Tapan; Ahmad, Javaad; Zaidat, Osama O.; Kirmani, Jawad F.
2016-01-01
Background: Recent advances in the treatment of ischemic stroke have focused on revascularization and led to better clinical and functional outcomes. A systematic review and pooled analyses of 6 recent multicentered prospective randomized controlled trials (MPRCT) were performed to compare intravenous tissue plasminogen activator (IV tPA) and endovascular therapy (intervention) with IV tPA alone (control) for anterior circulation ischemic stroke (AIS) secondary to large vessel occlusion (LVO). Objectives: Six MPRCTs (MR CLEAN, ESCAPE, EXTEND IA, SWIFT PRIME, REVASCAT and THERAPY) incorporating image-based LVO AIS were selected for assessing the following: (1) prespecified primary clinical outcomes of AIS patients in intervention and control arms: good outcomes were defined by a modified Rankin Scale score of 0-2 at 90 days; (2) secondary clinical outcomes were: (a) revascularization rates [favorable outcomes defined as modified Thrombolysis in Cerebral Infarction scale (mTICI) score of 2b/3]; (b) symptomatic intracranial hemorrhage (sICH) rates and mortality; (c) derivation of number needed to harm (NNH), number needed to treat (NNT), and relative percent difference (RPD) between intervention and control groups, and (d) random effects model to determine overall significance (forest and funnel plots). Results: A total of 1,386 patients were included. Good outcomes at 90 days were seen in 46% of patients in the intervention (p < 0.00001) and in 27% of patients in the control groups (p < 0.00002). An mTICI score of 2b/3 was achieved in 70.2% of patients in the intervention arm. The sICH and mortality in the intervention arm compared with the control arm were 4.7 and 14.3% versus 7.9 and 17.8%, respectively. The NNT and NNH in the intervention and control groups were 5.3 and 9.1, respectively. Patients in the intervention arm had a 50.1% (RPD) better chance of achieving a good 90-day outcome as compared to controls. Conclusions: Endovascular therapy combined with IV t
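The reported NNT can be reproduced from the pooled proportions. A minimal check follows, using the standard definition of NNT as the reciprocal of the absolute risk difference; the exact formula the authors used for RPD is not stated, so it is not recomputed here.

```python
def nnt(p_treat, p_control):
    """Number needed to treat: reciprocal of the absolute risk difference
    between the treatment and control event proportions."""
    return 1.0 / abs(p_treat - p_control)

# Good 90-day outcome: 46% intervention vs 27% control (pooled figures above).
print(round(nnt(0.46, 0.27), 1))  # → 5.3
```

That is, roughly one additional patient achieves a good 90-day outcome for every 5 to 6 patients treated with the combined endovascular approach rather than IV tPA alone.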
Hussain, Mohammed; Moussavi, Mohammad; Korya, Daniel; Mehta, Siddhart; Brar, Jaskiran; Chahal, Harina; Qureshi, Ihtesham; Mehta, Tapan; Ahmad, Javaad; Zaidat, Osama O.; Kirmani, Jawad F.
2016-01-01
Background Recent advances in the treatment of ischemic stroke have focused on revascularization and led to better clinical and functional outcomes. A systematic review and pooled analyses of 6 recent multicentered prospective randomized controlled trials (MPRCT) were performed to compare intravenous tissue plasminogen activator (IV tPA) and endovascular therapy (intervention) with IV tPA alone (control) for anterior circulation ischemic stroke (AIS) secondary to large vessel occlusion (LVO). Objectives Six MPRCTs (MR CLEAN, ESCAPE, EXTEND IA, SWIFT PRIME, REVASCAT and THERAPY) incorporating image-based LVO AIS were selected for assessing the following: (1) prespecified primary clinical outcomes of AIS patients in intervention and control arms: good outcomes were defined by a modified Rankin Scale score of 0-2 at 90 days; (2) secondary clinical outcomes were: (a) revascularization rates [favorable outcomes defined as modified Thrombolysis in Cerebral Infarction scale (mTICI) score of 2b/3]; (b) symptomatic intracranial hemorrhage (sICH) rates and mortality; (c) derivation of number needed to harm (NNH), number needed to treat (NNT), and relative percent difference (RPD) between intervention and control groups, and (d) random effects model to determine overall significance (forest and funnel plots). Results A total of 1,386 patients were included. Good outcomes at 90 days were seen in 46% of patients in the intervention (p < 0.00001) and in 27% of patients in the control groups (p < 0.00002). An mTICI score of 2b/3 was achieved in 70.2% of patients in the intervention arm. The sICH and mortality in the intervention arm compared with the control arm were 4.7 and 14.3% versus 7.9 and 17.8%, respectively. The NNT and NNH in the intervention and control groups were 5.3 and 9.1, respectively. Patients in the intervention arm had a 50.1% (RPD) better chance of achieving a good 90-day outcome as compared to controls. Conclusions Endovascular therapy combined with IV t
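The number needed to treat reported above follows directly from the pooled outcome rates. A minimal sketch of the arithmetic, using the 46% versus 27% good-outcome rates from the abstract (the rounding is illustrative):

```python
# Number needed to treat (NNT) from pooled 90-day outcome rates:
# NNT = 1 / absolute risk reduction. With 46% good outcomes in the
# intervention arm vs. 27% in the control arm, this reproduces the
# reported NNT of 5.3.

def nnt(p_intervention: float, p_control: float) -> float:
    """Number needed to treat: inverse of the absolute risk difference."""
    return 1.0 / (p_intervention - p_control)

arr = 0.46 - 0.27                  # absolute risk reduction (good outcome)
print(round(nnt(0.46, 0.27), 1))   # -> 5.3
```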
NASA Technical Reports Server (NTRS)
Betts, M.; Tsegaye, T.; Tadesse, W.; Coleman, T. L.; Fahsi, A.
1998-01-01
The spatial and temporal distribution of near surface soil moisture is of fundamental importance to many physical, biological, biogeochemical, and hydrological processes. However, knowledge of these space-time dynamics and the processes which control them remains unclear. The integration of geographic information systems (GIS) and geostatistics promises a simple mechanism to evaluate and display the spatial and temporal distribution of this vital hydrologic and physical variable. Therefore, this research demonstrates the use of geostatistics and GIS to predict and display soil moisture distribution under vegetated and non-vegetated plots. The research was conducted at the Winfred Thomas Agricultural Experiment Station (WTAES), Hazel Green, Alabama. Soil moisture measurements were made on a 10 by 10 m grid from tall fescue grass (GR), alfalfa (AA), bare rough (BR), and bare smooth (BS) plots. Results indicated that variance associated with soil moisture was higher for vegetated plots than for non-vegetated plots. The presence of vegetation in general contributed to the spatial variability of soil moisture. Integration of geostatistics and GIS can improve the productivity of farmland and the precision of farming.
Brooker, Simon; Clements, Archie C.A.
2009-01-01
Multiple parasite infections are widespread in the developing world and understanding their geographical distribution is important for spatial targeting of differing intervention packages. We investigated the spatial epidemiology of mono- and co-infection with helminth parasites in East Africa and developed a geostatistical model to predict infection risk. The data used for the analysis were taken from standardised school surveys of Schistosoma mansoni and hookworm (Ancylostoma duodenale/Necator americanus) carried out between 1999 and 2005 in East Africa. Prevalence of mono- and co-infection was modelled using satellite-derived environmental and demographic variables as potential predictors. A Bayesian multinomial geostatistical model was developed for each infection category to produce maps of predicted co-infection risk. We show that heterogeneities in co-infection with S. mansoni and hookworm are influenced primarily by the distribution of S. mansoni, rather than the distribution of hookworm, and that temperature, elevation and distance to large water bodies are reliable predictors of the large-scale spatial distribution of co-infection. On the basis of these results, we developed a validated geostatistical model of the distribution of co-infection at a scale that is relevant for planning regional disease control efforts that simultaneously target multiple parasite species. PMID:19073189
NASA Astrophysics Data System (ADS)
Ewers, B. E.; Adelman, J. D.; Mackay, D. S.; Loranty, M.; Traver, E.; Kruger, E. L.
2005-12-01
As direct measurements of tree transpiration via sap flux have become routine, the sample size of sap flux studies has dramatically increased. These large sample sizes now provide sufficient data for geostatistical analyses when measurement points are spatially explicit. In this study we tested whether 1) center-of-stand approaches to sap flux measurements and scaling are sufficient for characterizing stand level transpiration and 2) geostatistical techniques provide the information necessary for quantifying important stand and landscape level gradients. To maximize the inferences from our tests, we compared two contrasting forests: one dominated by lodgepole pine (Pinus contorta) in Wyoming and another dominated by trembling aspen (Populus tremuloides) in Wisconsin. The forest in Wyoming is characterized by low precipitation regimes dominated by snowfall while the forest in Wisconsin is characterized by moderate precipitation throughout the growing season. Analyses of the relationship between sample size and variance indicated an inflection point of 30 point pairs, but the variance continued a shallow decline even at 100 point pairs. There was also evidence of species impacts on variance of transpiration that changed with environmental conditions. We used variogram analyses to quantify the spatial range of transpiration and found a variable spatial range across days with a mean of approximately 30 m. The variability in spatial range could be partially explained by tree species' responses to soil moisture, vapor pressure deficit, and light based on current plant hydraulic knowledge. Our results confirm the utility of geostatistical techniques for quantifying and explaining transpiration in time and space.
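Variogram analyses of the kind described above start from an empirical semivariogram computed over point pairs. A minimal stdlib-Python sketch, with hypothetical coordinates and values standing in for the spatially explicit sap flux points:

```python
# Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 over
# point pairs whose separation distance falls in the lag bin around h.
# Coordinates and values below are synthetic, for illustration only.
import math
from collections import defaultdict

def semivariogram(points, values, lag_width):
    """Return {lag_bin_center: semivariance} from all point pairs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(points[i], points[j])
            b = int(h // lag_width)          # lag bin index
            sums[b] += 0.5 * (values[i] - values[j]) ** 2
            counts[b] += 1
    return {(b + 0.5) * lag_width: sums[b] / counts[b] for b in sums}

# Toy transect: nearby values are similar, distant values diverge,
# so the semivariance rises with lag until it levels off (the range).
pts = [(x, 0.0) for x in range(0, 100, 10)]
vals = [math.sin(x / 30.0) for x, _ in pts]
print(semivariogram(pts, vals, lag_width=20.0))
```

The lag at which the semivariance levels off is the spatial range, the quantity the study reports as roughly 30 m.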
NASA Astrophysics Data System (ADS)
martin, manuel; Lacarce, Eva; Meersmans, Jeroen; Orton, Thomas; Saby, Nicolas; Paroissien, Jean-Baptiste; Jolivet, Claudy; Boulonne, Line; Arrouays, Dominique
2013-04-01
Soil organic carbon (SOC) plays a major role in the global carbon budget. It can act as a source or a sink of atmospheric carbon, thereby possibly influencing the course of climate change. Improving the tools that model the spatial distributions of SOC stocks at national scales is a priority, both for monitoring changes in SOC and as an input for global carbon cycle studies. In this paper, we first considered several increasingly complex boosted regression trees (BRT), a convenient and efficient multiple regression model from the statistical learning field. We then considered a robust geostatistical approach coupled to the BRT models. The different approaches were tested on the dataset from the French Soil Monitoring Network, with a consistent cross-validation procedure. We showed that the BRT models, given their ease of use and predictive performance, could be preferred to geostatistical models for SOC mapping at the national scale, and where possible combined with geostatistical models. This conclusion is valid provided that care is exercised in model fitting and validation, that the dataset does not allow for modeling local spatial autocorrelation (as is the case for many national systematic sampling schemes), and that good-quality data on the SOC drivers included in the models are available.
A geostatistical methodology to assess the accuracy of unsaturated flow models
Smoot, J.L.; Williams, R.E.
1996-04-01
The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.
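The node-by-node accounting of model error can be sketched very simply: compare the model prediction with the geostatistically interpolated value at each grid node and flag nodes whose discrepancy exceeds measurement error. The water-content values and error bound below are hypothetical, not the Hanford data:

```python
# Node-by-node comparison sketch: each grid node is categorized by
# whether |model prediction - interpolated observation| stays within
# an assumed measurement error. All numbers are illustrative.

def categorize_errors(predicted, interpolated, measurement_error):
    """Label each node 'within' or 'exceeds' measurement error."""
    labels = []
    for p, o in zip(predicted, interpolated):
        labels.append("within" if abs(p - o) <= measurement_error else "exceeds")
    return labels

predicted    = [0.12, 0.18, 0.25, 0.40]   # modeled water content (hypothetical)
interpolated = [0.11, 0.20, 0.24, 0.31]   # geostatistical interpolation
print(categorize_errors(predicted, interpolated, measurement_error=0.03))
# -> ['within', 'within', 'within', 'exceeds']
```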
Geostatistical estimation of signal-to-noise ratios for spectral vegetation indices
Ji, Lei; Zhang, Li; Rover, Jennifer R.; Wylie, Bruce K.; Chen, Xuexia
2014-01-01
In the past 40 years, many spectral vegetation indices have been developed to quantify vegetation biophysical parameters. An ideal vegetation index should contain the maximum level of signal related to specific biophysical characteristics and the minimum level of noise such as background soil influences and atmospheric effects. However, accurate quantification of signal and noise in a vegetation index remains a challenge, because it requires a large number of field measurements or laboratory experiments. In this study, we applied a geostatistical method to estimate signal-to-noise ratio (S/N) for spectral vegetation indices. Based on the sample semivariogram of vegetation index images, we used the standardized noise to quantify the noise component of vegetation indices. In a case study in the grasslands and shrublands of the western United States, we demonstrated the geostatistical method for evaluating S/N for a series of soil-adjusted vegetation indices derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. The soil-adjusted vegetation indices were found to have higher S/N values than the traditional normalized difference vegetation index (NDVI) and simple ratio (SR) in the sparsely vegetated areas. This study shows that the proposed geostatistical analysis can constitute an efficient technique for estimating signal and noise components in vegetation indices.
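One common way to read signal and noise off a sample semivariogram treats the nugget as spatially uncorrelated noise and the partial sill (sill minus nugget) as spatially structured signal. This decomposition is only an assumption about how the standardized-noise approach in the study could be sketched, and the parameter values below are hypothetical:

```python
# S/N from fitted semivariogram parameters, under the assumption that
# nugget ~ noise variance and (sill - nugget) ~ signal variance.
# The two parameter sets are made-up examples, not MODIS results.

def signal_to_noise(sill: float, nugget: float) -> float:
    """S/N estimated as partial sill over nugget."""
    return (sill - nugget) / nugget

index_a = {"sill": 0.020, "nugget": 0.008}   # noisier index
index_b = {"sill": 0.020, "nugget": 0.004}   # less nugget noise
print(signal_to_noise(**index_a))   # -> 1.5
print(signal_to_noise(**index_b))   # -> 4.0 (smaller nugget, higher S/N)
```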
Rautman, C.A.
1995-09-01
Two-dimensional, heterogeneous, spatially correlated models of thermal conductivity and bulk density have been created for a representative, east-west cross section of Yucca Mountain, Nevada, using geostatistical simulation. The thermal conductivity models are derived from spatially correlated, surrogate material-property models of porosity, through a multiple linear-regression equation, which expresses thermal conductivity as a function of porosity and initial temperature and saturation. Bulk-density values were obtained through a similar, linear-regression relationship with porosity. The use of a surrogate property allows spatially much more abundant porosity measurements to condition the simulations. Modeling was conducted in stratigraphic coordinates to represent original depositional continuity of material properties, and the completed models were transformed to real-world coordinates to capture present-day tectonic tilting and faulting of the material-property units. Spatial correlation lengths required for geostatistical modeling were assumed, but are based on the results of previous transect-sampling and geostatistical-modeling work.
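The surrogate-property step can be sketched as a simple linear mapping from porosity to thermal conductivity. The coefficients below are hypothetical placeholders, not the regression fitted in the study (which also involved initial temperature and saturation):

```python
# Surrogate-property sketch: thermal conductivity predicted from the
# much more abundant porosity data via an assumed linear relationship
# k = a * porosity + b. Coefficients a and b are illustrative only.

def thermal_conductivity(porosity: float, a: float = -2.5, b: float = 2.0) -> float:
    """Illustrative linear surrogate, in W/(m K): k = a * porosity + b."""
    return a * porosity + b

for phi in (0.1, 0.2, 0.3):
    # higher porosity -> lower conductivity under this assumed fit
    print(phi, round(thermal_conductivity(phi), 2))
```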
A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies
Davis, J.M.
1994-01-01
Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (Miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing.
Geostatistical estimation of signal-to-noise ratios for spectral vegetation indices
NASA Astrophysics Data System (ADS)
Ji, Lei; Zhang, Li; Rover, Jennifer; Wylie, Bruce K.; Chen, Xuexia
2014-10-01
In the past 40 years, many spectral vegetation indices have been developed to quantify vegetation biophysical parameters. An ideal vegetation index should contain the maximum level of signal related to specific biophysical characteristics and the minimum level of noise such as background soil influences and atmospheric effects. However, accurate quantification of signal and noise in a vegetation index remains a challenge, because it requires a large number of field measurements or laboratory experiments. In this study, we applied a geostatistical method to estimate signal-to-noise ratio (S/N) for spectral vegetation indices. Based on the sample semivariogram of vegetation index images, we used the standardized noise to quantify the noise component of vegetation indices. In a case study in the grasslands and shrublands of the western United States, we demonstrated the geostatistical method for evaluating S/N for a series of soil-adjusted vegetation indices derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. The soil-adjusted vegetation indices were found to have higher S/N values than the traditional normalized difference vegetation index (NDVI) and simple ratio (SR) in the sparsely vegetated areas. This study shows that the proposed geostatistical analysis can constitute an efficient technique for estimating signal and noise components in vegetation indices.
Park, Jinyong; Balasingham, P; McKenna, Sean Andrew; Pinnaduwa H.S.W. Kulatilake
2004-09-01
Sandia National Laboratories, under contract to Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geo-statistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques on the geologic data provided by NUMO. A workshop report produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) on volcanism lists a set of most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region in Japan from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables. In other words, the geologic variable vectors were found to be incomplete spatially. However, it is necessary to have complete geologic variable vectors to perform multivariate statistical analyses. As a first step towards constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to the aforementioned grid system. Also the recorded data on volcanic activity for Sengan region were produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism. In the regionalized classification procedure, this step is known as the variable selection step. The following variables were determined as most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater
Roth, Alexis M; Van Der Pol, Barbara; Fortenberry, J Dennis; Dodge, Brian; Reece, Michael; Certo, David; Zimet, Gregory D
2015-01-01
Epidemiologic data demonstrate that women involved with the criminal justice system in the United States are at high risk for sexually transmitted infections, including herpes simplex virus type 2 (HSV-2). Female defendants were recruited from a misdemeanor court to assess whether brief framed messages utilizing prospect theory could encourage testing for HSV-2. Participants were randomly assigned to a message condition (gain, loss, or control), completed an interviewer-administered survey assessing factors associated with antibody test uptake/refusal and were offered free point-of-care HSV-2 serologic testing. Although individuals in the loss-frame group accepted testing at the highest rate, an overall statistical difference in HSV-2 testing behavior by group (p ≤ .43) was not detected. The majority of the sample (74.6%) characterized receiving a serological test for HSV-2 as health affirming. However, this did not moderate the effect of the intervention nor was it significantly associated with test acceptance (p ≤ .82). Although the effects of message framing are subtle, the findings have important theoretical implications given the participants' characterization of HSV-2 screening as health affirming despite being a detection behavior. Implications of study results for health care providers interested in brief, low cost interventions are also explored.
Ogawa, Tatsuya; Omon, Kyohei; Yuda, Tomohisa; Ishigaki, Tomoya; Imai, Ryota; Ohmatsu, Satoko; Morioka, Shu
2016-01-01
Objective: To investigate the short-term effects of the life goal concept on subjective well-being and treatment engagement, and to determine the sample size required for a larger trial. Design: A quasi-randomized controlled trial that was not blinded. Setting: A subacute rehabilitation ward. Subjects: A total of 66 patients were randomized to a goal-setting intervention group with the life goal concept (Life Goal), a standard rehabilitation group with no goal-setting intervention (Control 1), or a goal-setting intervention group without the life goal concept (Control 2). Interventions: The goal-setting intervention in the Life Goal and Control 2 was Goal Attainment Scaling. The Life Goal patients were assessed in terms of their life goals, and the hierarchy of goals was explained. The intervention duration was four weeks. Main measures: Patients were assessed pre- and post-intervention. The outcome measures were the Hospital Anxiety and Depression Scale, 12-item General Health Questionnaire, Pittsburgh Rehabilitation Participation Scale, and Functional Independence Measure. Results: Of the 296 potential participants, 66 were enrolled; Life Goal (n = 22), Control 1 (n = 22) and Control 2 (n = 22). Anxiety was significantly lower in the Life Goal (4.1 ±3.0) than in Control 1 (6.7 ±3.4), but treatment engagement was significantly higher in the Life Goal (5.3 ±0.4) compared with both the Control 1 (4.8 ±0.6) and Control 2 (4.9 ±0.5). Conclusions: The life goal concept had a short-term effect on treatment engagement. A sample of 31 patients per group would be required for a fully powered clinical trial. PMID:27496700
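Per-group sample sizes like the 31 quoted above are commonly approximated with the standard normal formula n = 2 (z_{1-α/2} + z_{1-β})² σ² / δ², where σ is the common standard deviation and δ the smallest effect worth detecting. The inputs below (80% power, two-sided α = 0.05, and an assumed SD and effect size) are illustrative; the exact assumptions behind the authors' figure of 31 per group are not stated in the abstract:

```python
# Per-group sample size for a two-sample comparison of means.
# z_alpha = 1.96 (two-sided alpha = 0.05), z_beta = 0.84 (80% power).
# sd and delta below are assumed values, not taken from the trial.
import math

def n_per_group(sd: float, delta: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2, rounded up."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

print(n_per_group(sd=0.5, delta=0.4))   # e.g. engagement-scale units -> 25
```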
Duarte, F; Calvo, M V; Borges, A; Scatoni, I B
2015-08-01
The oriental fruit moth, Grapholita molesta (Busck), is the most serious pest in peach, and several insecticide applications are required to reduce crop damage to acceptable levels. Geostatistics and Geographic Information Systems (GIS) are employed to measure the range of spatial correlation of G. molesta in order to define the optimum sampling distance for performing spatial analysis and to determine the current distribution of the pest in peach orchards of southern Uruguay. From 2007 to 2010, 135 pheromone traps per season were installed and georeferenced in peach orchards distributed over 50,000 ha. Male adult captures were recorded weekly from September to April. Structural analysis of the captures was performed, yielding 14 semivariograms for the accumulated captures analyzed by generation and growing season. Two sets of maps were constructed to describe the pest distribution. Nine significant models were obtained in the 14 evaluated periods. The range estimated for the correlation was from 908 to 6884 m. Three hot spots of high population level and some areas with comparatively low populations were constant over the 3-year period, while there is a greater variation in the size of the population in different generations and years in other areas.
NASA Astrophysics Data System (ADS)
Lewis, M. J.; Parra, J.; Weissling, B.; Ackley, S. F.; Maksym, T. L.; Wilkinson, J.; Wagner, T.
2011-12-01
Sea ice is a critical component of the Earth's climate system and is a highly complex medium. Its physical characteristics are important in the interpretation of remote sensing data. Sea ice characteristics such as snow surface topography, snow depth and ice thickness were derived from in situ measurements obtained during the J.C. Ross (ICEBell) and Oden Southern Ocean (OSO) expeditions during the austral summer of 2010-11. Select areas of sea ice floes in the Bellingshausen, Weddell and Amundsen Seas were measured using terrestrial scanning LiDAR (TSL) and also by conventional gridded and transect surveys. Snow depths were obtained at 2-5 meter sampling intervals and ice thickness was estimated by both electromagnetic induction (EMI) and auger drilling at 2-5 meter intervals. The LiDAR data are gridded to a 10 cm rasterized data set. The field data from multiple floes in different regions provide a unique three-dimensional perspective of sea ice elevation, snow depth and derived freeboard. These floes are visualized in both space and spectral domains and analyzed using classic statistical and geostatistical methods to assess surface roughness, snow depth, and the effects of differing scales on data resolution. The correlation lengths needed for isostatic equilibrium of freeboard were determined. These relationships are useful in assessing radar and laser altimetry data from airborne and satellite sources.
Meerschman, Eef; Cockx, Liesbet; Islam, Mohammad Monirul; Meeuws, Fun; Van Meirvenne, Marc
2011-06-01
Previous research showed a regional Cu enrichment of 6 mg kg(-1) in the top soil of the Ypres war zone (Belgium), caused by corrosion of WWI shell fragments. Further research was required since in addition to Cu, also As, Pb, and Zn were used during the manufacturing of ammunition. Therefore, an additional data collection was conducted in which the initial Cu data set was tripled to 731 data points and extended to eight heavy metals (As, Cd, Cr, Cu, Hg, Ni, Pb, and Zn) which permitted (1) to evaluate the environmental impact of the heavy metals at a regional scale and (2) to assess their regional spatial occurrence by performing an optimized geostatistical modeling. The results showed no pollution at a regional scale, but sometimes locally concentrations exceeded the soil sanitation threshold, especially for Cu, Pb, and Zn. The spatial patterns of Ni and Cr were related to variations in soil texture whereas the occurrences of Cu and Pb were clearly linked to WWI activities. This difference in spatial behavior was confirmed by an analysis of coregionalization.
NASA Astrophysics Data System (ADS)
Wang, Hui; Wellmann, Florian
2016-04-01
It is generally accepted that 3D geological models inferred from observed data will contain a certain amount of uncertainty. Uncertainty quantification and stochastic sampling methods are essential for gaining insight into the geological variability of subsurface structures. Among deterministic or traditional modelling techniques, classical geostatistical methods using boreholes (hard data sets) remain the most widely accepted, although they suffer certain drawbacks. Modern geophysical measurements provide regional data sets in 2D or 3D space, either directly from sensors or indirectly from inverse problem solving using observed signals (soft data sets). We propose a stochastic modelling framework to extract subsurface heterogeneity from multiple and complementary types of data. In the presented work, subsurface heterogeneity is considered as the "hidden link" among multiple spatial data sets as well as inversion results. Hidden Markov random field models are employed to perform 3D segmentation, which is the representation of the "hidden link". Finite Gaussian mixture models are adopted to characterize the statistical parameters of the multiple data sets. The uncertainties are quantified via a Gibbs sampling process under the Bayesian inferential framework. The proposed modelling framework is validated using two numerical examples, and the model behavior and convergence are well examined. It is shown that the presented stochastic modelling framework is a promising tool for 3D data fusion in the communities of geological modelling and geophysics.
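The finite-mixture-plus-Gibbs machinery can be illustrated with a toy example: a two-component 1D Gaussian mixture with known unit variances and equal weights, sampled by alternating between component assignments and component means. This is a stand-in sketch, not the paper's hidden Markov random field model:

```python
# Toy Gibbs sampler for a two-component 1D Gaussian mixture
# (known unit variances, equal weights, flat prior on the means).
# Data are synthetic: two clusters centered at -2 and 3.
import math
import random

random.seed(1)
data = [random.gauss(-2, 1) for _ in range(100)] + \
       [random.gauss(3, 1) for _ in range(100)]

mu = [0.0, 1.0]                     # initial component means
for _ in range(200):                # Gibbs sweeps
    groups = ([], [])
    for x in data:                  # sample each point's assignment z | mu
        w0 = math.exp(-0.5 * (x - mu[0]) ** 2)
        w1 = math.exp(-0.5 * (x - mu[1]) ** 2)
        z = 0 if random.random() < w0 / (w0 + w1) else 1
        groups[z].append(x)
    for k in (0, 1):                # sample mu | assignments
        n = len(groups[k])
        if n:
            mu[k] = random.gauss(sum(groups[k]) / n, 1 / math.sqrt(n))

print([round(m, 1) for m in sorted(mu)])   # near [-2, 3]
```

The same alternation, with spatial coupling between the assignment variables, is what a hidden Markov random field segmentation adds on top of the plain mixture.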
2012-01-01
Background Common low back pain represents a major public health problem in terms of its direct cost to health care and its socio-economic repercussions. Ten percent of individuals who suffer from low back pain evolve toward a chronic case and as such are responsible for 75 to 80% of the direct cost of low back pain. It is therefore imperative to highlight the predictive factors of low back pain chronification in order to lighten the economic burden of low back pain-related invalidity. Despite being particularly affected by low back pain, Hospices Civils de Lyon (HCL) personnel have never been offered a specific, tailor-made treatment plan. The PRESLO study (with PRESLO referring to Secondary Low Back Pain Prevention, or in French, PREvention Secondaire de la LOmbalgie), proposed by HCL occupational health services and the Centre Médico-Chirurgical et de Réadaptation des Massues – Croix Rouge Française, is a randomized trial that aims to evaluate the feasibility and efficiency of a global secondary low back pain prevention program for the low back pain sufferers among HCL hospital personnel, a population at risk for recurrence and chronification. This program, which is based on the concept of physical retraining, employs a multidisciplinary approach uniting physical activity, cognitive education about low back pain and lumbopelvic morphotype analysis. No study targeting populations at risk for low back pain chronification has as yet evaluated the efficiency of lighter secondary prevention programs. Methods/Design This study is a two-arm parallel randomized controlled trial proposed to all low back pain sufferers among HCL workers, included between October 2008 and July 2011 and followed over two years. The personnel following their usual treatment (control group) and those following the global prevention program in addition to their usual treatment (intervention group) are compared in terms of low back pain recurrence and the impairments measured at the
Rodriguez-Galiano, Victor; Mendes, Maria Paula; Garcia-Soldado, Maria Jose; Chica-Olmo, Mario; Ribeiro, Luis
2014-04-01
Watershed management decisions need robust methods that allow accurate predictive modeling of pollutant occurrences. Random Forest (RF) is a powerful machine learning, data-driven method that is rarely used in water resources studies, and thus has not been evaluated thoroughly in this field compared to more conventional pattern recognition techniques. Key advantages of RF include its non-parametric nature, high predictive accuracy, and capability to determine variable importance. This last characteristic can be used to better understand the individual role and the combined effect of explanatory variables in both protecting and exposing groundwater from and to a pollutant. In this paper, the performance of RF regression for predictive modeling of nitrate pollution is explored, based on intrinsic and specific vulnerability assessment of the Vega de Granada aquifer. The applicability of this machine learning technique is demonstrated in an agriculture-dominated area where nitrate concentrations in groundwater can exceed the trigger value of 50 mg/L at many locations. A comprehensive GIS database of twenty-four parameters related to intrinsic hydrogeologic properties, driving forces, remotely sensed variables and physical-chemical variables measured in situ was used as input to build different predictive models of nitrate pollution. RF measures of importance were also used to define the most significant predictors of nitrate pollution in groundwater, allowing the establishment of the pollution sources (pressures). The potential of RF for generating a vulnerability map to nitrate pollution is assessed considering multiple criteria related to variations in the algorithm parameters and the accuracy of the maps. The performance of RF is also evaluated in comparison to the logistic regression (LR) method using different efficiency measures to ensure their generalization ability. Prediction results show the ability of RF to build accurate models
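Variable importance of the kind RF provides can be illustrated with permutation importance: shuffle one predictor in held-out data and measure how much prediction error grows. The sketch below uses a hand-rolled nearest-neighbour regressor on synthetic data (nitrate driven by a hypothetical "fertilizer" variable plus an irrelevant one) as a stand-in for the Random Forest used in the study:

```python
# Permutation importance sketch. The model and data are synthetic:
# "fertilizer" truly drives the target, "noise_var" does not, so
# permuting the former should inflate test error far more.
import random

random.seed(0)

def knn_predict(train_X, train_y, x, k=3):
    """Mean target of the k nearest training points (squared-distance)."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return sum(train_y[i] for i in order[:k]) / k

def mse(X, y, train_X, train_y):
    return sum((knn_predict(train_X, train_y, x) - t) ** 2
               for x, t in zip(X, y)) / len(y)

X = [[random.random(), random.random()] for _ in range(120)]
y = [50 * x[0] + random.gauss(0, 2) for x in X]   # depends on column 0 only
train_X, train_y, test_X, test_y = X[:80], y[:80], X[80:], y[80:]

base = mse(test_X, test_y, train_X, train_y)
importance = {}
for j, name in enumerate(["fertilizer", "noise_var"]):
    shuffled = [row[:] for row in test_X]
    col = [row[j] for row in shuffled]
    random.shuffle(col)                 # break the feature-target link
    for row, v in zip(shuffled, col):
        row[j] = v
    importance[name] = mse(shuffled, test_y, train_X, train_y) - base

print(importance)   # "fertilizer" should dominate
```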
2013-01-01
Background There are a number of practical and ethical issues raised in school-based health research, particularly those related to obtaining consent from parents and assent from children. One approach to developing, strengthening, and supporting appropriate consent and assent processes is through community engagement. To date, much of the literature on community engagement in biomedical research has concentrated on community- or hospital-based research, with little documentation, if any, of community engagement in school-based health research. In this paper we discuss our experiences of consent, assent and community engagement in implementing a large school-based cluster randomized trial in rural Kenya. Methods Data collected as part of a qualitative study investigating the acceptability of the main trial, focus group discussions with field staff, observations of practice and authors’ experiences are used to: 1) highlight the challenges faced in obtaining assent/consent; and 2) describe the strategies taken to both protect participant rights (including the rights to refuse and to withdraw) and ensure the success of the trial. Results Early meetings with national, district and local level stakeholders were important in establishing their co-operation and support for the project. Despite this support, both practical and ethical challenges were encountered during consenting and assenting procedures. Our strategy for addressing these challenges focused on improving communication and understanding of the trial, and maintaining dialogue with all the relevant stakeholders throughout the study period. Conclusions A range of stakeholders within and beyond schools play a key role in school-based health trials. Community entry and information dissemination strategies need careful planning from the outset, with ongoing consultation and feedback mechanisms established in order to identify and address concerns as they arise. We believe our experiences, and the ethical and practical
Fu, Qiang; Hu, Hongyu; Wang, Dezhao; Chen, Wei; Tan, Zhixu; Li, Qun; Chen, Buxing
2015-01-01
Background Growing evidence suggests that the left radial approach (LRA) is related to decreased coronary procedure duration and fewer cerebrovascular complications as compared to the right radial approach (RRA) in elective percutaneous coronary intervention (PCI). However, the feasibility of LRA in primary PCI has yet to be studied further. Therefore, the aim of this study was to investigate the efficacy of LRA compared with RRA for primary PCI in ST-elevation myocardial infarction (STEMI) patients. Materials and methods A total of 200 consecutive patients with STEMI who received primary PCI were randomized to LRA (number [n]=100) or RRA (n=100). The study endpoint was needle-to-balloon time, defined as the time from local anesthesia infiltration to the first balloon inflation. Radiation dose by measuring cumulative air kerma (CAK) and CAK dose area product, as well as fluoroscopy time and contrast volume were also investigated. Results There were no significant differences in the baseline characteristics between the two groups. The coronary procedural success rate was similar between both radial approaches (98% for left versus 94% for right; P=0.28). Compared with RRA, LRA had significantly shorter needle-to-balloon time (16.0±4.8 minutes versus 18.0±6.5 minutes, respectively; P=0.02). Additionally, fluoroscopy time (7.4±3.4 minutes versus 8.8±3.5 minutes, respectively; P=0.01) and CAK dose area product (51.9±30.4 Gy cm2 versus 65.3±49.1 Gy cm2, respectively; P=0.04) were significantly lower with LRA than with RRA. Conclusion Primary PCI can be performed via LRA with earlier blood flow restoration in the infarct-related artery and lower radiation exposure when compared with RRA; therefore, the LRA may become a feasible and attractive alternative to perform primary PCI for STEMI patients. PMID:26150704
Gonzalez, Jean-Michel; Saito, Kayoko; Kang, Changdon; Gromski, Mark; Sawhney, Mandeep; Chuttani, Ram; Matthes, Kai
2015-01-01
Background: Safe transgastric natural orifice transluminal endoscopic surgery (NOTES) procedures require a reliable closure of the gastrotomy. Recently, a novel peritoneal access method via a submucosal tunnel has been described with encouraging preliminary results. Aim: The aim is to compare a submucosal tunnel access plus over-the-scope clip (OTSC) system for closure with two other closure modalities. Patients and methods: This is a prospective ex vivo study conducted on 42 porcine stomach models equally randomized into three groups in an academic medical center. The procedures performed in each group included: (1) Tunnel (6 cm) + endoclips; (2) Knife + balloon dilation access + OTSC; and (3) Tunnel + OTSC. A pressurized air-leak test was performed to evaluate the strength of the closure. Stomach volumes, procedure times, number of clips, and incision sizes were also recorded. Results: The mean air-leak pressure was statistically higher in Group 3 than in Groups 1 and 2 (95.2 ± 19.3 mmHg versus 72.5 ± 35.2 and 79.0 ± 24.5 mmHg, respectively; P < 0.05). The gastrotomy creation times for Groups 1, 2, and 3 were 28.0 ± 10.1, 4.3 ± 1.4, and 20.1 ± 10.6 minutes, respectively, with significantly lower time in Group 2 (P < 0.001). The closure times were 16.1 ± 6.1, 6.5 ± 1.2, and 5.3 ± 3.0 minutes, respectively, and significantly longer in the endoclip group (P < 0.001). There were no differences in the volumes and the incision sizes among the three groups. Conclusion: The combination of a submucosal tunnel access and OTSC offers a stronger closure than the other methods studied. PMID:26134780
Jesse, D. Elizabeth; Gaynes, Bradley N.; Feldhousen, Elizabeth; Newton, Edward R.; Bunch, Shelia; Hollon, Steven D.
2016-01-01
Introduction Cognitive behavioral group interventions have been shown to improve depressive symptoms in adult populations. This article details the feasibility and efficacy of a 6-week culturally tailored cognitive behavioral intervention offered to rural, minority, low-income women at risk for antepartum depression. Methods 146 pregnant women were stratified by high-risk for antepartum depression (Edinburgh Postnatal Depression Scale (EPDS) score of 10 or higher) or low-moderate risk (EPDS score of 4-9) and randomized to a cognitive behavioral intervention or treatment-as-usual. Differences in mean change of EPDS and BDI-II scores for low-moderate and high-risk women in the cognitive behavioral intervention and treatment-as-usual for the full sample were assessed from baseline (T1), post-treatment (T2) and 1-month follow-up (T3) and for African-American women in the subsample. Results Both the cognitive behavioral intervention and treatment-as-usual groups had significant reductions in the EPDS scores from T1 to T2 and T1 to T3. In women at high-risk for depression (n=62), there was no significant treatment effect from T1 to T2 or T3 for the Edinburgh Postnatal Depression Scale. However, in low-moderate risk women, there was a significant decrease in the BDI-II scores from T1 to T2 (4.92 vs. 0.59, P=.018) and T1 to T3 (5.67 vs. 1.51, P=.04). Also, the cognitive behavioral intervention significantly reduced EPDS scores for African-American women at high-risk (n=43) from T1 to T2 (5.59 vs. 2.18, P=.02) and from T1 to T3 (6.32 vs. 3.14, P= .04). Discussion A cognitive behavioral intervention integrated within prenatal clinics is feasible in this sample, although attrition rates were high. Compared to treatment-as-usual, the cognitive behavioral intervention reduced depressive symptoms for African-American women at high-risk for antepartum depression and for the full sample of women at low-moderate risk for antepartum depression. These promising findings need to be
NASA Astrophysics Data System (ADS)
Bellin, A.; Firmani, G.; Fiori, A.
2005-12-01
We analyze, by means of a numerical model, flow toward a pumping well in a confined three-dimensional heterogeneous aquifer. In order to model hydraulic property variations and the associated uncertainty, the logconductivity field Y=ln K, where K is the hydraulic conductivity, is modelled as a stationary Random Space Function (RSF), normally distributed with constant mean and variance, σ_Y^2, and an exponential axisymmetric covariance function, which identifies the geostatistical model of variability. First, we analyze how the boundary condition at the pumping (extraction) well influences the flow field. Specifically, we show that a specific water discharge through the well's envelope proportional to the local hydraulic conductivity is the condition that best approximates the flow field obtained by imposing a constant head along the well. The latter is the condition that best represents the experimental setup typically employed in pumping tests. Another result of our analysis is that the difference between the drawdown at a fully penetrating monitoring well and the ergodic solution provided by Indelman et al. (1996), which coincides with Thiem's solution, decreases as the depth of the aquifer increases, becoming negligible once the depth grows larger than 60 vertical integral scales of the hydraulic logconductivity. With these results in mind, we envision a simple-to-apply procedure for obtaining the parameters of the geostatistical model of spatial variability. The procedure is based on fitting the expression for the equivalent hydraulic conductivity proposed by Indelman et al. (1996) to the experimental values obtained by interpreting with Thiem's solution the measured drawdown at a few wells. If the vertical integral scale is known independently of the pumping test, the fitting procedure leads to a robust calculation of the parameters, although the estimate of the horizontal integral scale is adversely affected by a wide confidence interval.
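The procedure above rests on interpreting measured drawdowns with Thiem's solution. A minimal sketch of that interpretation step (synthetic numbers, not the study's data; all parameter values are made up for illustration):

```python
import numpy as np

def thiem_drawdown(Q, K, b, r, R):
    """Steady-state drawdown at radius r for a fully penetrating well
    pumping at rate Q from a confined aquifer of thickness b and
    conductivity K, with zero drawdown at the influence radius R."""
    return Q / (2.0 * np.pi * K * b) * np.log(R / r)

def equivalent_K(Q, b, r1, s1, r2, s2):
    """Invert Thiem's solution for K using drawdowns s1, s2 observed
    at two radii r1 < r2 (the influence radius R cancels out)."""
    return Q * np.log(r2 / r1) / (2.0 * np.pi * b * (s1 - s2))

# Hypothetical pumping test: Q in m^3/s, b in m, K in m/s.
Q, b, K_true, R = 1e-3, 10.0, 2e-5, 200.0
s1 = thiem_drawdown(Q, K_true, b, 5.0, R)
s2 = thiem_drawdown(Q, K_true, b, 20.0, R)
K_eq = equivalent_K(Q, b, 5.0, s1, 20.0, s2)
print(K_eq)  # recovers K_true = 2e-5 m/s
```

In the paper's procedure, equivalent conductivities obtained this way at several wells would then be fitted to the Indelman et al. (1996) expression to back out the geostatistical parameters.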
Campbell, Thomas B.; Smeaton, Laura M.; Kumarasamy, N.; Flanigan, Timothy; Klingman, Karin L.; Firnhaber, Cynthia; Grinsztejn, Beatriz; Hosseinipour, Mina C.; Kumwenda, Johnstone; Lalloo, Umesh; Riviere, Cynthia; Sanchez, Jorge; Melo, Marineide; Supparatpinyo, Khuanchai; Tripathy, Srikanth; Martinez, Ana I.; Nair, Apsara; Walawander, Ann; Moran, Laura; Chen, Yun; Snowden, Wendy; Rooney, James F.; Uy, Jonathan; Schooley, Robert T.; De Gruttola, Victor; Hakim, James Gita; Swann, Edith; Barnett, Ronald L.; Brizz, Barbara; Delph, Yvette; Gettinger, Nikki; Mitsuyasu, Ronald T.; Eshleman, Susan; Safren, Steven; Fiscus, Susan A.; Andrade, Adriana; Haas, David W.; Amod, Farida; Berthaud, Vladimir; Bollinger, Robert C.; Bryson, Yvonne; Celentano, David; Chilongozi, David; Cohen, Myron; Collier, Ann C.; Currier, Judith Silverstein; Cu-Uvin, Susan; Eron, Joseph; Flexner, Charles; Gallant, Joel E.; Gulick, Roy M.; Hammer, Scott M.; Hoffman, Irving; Kazembe, Peter; Kumwenda, Newton; Lama, Javier R.; Lawrence, Jody; Maponga, Chiedza; Martinson, Francis; Mayer, Kenneth; Nielsen, Karin; Pendame, Richard B.; Ramratnam, Bharat; Sanne, Ian; Severe, Patrice; Sirisanthana, Thira; Solomon, Suniti; Tabet, Steve; Taha, Taha; van der Horst, Charles; Wanke, Christine; Gormley, Joan; Marcus, Cheryl J.; Putnam, Beverly; Loeliger, Edde; Pappa, Keith A.; Webb, Nancy; Shugarts, David L.; Winters, Mark A.; Descallar, Renard S.; Steele, Joseph; Wulfsohn, Michael; Said, Farideh; Chen, Yue; Martin, John C; Bischofberger, Norbert; Cheng, Andrew; Jaffe, Howard; Sharma, Jabin; Poongulali, S.; Cardoso, Sandra Wagner; Faria, Deise Lucia; Berendes, Sima; Burke, Kelly; Mngqibisa, Rosie; Kanyama, Cecelia; Kayoyo, Virginia; Samaneka, Wadzanai P.; Chisada, Anthony; Faesen, Sharla; Chariyalertsak, Suwat; Santos, Breno; Lira, Rita Alves; Joglekar, Anjali A.; Rosa, Alberto La; Infante, Rosa; Jain, Mamta; Petersen, Tianna; Godbole, Sheela; Dhayarkar, Sampada; Feinberg, Judith; Baer, Jenifer; Pollard, Richard B.; 
Asmuth, David; Gangakhedkar, Raman R; Gaikwad, Asmita; Ray, M. Graham; Basler, Cathi; Para, Michael F.; Watson, Kathy J.; Taiwo, Babafemi; McGregor, Donna; Balfour, Henry H.; Mullan, Beth; Kim, Ge-Youl; Klebert, Michael K.; Cox, Gary Matthew; Silberman, Martha; Mildvan, Donna; Revuelta, Manuel; Tashima, Karen T.; Patterson, Helen; Geiseler, P. Jan; Santos, Bartolo; Daar, Eric S; Lopez, Ruben; Frarey, Laurie; Currin, David; Haas, David H.; Bailey, Vicki L.; Tebas, Pablo; Zifchak, Larisa; Noel-Connor, Jolene; Torres, Madeline; Sha, Beverly E.; Fritsche, Janice M.; Cespedes, Michelle; Forcht, Janet; O'Brien, William A.; Mogridge, Cheryl; Hurley, Christine; Corales, Roberto; Palmer, Maria; Adams, Mary; Luque, Amneris; Lopez-Detres, Luis; Stroberg, Todd
2012-01-01
Background Antiretroviral regimens with simplified dosing and better safety are needed to maximize the efficiency of antiretroviral delivery in resource-limited settings. We investigated the efficacy and safety of antiretroviral regimens with once-daily compared to twice-daily dosing in diverse areas of the world. Methods and Findings 1,571 HIV-1-infected persons (47% women) from nine countries in four continents were assigned with equal probability to open-label antiretroviral therapy with efavirenz plus lamivudine-zidovudine (EFV+3TC-ZDV), atazanavir plus didanosine-EC plus emtricitabine (ATV+DDI+FTC), or efavirenz plus emtricitabine-tenofovir-disoproxil fumarate (DF) (EFV+FTC-TDF). ATV+DDI+FTC and EFV+FTC-TDF were hypothesized to be non-inferior to EFV+3TC-ZDV if the upper one-sided 95% confidence bound for the hazard ratio (HR) was ≤1.35 when 30% of participants had treatment failure. An independent monitoring board recommended stopping study follow-up prior to accumulation of 472 treatment failures. Comparing EFV+FTC-TDF to EFV+3TC-ZDV, during a median 184 wk of follow-up there were 95 treatment failures (18%) among 526 participants versus 98 failures among 519 participants (19%; HR 0.95, 95% CI 0.72–1.27; p = 0.74). Safety endpoints occurred in 243 (46%) participants assigned to EFV+FTC-TDF versus 313 (60%) assigned to EFV+3TC-ZDV (HR 0.64, CI 0.54–0.76; p<0.001) and there was a significant interaction between sex and regimen safety (HR 0.50, CI 0.39–0.64 for women; HR 0.79, CI 0.62–1.00 for men; p = 0.01). Comparing ATV+DDI+FTC to EFV+3TC-ZDV, during a median follow-up of 81 wk there were 108 failures (21%) among 526 participants assigned to ATV+DDI+FTC and 76 (15%) among 519 participants assigned to EFV+3TC-ZDV (HR 1.51, CI 1.12–2.04; p = 0.007). Conclusion EFV+FTC-TDF had similar high efficacy compared to EFV+3TC-ZDV in this trial population, recruited in diverse multinational settings. Superior safety, especially in HIV-1-infected
Geostatistical independent simulation of spatially correlated soil variables
NASA Astrophysics Data System (ADS)
Boluwade, Alaba; Madramootoo, Chandra A.
2015-12-01
The selection of best management practices to reduce soil and water pollution often requires estimation of soil properties. It is important to find an efficient and robust technique to simulate spatially correlated soil parameters. Co-kriging and co-simulation are techniques that can be used, but they are computationally limited by the difficulty of solving large co-kriging systems and of fitting a valid model of coregionalization. The order of complexity increases as the number of covariables increases. This paper presents a technique for the conditional simulation of a non-Gaussian vector random field on point support scale, based on Independent Component Analysis (ICA). The basic principle underlying ICA is the determination of a linear representation of non-Gaussian data such that the components are statistically independent. With such a representation, it is simpler and more computationally efficient to develop direct variograms for the components. The process is presented in two stages. The first stage involves the ICA decomposition. The second stage involves sequential Gaussian simulation of the components generated in the first stage. This technique was applied to spatially correlated extractable cations, magnesium (Mg) and iron (Fe), in a Canadian watershed. The approach has direct application to the stochastic quantification of uncertainty in soil attributes for soil remediation and soil rehabilitation.
Robinson, Thomas N.; Matheson, Donna; Desai, Manisha; Wilson, Darrell M.; Weintraub, Dana L.; Haskell, William L.; McClain, Arianna; McClure, Samuel; Banda, Jorge; Sanders, Lee M.; Haydel, K. Farish; Killen, Joel D.
2013-01-01
Objective To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Design Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Participants Seven through eleven year old, overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Interventions Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Main Outcome Measure Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. Conclusions The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families
Anchala, Raghupathy; Kaptoge, Stephen; Pant, Hira; Di Angelantonio, Emanuele; Franco, Oscar H.; Prabhakaran, D.
2015-01-01
Background Randomized control trials from the developed world report that clinical decision support systems (DSS) could provide an effective means to improve the management of hypertension (HTN). However, evidence from developing countries in this regard is rather limited, and there is a need to assess the impact of a clinical DSS on managing HTN in primary health care center (PHC) settings. Methods and Results We performed a cluster randomized trial to test the effectiveness and cost‐effectiveness of a clinical DSS among Indian adult hypertensive patients (between 35 and 64 years of age), wherein 16 PHC clusters from a district of Telangana state, India, were randomized to receive either a DSS or a chart‐based support (CBS) system. Each intervention arm had 8 PHC clusters, with a mean of 102 hypertensive patients per cluster (n=845 in DSS and 783 in CBS groups). Mean change in systolic blood pressure (SBP) from baseline to 12 months was the primary endpoint. The mean difference in SBP change from baseline between the DSS and CBS at the 12th month of follow‐up, adjusted for age, sex, height, waist, body mass index, alcohol consumption, vegetable intake, pickle intake, and baseline differences in blood pressure, was −6.59 mm Hg (95% confidence interval: −12.18 to −1.42; P=0.021). The cost‐effective ratio for CBS and DSS groups was $96.01 and $36.57 per mm of SBP reduction, respectively. Conclusion Clinical DSS are effective and cost‐effective in the management of HTN in resource‐constrained PHC settings. Clinical Trial Registration URL: http://www.ctri.nic.in. Unique identifier: CTRI/2012/03/002476. PMID:25559011
NASA Astrophysics Data System (ADS)
Tapiero, Charles S.; Vallois, Pierre
2016-11-01
The premise of this paper is that a fractional probability distribution is based on fractional operators and on the fractional (Hurst) index used, which alters the classical setting of random variables. For example, a random variable defined by its density function might not have a fractional density function defined in the conventional sense. Practically, this implies that a distribution's granularity, defined by a fractional kernel, may have properties that differ according to the fractional index used and the fractional calculus applied to define it. The purpose of this paper is to consider an application of fractional calculus to define the fractional density function of a random variable. In addition, we provide and prove a number of results defining the functional forms of these distributions as well as their existence. In particular, we define fractional probability distributions for increasing and decreasing functions that are right continuous. Examples are used to motivate the usefulness of a statistical approach to fractional calculus and its application to economic and financial problems. In conclusion, given the breadth and extent of such problems, this paper is a preliminary attempt to construct statistical fractional models.
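One standard way to make the notion of a fractional density concrete (a sketch of the general construction only; the paper's exact definitions may differ) is to replace the ordinary derivative of the distribution function F by a Riemann-Liouville fractional derivative of order α ∈ (0,1):

```latex
\[
  f_\alpha(x) \;=\; \bigl(D^{\alpha} F\bigr)(x)
  \;=\; \frac{1}{\Gamma(1-\alpha)}\,\frac{d}{dx}
        \int_{-\infty}^{x} \frac{F(t)}{(x-t)^{\alpha}}\, dt ,
\]
```

which reduces to the classical density f = F′ as α → 1, while for α < 1 the kernel (x − t)^{−α} introduces the memory, or "granularity," effects the abstract alludes to.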
Paul, Mandira; Iyengar, Kirti; Essén, Birgitta; Gemzell-Danielsson, Kristina; Iyengar, Sharad D.; Bring, Johan; Soni, Sunita; Klingberg-Allvin, Marie
2015-01-01
Background Studies evaluating acceptability of simplified follow-up after medical abortion have focused on high-resource or urban settings where telephones, road connections, and modes of transport are available and where women have formal education. Objective To investigate women’s acceptability of home-assessment of abortion and whether acceptability of medical abortion differs by in-clinic or home-assessment of abortion outcome in a low-resource setting in India. Design Secondary outcome of a randomised, controlled, non-inferiority trial. Setting Outpatient primary health care clinics in rural and urban Rajasthan, India. Population Women were eligible if they sought abortion with a gestation up to 9 weeks, lived within the defined study area and agreed to follow-up. Women were ineligible if they had known contraindications to medical abortion, had haemoglobin < 85 g/l, or were below 18 years of age. Methods Abortion outcome assessment through routine clinic follow-up by a doctor was compared with home-assessment using a low-sensitivity pregnancy test and a pictorial instruction sheet. A computerized random number generator generated the randomisation sequence (1:1) in blocks of six. Research assistants randomly allocated eligible women who opted for medical abortion (mifepristone and misoprostol), using opaque sealed envelopes. Blinding during outcome assessment was not possible. Main Outcome Measures Women’s acceptability of home-assessment was measured as their preferred mode of follow-up in the future. Overall satisfaction, expectations, and comparison with previous abortion experiences were compared between study groups. Results 731 women were randomized to the clinic follow-up group (n = 353) or home-assessment group (n = 378). 623 (85%) women were successfully followed up; of those, 597 (96%) were satisfied and 592 (95%) found the abortion better than or as expected, with no difference between study groups. The majority, 355 (57%) women, preferred home-assessment in the event of a future
Geostatistical analysis of field hydraulic conductivity in compacted clay
Rogowski, A.S.; Simmons, D.E.
1988-05-01
Hydraulic conductivity (K) of fractured or porous materials is intimately associated with water flow and chemical transport. Basic concepts imply uniform flux through a homogeneous cross-sectional area. If flow were to occur only through part of the area, actual rates could be considerably different. Because laboratory values of K in compacted clays seldom agree with field estimates, questions arise as to what the true values of K are and how they should be estimated. Hydraulic conductivity values were measured on a 10 x 25 m elevated bridge-like platform. A constant water level was maintained for 1 yr over a 0.3-m thick layer of compacted clay, and inflow and outflow rates were monitored using 10 x 25 grids of 0.3-m diameter infiltration rings and outflow drains subtending approximately 1 x 1 m blocks of compacted clay. Variography of inflow and outflow data established relationships between cores and blocks of clay, respectively. Because the distributions of outflow rates were much lower than, and bore little resemblance to, the distributions of breakthrough rates based on tracer studies, the presence of macropores and of preferential flow through them was suspected. Subsequently, probability kriging was applied to reevaluate the distribution of flux rates and the possible locations of macropores. Sites exceeding a threshold outflow of 100 x 10^-9 m/s were classified as outliers and were assumed to probably contain a significant population of macropores. Different sampling schemes were examined. Variogram analysis of outflows with and without outliers suggested the adequacy of sampling the site at 50 randomly chosen locations. Because of the potential contribution of macropores to pollutant transport and the practical necessity of extrapolating small-plot values to larger areas, conditional simulations with and without outliers were carried out.
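The with/without-outliers variogram comparison can be sketched as follows. This is a minimal illustration with randomly generated "outflow" values (units and statistics invented, not the study's data), using the classical Matheron estimator and the 100 x 10^-9 m/s screening threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_variogram(xy, v, lags, tol):
    """Classical (Matheron) experimental semivariogram:
    gamma(h) = mean of 0.5*(v_i - v_j)^2 over pairs with |d_ij - h| < tol."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (v[:, None] - v[None, :]) ** 2
    iu = np.triu_indices(len(v), k=1)          # each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[np.abs(d - h) < tol].mean() for h in lags])

# 50 randomly located outflow measurements (in 1e-9 m/s), plus a few
# macropore-dominated values pushed above the 100e-9 m/s threshold.
xy = rng.uniform(0, 10, size=(50, 2))
v = rng.lognormal(mean=3.0, sigma=0.5, size=50)
v[:3] += 150.0                                 # macropore "outliers"
mask = v <= 100.0                              # screen at the threshold

lags = np.array([1.0, 2.0, 3.0, 4.0])
g_all = empirical_variogram(xy, v, lags, tol=0.5)
g_scr = empirical_variogram(xy[mask], v[mask], lags, tol=0.5)
print(g_all, g_scr)  # outlier pairs inflate the semivariance
```

Comparing `g_all` and `g_scr` mirrors the paper's variogram analysis with and without outliers; the screened variogram reflects the matrix-flow population alone.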
Peterson, Erin E; Urquhart, N Scott
2006-10-01
In the United States, probability-based water quality surveys are typically used to meet the requirements of Section 305(b) of the Clean Water Act. The survey design allows an inference to be generated concerning regional stream condition, but it cannot be used to identify water quality impaired stream segments. Therefore, a rapid and cost-efficient method is needed to locate potentially impaired stream segments throughout large areas. We fit a set of geostatistical models to 312 samples of dissolved organic carbon (DOC) collected in 1996 for the Maryland Biological Stream Survey using coarse-scale watershed characteristics. The models were developed using two distance measures, straight-line distance (SLD) and weighted asymmetric hydrologic distance (WAHD). We used the Corrected Spatial Akaike Information Criterion and the mean square prediction error to compare models. The SLD models predicted more variability in DOC than models based on WAHD for every autocovariance model except the spherical model. The SLD model based on the Mariah autocovariance model showed the best fit (r^2 = 0.72). DOC demonstrated a positive relationship with the watershed attributes percent water, percent wetlands, and mean minimum temperature, but was negatively correlated to percent felsic rock type. We used universal kriging to generate predictions and prediction variances for 3083 stream segments throughout Maryland. The model predicted that 90.2% of stream kilometers had DOC values less than 5 mg/l, 6.7% were between 5 and 8 mg/l, and 3.1% of streams produced values greater than 8 mg/l. The geostatistical model generated more accurate DOC predictions than previous models, but did not fit the data equally well throughout the state. Consequently, it may be necessary to develop more than one geostatistical model to predict stream DOC throughout Maryland. Our methodology is an improvement over previous methods because additional field sampling is not necessary, inferences about regional
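The kriging step that produces both a prediction and a prediction variance can be sketched in a few lines. This toy uses ordinary kriging (a simpler cousin of the universal kriging in the study, without the watershed covariates) in one dimension with an exponential covariance and invented DOC values:

```python
import numpy as np

def expcov(d, sill=1.0, scale=2.0):
    """Exponential covariance model C(d) = sill * exp(-d/scale)."""
    return sill * np.exp(-d / scale)

def ok_weights(C, c0):
    """Ordinary kriging system [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]."""
    n = len(c0)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C
    A[n, n] = 0.0
    sol = np.linalg.solve(A, np.append(c0, 1.0))
    return sol[:n], sol[n]                    # weights, Lagrange mult.

# Hypothetical DOC samples (mg/l) at 1-D stream coordinates; with
# straight-line distance (SLD) only the distance matrix would change
# under a hydrologic-distance measure.
x = np.array([0.0, 1.0, 3.0, 4.5])
z = np.array([4.2, 4.8, 6.1, 7.0])
x0 = 2.0                                      # prediction location
C = expcov(np.abs(x[:, None] - x[None, :]))
c0 = expcov(np.abs(x - x0))
w, mu = ok_weights(C, c0)
pred = w @ z
var = expcov(0.0) - w @ c0 - mu               # kriging variance
print(pred, var)
```

The prediction variance is what lets a survey like this flag stream segments whose predicted DOC is uncertain as candidates for targeted field sampling.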
Meyer, Swen; Blaschek, Michael; Duttmann, Rainer; Ludwig, Ralf
2016-02-01
According to current climate projections, Mediterranean countries are at high risk for an even more pronounced susceptibility to changes in the hydrological budget and extremes. These changes are expected to have severe direct impacts on the management of water resources, agricultural productivity and drinking water supply. Current projections of future hydrological change, based on regional climate model results and subsequent hydrological modeling schemes, are very uncertain and poorly validated. The Rio Mannu di San Sperate Basin, located in Sardinia, Italy, is one test site of the CLIMB project. The Water Simulation Model (WaSiM) was set up to model current and future hydrological conditions. The availability of measured meteorological and hydrological data is poor, as is common for many Mediterranean catchments. In this study we conducted a soil sampling campaign in the Rio Mannu catchment. We tested different deterministic and hybrid geostatistical interpolation methods on soil textures and assessed the performance of the applied models. We calculated a new soil texture map based on the best prediction method. The soil model in WaSiM was set up with the improved new soil information. The simulation results were compared to a standard soil parametrization. WaSiM was validated against spatial evapotranspiration rates using the triangle method (Jiang and Islam, 1999). WaSiM was driven with the meteorological forcing taken from 4 different ENSEMBLES climate projections for a reference (1971-2000) and a future (2041-2070) time series. The climate change impact was assessed based on differences between the reference and future time series. The simulation results show a reduction of all hydrological quantities in the spring season in the future. Furthermore, the simulation results reveal an earlier onset of dry conditions in the catchment. We show that a solid soil model setup based on short-term field measurements can improve long-term modeling results, which is especially important
Conditioning geostatistical simulations of a bedrock fluvial aquifer using single well pumping tests
NASA Astrophysics Data System (ADS)
Niazi, A.; Bentley, L. R.; Hayashi, M.
2015-12-01
Geostatistical simulation is a powerful tool to explore the uncertainty associated with heterogeneity in groundwater and reservoir studies. Nonetheless, conditioning simulations merely with lithological information does not utilize all of the available information and so some workers additionally condition simulations with flow data. In this study, we introduce an approach to condition geostatistical simulations of the Paskapoo Formation, which is a paleo-fluvial system consisting of sandstone channels embedded in mudstone. The conditioning data consist of two-hour single well pumping tests extracted from the public water well database in Alberta, Canada. In this approach, lithologic models of an entire watershed are simulated and conditioned with hard lithological data using transition probability geostatistics (TPROGS). Then, a segment of the simulation around a pumping well was used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone were then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method (PPM) and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each pumping well that has pumping test data. The method constrains the lithological simulations and provides estimates of hydraulic conductivity and specific storage that are consistent with the pumping test data. Eventually, the simulations will be combined in watershed scale groundwater models.
2D Forward Modeling of Gravity Data Using Geostatistically Generated Subsurface Density Variations
NASA Astrophysics Data System (ADS)
Phelps, G. A.
2015-12-01
Two-dimensional (2D) forward models of synthetic gravity anomalies are calculated and compared to observed gravity anomalies using geostatistical models of density variations in the subsurface, constrained by geologic data. These models have an advantage over forward gravity models generated using polygonal bodies of homogeneous density because the homogeneous density restriction is relaxed, allowing density variations internal to geologic bodies to be considered. By discretizing the subsurface and calculating the cumulative gravitational effect of each cell, multiple forward models can be generated for a given geologic body, which expands the exploration of the solution space. Furthermore, the stochastic models can be designed to match the observed statistical properties of the internal densities of the geologic units being modeled. The results of such stochastically generated forward gravity models can then be compared with the observed data. To test this modeling approach, we compared stochastic forward gravity models of 2D geologic cross-sections to gravity data collected along a profile across the Vaca Fault near Fairfield, California. Three conceptual geologic models were created, each representing a distinct fault block scenario (normal, strike-slip, reverse) with four rock units in each model. Using fixed rock unit boundaries, the units were populated with geostatistically generated density values, characterized by their respective histogram and vertical variogram. The horizontal variogram could not be estimated because of a lack of data, and was therefore left as a free parameter. Each fault block model had multiple geostatistical realizations of density associated with it. Forward models of gravity were then generated from the fault block model realizations, and rejection sampling was used to determine viable fault block density models. Given the constraints on subsurface density, the normal and strike-slip fault models were the most likely.
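The cumulative-cell forward calculation can be sketched as follows. For brevity the sketch approximates each cell by an infinite horizontal line mass (a cruder element than the exact 2D prism formulas usually used for such modeling), and all geometry and density statistics are invented:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_2d(xs, cells, rho):
    """Vertical gravity along a profile xs from a discretized 2D
    cross-section, treating each cell as an infinite line mass:
    g_z = 2*G*lambda*z / (x^2 + z^2), with lambda = rho * cell area."""
    cx, cz, area = cells.T
    dx = xs[:, None] - cx[None, :]
    return np.sum(2.0 * G * (rho * area)[None, :] * cz[None, :]
                  / (dx ** 2 + cz[None, :] ** 2), axis=1)

rng = np.random.default_rng(2)

# One geologic unit discretized into 20 x 10 cells of 50 m x 20 m,
# buried between roughly 100 m and 300 m depth.
cx, cz = np.meshgrid(np.arange(25., 1000., 50.),
                     np.arange(110., 300., 20.))
cells = np.column_stack([cx.ravel(), cz.ravel(),
                         np.full(cx.size, 50. * 20.)])

xs = np.linspace(-500., 1500., 41)           # observation profile

# Homogeneous model vs. a stochastic realization with the same mean
# density but cell-to-cell variability (hypothetical statistics).
rho_mean = np.full(len(cells), 2670.0)
rho_sim = rho_mean + rng.normal(0.0, 100.0, len(cells))
g_hom = gz_2d(xs, cells, rho_mean)
g_sim = gz_2d(xs, cells, rho_sim)
print(g_hom.max(), g_sim.max())
```

Generating many `rho_sim` realizations and keeping only those whose forward gravity matches the observed profile within tolerance is exactly the rejection-sampling step described in the abstract.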
NASA Astrophysics Data System (ADS)
Lee, J. H.; Kitanidis, P. K.
2014-12-01
The geostatistical approach (GA) to inversion has been applied to many engineering problems to estimate unknown parameter functions and quantify the uncertainty in estimation. Thanks to recent advances in sensor technology, large-scale/joint inversions have become more common, and implementing the traditional GA algorithm would require thousands of expensive numerical simulation runs, which is computationally infeasible. To overcome this computational challenge, we present the Principal Component Geostatistical Approach (PCGA), which uses the leading principal components of the prior information to avoid expensive sensitivity computations and to obtain an approximate GA solution and its uncertainty with a few hundred numerical simulation runs. As we show in this presentation, the PCGA estimate is close to, and often almost the same as, the estimate obtained from the full-model GA, while the computation time is reduced by a factor of 10 or more in most practical cases. Furthermore, our method is "black-box" in the sense that any numerical simulation software can be linked to PCGA to perform the geostatistical inversion. This enables a hassle-free implementation of GA for multi-physics problems and for joint inversion of different types of measurements, such as hydrologic, chemical, and geophysical data, obviating the need to explicitly compute measurement sensitivities through expensive coupled numerical simulations. Lastly, the PCGA is easily implemented to run the numerical simulations in parallel, taking advantage of high-performance computing environments. We show the effectiveness and efficiency of our method with several examples, such as 3-D transient hydraulic tomography, joint inversion of head and tracer data, and geochemical heterogeneity identification.
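A minimal sketch of the low-rank idea behind PCGA: truncate the prior covariance to its leading principal components, which then stand in for full sensitivity computations. The exponential covariance, grid, and rank below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lowrank_prior(Q, k):
    """Leading-k eigenpairs of the prior covariance Q; PCGA works with these
    principal components instead of the full covariance."""
    w, V = np.linalg.eigh(Q)              # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]
    return V[:, order], w[order]

# illustrative prior: exponential covariance on a 1D grid
x = np.linspace(0.0, 1.0, 200)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
Vk, lam = lowrank_prior(Q, 20)
Qk = (Vk * lam) @ Vk.T                    # rank-20 approximation of Q
rel_err = np.linalg.norm(Q - Qk) / np.linalg.norm(Q)
```

For smooth priors the spectrum decays quickly, so a small rank captures most of the prior variability, which is what makes the few-hundred-run budget possible.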
Uncertainty Estimate in Resources Assessment: A Geostatistical Contribution
Souza, Luis Eduardo de; Costa, Joao Felipe C. L.; Koppe, Jair C.
2004-03-15
For many decades the mining industry regarded resource/reserve estimation and classification as a mere calculation requiring basic mathematical and geological knowledge. Most methods were based on geometrical procedures and spatial data distribution, so uncertainty associated with tonnages and grades was either ignored or mishandled, although various mining codes require a measure of confidence in the values reported. Traditional methods fail to report the level of confidence in the quantities and grades. Conversely, kriging is known to provide the best estimate and its associated variance. Among kriging methods, Ordinary Kriging (OK) is probably the most widely used for mineral resource/reserve estimation, mainly because of its robustness and the ease of uncertainty assessment offered by the kriging variance. It is also known, however, that the OK variance is unable to recognize local data variability, an important issue when heterogeneous mineral deposits with higher- and poorer-grade zones are being evaluated. Alternatively, stochastic simulations are used to build local or global uncertainty models of a geological attribute while respecting its statistical moments. This study investigates methods capable of incorporating uncertainty into resource and reserve estimates via OK and via sequential Gaussian and sequential indicator simulation. The results showed that, for the type of mineralization studied, all methods classified the tonnages similarly. The methods are illustrated using an exploration drill hole data set from a large Brazilian coal deposit.
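The OK estimate and its kriging variance discussed above can be sketched as follows; the 1D setting, exponential variogram, and parameter values are illustrative assumptions, not from the case study.

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, sill=1.0, a=50.0):
    """Ordinary kriging estimate and kriging variance at x0 in 1D, using an
    exponential variogram gamma(h) = sill*(1 - exp(-h/a))."""
    gamma = lambda h: sill * (1.0 - np.exp(-np.abs(h) / a))
    n = len(xs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(xs[:, None] - xs[None, :])
    A[n, :n] = A[:n, n] = 1.0               # unbiasedness constraint
    b = np.append(gamma(xs - x0), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return float(w @ zs), float(w @ b[:n] + mu)   # estimate, kriging variance

xs = np.array([0.0, 10.0, 30.0])
zs = np.array([1.0, 2.0, 1.5])
est_at_datum, var_at_datum = ordinary_kriging(xs, zs, 10.0)   # exact interpolation
est_far, var_far = ordinary_kriging(xs, zs, 500.0)            # extrapolation
```

The variance depends only on the data configuration, not on the data values, which is exactly why it cannot recognize local variability and why simulation is the alternative the abstract describes.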
Boden, Sven; Rogiers, Bart; Jacques, Diederik
2013-09-01
Decommissioning of nuclear building structures usually leads to large amounts of low level radioactive waste. Using a reliable method to determine the contamination depth is indispensable prior to the start of decontamination works and also for minimizing the radioactive waste volume and the total workload. The method described in this paper is based on geostatistical modeling of in situ gamma-ray spectroscopy measurements using the multiple photo peak method. The method has been tested on the floor of the waste gas surge tank room within the BR3 (Belgian Reactor 3) decommissioning project and has delivered adequate results.
NASA Astrophysics Data System (ADS)
Aoki, K.; Mito, Y.; Yamamoto, T.; Shirasagi, S.
2007-12-01
The evaluation of rock mass mechanical properties from the seismic reflection method and TBM driving data is proposed for TBM tunnelling. The relationship between the reflection number derived from the three-dimensional seismic reflection method and the rock strength index (RSI) derived from TBM driving data is examined, and a methodology for converting the reflection number to the RSI is proposed. Furthermore, a geostatistical prediction methodology to provide a three-dimensional geotechnical profile ahead of the tunnel face is proposed. The performance of this prediction method is verified by actual field data.
New advances in methodology for statistical tests useful in geostatistical studies
Borgman, L.E.
1988-05-01
Methodology for statistical tests of hypotheses pertaining to various aspects of geostatistical investigations has been slow to develop. The correlated nature of the data precludes most classical tests and makes the design of new tests difficult. Recent studies have led to modifications of the classical t test that account for the intercorrelation. In addition, results for certain nonparametric tests have been obtained. The conclusions of these studies provide a variety of new tools for the geostatistician in deciding questions on significant differences and magnitudes.
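A sketch of the general idea behind correlation-adjusted t tests: shrink the sample size to an AR(1) effective value before computing the statistic. This is a common textbook adjustment, not necessarily Borgman's exact formulation.

```python
import numpy as np

def modified_t(x, mu0):
    """One-sample t statistic adjusted for serial correlation through an
    AR(1) effective sample size n_eff = n*(1 - r1)/(1 + r1)."""
    x = np.asarray(x, float)
    xm = x - x.mean()
    r1 = (xm[:-1] @ xm[1:]) / (xm @ xm)         # lag-1 autocorrelation
    n_eff = max(2.0, len(x) * (1.0 - r1) / (1.0 + r1))
    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n_eff))
    return t, n_eff

# an AR(1) series: the nominal n=500 overstates the information content
rng = np.random.default_rng(0)
x = np.empty(500)
x[0] = rng.standard_normal()
for i in range(1, 500):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()
t_stat, n_eff = modified_t(x, 0.0)
```

With strong positive autocorrelation the effective sample size collapses to a small fraction of n, which is why classical tests applied naively to spatial data are anti-conservative.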
Bossong, C.R.; Karlinger, M.R.; Troutman, B.M.; Vecchia, A.V.
1999-10-01
Technical and practical aspects of applying geostatistics are developed for individuals involved in investigations at hazardous-, toxic-, and radioactive-waste sites. Important geostatistical concepts, such as variograms and ordinary, universal, and indicator kriging, are described in general terms for introductory purposes and in more detail for practical applications. Variogram modeling using measured ground-water elevation data is described in detail to illustrate principles of stationarity, anisotropy, transformations, and cross validation. Several examples of kriging applications are described using ground-water-level elevations, bedrock elevations, and ground-water-quality data. A review of contemporary literature and selected public-domain software associated with geostatistics is also provided, as is a discussion of alternative methods for spatial modeling, including inverse distance weighting, triangulation, splines, trend-surface analysis, and simulation.
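The variogram estimation step that underlies all of the kriging applications above can be sketched with the classical Matheron estimator; the random coordinates and white-noise values below are invented test data.

```python
import numpy as np

def empirical_variogram(coords, values, lag_edges):
    """Classical (Matheron) empirical semivariogram: for each lag bin,
    the mean of 0.5*(z_i - z_j)^2 over data pairs whose separation
    distance falls in that bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)       # each pair counted once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])

# spatially uncorrelated data: the variogram should sit near the variance (1.0)
rng = np.random.default_rng(3)
coords = rng.random((300, 2))
values = rng.standard_normal(300)
gamma = empirical_variogram(coords, values, np.array([0.0, 0.3, 0.6, 0.9]))
```

Fitting a model (spherical, exponential, ...) to these binned values, and checking it by cross validation, is the workflow the report describes for the ground-water elevation data.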
NASA Astrophysics Data System (ADS)
Landrum, C.; Castrignanò, A.; Mueller, T.; Zourarakis, D.; Zhu, J.
2013-12-01
It is important to assess soil moisture in order to develop strategies to better manage its availability and use. At the landscape scale, soil moisture distribution derives from an integration of hydrologic, pedologic and geomorphic processes that cause soil moisture variability (SMV) to be time-, space-, and scale-dependent. Traditional methods to assess SMV at this scale are often costly, labor intensive, and invasive, which can lead to inadequate sampling density and spatial coverage. Fusing traditional sampling techniques with georeferenced auxiliary sensing technologies, such as geoelectric sensing and LiDAR, provides an alternative approach. Because geoelectric and LiDAR measurements are sensitive to soil properties and terrain features that affect soil moisture variation, they are often employed as auxiliary measures to support less dense direct sampling. Georeferenced proximal sensing acquires rapid, real-time, high-resolution data over large spatial extents, enriched with spatial, temporal and scale-dependent information. Data fusion becomes important when proximal sensing is used in tandem with sparser direct sampling. Multicollocated factorial cokriging (MFC) is a multivariate geostatistical technique for fusing multiple data sources collected at different sampling scales in order to study the spatial characteristics of environmental properties. With MFC, sparse soil observations are supported by more densely sampled auxiliary attributes to produce more consistent spatial descriptions of the scale-dependent parameters affecting SMV. This study uses high-resolution geoelectric and LiDAR data as auxiliary measures to support direct soil sampling (n=127) over a 40 hectare Central Kentucky (USA) landscape. Shallow and deep apparent electrical resistivity (ERa) were measured using a Veris 3100 in tandem with soil moisture sampling on three separate dates with ascending soil moisture contents ranging from plant wilting point to field capacity. Terrain features were produced
Geostatistical and Stochastic Study of Flow and Transport in the Unsaturated Zone at Yucca Mountain
Ye, Ming; Pan, Feng; Hu, Xiaolong; Zhu, Jianting
2007-08-14
Yucca Mountain has been proposed by the U.S. Department of Energy as the nation's long-term, permanent geologic repository for spent nuclear fuel or high-level radioactive waste. The potential repository would be located in Yucca Mountain's unsaturated zone (UZ), which acts as a critical natural barrier delaying arrival of radionuclides to the water table. Since radionuclide transport in groundwater can pose serious threats to human health and the environment, it is important to understand how much and how fast water and radionuclides travel through the UZ to groundwater. The UZ system consists of multiple hydrogeologic units whose hydraulic and geochemical properties exhibit systematic and random spatial variation, or heterogeneity, at multiple scales. Predictions of radionuclide transport under such complicated conditions are uncertain, and the uncertainty complicates decision making and risk analysis. This project aims at using geostatistical and stochastic methods to assess uncertainty of unsaturated flow and radionuclide transport in the UZ at Yucca Mountain. The focus of this study is the parametric uncertainty of the hydraulic and transport properties of the UZ, which arises because limited parameter measurements are unable to deterministically describe the spatial variability of the parameters. In this project, the matrix porosity, permeability and sorption coefficient of the reactive tracer (neptunium) in the UZ are treated as random variables. The corresponding propagation of parametric uncertainty is quantitatively measured using the mean, variance, and 5th and 95th percentiles of simulated state variables (e.g., saturation, capillary pressure, percolation flux, and travel time). These statistics are evaluated using a Monte Carlo method, in which a three-dimensional flow and transport model implemented using the TOUGH2 code is executed with multiple realizations of the random model parameters. The project specifically studies uncertainty of unsaturated
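The Monte Carlo propagation described above, reduced to a toy travel-time model: sample the uncertain parameters, evaluate the model per realization, and summarize with mean, variance, and the 5th/95th percentiles. The distributions, the 300 m thickness, and the flux values are placeholders, not Yucca Mountain parameters or TOUGH2 output.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000
phi = rng.uniform(0.05, 0.25, n)                 # matrix porosity realizations
q = rng.lognormal(np.log(5e-3), 0.5, n)          # percolation flux (m/yr)
t = 300.0 * phi / q                              # plug-flow travel time (yr)
summary = {"mean": t.mean(), "var": t.var(),
           "p5": np.percentile(t, 5), "p95": np.percentile(t, 95)}
```

In the actual study each "model evaluation" is a full 3D flow and transport run, so the realization count is the dominant cost; the summary statistics are computed the same way.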
Geostatistical Borehole Image-Based Mapping of Karst-Carbonate Aquifer Pores.
Sukop, Michael C; Cunningham, Kevin J
2016-03-01
Quantification of the character and spatial distribution of porosity in carbonate aquifers is important as input into computer models used in the calculation of intrinsic permeability and for next-generation, high-resolution groundwater flow simulations. Digital, optical, borehole-wall image data from three closely spaced boreholes in the karst-carbonate Biscayne aquifer in southeastern Florida are used in geostatistical experiments to assess the capabilities of various methods to create realistic two-dimensional models of vuggy megaporosity and matrix-porosity distribution in the limestone that composes the aquifer. When the borehole image data alone were used as the model training image, multiple-point geostatistics failed to detect the known spatial autocorrelation of vuggy megaporosity and matrix porosity among the three boreholes, which were only 10 m apart. Variogram analysis and subsequent Gaussian simulation produced results that showed a realistic conceptualization of horizontal continuity of strata dominated by vuggy megaporosity and matrix porosity among the three boreholes.
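Gaussian simulation of the kind applied above can be sketched, for small grids, by Cholesky factorization of the covariance implied by the fitted variogram; the exponential model, 1D grid, and parameters below are assumptions for illustration.

```python
import numpy as np

def gaussian_simulation(coords, sill, corr_len, n_real, seed=0):
    """Unconditional Gaussian simulation via Cholesky factorization of an
    exponential covariance C(h) = sill * exp(-h / corr_len). Feasible for
    small grids; sequential Gaussian simulation applies the same model
    node-by-node to large grids."""
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-h / corr_len) + 1e-9 * np.eye(len(coords))
    L = np.linalg.cholesky(C)
    z = np.random.default_rng(seed).standard_normal((len(coords), n_real))
    return L @ z                                  # each column is one realization

pts = np.linspace(0.0, 10.0, 30)[:, None]
reals = gaussian_simulation(pts, sill=1.0, corr_len=3.0, n_real=4000)
```

Nearby nodes come out strongly correlated and distant nodes nearly independent, reproducing the horizontal continuity that the variogram analysis detected among the boreholes.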
NASA Astrophysics Data System (ADS)
Peredo, Oscar; Ortiz, Julián M.; Herrero, José R.
2015-12-01
The Geostatistical Software Library (GSLIB) has been used in the geostatistical community for more than thirty years. It was designed as a bundle of sequential Fortran codes, and today it is still in use by many practitioners and researchers. Despite its widespread use, few attempts have been reported to bring this package into the multi-core era. Using all CPU resources, GSLIB algorithms can handle large datasets and grids, where tasks are compute- and memory-intensive applications. In this work, a methodology is presented to accelerate GSLIB applications using code optimization and hybrid parallel processing, specifically for compute-intensive applications. Minimal code modifications are added, decreasing as much as possible the elapsed execution time of the studied routines. If multi-core processing is available, the user can activate OpenMP directives to speed up execution using all resources of the CPU. If multi-node processing is available, execution is enhanced using MPI messages between the compute nodes. Four case studies are presented: experimental variogram calculation, kriging estimation, and sequential Gaussian and indicator simulation. For each application, three scenarios (small, large and extra large) are tested using a desktop environment with 4 CPU cores and a multi-node server with 128 CPU nodes. Elapsed times, speedup and efficiency results are shown.
A software tool for geostatistical analysis of thermal response test data: GA-TRT
NASA Astrophysics Data System (ADS)
Focaccia, Sara; Tinti, Francesco; Bruno, Roberto
2013-09-01
In this paper we present a new method (DCE - Drift and Conditional Estimation), coupling Infinite Line Source (ILS) theory with geostatistics, to interpret thermal response test (TRT) data, together with the user-friendly software implementing it (GA-TRT). Many methods, analytical and numerical, currently exist to analyze TRT data. The innovation derives from the fact that we use a probabilistic approach able to overcome, without excessively complicated calculations, many interpretation problems (choice of the guess value of ground volumetric heat capacity, identification of fluctuations in the recorded data, inability to provide a measure of the precision of the estimates obtained) that cannot be solved otherwise. The new procedure is based on a geostatistical drift analysis of temperature records, which leads to a precise estimate of the equivalent ground thermal conductivity (λg), confirmed by the calculation of its estimation variance. Afterwards, based on λg, a monovariate regression on the original data allows for the identification of the theoretical relationship between ground volumetric heat capacity (cg) and borehole thermal resistance (Rb). By assuming a monovariate Probability Distribution Function (PDF) for each variable, the joint PDF conditional on the cg-Rb relationship is found; finally, the conditional expectation allows for the identification of an optimal pair of estimated cg-Rb values.
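The ILS part of the interpretation reduces to a slope fit of fluid temperature against log time; this sketch uses that standard relationship, with Q, L, and the synthetic data invented for the check (the paper's geostatistical drift analysis is not reproduced here).

```python
import numpy as np

def ils_conductivity(t, T_fluid, Q, L):
    """Infinite Line Source estimate of equivalent ground thermal
    conductivity from TRT data: at large times the mean fluid temperature
    follows T(t) = k*ln(t) + b with slope k = Q / (4*pi*lambda_g*L),
    so lambda_g = Q / (4*pi*L*k)."""
    k, _ = np.polyfit(np.log(t), T_fluid, 1)
    return Q / (4.0 * np.pi * L * k)

# synthetic check: data generated with lambda_g = 2.0 W/(m K)
t = np.linspace(3600.0, 48 * 3600.0, 100)          # seconds
Q, L = 5000.0, 100.0                               # heat rate (W), borehole length (m)
T = Q / (4.0 * np.pi * 2.0 * L) * np.log(t) + 12.0
lam = ils_conductivity(t, T, Q, L)
```

The DCE method replaces this single least-squares slope with a geostatistical drift estimate, which is what supplies the estimation variance on λg.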
Geostatistical analysis of soil moisture distribution in a part of Solani River catchment
NASA Astrophysics Data System (ADS)
Kumar, Kamal; Arora, M. K.; Hariprasad, K. S.
2016-03-01
The aim of this paper is to estimate soil moisture at the spatial level by applying geostatistical techniques to point observations of soil moisture in parts of the Solani River catchment in Haridwar district of India. Undisturbed soil samples were collected at 69 locations with a soil core sampler at a depth of 0-10 cm from the soil surface. Of these, discrete soil moisture observations at 49 locations were used to generate a spatial soil moisture distribution map of the region. Two geostatistical techniques, namely moving average and kriging, were adopted. The root mean square error (RMSE) between observed and estimated soil moisture at the remaining 20 locations was determined to assess the accuracy of the estimated soil moisture. Both techniques resulted in low RMSE at small limiting distance, which increased with the increase in the limiting distance. The RMSE varied from 7.42 to 9.77 for the moving average method and from 7.33 to 9.99 for kriging, indicating similar performance of the two techniques.
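The distance-limited moving-average estimator and the holdout RMSE comparison can be sketched as follows; the coordinates and values are invented, and the fallback to the global mean when no neighbour qualifies is an assumption of this sketch.

```python
import numpy as np

def moving_average(train_xy, train_z, query_xy, limit):
    """Moving-average interpolator: the estimate at each query point is the
    mean of all observations within `limit`; falls back to the global mean
    when no observation qualifies."""
    est = np.empty(len(query_xy))
    for i, q in enumerate(query_xy):
        d = np.linalg.norm(train_xy - q, axis=1)
        near = d <= limit
        est[i] = train_z[near].mean() if near.any() else train_z.mean()
    return est

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

train_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
train_z = np.array([0.0, 2.0, 4.0])
near_only = moving_average(train_xy, train_z, np.array([[0.0, 0.0]]), 0.5)
all_three = moving_average(train_xy, train_z, np.array([[0.0, 0.0]]), 2.0)
```

Evaluating `rmse` on a held-out subset (the 20 validation locations in the study) for a range of limiting distances reproduces the comparison reported above.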
Redesigning rain gauges network in Johor using geostatistics and simulated annealing
Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif
2015-02-03
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned to new locations using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used a combination of a geostatistical method (variance reduction) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum estimated variance, demonstrating that the combination of the variance-reduction geostatistical method and simulated annealing is successful in developing a new optimal rain gauge network.
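The annealing-over-gauge-locations loop can be sketched with a toy objective: mean squared distance to the nearest gauge stands in for the kriging-variance objective of a true variance-reduction design, which would require a fitted variogram. Candidate locations, cooling schedule, and iteration count are assumptions.

```python
import numpy as np

def anneal_network(candidates, n_gauges, domain_pts, iters=1500, seed=1):
    """Simulated annealing for gauge placement with a toy coverage objective
    (mean squared distance from domain points to the nearest gauge)."""
    rng = np.random.default_rng(seed)

    def cost(idx):
        d = np.linalg.norm(domain_pts[:, None, :] - candidates[idx][None, :, :],
                           axis=-1)
        return float((d.min(axis=1) ** 2).mean())

    idx = rng.choice(len(candidates), n_gauges, replace=False)
    c = cost(idx)
    best, best_c, T = idx.copy(), c, 1.0
    for _ in range(iters):
        new = idx.copy()
        pool = np.setdiff1d(np.arange(len(candidates)), new)
        new[rng.integers(n_gauges)] = rng.choice(pool)      # move one gauge
        cn = cost(new)
        if cn < c or rng.random() < np.exp((c - cn) / T):   # Metropolis acceptance
            idx, c = new, cn
            if c < best_c:
                best, best_c = idx.copy(), c
        T *= 0.995                                          # geometric cooling
    return best, best_c

grid = np.array([(i / 9.0, j / 9.0) for i in range(10) for j in range(10)])
best_idx, best_cost = anneal_network(grid, 5, grid)
```

Swapping the toy objective for the average kriging variance over the domain turns this sketch into the variance-reduction design the study describes.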
NASA Astrophysics Data System (ADS)
de Carvalho Alves, Marcelo; de Carvalho, Luiz Gonsaga; Vianello, Rubens Leite; Sediyama, Gilberto C.; de Oliveira, Marcelo Silva; de Sá Junior, Arionaldo
2013-07-01
The objective of the present study was to use the simple cokriging methodology to characterize the spatial variability of Penman-Monteith reference evapotranspiration and Thornthwaite potential evapotranspiration based on Moderate Resolution Imaging Spectroradiometer (MODIS) global evapotranspiration products and high-resolution surfaces of WorldClim temperature and precipitation data. The climatic element data came from 39 National Institute of Meteorology climatic stations located in Minas Gerais state, Brazil and surrounding states. The use of geostatistics and the simple cokriging technique enabled characterization of the spatial variability of evapotranspiration, providing uncertainty information on the spatial prediction pattern. Evapotranspiration and precipitation surfaces were implemented for the climatic classification of Minas Gerais. Multivariate geostatistics improved the spatial information on evapotranspiration. The regions in the south of Minas Gerais derived from the moisture index estimated with the MODIS evapotranspiration (2000-2010) presented divergence in humid conditions when compared to the moisture index derived from the simple kriged and cokriged evapotranspiration (1961-1990), indicating climate change in this region. There was a stronger pattern of cross covariance between evapotranspiration and precipitation than with temperature, indicating that trends in precipitation could be one of the main external drivers of evapotranspiration in Minas Gerais state, Brazil.
The Direct Sampling method to perform multiple-point geostatistical simulations
NASA Astrophysics Data System (ADS)
Mariethoz, Gregoire; Renard, Philippe; Straubhaar, Julien
2010-11-01
Multiple-point geostatistics is a general statistical framework to model spatial fields displaying a wide range of complex structures. In particular, it allows controlling connectivity patterns that are of critical importance for groundwater flow and transport problems. This approach involves considering data events (spatial arrangements of values) derived from a training image (TI). All data events found in the TI are usually stored in a database, which is used to retrieve conditional probabilities for the simulation. Instead, we propose to sample the training image directly for a given data event, making the database unnecessary. Our method is statistically equivalent to previous implementations, but in addition it allows extending the application of multiple-point geostatistics to continuous variables and to multivariate problems. The method can be used for the simulation of geological heterogeneity, with or without accounting for indirect observations such as geophysics. We show its applicability in the presence of complex features, nonlinear relationships between variables, and various cases of nonstationarity. Computationally, it is fast, easy to parallelize, parsimonious in memory needs, and straightforward to implement.
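A minimal sketch of the Direct Sampling idea for a categorical variable: simulate each node on a random path by scanning random training-image locations until one matches the conditioning data event closely enough. All parameter values, the two-facies training image, and the mismatch-fraction distance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def direct_sampling(ti, out_shape, n_neigh=6, thresh=0.1, max_scan=500, seed=2):
    """Direct Sampling sketch: no pattern database; the training image (ti)
    is scanned anew for every simulated node."""
    rng = np.random.default_rng(seed)
    sim = np.full(out_shape, np.nan)
    H, W = ti.shape
    for p in rng.permutation(out_shape[0] * out_shape[1]):   # random path
        i, j = divmod(p, out_shape[1])
        known = np.argwhere(~np.isnan(sim))
        if len(known):
            order = np.argsort(np.abs(known - [i, j]).sum(axis=1))
            ev = known[order[:n_neigh]]           # closest informed nodes
            off = ev - [i, j]                     # the data event (offsets)
            vals = sim[ev[:, 0], ev[:, 1]]
        else:
            off, vals = np.empty((0, 2), int), np.empty(0)
        best_v, best_d = ti[rng.integers(H), rng.integers(W)], np.inf
        for _ in range(max_scan):
            y, x = rng.integers(H), rng.integers(W)
            yy, xx = y + off[:, 0], x + off[:, 1]
            if not ((yy >= 0) & (yy < H) & (xx >= 0) & (xx < W)).all():
                continue                          # event falls outside the TI
            dist = float(np.mean(ti[yy, xx] != vals)) if len(vals) else 0.0
            if dist < best_d:
                best_v, best_d = ti[y, x], dist
            if best_d <= thresh:                  # accept the first good match
                break
        sim[i, j] = best_v
    return sim

ti = np.zeros((24, 24))
ti[:, 12:] = 1.0                                  # two-facies training image
sim = direct_sampling(ti, (8, 8))
```

Replacing the mismatch fraction with a Euclidean distance extends the same loop to continuous variables, which is the generalization the abstract emphasizes.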
Map on predicted deposition of Cs-137 in Spanish soils from geostatistical analyses.
Caro, A; Legarda, F; Romero, L; Herranz, M; Barrera, M; Valiño, F; Idoeta, R; Olondo, C
2013-01-01
Knowledge of the distribution of (137)Cs deposition over Spanish mainland soils, along with geographical, physical and morphological terrain information, enables us to establish the (137)Cs background content in soil. This could be useful as a tool in a hypothetical situation of an accident involving a radioactive discharge, or in soil erosion studies. A Geographic Information System (GIS) allows the gathering of all the mentioned information. In this work, gamma measurements of (137)Cs in 34 Spanish mainland soils, rainfall data taken from 778 weather stations, soil types, and geographical and physical terrain information were input into a GIS. Geostatistical techniques were applied to interpolate values of (137)Cs activity at unsampled places, obtaining prediction maps of (137)Cs deposition. Geostatistical methods were used to model the spatial continuity of the data; through semivariance and cross-covariance functions the spatial correlation of such data can be studied and described. Ordinary and simple kriging were carried out to map spatial patterns of (137)Cs deposition, and ordinary and simple cokriging were used to improve the prediction map through a second, related variable: the rainfall. To choose the best prediction map of (137)Cs deposition, the spatial dependence of the variable, the correlation coefficient and the prediction errors were evaluated for the different models previously mentioned. The best (137)Cs deposition map was obtained when applying the cokriging techniques.
Rhea, L.; Person, M.; Marsily, G. de; Ledoux, E.; Galli, A.
1994-11-01
This paper critically evaluates the utility of two different geostatistical methods in tracing long-distance oil migration through sedimentary basins. Geostatistical models of petroleum migration based on kriging and the conditional simulation method are assessed by comparing them to {open_quotes}known{close_quotes} oil migration rates and directions through a numerical carrier bed. In this example, the numerical carrier bed, which serves as {open_quotes}ground truth{close_quotes} in the study, incorporates a synthetic permeability field generated using the method of turning bands. Different representations of lateral permeability heterogeneity of the carrier bed are incorporated into a quasi-three-dimensional model of secondary oil migration. The geometric configuration of the carrier bed is intended to represent migration conditions within the center of a saucer-shaped intracratonic sag basin. In all of the numerical experiments, oil is sourced in the lowest 10% of a saucer-shaped carrier bed and migrates 10-14 km outward in a radial fashion by buoyancy. The effects of vertical permeability variations on secondary oil migration were not considered in the study.
NASA Astrophysics Data System (ADS)
Ostrowski, J.; Shlomi, S.; Michalak, A.
2007-12-01
The process of estimating the release history of a contaminant in groundwater relies on coupling a limited number of concentration measurements with a groundwater flow and transport model in an inverse modeling framework. The information provided by available measurements is generally not sufficient to fully characterize the unknown release history; therefore, an accurate assessment of the estimation uncertainty is required. The modeler's level of confidence in the transport parameters, expressed as pdfs, can be incorporated into the inverse model to improve the accuracy of the release estimates. In this work, geostatistical inverse modeling is used in conjunction with Monte Carlo sampling of transport parameters to estimate groundwater contaminant release histories. Concentration non-negativity is enforced using a Gibbs sampling algorithm based on a truncated normal distribution. The method is applied to two one-dimensional test cases: a hypothetical dataset commonly used in validating contaminant source identification methods, and data collected from a tetrachloroethylene and trichloroethylene plume at the Dover Air Force Base in Delaware. The estimated release histories and associated uncertainties are compared to results from a geostatistical inverse model where uncertainty in transport parameters is ignored. Results show that the a posteriori uncertainty associated with the model that accounts for parameter uncertainty is higher, but that this model provides a more realistic representation of the release history based on available data. This modified inverse modeling technique has many applications, including assignment of liability in groundwater contamination cases, characterization of groundwater contamination, and model calibration.
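The non-negativity-enforcing Gibbs step can be sketched as follows. Each full conditional of a multivariate normal is univariate normal; here it is truncated to s >= 0 by simple rejection, and the toy prior and mean vector are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gibbs_truncated_mvn(mu, cov, n_sweeps=300, seed=0):
    """Gibbs sampler for a multivariate normal truncated to s >= 0 (the
    concentration non-negativity constraint). Rejection sampling of the
    univariate conditionals is adequate when their means are not far
    below zero."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, float)
    d = len(mu)
    P = np.linalg.inv(np.asarray(cov, float))     # precision matrix
    s = np.clip(mu, 0.0, None)                    # feasible starting state
    for _ in range(n_sweeps):
        for i in range(d):
            others = np.arange(d) != i
            v = 1.0 / P[i, i]                     # conditional variance
            m = mu[i] - v * (P[i, others] @ (s[others] - mu[others]))
            x = rng.normal(m, np.sqrt(v))
            while x < 0.0:                        # reject draws below zero
                x = rng.normal(m, np.sqrt(v))
            s[i] = x
    return s

# hypothetical 3-point release history with an exponential prior covariance
mu = np.array([0.8, 0.1, -0.3])
idx = np.arange(3)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
sample = gibbs_truncated_mvn(mu, cov)
```

Collecting many sweeps (after burn-in) yields posterior samples of the release history that honor the constraint, from which the uncertainty bounds discussed above can be computed.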
Analysis of Large Scale Spatial Variability of Soil Moisture Using a Geostatistical Method
Lakhankar, Tarendra; Jones, Andrew S.; Combs, Cynthia L.; Sengupta, Manajit; Vonder Haar, Thomas H.; Khanbilvardi, Reza
2010-01-01
Spatial and temporal soil moisture dynamics are critically needed to improve the parameterization of hydrological and meteorological modeling processes. This study evaluates the statistical spatial structure of large-scale observed and simulated estimates of soil moisture under pre- and post-precipitation event conditions. This large-scale variability is crucial in the calibration and validation of large-scale satellite-based data assimilation systems. Spatial analysis using geostatistical approaches was used to validate soil moisture modeled by the Agriculture Meteorological (AGRMET) model against in situ measurements of soil moisture from a state-wide environmental monitoring network (Oklahoma Mesonet). The results show that AGRMET data produce larger spatial decorrelation compared to in situ based soil moisture data. Precipitation storms drive the soil moisture spatial structures at large scale, with smaller decorrelation lengths found after precipitation. This study also evaluates the geostatistical approach for mitigating quality control issues within the in situ soil moisture network by estimating soil moisture at unsampled stations. PMID:22315576
NASA Astrophysics Data System (ADS)
Pesquer, Lluís; Cortés, Ana; Serral, Ivette; Pons, Xavier
2011-11-01
The main goal of this study is to characterize the effects of lossy image compression procedures on the spatial patterns of remotely sensed images, as well as to test the performance of job distribution tools specifically designed for obtaining geostatistical parameters (variogram) in a High Performance Computing (HPC) environment. To this purpose, radiometrically and geometrically corrected Landsat-5 TM images from April, July, August and September 2006 were compressed using two different methods: Band-Independent Fixed-Rate (BIFR) and three-dimensional Discrete Wavelet Transform (3d-DWT) applied to the JPEG 2000 standard. For both methods, a wide range of compression ratios (2.5:1, 5:1, 10:1, 50:1, 100:1, 200:1 and 400:1, from soft to hard compression) were compared. Variogram analyses conclude that all compression ratios maintain the variogram shapes and that the higher ratios (more than 100:1) reduce the variance of the sill parameter by about 5%. Moreover, the parallel solution in a distributed environment demonstrates that HPC offers a suitable scientific test bed for time-demanding execution processes, as in geostatistical analyses of remote sensing images.
NASA Astrophysics Data System (ADS)
Mariethoz, Gregoire; Lefebvre, Sylvain
2014-05-01
Multiple-Point Simulation (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similarly to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. Both domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historic development and share similar concepts. These disciplines have, however, remained separate, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms offer drastically higher computational efficiency, better pattern reproduction, and finer user control. At the same time, MPS has developed ways to condition models to spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.
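As an illustration of the family of algorithms discussed above, here is a minimal unconditional direct-sampling simulation; the window size, scan limit, and threshold are arbitrary illustrative choices, not those of any published implementation:

```python
import numpy as np

def direct_sampling(ti, out_shape, n_neighbors=4, threshold=0.1, max_scan=500, seed=0):
    """Unconditional direct-sampling MPS sketch: visit output cells in random
    order; for each, scan the training image (TI) for a location whose
    neighborhood best matches the already-simulated neighbors, and copy its value."""
    rng = np.random.default_rng(seed)
    ti = np.asarray(ti, dtype=float)
    out = np.full(out_shape, np.nan)
    cells = [(i, j) for i in range(out_shape[0]) for j in range(out_shape[1])]
    rng.shuffle(cells)
    for i, j in cells:
        # Already-informed neighbors within a 5x5 window: (offset, value) triples.
        informed = [(di, dj, out[i + di, j + dj])
                    for di in range(-2, 3) for dj in range(-2, 3)
                    if (di, dj) != (0, 0)
                    and 0 <= i + di < out_shape[0] and 0 <= j + dj < out_shape[1]
                    and not np.isnan(out[i + di, j + dj])][:n_neighbors]
        best_val = ti[rng.integers(ti.shape[0]), rng.integers(ti.shape[1])]
        best_dist = np.inf
        for _ in range(max_scan):
            x = rng.integers(2, ti.shape[0] - 2)  # stay 2 pixels inside the TI
            y = rng.integers(2, ti.shape[1] - 2)
            d = np.mean([(ti[x + di, y + dj] - v) ** 2
                         for di, dj, v in informed]) if informed else 0.0
            if d < best_dist:
                best_val, best_dist = ti[x, y], d
            if best_dist <= threshold:  # good enough match: stop scanning
                break
        out[i, j] = best_val
    return out

# A binary training image; the simulation copies its 0/1 values pattern by pattern.
rng = np.random.default_rng(1)
ti = (rng.random((60, 60)) > 0.5).astype(float)
sim = direct_sampling(ti, (20, 20))
```

Conditioning to data, anisotropy, and multi-scale neighborhoods are the refinements that separate this sketch from production MPS codes.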
NASA Astrophysics Data System (ADS)
Michalak, Anna M.; Kitanidis, Peter K.
2004-08-01
As the incidence of groundwater contamination continues to grow, a number of inverse modeling methods have been developed to address forensic groundwater problems. In this work the geostatistical approach to inverse modeling is extended to allow for the recovery of the antecedent distribution of a contaminant at a given point back in time, which is critical to the assessment of historical exposure to contamination. Such problems are typically strongly underdetermined, with a large number of points at which the distribution is to be estimated. To address this challenge, the computational efficiency of the new method is increased through the application of the adjoint state method. In addition, the adjoint problem is presented in a format that allows for the reuse of existing groundwater flow and transport codes as modules in the inverse modeling algorithm. As demonstrated in the presented applications, the geostatistical approach combined with the adjoint state method allows a historical multidimensional contaminant distribution to be recovered even in heterogeneous media, where a numerical solution is required for the forward problem.
Verifying the high-order consistency of training images with data for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Pérez, Cristian; Mariethoz, Gregoire; Ortiz, Julián M.
2014-09-01
Parameter inference is a key aspect of spatial modeling. A major appeal of variograms is that they allow the spatial structure to be inferred solely from conditioning data. This is very convenient when the modeler does not have a ready-made geological interpretation. To date, such an easy and automated interpretation is not available in the context of most multiple-point geostatistics applications. Because training images are generally conceptual models, their preparation is often based on subjective criteria of the modeling expert. As a consequence, selection of an appropriate training image is one of the main issues one must face when using multiple-point simulation. This paper presents a geostatistical tool that addresses two separate problems. It allows (1) ranking training images according to their relative compatibility with the data, and (2) obtaining an absolute measure quantifying the consistency between training image and data in terms of spatial structure. For both, two alternative implementations are developed. The first one computes the frequency of each pattern in each training image. This method is statistically sound but computationally demanding. The second implementation obtains similar results at a lesser computational cost using a direct sampling approach. The applicability of the methodologies is successfully evaluated in two synthetic 2D examples and one real 3D mining example at the Escondida Norte deposit.
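The first implementation described above, exhaustive pattern counting, can be sketched as follows; the 2×2 pattern size and the total-variation comparison are illustrative choices:

```python
import numpy as np
from collections import Counter

def pattern_frequencies(img, size=2):
    """Relative frequency of each size x size pixel pattern in a categorical image."""
    img = np.asarray(img)
    counts = Counter()
    for i in range(img.shape[0] - size + 1):
        for j in range(img.shape[1] - size + 1):
            counts[tuple(img[i:i + size, j:j + size].ravel())] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def tv_distance(f1, f2):
    """Total-variation distance between two pattern-frequency dictionaries;
    smaller means the two images are more consistent in spatial structure."""
    keys = set(f1) | set(f2)
    return 0.5 * sum(abs(f1.get(k, 0.0) - f2.get(k, 0.0)) for k in keys)

# A uniform image contains a single 2x2 pattern with frequency 1.
freqs = pattern_frequencies(np.zeros((5, 5), dtype=int))
```

Ranking candidate training images then amounts to sorting them by `tv_distance` between their pattern frequencies and those observed around the conditioning data.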
Geostatistics: a common link between medical geography, mathematical geology, and medical geology
Goovaerts, P.
2015-01-01
Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences to agriculture, soil science, remote sensing, and more recently environmental exposure assessment. In the last few years, these tools have been tailored to the field of medical geography or spatial epidemiology, which is concerned with the study of spatial patterns of disease incidence and mortality and the identification of potential ‘causes’ of disease, such as environmental exposure, diet and unhealthy behaviours, economic or socio-demographic factors. On the other hand, medical geology is an emerging interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. This paper provides an introduction to the field of medical geology with an overview of geostatistical methods available for the analysis of geological and health data. Key concepts are illustrated using the mapping of groundwater arsenic concentration across eleven Michigan counties and the exploration of its relationship to the incidence of prostate cancer at the township level. PMID:25722963
Castaing, C.; Genter, A.; Ouillon, G.
1995-08-01
Datasets of the geometry of fracture systems were analysed at various scales in the western Arabian sedimentary platform by means of geostatistical, multifractal, and anisotropic-wavelet techniques. The investigations covered a wide range of scales, from regional to outcrops in a well-exposed area, and were based on field mapping of fractures, and the interpretation and digitizing of fracture patterns on aerial photographs and satellite images. As a first step, fracture data sets were used to examine the direction, size, spacing and density systematics, and the variability in these quantities with space and scale. Secondly, a multifractal analysis was carried out, which consists of estimating the moments of the spatial distribution of fractures at different resolutions. This global multifractal method was complemented by a local wavelet analysis, using a new anisotropic technique tailored to linear structures. For a map with a given scale of detail, this procedure makes it possible to define integrated fracture patterns and their associated directions at a more regional scale. The main result of this combined approach is that fracturing is not a self-similar process from the centimeter scale up to the one-million-kilometer scale. The spatial distribution of faults appears to be strongly controlled by the thickness of the different rheological layers that constitute the crust. A procedure for upscaling fracture systems in sedimentary reservoirs can be proposed, based on (i) a power law for joint-length distribution, (ii) characteristic joint spacing depending on the critical sedimentary units, and (iii) fractal fault geometry for faults larger than the whole thickness of the sedimentary basin.
NASA Astrophysics Data System (ADS)
Visser, A.; Singleton, M. J.; Moran, J. E.; Fram, M. S.; Kulongoski, J. T.; Esser, B. K.
2014-12-01
Key characteristics of California groundwater systems related to aquifer vulnerability, sustainability, recharge locations and mechanisms, and anthropogenic impact on recharge are revealed in a spatial geostatistical analysis of the data set of tritium, dissolved noble gas and helium isotope analyses collected for the California State Water Resources Control Board's Groundwater Ambient Monitoring and Assessment (GAMA) and California Aquifer Susceptibility (CAS) programs. Over 4,000 tritium and noble gas analyses are available from wells across California. 25% of the analyzed samples contained less than 1 pCi/L of tritium, indicating that recharge occurred before 1950. The correlation length of tritium concentration is 120 km. Nearly 50% of the wells show a significant component of terrigenic helium. Over 50% of these samples show a terrigenic helium isotope ratio (Rter) that is significantly higher than the radiogenic helium isotope ratio (Rrad = 2×10-8). Rter values of more than three times the atmospheric isotope ratio (Ra = 1.384×10-6) are associated with known faults and volcanic provinces in Northern California. In the Central Valley, Rter varies from radiogenic to 2.25 Ra, complicating 3H/3He dating. The Rter was mapped by kriging, showing a correlation length of less than 50 km. The local predicted Rter was used to separate tritiogenic from atmospheric and terrigenic 3He. Regional groundwater recharge areas, indicated by young groundwater ages, are located in the southern Santa Clara Basin, the upper LA basin, the eastern San Joaquin Valley, and along unlined canals carrying Colorado River water. Recharge in California is dominated by agricultural return flows, river recharge and managed aquifer recharge rather than precipitation excess. The combined application of noble gases and other groundwater tracers reveals the impact of engineered groundwater recharge and proves invaluable for the study of complex groundwater systems. This work was performed under the
Quality measures for geostatistical prediction of log-normal soil properties.
NASA Astrophysics Data System (ADS)
Lark, R. M.
2012-04-01
A signature of non-linear processes in the soil is the non-normal distribution of soil properties. A common non-normal distribution is the log-normal, in which the variable Z can be transformed to a variable with a normal distribution by Y = ln(Z). Log-normal variables are common in soil geochemistry and hydrology. It is standard practice in geostatistics to use the log-transformation for such variables before spatial modelling and prediction, and there are procedures to back-transform predictions of Y to the original scale of measurement Z. This is important because values on the original scale are commonly required either for scientific purposes or for practical applications such as the assessment of potential contaminant concentrations in soil. One of the strengths of geostatistics is that geostatistical prediction returns a prediction error variance. Furthermore, this variance can be computed before a survey is undertaken, for a range of possible different sampling networks, since it depends only on the disposition of sample sites, and the variogram model of spatial dependence. This allows the most efficient network to be selected: one which will provide estimates of sufficient precision (where the prediction error variances are within acceptable bounds) without over-sampling. In log-normal kriging the prediction error variance depends not only on the variogram and the sampling array, but also on the conditional mean value of the variable, which is not known until after sampling. This means that the usual pre-survey quality measures which can be computed to guide the planning of geostatistical surveys are not available for log-normal variables. Given that many critical variables, such as contaminant concentrations, are often log-normally distributed, this is a serious gap in the capability of geostatistics to facilitate rational sampling design for environmental management and monitoring. In this paper I propose and demonstrate some quality measures that can
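The dependence of the Z-scale prediction variance on the conditional mean, the crux of the problem described above, follows from the lognormal moment formulas; this sketch assumes a simple-kriging setting where Y given the data is Gaussian:

```python
import numpy as np

def lognormal_backtransform(y_pred, y_var):
    """Back-transform a kriging prediction of Y = ln(Z) to the Z scale.

    If Y | data ~ N(y_pred, y_var), the conditional mean and variance of
    Z = exp(Y) follow the standard lognormal moment formulas. Note that the
    Z-scale variance depends on y_pred itself -- the point made in the
    abstract: unlike the Y-scale kriging variance, it cannot be computed
    before sampling."""
    z_mean = np.exp(y_pred + 0.5 * y_var)
    z_var = (np.exp(y_var) - 1.0) * np.exp(2.0 * y_pred + y_var)
    return z_mean, z_var

z_mean, z_var = lognormal_backtransform(0.0, 1.0)
```

Because `z_var` scales with exp(2·y_pred), the Z-scale error variance cannot be tabulated from the variogram and sampling design alone, which is why pre-survey quality measures need special construction for log-normal variables.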
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
NASA Astrophysics Data System (ADS)
Razack, Moumtaz; Lasm, Théophile
2006-06-01
This work is aimed at estimating the transmissivity of highly fractured hard rock aquifers using a geostatistical approach. The studied aquifer is formed by the crystalline and metamorphic rocks of the Western Ivory Coast (West Africa), in the Man Danané area. The study area covers 7290 km² (90 km × 81 km). The fracturing network is dense and well connected, without a marked fracture direction. A data base comprising 118 transmissivity (T) values and 154 specific capacity (Q/s) values was compiled. A significant empirical relationship between T and Q/s was found, which enabled the transmissivity data to be supplemented. The variographic analysis of the two variables showed that the variograms of T and Q/s (which are lognormal variables) are much more structured than those of log T and log Q/s (which are normal variables). This result is contrary to what was previously published and raises the question whether normality is necessary in geostatistical analysis. Several input and geostatistical estimations of the transmissivity were tested using the cross validation procedure: measured transmissivity data; supplemented transmissivity data; kriging; cokriging. The cross validation results showed that the best estimation is provided using the kriging procedure, the transmissivity field represented by the whole data sample (measured + estimated using specific capacity) and the structural model evaluated solely on the measured transmissivity. The geostatistical approach ultimately provided a reliable estimation of the transmissivity of the Man Danané aquifer, which will be used as an input in forthcoming modelling.
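The step of supplementing transmissivity with specific-capacity data typically relies on a log-log regression of the kind sketched below; all numbers here are synthetic stand-ins, not the Man Danané values:

```python
import numpy as np

# Hypothetical calibration set: wells with both transmissivity T and specific
# capacity Q/s. The assumed power law T = 1.2 (Q/s)^0.95 plays the role of the
# empirical relationship found in the study.
rng = np.random.default_rng(1)
qs = 10 ** rng.uniform(-5, -2, 100)                    # specific capacity values
t = 1.2 * qs ** 0.95 * 10 ** rng.normal(0, 0.1, 100)   # noisy transmissivity

# Fit log10(T) = log10(a) + b * log10(Q/s); then estimate T at wells where
# only specific capacity was measured.
b, log_a = np.polyfit(np.log10(qs), np.log10(t), 1)
estimate_t = lambda qs_new: 10 ** log_a * qs_new ** b
```

The fitted `estimate_t` is what lets the 154 specific-capacity wells augment the 118 measured transmissivities before variogram modelling and kriging.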
NASA Astrophysics Data System (ADS)
Robidoux, P.; Roberge, J.; Urbina Oviedo, C. A.
2011-12-01
The origin of magmatism and the role of the subducted Cocos Plate in the Chichinautzin volcanic field (CVF), Mexico, is still a subject of debate. It has been established that mafic magmas of calc-alkaline type (subduction-related) and alkaline type (OIB-like) are produced in the CVF, and the two groups cannot be related by simple fractional crystallization. Therefore, many geochemical studies have been done and many models have been proposed. The main goal of the work presented here is to provide a new tool for the visualization and interpretation of geochemical data using geostatistics and geospatial analysis techniques. It contains a complete geodatabase built from referenced samples over the 2500 km² area of the CVF and its neighbouring stratovolcanoes (Popocatepetl, Iztaccihuatl and Nevado de Toluca). From this database, maps of different geochemical markers were produced to visualize the geochemical signatures geographically, to test their statistical distribution with a cartographic technique, and to highlight any spatial correlations. The distribution and regionalization of the geochemical signatures can be viewed in two-dimensional space using specific spatial analysis tools from a Geographic Information System (GIS). The model of spatial distribution is tested with Linear Decrease (LD) and Inverse Distance Weighting (IDW) interpolation techniques because they best represent the geostatistical characteristics of the geodatabase. We found that the ratios Ba/Nb, Nb/Ta and Th/Nb show a first-order tendency, that is, visible spatial variation over a large-scale area. Monogenetic volcanoes in the center of the CVF have distinct values compared to those of the Popocatepetl-Iztaccihuatl polygenetic complex, which are spatially well defined. Inside the Valley of Mexico, a large number of monogenetic cones in the eastern portion of the CVF have ratios similar to those of the Iztaccihuatl and Popocatepetl complex. Other ratios, such as alkalis vs SiO2, V/Ti, La/Yb and Zr/Y, show different spatial tendencies. In that case, second
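The IDW interpolation mentioned above has a compact closed form; this sketch omits the search-radius and maximum-neighbor options that a GIS implementation would expose:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighting: each query point gets a weighted average of
    known values, with weights proportional to 1 / distance**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(xy_query))
    for k, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                  # exact hit: return the known value
            out[k] = values[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            out[k] = np.sum(w * values) / np.sum(w)
    return out

# Two samples; a midpoint query averages them, a coincident query is exact.
pts = np.array([[0.0, 0.0], [2.0, 0.0]])
vals = np.array([0.0, 10.0])
est = idw(pts, vals, np.array([[1.0, 0.0], [0.0, 0.0]]))
```

Applied to a geochemical ratio such as Ba/Nb at sample sites, this produces the kind of continuous marker surface described in the abstract.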
Geostatistical Evaluation of Spring Water Quality in an Urbanizing Carbonate Aquifer
NASA Astrophysics Data System (ADS)
McGinty, A.; Welty, C.
2003-04-01
As part of an investigation of the impacts of urbanization on the hydrology and ecology of Valley Creek watershed near Philadelphia, Pennsylvania, we have analyzed the chemical composition of 110 springs to assess the relative influence of geology and anthropogenic activities on water quality. The 60 km² watershed is underlain by productive fractured rock aquifers composed of Cambrian and Ordovician carbonate rocks in the central valley and Cambrian crystalline and siliciclastic rocks (quartzite and phyllite) in the north and south hills that border the valley. All tributaries of the surface water system originate in the crystalline and siliciclastic hills. The watershed is covered by 17% impervious area and contains 6 major hazardous waste sites, one active quarrying operation and one golf course; 25% of the area utilizes septic systems for sewage disposal. We identified 172 springs, 110 of which had measurable flow rates ranging from 0.002 to 5 l/s. The mapped surficial geology appears as an anisotropic pattern, with long bands of rock formations paralleling the geographic orientation of the valley. Mapped development appears as a more isotropic pattern, characterized by isolated patches of land use that are not coincident with the evident geologic pattern. Superimposed upon these characteristics is a dense array of depressions and shallow sinkholes in the carbonate rocks, and a system of major faults at several formation contacts. We used indicator geostatistics to quantitatively characterize the spatial extent of the major geologic formations and patterns of land use. Maximum correlation scales for the rock types corresponded with strike direction and ranged from 1000 to 3000 m. Anisotropy ratios ranged from 2 to 4. Land-use correlation scales were generally smaller (200 to 500 m) with anisotropy ratios of around 1.2, i.e., nearly isotropic as predicted. Geostatistical analysis of spring water quality parameters related to geology (pH, specific conductance
Introduction to this Special Issue on Geostatistics and Scaling of Remote Sensing
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.
1999-01-01
The germination of this special PE&RS issue began at the Royal Geographical Society (with the Institute of British Geographers) (RGS-IBG) annual meeting in January, 1997 held at the University of Exeter in Exeter, England. The cold and snow of an England winter were greatly tempered by the friendly and cordial discussions that ensued at the meeting on possible ways to foster both dialog and research across "the Big Pond" between geographers in the US and the UK on the use of geostatistics and geospatial techniques for remote sensing of land surface processes. It was decided that one way to stimulate and enhance cooperation on the application of geostatistics and geospatial methods in remote sensing was to hold parallel sessions on these topics at appropriate meeting venues in 1998 in both the US and the UK. Selected papers given at these sessions would be published as a special issue of PE&RS on the US side, and as a special issue of Computers and Geosciences (C&G) on the UK side, to highlight the commonality in research on geostatistics and geospatial methods in remote sensing and spatial data analysis on both sides of the Atlantic Ocean. As a consequence, a session on "Geostatistics and Geospatial Techniques for Remote Sensing of Land Surface Processes" was held at the Association of American Geographers (AAG) annual meeting in Boston, Massachusetts in March, 1998, sponsored by the AAG's Remote Sensing Specialty Group (RSSG). A similar session was held at the RGS-IBG annual meeting in Guildford, Surrey, England in January 1998, organized by the Modeling and Advanced Techniques Special Interest Group (MAT SIG) of the Remote Sensing Society (RSS). The six papers that, in part, comprise this issue of PE&RS are the US complement to such a dual journal publication effort. Both of us are co-editors of each of the journal special issues, with the lead editor of each journal being from their respective side of the Atlantic where the journals are published. The special
Hevesi, Joseph A.; Istok, Jonathan D.; Flint, Alan L.
1992-01-01
Values of average annual precipitation (AAP) are desired for hydrologic studies within a watershed containing Yucca Mountain, Nevada, a potential site for a high-level nuclear-waste repository. Reliable values of AAP are not yet available for most areas within this watershed because of a sparsity of precipitation measurements and the need to obtain measurements over a sufficient length of time. To estimate AAP over the entire watershed, historical precipitation data and station elevations were obtained from a network of 62 stations in southern Nevada and southeastern California. Multivariate geostatistics (cokriging) was selected as an estimation method because of a significant (p = 0.05) correlation of r = 0.75 between the natural log of AAP and station elevation. A sample direct variogram for the transformed variable, TAAP = ln[(AAP) × 1000], was fitted with an isotropic, spherical model defined by a small nugget value of 5000, a range of 190 000 ft, and a sill value equal to the sample variance of 163 151. Elevations for 1531 additional locations were obtained from topographic maps to improve the accuracy of cokriged estimates. A sample direct variogram for elevation was fitted with an isotropic model consisting of a nugget value of 5500 and three nested transition structures: a Gaussian structure with a range of 61 000 ft, a spherical structure with a range of 70 000 ft, and a quasi-stationary, linear structure. The use of an isotropic, stationary model for elevation was considered valid within a sliding-neighborhood radius of 120 000 ft. The problem of fitting a positive-definite, nonlinear model of coregionalization to an inconsistent sample cross variogram for TAAP and elevation was solved by a modified use of the Cauchy-Schwarz inequality. A selected cross-variogram model consisted of two nested structures: a Gaussian structure with a range of 61 000 ft and a spherical structure with a range of 190 000 ft. Cross validation was used for model selection and for
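The fitted spherical model quoted above (nugget 5000, range 190 000 ft, sill 163 151) can be evaluated directly; the function below is the standard spherical form, with gamma(0) = 0 by convention:

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng_a):
    """Spherical semivariogram: rises from the nugget to the sill at range rng_a,
    then stays flat. gamma(0) = 0 by definition."""
    h = np.asarray(h, dtype=float)
    c = sill - nugget                                    # partial sill
    g = np.where(
        h <= rng_a,
        nugget + c * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, g)

# Parameters from the abstract: nugget 5000, range 190,000 ft, sill 163,151.
g = spherical_variogram(np.array([0.0, 95_000.0, 190_000.0, 250_000.0]),
                        nugget=5000.0, sill=163_151.0, rng_a=190_000.0)
```

This is the gamma(h) function that enters the cokriging system alongside the nested Gaussian-plus-spherical model for elevation.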
Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.
1992-01-01
Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances
NASA Astrophysics Data System (ADS)
Aksoy, Ece; Panagos, Panos; Montanarella, Luca
2013-04-01
Accuracy in assessing the distribution of soil organic carbon (SOC) is an important issue because SOC is an important soil component that plays key roles in the functions of both natural ecosystems and agricultural systems. The SOC content varies from place to place and is strongly related to climate variables (temperature and rainfall), terrain features, soil texture, parent material, vegetation, land-use types, and human management (management and degradation) at different spatial scales. Geostatistical techniques allow for the prediction of soil properties using soil information and environmental covariates. In this study, the SOC distribution was predicted using a combination of LUCAS soil samples with local soil data and ten spatio-temporal predictors (slope, aspect, elevation, CTI, CORINE land-cover classification, parent material, texture, WRB soil classification, average temperature and precipitation) with the regression-kriging method at the European scale. A significant correlation between the covariates and the organic carbon dependent variable was found.
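Regression-kriging, the method named above, is a two-step procedure: fit a trend on the covariates, then krige the residuals. A minimal sketch, with a hypothetical exponential residual covariance (range 50) standing in for a fitted variogram:

```python
import numpy as np

def regression_kriging(X, coords, z, X0, coords0, cov=lambda d: np.exp(-d / 50.0)):
    """Minimal regression-kriging sketch: OLS trend + simple kriging of residuals.

    The exponential covariance with range 50 is an illustrative assumption;
    in practice a variogram is fitted to the regression residuals."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)     # step 1: trend coefficients
    resid = z - X @ beta
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    K = cov(D) + 1e-9 * np.eye(len(z))               # jitter for numerical stability
    d0 = np.linalg.norm(coords - coords0, axis=1)
    weights = np.linalg.solve(K, cov(d0))            # step 2: kriging weights
    return X0 @ beta + weights @ resid

# Tiny synthetic example: five sites, trend on an intercept and the x-coordinate.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
X = np.column_stack([np.ones(5), coords[:, 0]])
z = np.array([1.0, 3.0, 2.0, 4.0, 2.5])
pred = regression_kriging(X, coords, z, X[0], coords[0])
```

At a data location the residual kriging is exact, so the prediction honours the observation, which is the behaviour that distinguishes regression-kriging from plain regression.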
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
NASA Astrophysics Data System (ADS)
Irving, James; Holliger, Klaus
2010-11-01
Estimation of the spatial statistics of subsurface velocity heterogeneity from surface-based geophysical reflection survey data is a problem of significant interest in seismic and ground-penetrating radar (GPR) research. A method to effectively address this problem has been recently presented, but our knowledge regarding the resolution of the estimated parameters is still inadequate. Here we examine this issue using an analytical approach that is based on the realistic assumption that the subsurface velocity structure can be characterized as a band-limited scale-invariant medium. Our work importantly confirms recent numerical findings that the inversion of seismic or GPR reflection data for the geostatistical properties of the probed subsurface region is sensitive to the aspect ratio of the velocity heterogeneity and to the decay of its power spectrum, but not to the individual values of the horizontal and vertical correlation lengths.
NASA Astrophysics Data System (ADS)
Ibrahim, Elsy; Adam, Stefanie; De Wever, Aaike; Govaerts, Annelies; Vervoort, Andre; Monbaliu, Jaak
2014-08-01
To investigate the bio-chemical processes of intertidal sediments, variations in sediment properties such as moisture content, mud content, and chlorophyll a content need to be understood. Remote sensing has been an efficient alternative to traditional data collection methods for such properties. Yet, with the availability of various types of useful sensors, choosing a suitable spatial resolution is challenging, especially as each sensor type has its own cost, availability, and data specifications. This paper investigates the losses in spatial information of sediment properties on the Molenplaat, an intertidal flat in the Western Scheldt estuary, when various resolutions are used. This was carried out using a synergy between remote sensing and geostatistics. The results showed that for the Molenplaat, chlorophyll a content can be well represented by low to medium resolutions. Yet, for moisture and mud content, spatial structures would be lost with any decrease in resolution below a 4 m × 4 m pixel size.
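Degrading a map to a coarser pixel size, the operation whose information loss is quantified above, can be emulated by block averaging; the factor-of-4 coarsening below is an arbitrary illustration:

```python
import numpy as np

def coarsen(img, factor):
    """Aggregate an image to a coarser resolution by averaging factor x factor blocks
    (edges are cropped so the dimensions divide evenly)."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Coarsening uncorrelated noise: the mean is preserved but short-range
# variability is averaged out, shrinking the per-pixel variance.
rng = np.random.default_rng(2)
fine = rng.normal(size=(64, 64))
coarse = coarsen(fine, 4)
```

For uncorrelated noise the coarse-pixel variance drops by roughly the block size (a factor of 16 here); comparing variograms of `fine` and `coarse` quantifies exactly the kind of lost spatial structure the study reports for moisture and mud content.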
Yates, M V; Yates, S R; Warrick, A W; Gerba, C P
1986-09-01
Water samples were collected from 71 public drinking-water supply wells in the Tucson, Ariz., basin. Virus decay rates in the water samples were determined with MS-2 coliphage as a model virus. The correlations between the virus decay rates and the sample locations were shown by fitting a spherical model to the experimental semivariogram. Kriging, a geostatistical technique, was used to calculate virus decay rates at unsampled locations by using the known values at nearby wells. Based on the regional characteristics of groundwater flow and the kriged estimates of virus decay rates, a contour map of the area was constructed. The map shows the variation in separation distances that would have to be maintained between wells and sources of contamination to afford similar degrees of protection from viral contamination of the drinking water in wells throughout the basin.
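Ordinary kriging, used above to estimate virus decay rates at unsampled wells, solves a small linear system per prediction point. This sketch uses an exponential semivariogram as a stand-in for the spherical model fitted in the study:

```python
import numpy as np

def ordinary_kriging(coords, z, q, gamma=lambda h: 1.0 - np.exp(-h / 10.0)):
    """Ordinary kriging at a single query point q: solve the OK system
    [Gamma 1; 1' 0] [w; mu] = [gamma(d to q); 1] and return w . z."""
    n = len(z)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(D)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - q, axis=1))
    w = np.linalg.solve(A, b)[:n]        # kriging weights; the constraint makes them sum to 1
    return w @ z

# Four wells at the corners of a square; predict at a well and at the center.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
at_data = ordinary_kriging(coords, z, coords[0])
center = ordinary_kriging(coords, z, np.array([5.0, 5.0]))
```

Predictions honour the data exactly at sampled wells, and contouring the predictions over a grid yields a map like the separation-distance map described in the abstract.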
Han, Xujun; Li, Xin; Rigon, Riccardo; Jin, Rui; Endrizzi, Stefano
2015-01-01
Observations can be used to reduce model uncertainty through data assimilation. When observations cannot cover the whole model domain, owing to limited spatial availability or instrument capability, how should data assimilation be carried out at locations without observations? Two commonly used strategies are first described: one is covariance localization (CL); the other is observation localization (OL). Compared with CL, OL is easier to parallelize and more efficient for large-scale analysis. This paper evaluates OL for soil moisture profile characterization, in which a geostatistical semivariogram was used to fit the spatially correlated characteristics of synthetic L-band microwave brightness temperature measurements. The fitted semivariogram model and the local ensemble transform Kalman filter algorithm are combined to weight and assimilate the observations within a local region surrounding each grid cell of the land surface model to be analyzed. Six scenarios were compared: 1_Obs with one nearest observation assimilated, 5_Obs with no more than five nearest local observations assimilated, and 9_Obs with no more than nine nearest local observations assimilated; scenarios with no more than 16, 25, and 36 local observations were also compared. The results show that assimilating more local observations improves the estimates, with an upper bound of nine observations in this case. This study demonstrates the potential of geostatistical correlation representation in OL to improve data assimilation of catchment-scale soil moisture using synthetic L-band microwave brightness temperature, which cannot cover the study area fully in space due to vegetation effects. PMID:25635771
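A common way to implement observation localization, consistent with the strategy described above, is to inflate the error variance of distant observations by the inverse of a distance taper; the exponential correlation model here is an illustrative stand-in for the fitted semivariogram:

```python
import numpy as np

def local_obs_error(r_var, dists, corr_length):
    """Observation localization: divide each observation's error variance by a
    distance-based correlation taper rho(d). Far observations (small rho) get a
    very large error variance and hence negligible weight in the local analysis."""
    rho = np.exp(-np.asarray(dists, dtype=float) / corr_length)
    return np.asarray(r_var, dtype=float) / np.maximum(rho, 1e-12)

# Hypothetical correlation length 50 km; observations at 0, 25 and 100 km
# from the grid cell being analyzed.
r_loc = local_obs_error(1.0, [0.0, 25.0, 100.0], 50.0)
```

Feeding these inflated variances into a local ensemble transform Kalman filter update is what makes nearby observations dominate the analysis, while truncating the list at the k nearest gives the 1_Obs/5_Obs/9_Obs scenarios compared in the paper.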
Boysen, Courtney; Davis, Elizabeth G; Beard, Laurie A; Lubbers, Brian V; Raghavan, Ram K
2015-01-01
Kansas witnessed an unprecedented outbreak of Corynebacterium pseudotuberculosis infection among horses, a disease commonly referred to as pigeon fever, during fall 2012. Bayesian geostatistical models were developed to identify key environmental and climatic risk factors associated with C. pseudotuberculosis infection in horses. Positive infection status among horses (cases) was determined by positive test results for characteristic abscess formation, positive bacterial culture on purulent material obtained from a lanced abscess (n = 82), or positive serologic evidence of exposure to the organism (≥1:512) (n = 11). Horses negative for these tests (n = 172) (controls) were considered free of infection. Information pertaining to horse demographics and stabled location was obtained through review of medical records and/or contact with horse owners via telephone. Covariate information for environmental and climatic determinants was obtained from the USDA (soil attributes), USGS (land use/land cover), and NASA MODIS and NASA Prediction of Worldwide Renewable Resources (climate). Candidate covariates were screened using univariate regression models, followed by Bayesian geostatistical models with and without covariates. The best performing model indicated a protective effect of higher soil moisture content (OR = 0.53, 95% CrI = 0.25, 0.71) and detrimental effects of higher land surface temperature (≥35°C) (OR = 2.81, 95% CrI = 2.21, 3.85) and habitat fragmentation (OR = 1.31, 95% CrI = 1.27, 2.22) on C. pseudotuberculosis infection status in horses, while age, gender and breed had no effect. The preventative and ecoclimatic significance of these findings is discussed. PMID:26473728
NASA Astrophysics Data System (ADS)
Goovaerts, P.; Avruskin, G.; Meliker, J.; Slotnick, M.; Jacquez, G.; Nriagu, J.
2003-12-01
Assessment of the health risks associated with exposure to elevated levels of arsenic in drinking water has become the subject of considerable interest and some controversy in both the regulatory and public health communities. The objective of this research is to explore the factors that have contributed to the observed geographic co-clustering of bladder cancer mortality and arsenic concentrations in drinking water in Michigan. A cornerstone is the building of a probabilistic space-time model of arsenic concentrations, accounting for information collected at private residential wells and the hydrogeochemistry of the area. Because of the small changes in concentration observed over time, the study has focused on the spatial variability of arsenic, which can be considerable over very short distances. Various geostatistical techniques, based on either lognormal or indicator transforms of the data to accommodate the highly skewed distribution, have been compared using a cross-validation procedure. The most promising approach involves a soft indicator coding of arsenic measurements, which allows one to account for data below the detection limit and for the magnitude of measurement errors. Prior probabilities of exceeding various arsenic thresholds are also derived from secondary information, such as type of bedrock and surficial material and well casing depth, using logistic regression. Both well and secondary data are combined using kriging, leading to a non-parametric assessment of the uncertainty attached to the arsenic concentration at each node of a 500 m grid. This geostatistical model can be used to map the expected arsenic concentration, the probability that it exceeds any given threshold, or the variance of the prediction, indicating where supplementary information should be collected. The accuracy and precision of these local probability distributions are assessed using cross-validation.
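The soft indicator coding described above can be sketched as follows; the thresholds, detection limit, and the 0.5 soft prior are illustrative assumptions, not the study's values:

```python
import numpy as np

THRESHOLDS = np.array([1.0, 5.0, 10.0, 20.0, 50.0])  # ug/L, illustrative
DETECTION_LIMIT = 2.0                                # assumed reporting limit

def indicator_code(value=None, censored=False):
    """Code one arsenic measurement as indicators I(z <= t) for each
    threshold t. A non-detect is certainly below any threshold at or above
    the detection limit (coded 1); thresholds below the limit stay
    unresolved and receive a soft prior probability (0.5 here)."""
    if not censored:
        return (value <= THRESHOLDS).astype(float)
    return np.where(THRESHOLDS >= DETECTION_LIMIT, 1.0, 0.5)

hard = indicator_code(15.0)              # a measured value of 15 ug/L
soft = indicator_code(censored=True)     # reported "< detection limit"
```

The soft codes can then enter indicator kriging just like hard ones, which is how data below the detection limit still inform the exceedance probabilities.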
NASA Astrophysics Data System (ADS)
Tanaka, Masahiro; Hachiya, Shogo; Ishii, Tomoya; Ning, Sheyang; Tsurumi, Kota; Takeuchi, Ken
2016-04-01
A 0.6-1.0 V, 25.9 mm2 boost converter is proposed to generate resistive random access memory (ReRAM) write (set/reset) voltage for three-dimensional (3D) integrated ReRAM and NAND flash hybrid solid-state drives (SSD). The proposed boost converter uses an integrated area-efficient V BUF generation circuit to obtain short ReRAM sector write time, small circuit size, and small energy consumption simultaneously. Specifically, the proposed boost converter reduces ReRAM sector write time by 65% compared with a conventional one-stage boost converter (Conventional 1) which uses a 1.0 V operating voltage. On the other hand, at the same ReRAM sector write time, the proposed boost converter reduces circuit area by 49% and energy consumption by 46% compared with a conventional two-stage boost converter (Conventional 2). In addition, with the proposed boost converter the operating voltage, V DD, can be reduced to 0.6 V. The lowest energy consumption, 159 nJ, is obtained when V DD is 0.7 V.
Zimmerman, D.A.; Gallegos, D.P.
1993-10-01
The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A "Geostatistics Test Problem" is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1.
2014-01-01
Background Although dental care settings provide an exceptional opportunity to reach smokers and provide brief cessation advice and treatment to reduce oral and other tobacco-related health conditions, dental care providers demonstrate limited adherence to evidence-based guidelines for treatment of tobacco use and dependence. Methods/Design Guided by a multi-level conceptual framework that emphasizes changes in provider beliefs and organizational characteristics as drivers of improvement in tobacco treatment delivery, the current protocol will use a cluster-randomized design and multiple data sources (patient exit interviews, provider surveys, site observations, chart audits, and semi-structured provider interviews) to study the process of implementing clinical practice guidelines for treating tobacco dependence in 18 public dental care clinics in New York City. The specific aims of this comparative-effectiveness research trial are to: compare the effectiveness of three promising strategies for implementation of tobacco use treatment guidelines—staff training and current best practices (CBP), CBP + provider performance feedback (PF), and CBP + PF + provider reimbursement for delivery of tobacco cessation treatment (pay-for-performance, or P4P); examine potential theory-driven mechanisms hypothesized to explain the comparative effectiveness of the three implementation strategies; and identify baseline organizational factors that influence the implementation of evidence-based tobacco use treatment practices in dental clinics. The primary outcome is change in providers' tobacco treatment practices, and the secondary outcomes are cost per quit, use of tobacco cessation treatments, quit attempts, and smoking abstinence. Discussion We hypothesize that the value of these promising implementation strategies is additive and that incorporating all three strategies (CBP, PF, and P4P) will be superior to CBP alone and CBP + PF in improving delivery of
Nedelcu, Marius; Verhaeghe, Pierre; Skalli, Mehdi; Champault, Gerard; Barrat, Christophe; Sebbag, Hugues; Reche, Fabian; Passebois, Laurent; Beyrne, Daniel; Gugenheim, Jean; Berdah, Stephane; Bouayed, Amine; Michel Fabre, Jean; Nocca, David
2016-03-01
The use of parietal synthetic prosthetic reinforcement material in potentially contaminated settings is not recommended, as there is a risk that the prosthesis may become infected. Thus, simple parietal herniorrhaphy is the conventional treatment, even though there is a significant risk that the hernia may recur. New biomaterials of animal origin now appear to offer a therapeutic alternative, but their effectiveness has yet to be demonstrated. The purpose of this multicenter prospective randomized single-blind study was to compare the surgical treatment of inguinal hernia or abdominal incisional hernia by simple parietal herniorrhaphy without prosthetic reinforcement (Group A) with herniorrhaphy reinforced by a Tutomesh TUTOGEN biological prosthesis (Group B), in a potentially contaminated setting. We examined early postoperative complications in the first month after the operation, assessed one-year recurrence-free survival, and analyzed the quality of life and pain of the patients (using the SF-12 health status questionnaire and a Visual Analog Pain Scale) at 1, 6, and 12 months, together with an economic impact study. One hundred thirty-four patients were enrolled between January 2009 and October 2010 in 20 French hospitals. The groups were comparable with respect to their enrollment characteristics, their history, and the types of operative indications and procedures carried out. At one month post-op, the rate of infectious complications (n(A) = 11 (18.33%) vs. n(B) = 12 (19.05%), p = 0.919) was not significantly different between the two groups. The assessment of one-year recurrence-free survival revealed that survival was significantly greater in Group B (recurrences, Group A: 10, Group B: 3; p = 0.0475). No difference in the patients' quality of life was demonstrated at 1, 6, or 12 months. However, at the 1-month follow-up, the "perceived health" rating seemed better in the group with Tutomesh (p
NASA Astrophysics Data System (ADS)
Golay, Jean; Kanevski, Mikhaïl
2013-04-01
The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms such as ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. cadmium, mercury, zinc, copper, titanium, chromium, vanadium and nickel, as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables that helped to improve the predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), a workhorse of ANN. MLP models are robust and highly flexible tools which can incorporate, in a nonlinear manner, different kinds of high-dimensional information. In the present research, the input layer was made of either two neurons (spatial coordinates) or three (when depth as auxiliary information could possibly capture an underlying trend), and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal
Ahmed, Zia U; Panaullah, Golam M; DeGloria, Stephen D; Duxbury, John M
2011-12-15
Knowledge of the spatial correlation of soil arsenic (As) concentrations with environmental variables is needed to assess the nature and extent of the risk of As contamination from irrigation water in Bangladesh. We analyzed 263 paired groundwater and paddy soil samples covering highland (HL) and medium highland-1 (MHL-1) land types for geostatistical mapping of soil As and delineation of As-contaminated areas in Tala Upazilla, Satkhira district. We also collected 74 non-rice soil samples to assess the baseline concentration of soil As for this area. The mean soil As concentrations (mg/kg) for the different land types under rice and non-rice crops were: rice-MHL-1 (21.2)>rice-HL (14.1)>non-rice-MHL-1 (11.9)>non-rice-HL (7.2). Multiple regression analyses showed that irrigation water As, Fe, land elevation and years of tubewell operation are the important factors affecting the concentrations of As in HL paddy soils; only years of tubewell operation affected As concentration in the MHL-1 paddy soils. Quantitatively similar increases in soil As above the estimated baseline As concentration were observed for rice soils on HL and MHL-1 after 6-8 years of groundwater irrigation, implying strong retention of As added in irrigation water in both land types. Application of single geostatistical methods with secondary variables, such as regression kriging (RK) and ordinary co-kriging (OCK), gave little improvement in prediction of soil As over ordinary kriging (OK). Comparing single prediction methods, kriging within strata (KWS), the combination of RK for HL and OCK for MHL-1, gave more accurate soil As predictions and showed the lowest misclassification rate for declaring a location "contaminated" with respect to 14.8 mg As/kg, the highest value obtained for the baseline soil As concentration. Prediction of soil As buildup over time indicated that 75% of the soils cropped to rice would contain at least 30 mg As/kg by the year 2020. PMID:22055452
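The regression kriging (RK) workflow compared above separates a covariate-driven trend from a spatially correlated residual. A minimal sketch of the trend-fitting step, with entirely hypothetical covariates, coefficients, and noise (the kriging of residuals is only indicated):

```python
import numpy as np

# Regression kriging sketch: soil As = trend(covariates) + kriged residual.
rng = np.random.default_rng(42)
n = 60
X = rng.uniform(size=(n, 3))      # scaled: irrigation-water As, Fe, elevation
beta_true = np.array([8.0, 3.0, -2.0])
soil_as = 10.0 + X @ beta_true + rng.normal(0.0, 0.5, n)   # mg/kg, synthetic

# Step 1: fit the trend by least squares on the covariates.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, soil_as, rcond=None)
residuals = soil_as - A @ beta

# Step 2 (omitted): fit a semivariogram to `residuals` and krige them;
# the RK prediction at a new site is trend(covariates) + kriged residual.
```

Ordinary co-kriging differs in that the covariate enters the kriging system directly rather than through a fitted trend, which is why the two can perform differently on different land types.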
Warnery, E; Ielsch, G; Lajaunie, C; Cale, E; Wackernagel, H; Debayle, C; Guillevic, J
2015-01-01
Terrestrial gamma dose rates show important spatial variations in France. Previous studies produced maps of arithmetic means of indoor terrestrial gamma dose rates by "département" (French district), but numerous areas could not be characterized due to the lack of data. The aim of our work was to obtain more precise estimates of the spatial variability of indoor terrestrial gamma dose rates in France by using a more recent and complete database together with geostatistics. The study was based on 97,595 measurement results distributed over 17,404 locations covering all of France. Measurements were made by the Institute for Radioprotection and Nuclear Safety (IRSN) using RPL (radio-photoluminescent) dosimeters exposed for several months during 2011 and 2012 in French dentists' surgeries and veterinary clinics. The data used came from dosimeters that were not exposed to anthropogenic sources. After removing the cosmic-ray contribution in order to study only the telluric gamma radiation, it was decided to work with the arithmetic means of the time-series measurements, weighted by the exposure time of the dosimeters, for each location. The values varied between 13 and 349 nSv/h, with an arithmetic mean of 76 nSv/h; the observed statistical distribution of the gamma dose rates is skewed to the right. First, ordinary kriging was performed to predict the gamma dose rate on 1*1 km(2) cells over the whole domain. The second step of the study was to use an auxiliary variable in the estimates. In 2010, the IRSN produced a classification of French geological formations characterizing their uranium potential on the basis of geology and local measurements of rock uranium content; this information is georeferenced in a map at the scale 1:1,000,000. The geological uranium potential (GUP) was classified into five qualitative categories. As telluric gamma rays mostly come from the progenies of the (238)Uranium series present in rocks, this
Geostatistical techniques to assess the influence of soil density on sugarcane productivity
NASA Astrophysics Data System (ADS)
Marques, Karina; Silva, Wellington; Almeida, Ceres; Bezerra, Joel; Almeida, Brivaldo; Siqueira, Glecio
2013-04-01
Spatial variation in some soil properties occurs over small distances even in homogeneous areas within the same soil class, and can influence crop productivity. This variability must be incorporated into the procedures and techniques used in agriculture, and must therefore be known in order to optimize agricultural practices. This study aimed to evaluate the influence of soil density on sugarcane productivity using geostatistical techniques. The area is located in Rio Formoso city, Pernambuco (Brazil), at latitude 08°38'91"S and longitude 35°16'08"W, where the climate is rainy tropical. A total of 243 georeferenced undisturbed soil samples (clods) were collected in a lowland area at three depths (0-20, 20-40 and 40-60 cm) on a 15 x 30 m grid. The total area of 7.5 ha was divided equally into three subareas. Statistical and geostatistical analyses were performed. Soil density was found to increase with depth, and bulk density can be used as an index of relative compaction. Machine weight, track or tire design and soil water content at the time of traffic are some of the factors that determine the amount of soil compaction and the resulting changes in the plant root environment; these factors may explain the highest soil density being found in subarea 1, which was intensively mechanized and presents poor drainage and seasonal flooding. Based on the fitted semivariogram models, soil density showed spatial dependence in subarea 1 at all depths (Gaussian at 0-20 cm and spherical at both 20-40 and 40-60 cm). In contrast, in subarea 2 exponential models were fitted at the 0-20 and 40-60 cm depths, and in subarea 3 a Gaussian model at 0-20 cm. A pure nugget effect was found at the 20-40 cm depth in subareas 2 and 3 and at the 40-60 cm depth in subarea 3. Subarea 1 had the highest soil density and the lowest sugarcane productivity; root development and nutrient uptake are directly influenced by soil density.
Characterization of an unregulated landfill using surface-based geophysics and geostatistics
Woldt, W.E.; Jones, D.D.; Hagemeister, M.E.
1998-11-01
This paper develops a geoelectrical and geostatistical methodology that can be used to screen unregulated landfills for the presence of leachate and to approximate the vertical and spatial extent of waste. The methodology combines a surface electromagnetic (EM) survey with indicator kriging, which allows the use of EM data collected over highly electrically conductive material such as that occurring within landfills. Indicator kriging maps were generated for several vertical sections cut through a landfill to establish the landfill strata. Similarly, horizontal sections were generated to evaluate the areal extent of waste and leachate in the vadose and saturated zones, respectively. The horizontal and vertical maps were combined to estimate the volume of solid waste and liquid within three zones: (1) waste, (2) waste and/or leachate, and (3) potential leachate. The methodology appears to hold promise in providing results that can be used as part of a hazard assessment, or to assist in the placement of monitoring wells at sites requiring additional study. A case study demonstrates the methodology at an unregulated landfill in eastern Nebraska.
Geostatistics as a tool to study mite dispersion in physic nut plantations.
Rosado, J F; Picanço, M C; Sarmento, R A; Pereira, R M; Pedro-Neto, M; Galdino, T V S; de Sousa Saraiva, A; Erasmo, E A L
2015-08-01
Spatial distribution studies in pest management identify the locations where pest attacks on crops are most severe, enabling us to understand and predict the movement of such pests. The mites Polyphagotarsonemus latus and Tetranychus bastosi are the major pests affecting physic nut plantations (Jatropha curcas); studies on their spatial distribution, however, are rather scarce. Therefore, the objective of this study was to measure the spatial distributions of P. latus and T. bastosi in physic nut plantations. Mite densities were monitored over 2 years in two different plantations, and sample locations were georeferenced. The experimental data were analyzed using geostatistical analyses. The total mite density was found to be higher when only one species was present (T. bastosi). When both mite species were found in the same plantation, their peak densities occurred at different times. These mites exhibited uniform spatial distribution when found at extreme densities (low or high), but showed an aggregated distribution at intermediate densities. The mite spatial distribution models were isotropic. Mite colonization commenced at the periphery of the areas under study, whereas the high-density patches extended until they reached 30 m in diameter. This has not been reported for J. curcas plants before. PMID:25895655
Applying Geostatistical Analysis to Crime Data: Car-Related Thefts in the Baltic States.
Kerry, Ruth; Goovaerts, Pierre; Haining, Robert P; Ceccato, Vania
2010-01-01
Geostatistical methods have rarely been applied to area-level offense data. This article demonstrates their potential for improving the interpretation and understanding of crime patterns using previously analyzed data on car-related thefts for Estonia, Latvia, and Lithuania in 2000. The variogram is used to inform about the scales of variation in offense, social, and economic data. Area-to-area and area-to-point Poisson kriging are used to filter the noise caused by the small number problem. The latter is also used to produce continuous maps of the estimated crime risk (expected number of crimes per 10,000 inhabitants), thereby reducing the visual bias of large spatial units. In seeking to detect the most likely crime clusters, the uncertainty attached to crime risk estimates is handled through a local cluster analysis using stochastic simulation. Factorial kriging analysis is used to estimate the local- and regional-scale spatial components of the crime risk and explanatory variables. Then regression modeling is used to determine which factors are associated with the risk of car-related theft at different scales.
Definition of radon prone areas in Friuli Venezia Giulia region, Italy, using geostatistical tools.
Cafaro, C; Bossew, P; Giovani, C; Garavaglia, M
2014-12-01
Studying the geographical distribution of indoor radon concentration with geostatistical interpolation methods has become common for predicting and estimating the risk to the population. Here we analyse the case of Friuli Venezia Giulia (FVG), the north-easternmost region of Italy. The mean and standard deviation are 153 Bq/m³ and 183 Bq/m³, respectively; the geometric mean is 100 Bq/m³. Spatial datasets of indoor radon concentration are usually affected by clustering and apparent non-stationarity, which can eventually yield arguable results. The clustering of the present dataset appears to be non-preferential, so the areal estimates are not expected to be affected. Conversely, nothing can be said about the non-stationarity and its effects. Examination of the correlation between geology and indoor radon concentration suggests that the non-stationarity arises from the same geologic features that influence the mean and median values, and cannot be eliminated via a map-based approach. To tackle these problems, in this work we consider multiple definitions of radon prone areas (RPA), restricted to the Quaternary areas of FVG, using extensive simulation techniques. PMID:25261867
Geostatistical methods for rock mass quality prediction using borehole and geophysical survey data
NASA Astrophysics Data System (ADS)
Chen, J.; Rubin, Y.; Sege, J. E.; Li, X.; Hehua, Z.
2015-12-01
For long, deep tunnels, the number of geotechnical borehole investigations during the preconstruction stage is generally limited. Yet tunnels are often constructed in geological structures with complex geometries, and in which the rock mass is fragmented from past structural deformations. Tunnel Geology Prediction (TGP) is a geophysical technique widely used during tunnel construction in China to ensure safety during construction and to prevent geological disasters. In this paper, geostatistical techniques were applied in order to integrate seismic velocity from TGP and borehole information into spatial predictions of RMR (Rock Mass Rating) in unexcavated areas. This approach is intended to apply conditional probability methods to transform seismic velocities to directly observed RMR values. The initial spatial distribution of RMR, inferred from the boreholes, was updated by including geophysical survey data in a co-kriging approach. The method applied to a real tunnel project shows significant improvements in rock mass quality predictions after including geophysical survey data, leading to better decision-making for construction safety design.
Modelling ambient ozone in an urban area using an objective model and geostatistical algorithms
NASA Astrophysics Data System (ADS)
Moral, Francisco J.; Rebollo, Francisco J.; Valiente, Pablo; López, Fernando; Muñoz de la Peña, Arsenio
2012-12-01
Ground-level tropospheric ozone is one of the air pollutants of greatest concern. Ozone levels continue to exceed both target values and the long-term objectives established in EU legislation to protect human health and prevent damage to ecosystems, agricultural crops, and materials. Researchers and decision-makers frequently need information about atmospheric pollution patterns in urbanized areas. Preparing this type of information is a complex task, due to the influence of several factors and their variability over time. In this work, results on urban ozone distribution patterns are presented for the city of Badajoz, the largest (140,000 inhabitants) and most industrialized city in the Extremadura region (southwest Spain). Twelve sampling campaigns, one per month, were carried out with an automatic portable analyzer to measure ambient air ozone concentrations, during periods selected for conditions favourable to ozone production. To evaluate the overall ozone level at each sampling location during the time interval considered, the measured ozone data were analysed using a new methodology based on the formulation of the Rasch model. This yielded a measure of overall ozone level that consolidates the monthly ground-level ozone measurements, along with information on how each monthly measurement influences the overall level. Finally, overall ozone levels at locations where no measurements were available were estimated with geostatistical techniques, and hazard assessment maps based on the spatial distribution of ozone were generated.
Detection of masses in mammogram images using CNN, geostatistic functions and SVM.
Sampaio, Wener Borges; Diniz, Edgar Moraes; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo
2011-08-01
Breast cancer occurs with high frequency among the world's population and its effects impact the patients' perception of their own sexuality and their very personal image. This work presents a computational methodology that helps specialists detect breast masses in mammogram images. The first stage of the methodology aims to improve the mammogram image. This stage consists in removing objects outside the breast, reducing noise and highlighting the internal structures of the breast. Next, cellular neural networks are used to segment the regions that might contain masses. These regions have their shapes analyzed through shape descriptors (eccentricity, circularity, density, circular disproportion and circular density) and their textures analyzed through geostatistic functions (Ripley's K function and Moran's and Geary's indexes). Support vector machines are used to classify the candidate regions as masses or non-masses, with sensitivity of 80%, rates of 0.84 false positives per image and 0.2 false negatives per image, and an area under the ROC curve of 0.87. PMID:21703605
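Of the geostatistic texture functions named above, Moran's index is the most compact to sketch. Below is a minimal version for a 2-D grid with rook (4-neighbour) contiguity weights; the checkerboard input is an invented illustration, not a mammogram region:

```python
def morans_i(grid):
    """Moran's I on a 2-D grid: (n / W) * sum_ij w_ij (x_i - m)(x_j - m)
    divided by sum_i (x_i - m)^2, with rook contiguity weights w_ij = 1."""
    rows, cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    mean = sum(vals) / len(vals)
    num = 0.0
    wsum = 0
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += (grid[i][j] - mean) * (grid[ni][nj] - mean)
                    wsum += 1
    den = sum((v - mean) ** 2 for v in vals)
    return (len(vals) / wsum) * (num / den)

# A checkerboard is perfectly dispersed, so I comes out close to -1.
board = [[(r + c) % 2 for c in range(4)] for r in range(4)]
result = morans_i(board)
```

Values near +1 indicate clustered (autocorrelated) texture, near 0 random texture, and near -1 dispersed texture, which is how such indexes discriminate mass from non-mass regions.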
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Relative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
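Determining a correlation length by fitting a model to a semivariogram, as done above for the Culebra Dolomite samples, can be sketched with a spherical model and a brute-force least-squares search over candidate ranges. The synthetic lag data, nugget, and sill below are assumptions, not the CT measurements:

```python
def spherical(h, nugget, sill, a):
    """Spherical semivariogram model; a is the range (correlation length)."""
    if h >= a:
        return nugget + sill
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def fit_range(lags, gammas, nugget, sill, candidates):
    """Brute-force least-squares search for the best-fit range."""
    best_a, best_sse = None, float("inf")
    for a in candidates:
        sse = sum((spherical(h, nugget, sill, a) - g) ** 2
                  for h, g in zip(lags, gammas))
        if sse < best_sse:
            best_a, best_sse = a, sse
    return best_a

# Synthetic semivariogram generated from a spherical model with range 12.
lags = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
obs = [spherical(h, 0.1, 1.0, 12.0) for h in lags]
a_hat = fit_range(lags, obs, 0.1, 1.0, [x / 2 for x in range(8, 41)])
```

Repeating such a fit at each image resolution is what allows the resolution-vs-correlation-length scaling relationship described in the abstract to be traced out.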
Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps
NASA Astrophysics Data System (ADS)
Gundogdu, Ismail Bulent
2015-09-01
Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can perform better, especially when secondary variables are available, because secondary variables can directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitation from 1984 to 2014 for 268 meteorological stations in Turkey was used to construct country-wide maps. Linear regression, inverse-square-distance weighting, and ordinary co-kriging (OCK) were used and compared. Elevation, slope, and aspect data for each station were also taken into account as secondary variables, whose use reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
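The inverse-square-distance interpolator compared in this study has a compact generic form. The station coordinates and precipitation values below are invented for illustration:

```python
import math

def idw(points, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target`:
    weighted mean with weights d^(-power)."""
    num = den = 0.0
    for p, v in zip(points, values):
        d = math.dist(p, target)
        if d == 0:
            return v  # exact interpolator at data locations
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Three invented stations (km coordinates) with annual precipitation (cm).
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
precip = [40.0, 60.0, 80.0]
est = idw(stations, precip, (5.0, 5.0))
```

Because the target point here is equidistant from all three stations, the estimate reduces to their plain mean; co-kriging improves on this by letting the weights reflect spatial structure and secondary variables such as aspect.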
Wu, Wenyong; Yin, Shiyang; Liu, Honglu; Niu, Yong; Bao, Zhe
2014-10-01
The purpose of this study was to determine and evaluate spatial changes in soil salinity by using geostatistical methods. The study focused on a suburban area of Beijing, where urban development led to water shortage and accelerated wastewater reuse for farm irrigation for more than 30 years. The data were processed in GIS using three interpolation techniques: ordinary kriging (OK), disjunctive kriging (DK), and universal kriging (UK). A normality test and overall trend analysis were applied for each interpolation technique to select the best-fitted model for each soil parameter. Results showed that OK was suitable for interpolating soil sodium adsorption ratio (SAR) and Na+; UK was suitable for soil Cl- and pH; DK was suitable for soil Ca2+. The nugget-to-sill ratio was applied to evaluate the effects of structural and stochastic factors. The maps showed that non-saline soil and slightly saline soil accounted for 6.39% and 93.61% of the area, respectively. The spatial distribution and accumulation of soil salt were significantly affected by irrigation probabilities and drainage conditions under long-term wastewater irrigation.
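The nugget-to-sill ratio used above to separate structural from stochastic factors is commonly read against the Cambardella et al. class bounds. A small helper, with those thresholds stated as an assumption and `total_sill` meaning nugget plus partial sill, might look like:

```python
def spatial_dependence(nugget, total_sill):
    """Nugget-to-sill ratio with assumed Cambardella-style class bounds:
    < 0.25 strong, 0.25-0.75 moderate, > 0.75 weak spatial dependence."""
    ratio = nugget / total_sill
    if ratio < 0.25:
        cls = "strong"
    elif ratio <= 0.75:
        cls = "moderate"
    else:
        cls = "weak"
    return ratio, cls

# Example: a fitted variogram with nugget 0.2 and total sill 1.0.
ratio, cls = spatial_dependence(0.2, 1.0)
```

A low ratio means most of the variance is spatially structured (e.g., driven by irrigation and drainage patterns), while a high ratio points to measurement noise or very short-range variation.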
Messier, Kyle P.; Akita, Yasuyuki; Serre, Marc L.
2012-01-01
Geographic Information Systems (GIS) based techniques are cost-effective and efficient methods used by state agencies and epidemiology researchers for estimating concentration and exposure. However, budget limitations have made statewide assessments of contamination difficult, especially in groundwater media. Many studies have implemented address geocoding, land use regression, and geostatistics independently, but this is the first to examine the benefits of integrating these GIS techniques to address the need of statewide exposure assessments. A novel framework for concentration exposure is introduced that integrates address geocoding, land use regression (LUR), below detect data modeling, and Bayesian Maximum Entropy (BME). A LUR model was developed for Tetrachloroethylene that accounts for point sources and flow direction. We then integrate the LUR model into the BME method as a mean trend while also modeling below detects data as a truncated Gaussian probability distribution function. We increase available PCE data 4.7 times from previously available databases through multistage geocoding. The LUR model shows significant influence of dry cleaners at short ranges. The integration of the LUR model as mean trend in BME results in a 7.5% decrease in cross validation mean square error compared to BME with a constant mean trend. PMID:22264162
A geostatistical approach to data harmonization - Application to radioactivity exposure data
NASA Astrophysics Data System (ADS)
Baume, O.; Skøien, J. O.; Heuvelink, G. B. M.; Pebesma, E. J.; Melles, S. J.
2011-06-01
Environmental issues such as air, groundwater pollution and climate change are frequently studied at spatial scales that cross boundaries between political and administrative regions. It is common for different administrations to employ different data collection methods. If these differences are not taken into account in spatial interpolation procedures then biases may appear and cause unrealistic results. The resulting maps may show misleading patterns and lead to wrong interpretations. Also, errors will propagate when these maps are used as input to environmental process models. In this paper we present and apply a geostatistical model that generalizes the universal kriging model such that it can handle heterogeneous data sources. The associated best linear unbiased estimation and prediction (BLUE and BLUP) equations are presented and it is shown that these lead to harmonized maps from which estimated biases are removed. The methodology is illustrated with an example of country bias removal in a radioactivity exposure assessment for four European countries. The application also addresses multicollinearity problems in data harmonization, which arise when both artificial bias factors and natural drifts are present and cannot easily be distinguished. Solutions for handling multicollinearity are suggested and directions for further investigations proposed.
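The country-bias removal above sits inside an extended universal-kriging model, but the core idea can be caricatured outside that framework: estimate each monitoring network's systematic offset and subtract it. The sketch below is a deliberately crude stand-in for the paper's BLUE/BLUP bias terms, with invented country codes and readings:

```python
def remove_country_bias(obs):
    """obs: list of (country, value) pairs. Estimate each country's bias
    as its mean deviation from the overall mean, then subtract it."""
    overall = sum(v for _, v in obs) / len(obs)
    groups = {}
    for c, v in obs:
        groups.setdefault(c, []).append(v)
    bias = {c: sum(vs) / len(vs) - overall for c, vs in groups.items()}
    harmonized = [(c, v - bias[c]) for c, v in obs]
    return harmonized, bias

# Invented dose-rate readings from two national networks, one reading high.
obs = [("AA", 10.0), ("AA", 12.0), ("BB", 20.0), ("BB", 22.0)]
harmonized, bias = remove_country_bias(obs)
```

This naive version attributes every between-country difference to instrument bias; the geostatistical model in the paper avoids that by estimating bias factors and natural spatial drift jointly, which is exactly where the multicollinearity problem it discusses arises.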
Lamichhane, Jay Ram; Fabi, Alfredo; Ridolfi, Roberto; Varvaro, Leonardo
2013-01-01
Incidence of Xanthomonas arboricola pv. corylina, the causal agent of hazelnut bacterial blight, was analyzed spatially in relation to pedoclimatic factors. Hazelnut grown in twelve municipalities in the province of Viterbo, central Italy, was studied. A large number of bacterial isolates was obtained from infected hazelnut tissues collected over three years (2010-2012). The isolates, characterized by phenotypic tests, did not show any differences among them. Spatial patterns of pedoclimatic data, analyzed by geostatistics, showed a strong positive correlation of disease incidence with higher values of rainfall, thermal shock, and soil nitrogen; a weak positive correlation with soil aluminium content; and a strong negative correlation with the Mg/K ratio. No correlation of disease incidence with soil pH was found. Disease incidence ranged from very low (<1%) to very high (almost 75%) across the orchards. Young plants (4 years old) were the most affected by the disease, confirming a weak negative correlation of disease incidence with plant age. Cultivars did not differ in susceptibility to the pathogen. The possible role of climate change in the epidemiology of the disease is discussed, and improved management practices are recommended for effective control. PMID:23424654
NASA Astrophysics Data System (ADS)
Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel
2011-12-01
This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km3 that contains low-transmissivity fractures. Fracture density, orientation, and transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured-rock facies. Four facies are defined, from sparsely fractured to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration, using either hydraulic heads alone or both hydraulic heads and PFL flow rates as calibration targets. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models for simulating fluid flow in fractured geological media.
Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed
Balk, B.; Elder, K.; Baron, J.
1998-01-01
Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak-accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate the SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation, which provides unbiased estimates with minimum variance. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and a complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
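Combining depth and density into SWE is a one-line conversion; the sketch below assumes depth in centimetres and density in kg/m3, which are common but here assumed units:

```python
def swe_cm(depth_cm, snow_density_kg_m3, water_density_kg_m3=1000.0):
    """Snow water equivalence: snow depth scaled by the ratio of
    snow density to water density."""
    return depth_cm * snow_density_kg_m3 / water_density_kg_m3

# 120 cm of snow at a density of 350 kg/m3 corresponds to 42 cm of water.
example = swe_cm(120.0, 350.0)
```

Applying this cell by cell to the kriged depth map, the regression-modeled density map, and the SCA mask yields the distributed SWE estimate described in the abstract.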
NASA Astrophysics Data System (ADS)
Zhang, Ting; Du, Yi; Huang, Tao; Li, Xue
2016-02-01
Constrained by current hardware and techniques, acquisition of geological data is sometimes difficult or even impossible. Stochastic simulation of geological data helps address this issue by providing multiple possible realizations for resource prediction and risk evaluation. Multiple-point geostatistics (MPS), one of the main branches of stochastic simulation, extracts the intrinsic features of patterns from training images (TIs), which provide prior information to constrain the under-determined simulated results, and copies them to the simulated regions. Because the models generated from TIs are not always linear, MPS methods that use linear dimensionality reduction are not suitable for nonlinear TI models. A new MPS method named ISOMAPSIM was proposed to resolve this issue; it reduces the dimensionality of patterns from TIs using isometric mapping (ISOMAP) and then classifies these low-dimensional patterns for simulation. Since conditioning data, including hard data and soft data, greatly influence the simulated results, this paper further studies ISOMAPSIM with hard and soft data to obtain more accurate simulations for geological modeling. Stochastic simulation of geological data is carried out under several conditioning scenarios. The tests show that the proposed method reproduces the structural characteristics of TIs under all conditions, with the combination of soft and hard data performing best in simulation quality; moreover, the proposed method shows advantages over other MPS methods that use linear dimensionality reduction.
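The pattern-extraction step common to MPS methods, performed before any dimensionality reduction such as ISOMAP, can be illustrated by scanning a training image with a fixed template. The striped binary TI below is invented for illustration:

```python
def extract_patterns(ti, size):
    """Slide a size-by-size template over a 2-D training image
    (list of lists) and collect every pattern as a flat tuple."""
    rows, cols = len(ti), len(ti[0])
    patterns = []
    for i in range(rows - size + 1):
        for j in range(cols - size + 1):
            patterns.append(tuple(ti[i + di][j + dj]
                                  for di in range(size)
                                  for dj in range(size)))
    return patterns

# Invented 6x6 binary training image with horizontal stripes.
ti = [[r % 2] * 6 for r in range(6)]
pats = extract_patterns(ti, 2)
```

In a full MPS workflow these pattern vectors are what gets embedded (linearly in earlier methods, via ISOMAP in ISOMAPSIM), classified, and then pasted into the simulation grid subject to the hard and soft conditioning data.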
Mapping of Aspergillus Section Nigri in Southern Europe and Israel based on geostatistical analysis.
Battilani, P; Barbano, C; Marin, S; Sanchis, V; Kozakiewicz, Z; Magan, N
2006-09-01
Geostatistical analysis was applied to the incidence of Aspergillus Section Nigri and A. carbonarius in Southern Europe and Israel over the 3-year period 2001-2003 to facilitate identification of regions at high risk of contamination with these fungi and of ochratoxin production. The highest incidence of black aspergilli was normally observed at harvest. At this grape growth stage, spatial variability of black aspergilli was significantly related to latitude and longitude, showing positive West-East and North-South gradients. Predictive maps of infected-berry incidence showed the same trend in all 3 years, but incidence was highest in 2003, followed by 2001 and 2002. The highest incidence was always observed in Israel, Greece, and Southern France, associated with the highest incidence of A. carbonarius. Southern Spain and Southern Italy also had substantial incidence of black aspergilli. The thermo-wetness maps for the 3 years showed a trend similar to the incidence of black aspergilli. The coldest and wettest year was 2002, while 2003 was the hottest and driest, particularly during August, with Israel being the hottest and driest country, followed by Greece and Southern Italy. This indicates that meteorological conditions can help explain the spatial variation of black aspergilli within the Mediterranean basin. PMID:16737756
Assessing TCE source bioremediation by geostatistical analysis of a flux fence.
Cai, Zuansi; Wilson, Ryan D; Lerner, David N
2012-01-01
Mass discharge across transect planes is increasingly used as a metric for performance assessment of in situ groundwater remediation systems. Mass discharge estimates using concentrations measured in multilevel transects are often made by assuming a uniform flow field, and uncertainty contributions from spatial variability in concentration and in the flow field are often overlooked. We extend our recently developed geostatistical approach to estimate mass discharge using transect data of concentration and hydraulic conductivity, thus accounting for the spatial variability of both datasets. The magnitude and uncertainty of mass discharge were quantified by conditional simulation. An important benefit of the approach is that uncertainty is quantified as an integral part of the mass discharge estimate. We used this approach for performance assessment of a bioremediation experiment in a trichloroethene (TCE) source zone. Analyses of dissolved parent and daughter compounds demonstrated that the engineered bioremediation elevated the degradation rate of TCE, resulting in a two-thirds reduction in TCE mass discharge from the source zone. The biologically enhanced dissolution of TCE was not significant (~5%) and was less than expected. However, the discharges of the daughter products cis-1,2-dichloroethene (cDCE) and vinyl chloride (VC) increased, probably because of rapid transformation of TCE between the source zone and the measurement transect. This suggests that enhancing the biodegradation of cDCE and VC will be crucial to successful engineered bioremediation of TCE source zones.
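The transect mass-discharge metric, and the idea of attaching uncertainty to it via simulation, can be sketched as follows. The cell concentrations, Darcy fluxes, and the simple multiplicative perturbation are invented stand-ins for the geostatistical conditional simulation used in the paper:

```python
import random

def mass_discharge(conc, flux, cell_area):
    """Mass discharge across a transect discretized into cells:
    sum over cells of concentration * Darcy flux * cell area."""
    return sum(c * q * cell_area for c, q in zip(conc, flux))

base_c = [5.0, 12.0, 3.0, 0.5]      # concentrations, g/m3 (invented)
base_q = [0.02, 0.03, 0.02, 0.01]   # Darcy fluxes, m/d (invented)
point_estimate = mass_discharge(base_c, base_q, cell_area=1.0)

# Crude stand-in for conditional simulation: perturb the concentrations
# and collect a distribution of discharges instead of a single number.
random.seed(0)
realizations = sorted(
    mass_discharge([max(0.0, c * random.gauss(1.0, 0.2)) for c in base_c],
                   base_q, cell_area=1.0)
    for _ in range(1000)
)
median = realizations[500]
```

Reporting quantiles of `realizations` rather than only `point_estimate` is what lets uncertainty be "an integral part of the mass discharge estimate" as the abstract puts it.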
Ashton, Ruth A; Kefyalew, Takele; Rand, Alison; Sime, Heven; Assefa, Ashenafi; Mekasha, Addis; Edosa, Wasihun; Tesfaye, Gezahegn; Cano, Jorge; Teka, Hiwot; Reithinger, Richard; Pullan, Rachel L; Drakeley, Chris J; Brooker, Simon J
2015-07-01
Ethiopia has a diverse ecology and geography resulting in spatial and temporal variation in malaria transmission. Evidence-based strategies are thus needed to monitor transmission intensity and target interventions. A purposive selection of dried blood spots collected during cross-sectional school-based surveys in Oromia Regional State, Ethiopia, were tested for presence of antibodies against Plasmodium falciparum and P. vivax antigens. Spatially explicit binomial models of seroprevalence were created for each species using a Bayesian framework, and used to predict seroprevalence at 5 km resolution across Oromia. School seroprevalence showed a wider prevalence range than microscopy for both P. falciparum (0-50% versus 0-12.7%) and P. vivax (0-53.7% versus 0-4.5%), respectively. The P. falciparum model incorporated environmental predictors and spatial random effects, while P. vivax seroprevalence first-order trends were not adequately explained by environmental variables, and a spatial smoothing model was developed. This is the first demonstration of serological indicators being used to detect large-scale heterogeneity in malaria transmission using samples from cross-sectional school-based surveys. The findings support the incorporation of serological indicators into periodic large-scale surveillance such as Malaria Indicator Surveys, and with particular utility for low transmission and elimination settings.
NASA Technical Reports Server (NTRS)
Franklin, Rima B.; Blum, Linda K.; McComb, Alison C.; Mills, Aaron L.
2002-01-01
Small-scale variations in bacterial abundance and community structure were examined in salt marsh sediments from Virginia's Eastern Shore. Samples were collected at 5 cm intervals (horizontally) along a 50 cm elevation gradient, over a 215 cm horizontal transect. For each sample, bacterial abundance was determined using acridine orange direct counts, and community structure was analyzed using randomly amplified polymorphic DNA fingerprinting of whole-community DNA extracts. A geostatistical analysis was used to determine the degree of spatial autocorrelation among the samples, for each variable and each direction (horizontal and vertical). The proportion of variance in bacterial abundance accounted for by the spatial model was quite high (vertical: 60%, horizontal: 73%); significant autocorrelation was found among samples separated by 25 cm in the vertical direction and up to 115 cm horizontally. In contrast, most of the variability in community structure was not accounted for by the spatial separation of samples (vertical: 11%, horizontal: 22%) and must reflect other sources, such as variation at other spatial scales, experimental error, or environmental heterogeneity. Microbial community patch size, based on overall similarity in community structure, varied between 17 cm (vertical) and 35 cm (horizontal). Overall, variability due to horizontal position (distance from the creek bank) was much smaller than that due to vertical position (elevation) for both community properties assayed. This suggests that processes more correlated with elevation (e.g., drainage and redox potential) vary at a smaller scale, producing smaller patch sizes, than processes controlled by distance from the creek bank.
NASA Astrophysics Data System (ADS)
Lyon, S. W.; Seibert, J.; Lembo, A. J.; Walter, M. T.; Steenhuis, T. S.
2006-02-01
Shallow water tables in near-stream areas often lead to saturated, overland-flow-generating areas in catchments in humid climates. While these saturated areas are assumed to be principal biogeochemical hot spots and are important for issues such as non-point pollution sources, the spatial and temporal behavior of shallow water tables, and the associated saturated areas, is not completely understood. This study demonstrates how geostatistical methods can be used to characterize the spatial and temporal variation of the shallow water table in the near-stream region. Event-based and seasonal changes in the spatial structure of the shallow water table, which influences the spatial pattern of surface saturation and related runoff generation, can be identified and used in conjunction to characterize the hydrology of an area. This is accomplished through semivariogram analysis and indicator kriging, producing maps that combine soft data (i.e., proxy information for the variable of interest) representing general shallow water table patterns with hard data (i.e., actual measurements) representing variation in the spatial structure of the shallow water table per rainfall event. The study area was a hillslope in the Catskill Mountains region of New York State. The shallow water table was monitored in a 120 m×180 m near-stream region at 44 sampling locations on 15-min intervals. Outflow of the area was measured at the same time interval. These data were analyzed at a short time interval (15 min) and at a long time interval (months) to characterize changes in the hydrologic behavior of the hillslope. Indicator semivariograms based on binary-transformed ground water table data (i.e., 1 if exceeding the time-variable median depth to water table and 0 if not) were created for both short and long time intervals. For the short time interval, the indicator semivariograms showed a high degree of spatial structure in the shallow water table for the spring, with increased range during many rain
NASA Astrophysics Data System (ADS)
Lyon, S. W.; Seibert, J.; Lembo, A. J.; Walter, M. T.; Steenhuis, T. S.
2005-08-01
Shallow water tables in the near-stream region often lead to saturated areas in catchments in humid climates. While these saturated areas are assumed to be of importance for issues such as non-point pollution sources, little is known about the spatial and temporal behavior of shallow water tables and the resulting saturated areas. In this study, geostatistical methods are employed to demonstrate their utility in investigating the spatial and temporal variation of the shallow water table in the near-stream region. Event-based and seasonal changes in the spatial structure of the shallow water table, which directly influence surface saturation and runoff generation, can be identified and used in conjunction to characterize the hydrology of an area. This is accomplished through semivariogram analysis and indicator kriging to produce maps combining supplemental soft data (i.e., proxy information on the variable of interest) representing seasonal trends in the shallow water table with hard data (i.e., the actual measurements) that represent variation in the spatial structure of the shallow water table per rainfall event. The study area was a hillslope located in the Catskill Mountains region of New York State. The shallow water table was monitored for a 120 m×180 m near-stream region at 44 sampling locations at 15-min intervals. Outflow of the area was measured at the same time interval. These data were analyzed at a short time interval (15 min) and at a long time interval (months) to characterize the changes in the hydrology of the region. Indicator semivariograms based on transforming the depth-to-groundwater-table data into binary values (i.e., 1 if exceeding the time-variable median depth to water table and 0 if not) were created for both time interval lengths. When considering only the short time interval, the indicator semivariograms for spring, when there is excess rainfall, show high spatial structure with increased ranges during rain events with surface
NASA Astrophysics Data System (ADS)
Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein
2014-12-01
Facies models try to explain facies architectures, which have a primary control on the subsurface heterogeneities and the fluid flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. The facies models should describe facies interactions and the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point and multiple-point (geo)statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. Sequential indicator simulation (SIS) and truncated Gaussian simulation (TGS) represented the two-point geostatistical methods, and single normal equation simulation (SNESIM) was selected as the MPS representative. A dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TIs) in which all possible facies architectures present in the area are included. The TI model was based on data acquired from modern occurrences. These analogies delivered vital information about the possible channel geometries and facies classes that are typically present in such environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow. In this workflow, seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data. This
Saito, Hirotaka; Goovaerts, Pierre; McKenna, Sean Andrew
2003-06-01
Efficient and reliable unexploded ordnance (UXO) site characterization is needed for decisions regarding future land use. There are several types of data available at UXO sites, and geophysical signal maps are one of the most valuable sources of information. Incorporation of such information into site characterization requires a flexible and reliable methodology. Geostatistics allows one to account for exhaustive secondary information (i.e., known at every location within the field) in many different ways. Kriging and logistic regression were combined to map the probability of occurrence of at least one geophysical anomaly of interest, such as UXO, from a limited number of indicator data. Logistic regression is used to derive the trend from a geophysical signal map, and kriged residuals are added to the trend to estimate the probabilities of the presence of UXO at unsampled locations (simple kriging with varying local means, or SKlm). Each location is identified for further remedial action if the estimated probability is greater than a given threshold. The technique is illustrated using a hypothetical UXO site generated by a UXO simulator and a corresponding geophysical signal map. Indicator data are collected along two transects located within the site. Classification performances are then assessed by computing proportions of correct classification, false positives, false negatives, and Kappa statistics. Two common approaches, one of which does not take any secondary information into account (ordinary indicator kriging) and a variant of cokriging (collocated cokriging), were used for comparison purposes. Results indicate that accounting for exhaustive secondary information improves the overall characterization of UXO sites if an appropriate methodology, SKlm in this case, is used.
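The SKlm idea — a logistic-regression trend treated as a locally varying mean, with simple-kriged residuals added back — can be sketched as follows. This is a minimal 1-D illustration with a hypothetical exponential covariance and made-up trend and indicator values, not the paper's implementation.

```python
import numpy as np

def sklm_estimate(x_data, resid, x0, trend0, cov):
    """Simple kriging with varying local means (SKlm): the locally
    varying mean (here, a precomputed logistic-regression trend) is
    subtracted from the data, the residuals are simple-kriged with
    zero mean, and the trend at the target location is added back."""
    K = cov(np.abs(x_data[:, None] - x_data[None, :]))  # data-data covariances
    k = cov(np.abs(x_data - x0))                        # data-target covariances
    w = np.linalg.solve(K, k)                           # simple-kriging weights
    return trend0 + w @ resid

# hypothetical 1-D setting: exponential covariance with range ~3
cov = lambda h: np.exp(-h / 3.0)
x_data = np.array([0.0, 1.0, 4.0])              # sampled transect positions
trend_at_data = np.array([0.2, 0.3, 0.7])       # trend from the signal map
probs = np.array([0.1, 0.5, 0.9])               # indicator data along transect
resid = probs - trend_at_data
p_hat = sklm_estimate(x_data, resid, x0=2.0, trend0=0.45, cov=cov)
print(round(float(p_hat), 3))
```

Note that SKlm remains an exact interpolator: at a data location it returns the measured value, while away from the data the estimate relaxes toward the logistic trend.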
Chihi, Hayet; Galli, Alain; Ravenne, Christian; Tesson, Michel; Marsily, Ghislain de
2000-03-15
The object of this study is to build a three-dimensional (3D) geometric model of the stratigraphic units of the margin of the Rhone River on the basis of geophysical investigations by a network of seismic profiles at sea. The geometry of these units is described by depth charts of each surface identified by seismic profiling, which is done using geostatistics. The modeling starts with a statistical analysis to determine the parameters that enable us to calculate the variograms of the identified surfaces. After having determined the statistical parameters, we calculate the variograms of the variable Depth. By analyzing the behavior of the variogram we can then deduce whether the situation is stationary and whether the variable has an anisotropic behavior. We tried the following two nonstationary methods to obtain our estimates: (a) the method of universal kriging, if the underlying variogram was directly accessible; (b) the method of increments, if the underlying variogram was not directly accessible. After having modeled the variograms of the increments and of the variable itself, we calculated the surfaces by kriging the variable Depth on a small-mesh estimation grid. The two methods are then compared and their respective advantages and disadvantages are discussed, as well as their fields of application. These methods are capable of being used widely in the earth sciences for automatic mapping of geometric surfaces or of variables such as a piezometric surface or a concentration, which are not 'stationary,' that is, essentially, possess a gradient or a tendency to develop systematically in space.
NASA Astrophysics Data System (ADS)
Caseri, Angelica; Ramos, Maria Helena; Javelle, Pierre; Leblois, Etienne
2016-04-01
Floods are responsible for a major part of the total damage caused by natural disasters. Nowcasting systems providing public alerts to flash floods are very important to prevent damage from extreme events and reduce their socio-economic impacts. The major challenge of these systems is to capture high-risk situations in advance, with good accuracy in the intensity, location and timing of future intense precipitation events. Flash flood forecasting has been studied by several authors in different affected areas. The majority of the studies combine rain gauge data with radar imagery advection to improve prediction for the next few hours. Outputs of Numerical Weather Prediction (NWP) models have also been increasingly used to predict ensembles of extreme precipitation events that might trigger flash floods. One of the challenges of the use of NWP for ensemble nowcasting is to generate ensemble forecasts of precipitation within a short computation time, to enable the production of flood forecasts with sufficient lead time to issue flash flood alerts. In this study, we investigate an alternative space-time geostatistical framework to generate multiple scenarios of future rainfall for flash flood nowcasting. The approach is based on conditional simulation and an advection method applied within the Turning Bands Method (TBM). Ensemble forecasts of precipitation fields are generated based on space-time properties given by radar images and precipitation data collected from rain gauges during the development of the rainfall event. The results show that the approach developed can be an interesting alternative to capture precipitation uncertainties in location and intensity and to generate ensemble forecasts of rainfall that can be useful to improve alerts for flash floods, especially in small areas.
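The ensemble-generation idea can be illustrated at toy scale. The Turning Bands Method itself is more involved, so this sketch draws correlated rain-intensity realizations by Cholesky factorization of an assumed exponential covariance (adequate for small grids; TBM scales to much larger ones, and the conditioning to gauge data and advection are omitted here).

```python
import numpy as np

# Stand-in for geostatistical ensemble simulation: sample many rain
# scenarios sharing one spatial covariance structure.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)                      # 1-D transect of cells (km)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)  # exponential covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability
ensemble = L @ rng.standard_normal((len(x), 50))    # 50 correlated scenarios
print(ensemble.shape)                               # -> (25, 50)
```

Each column is one equally probable scenario; the spread across columns at a given cell quantifies the precipitation uncertainty that the nowcasting system propagates into the flood forecasts.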
NASA Astrophysics Data System (ADS)
Yadav, V.; Mueller, K. L.; Dragoni, D.; Michalak, A. M.
2010-09-01
A coupled Bayesian model selection and geostatistical regression modeling approach is adopted for empirical analysis of gross primary productivity (GPP) at six AmeriFlux sites, including the Kennedy Space Center Scrub Oak, Vaira Ranch, Tonzi Ranch, Blodgett Forest, Morgan Monroe State Forest, and Harvard Forest sites. The analysis is performed at a continuum of temporal scales ranging from daily to monthly, for a period of seven years. A total of 10 covariates representing environmental stimuli and indices of plant physiology are considered in explaining variations in GPP. Similarly to other statistical methods, the presented approach estimates regression coefficients and uncertainties associated with the covariates in a selected regression model. Unlike traditional regression methods, however, the approach also estimates the uncertainty associated with the selection of a single "best" model of GPP. In addition, the approach provides an enhanced understanding of how the importance of specific covariates changes with the examined timescale (i.e. temporal resolution). An examination of changes in the importance of specific covariates across timescales reveals thresholds above or below which covariates become important in explaining GPP. Results indicate that most sites (especially those with a stronger seasonal cycle) exhibit at least one prominent scaling threshold between the daily and 20-day temporal scales. This demonstrates that environmental variables that explain GPP at synoptic scales are different from those that capture its seasonality. At shorter time scales, radiation, temperature, and vapor pressure deficit exert the most significant influence on GPP at most examined sites. At coarser time scales, however, the importance of these covariates in explaining GPP declines. Overall, unique best models are identified at most sites at the daily scale, whereas multiple competing models are identified at longer time scales.
NASA Astrophysics Data System (ADS)
Vasquez, D. A.; Swift, J. N.; Tan, S.; Darrah, T. H.
2013-12-01
The integration of precise geochemical analyses with quantitative engineering modeling into an interactive GIS system allows for a sophisticated and efficient method of reservoir engineering and characterization. Geographic Information Systems (GIS) are utilized as an advanced technique for oil field reservoir analysis by combining field engineering and geological/geochemical spatial datasets with the available systematic modeling and mapping methods to integrate the information into a spatially correlated first-hand approach to defining surface and subsurface characteristics. Three key methods of analysis include: 1) geostatistical modeling to create a static and volumetric 3-dimensional representation of the geological body, 2) numerical modeling to develop a dynamic and interactive 2-dimensional model of fluid flow across the reservoir and 3) noble gas geochemistry to further define the physical conditions, components and history of the geologic system. Results thus far include using engineering algorithms for interpolating electrical well log properties across the field (spontaneous potential, resistivity), yielding a highly accurate and high-resolution 3D model of rock properties. Results so far also include using numerical finite difference methods (Crank-Nicolson) to solve equations describing the distribution of pressure across the field, yielding a 2D simulation model of fluid flow across the reservoir. Ongoing noble gas geochemistry work will also include determination of the source, thermal maturity and the extent/style of fluid migration (connectivity, continuity and directionality). Future work will include developing an inverse engineering algorithm to model permeability, porosity and water saturation. This combination of new and efficient technological and analytical capabilities is geared to provide a better understanding of the field geology and hydrocarbon dynamics system with applications to determine the presence of hydrocarbon pay zones (or
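The Crank-Nicolson scheme mentioned for the pressure simulation can be illustrated in one dimension. This is a generic sketch of the method for the diffusion equation with hypothetical pressures, not the study's reservoir model.

```python
import numpy as np

def crank_nicolson_step(p, dt, dx, D):
    """One Crank-Nicolson step for 1-D pressure diffusion p_t = D p_xx
    with fixed (Dirichlet) boundary values: average the implicit and
    explicit spatial operators, then solve the resulting linear system."""
    n = len(p)
    r = D * dt / (2.0 * dx * dx)
    A = np.eye(n)          # implicit half (left-hand side)
    B = np.eye(n)          # explicit half (right-hand side)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        B[i, i - 1] = B[i, i + 1] = r
        B[i, i] = 1.0 - 2.0 * r
    return np.linalg.solve(A, B @ p)

# hypothetical pressures (MPa) between two constant-pressure boundaries
p = np.array([10.0, 10.0, 10.0, 10.0, 20.0])
for _ in range(200):
    p = crank_nicolson_step(p, dt=0.1, dx=1.0, D=1.0)
print(np.round(p, 2))   # approaches the linear steady-state profile
```

Crank-Nicolson is second-order accurate in both time and space and unconditionally stable, which is why it is a common choice for pressure-diffusion simulators.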
Using Geostatistical Data Fusion Techniques and MODIS Data to Upscale Simulated Wheat Yield
NASA Astrophysics Data System (ADS)
Castrignano, A.; Buttafuoco, G.; Matese, A.; Toscano, P.
2014-12-01
Population growth increases food demand. Assessing food demand and predicting the actual supply for a given location are critical components of strategic food security planning at the regional scale. Crop yield can be simulated using crop models because it is site-specific and determined by weather, management, length of growing season and soil properties. Crop models require reliable location-specific data that are not generally available. Obtaining these data at a large number of locations is time-consuming, costly and sometimes simply not feasible. An upscaling method to extend coverage of sparse estimates of crop yield to an appropriate extrapolation domain is therefore required. This work aims to investigate the applicability of a geostatistical data fusion approach for merging remote sensing data with the predictions of a simulation model of wheat growth and production using ground-based data. The study area is the Capitanata plain (4000 km2) located in the Apulia Region, mostly cropped with durum wheat. The MODIS EVI/NDVI data products for the Capitanata plain were downloaded from the Land Processes Distributed Active Archive Center (LPDAAC) for the whole crop cycle of durum wheat. Phenological development, biomass growth and grain quantity of durum wheat were simulated by the Delphi system, based on a crop simulation model linked to a database including soil properties and agronomical and meteorological data. Multicollocated cokriging was used to integrate exhaustive secondary information (multi-spectral MODIS data) with the primary variable (sparsely distributed biomass/yield model predictions of durum wheat). The model estimates were strongly spatially correlated with the radiance data (red and NIR bands), and the data fusion approach proved to be quite suitable and flexible for integrating data of different types and supports.
NASA Astrophysics Data System (ADS)
Mariethoz, G.; Jha, S. K.; McCabe, M. F.; Evans, J. P.
2012-12-01
Recent advances in multiple-point geostatistics (MPS) offer new possibilities in remote sensing, surface hydrology and climate modeling. MPS is an ensemble of tools for the characterization of spatial phenomena. Its most prominent characteristic is the use of training images for defining what type of spatial patterns are deemed to result from the processes under study. In the last decade, MPS has been increasingly used to characterize 3D subsurface structures consisting of geological facies, with application primarily to reservoir engineering, hydrogeology and mining. Although the methods show good results, a consistent difficulty relates to finding appropriate training images to describe largely unknown geological formations. Despite this issue, the growing interest in MPS triggered a series of different methodological advances, leading to improved computational performance and increased flexibility. With these recent improvements, the scientific community now has unprecedented numerical tools that allow dealing with a wide range of problems outside the realm of subsurface applications. These include the simulation of continuous variables as well as complex non-linear ensembles of multivariate properties. These new tools are ideal for addressing a number of issues in scientific fields related to surface modeling of environmental systems and geophysical data. Shifting focus and investigating the application of MPS to surface hydrology yields a wealth of training images that are readily available, thanks to global networks of remote sensing measurements. This presentation will delineate recent results in this direction, including MPS applications to the stochastic downscaling of climate models, the completion of partially informed remote sensing images and the processing of geophysical data. A major advantage is the use of satellite images taken at regular intervals, which can be used to inform both the spatial and temporal variability of
NASA Astrophysics Data System (ADS)
Delrieu, Guy; Wijbrans, Annette; Boudevillain, Brice; Faure, Dominique; Bonnifait, Laurent; Kirstetter, Pierre-Emmanuel
2014-09-01
Compared to other estimation techniques, one advantage of geostatistical techniques is that they provide an index of the estimation accuracy of the variable of interest through the kriging estimation standard deviation (ESD). In the context of radar-raingauge quantitative precipitation estimation (QPE), we address in this article the question of how the kriging ESD can be transformed into a local spread of error by using the dependency of radar errors on the rain amount analyzed in previous work. The proposed approach is implemented for the most significant rain events observed in 2008 in the Cévennes-Vivarais region, France, by considering both the kriging with external drift (KED) and the ordinary kriging (OK) methods. A two-step procedure is implemented for estimating the rain estimation accuracy: (i) first, kriging normalized ESDs are computed by using normalized variograms (sill equal to 1) to account for the observation system configuration and the spatial structure of the variable of interest (rainfall amount, residuals to the drift); (ii) based on the assumption of a linear relationship between the standard deviation and the mean of the variable of interest, a denormalization of the kriging ESDs is performed globally for a given rain event by using a cross-validation procedure. Despite the fact that the KED normalized ESDs are usually greater than the OK ones (due to an additional constraint in the kriging system and a weaker spatial structure of the residuals to the drift), the KED denormalized ESDs are generally smaller than the OK ones, a result consistent with the better performance observed for the KED technique. The evolution of the mean and the standard deviation of the rainfall-scaled ESDs over a range of spatial (5-300 km2) and temporal (1-6 h) scales demonstrates that there is clear added value of the radar with respect to the raingauge network for the shortest scales, which are those of interest for flash-flood prediction in the considered region.
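Step (ii) of the procedure can be sketched under the stated assumption of a linear relationship between the standard deviation and the mean of the variable. The coefficients a and b here are hypothetical placeholders for the values the cross-validation step would supply.

```python
def denormalize_esd(esd_norm, rain_mean, a, b):
    """Turn a normalized kriging ESD (from a unit-sill variogram) into
    a local spread of error via the assumed linear std-mean relation
    sigma(rain) = a + b * rain_mean (a, b fitted by cross-validation)."""
    return esd_norm * (a + b * rain_mean)

# hypothetical event: normalized ESD 0.8, local mean rain 12 mm
print(round(denormalize_esd(0.8, rain_mean=12.0, a=0.5, b=0.2), 2))  # -> 2.32
```

The normalized ESD thus carries only the network-geometry and spatial-structure information, while the multiplicative factor reintroduces the rain-amount dependence of the error.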
Sedda, Luigi; Mweempwa, Cornelius; Ducheyne, Els; De Pus, Claudia; Hendrickx, Guy; Rogers, David J.
2014-01-01
For the first time a Bayesian geostatistical version of the Moran Curve, a logarithmic form of the Ricker stock recruitment curve, is proposed that is able to give an estimate of net change in population demographic rates considering components such as fertility and density dependent and density independent mortalities. The method is applied to spatio-temporally referenced count data of tsetse flies obtained from fly-rounds. The model is a linear regression with three components: population rate of change estimated from the Moran curve, an explicit spatio-temporal covariance, and the observation error optimised within a Bayesian framework. The model was applied to the three main climate seasons of Zambia (rainy – January to April, cold-dry – May to August, and hot-dry – September to December) taking into account land surface temperature and (seasonally changing) cattle distribution. The model shows a maximum positive net change during the hot-dry season and a minimum between the rainy and cold-dry seasons. Density independent losses are correlated positively with day-time land surface temperature and negatively with night-time land surface temperature and cattle distribution. The inclusion of density dependent mortality considerably increases the goodness of fit of the model. Cross-validation with an independent dataset taken from the same area resulted in a very accurate estimate of tsetse catches. In general, the overall framework provides an important tool for vector control and eradication by identifying vector population concentrations and local vector demographic rates. It can also be applied to the case of sustainable harvesting of natural populations. PMID:24755848
A Geostatistical Framework for Estimating Rain Intensity Fields Using Dense Rain Gauge Networks
NASA Astrophysics Data System (ADS)
Benoit, L.; Mariethoz, G.
2015-12-01
Rain gauges provide direct and continuous observations of rain accumulation with a high time resolution (up to 1 min). However, the representativeness of these measurements is restricted to the funnel where rainwater is collected. Due to the high spatial heterogeneity of rainfall, this poor spatial representativeness is a strong limitation for the detailed reconstruction of rain intensity fields. Here we propose a geostatistical framework that is able to generate an ensemble of simulated rain fields based on data from a dense rain gauge network. When the density of rain gauges is high (sensor spacing in the range 500 m to 1 km), the spatial correlation between precipitation time series becomes sufficient to identify and track the rain patterns observed at the rain gauge sampling rate. Rain observations derived from such networks can thus be used to reconstruct the rain field with a high resolution in both space and time (i.e., 1 min in time, 100 m in space). Our method produces an ensemble of realizations that honor the rain intensities measured throughout the rain gauge network and preserve the main features of the rain intensity field at the considered scale, i.e.: the advection and morphing properties of rain cells over time, the intermittency and the skewed distribution of rainfall, and the decrease of the rain rate near the rain cell borders (dry drift). This makes it possible to image the observed rain field and characterize its main features, as well as to quantify the related uncertainty. The resulting reconstructions of the rainfall are continuous in time and can therefore complement weather radar observations, which are snapshots of the rain field. In addition, the application of this method to networks with a spatial extent comparable to that of a radar pixel (i.e., around 1 km2) could allow exploration of the rain field within a single radar pixel.
NASA Astrophysics Data System (ADS)
Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Tahmasebi, Pejman; Khoshdel, Hossein
2014-12-01
In facies modeling, the ideal objective is to integrate different sources of data to generate a model that has the highest consistency with reality with respect to geological shapes and their facies architectures. Multiple-point (geo)statistics (MPS) is a tool that offers the opportunity of reaching this goal via defining a training image (TI). A facies modeling workflow was conducted on a carbonate reservoir located in southwest Iran. Through a sequence stratigraphic correlation among the wells, it was revealed that the interval under the modeling process was deposited in a tidal flat environment. The Bahamas tidal flat environment, which is one of the most well-studied modern carbonate tidal flats, was considered to be the source of the information required for modeling a TI. In parallel, a neural network probability cube was generated based on a set of attributes derived from the 3D seismic cube to be applied in the MPS algorithm as soft conditioning data. Moreover, extracted channel bodies and drilled well log facies entered the modeling as hard data. Combination of these constraints resulted in a facies model that was highly consistent with the geological scenarios. This study showed how analogy with modern occurrences can be set as the foundation for generating a training image. Channel morphology and facies types currently being deposited, which are crucial for modeling a training image, were inferred from modern occurrences. However, there were some practical considerations concerning the MPS algorithm used for facies simulation. The main limitation was the huge amount of RAM and CPU time needed to perform simulations.
ERIC Educational Resources Information Center
Giorgis, Cyndi; Johnson, Nancy J.
2002-01-01
Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)
NASA Astrophysics Data System (ADS)
Zhang, Hua; Harter, Thomas; Sivakumar, Bellie
2006-06-01
Facies-based geostatistical models have become important tools for analyzing flow and mass transport processes in heterogeneous aquifers. Yet little is known about the relationship between these latter processes and the parameters of facies-based geostatistical models. In this study, we examine the transport of a nonpoint source solute normal (perpendicular) to the major bedding plane of an alluvial aquifer medium that contains multiple geologic facies, including interconnected, high-conductivity (coarse textured) facies. We also evaluate the dependence of the transport behavior on the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system's hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute traveltime probability density function (pdf) for solute flux from the water table to the bottom boundary (the production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models, with mean length anisotropy ratios for horizontal to vertical facies, ek, from 25:1 to 300:1 and with a wide range of facies volume proportions (e.g., from 5 to 95% coarse-textured facies). Predictions of traveltime pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer. Those predictions of traveltime pdfs also are affected by the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and, to a lesser degree, the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, traveltime is not lognormally distributed as is often assumed. Also, macrodispersive behavior (variance of the traveltime) is found not to be a unique function of the conductivity variance. For the parameter range
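The Markov-chain facies idea behind this study's hydrostratigraphic model can be sketched in one dimension: facies proportions and mean lengths fix a transition-probability matrix per grid step, from which a vertical facies column is simulated. The values below are illustrative only; the cited model works in 3-D and is coupled to flow and particle-transport simulation.

```python
import numpy as np

# Minimal 1-D Markov-chain facies simulation (hypothetical parameters).
rng = np.random.default_rng(42)
facies = ["coarse", "fine"]
# Row-stochastic transition probabilities per vertical step; large
# diagonal entries correspond to long mean facies lengths.
T = np.array([[0.9, 0.1],     # coarse tends to persist
              [0.2, 0.8]])    # fine persists somewhat less
state, column = 0, []
for _ in range(1000):
    column.append(state)
    state = rng.choice(2, p=T[state])
column = np.array(column)
# Fraction of "fine" cells converges to the stationary proportion
# (1/3 for this matrix) as the column grows.
print(round(float(column.mean()), 2))
```

The diagonal of T controls the mean length of each facies (mean run length 1/(1-T[i,i]) cells), while the off-diagonal terms encode juxtapositional preference, the two ingredients the abstract identifies as controlling the traveltime pdf.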
NASA Technical Reports Server (NTRS)
Herzfeld, Ute C.
2002-01-01
The central objective of this project has been the development of geostatistical methods for mapping elevation and ice surface characteristics from satellite radar altimeter (RA) and Synthetic Aperture Radar (SAR) data. The main results are an atlas of elevation maps of Antarctica from GEOSAT RA data and an atlas from ERS-1 RA data, including a total of about 200 maps with 3 km grid resolution. Maps and digital terrain models are applied to monitor and study changes in Antarctic ice streams and glaciers, including Lambert Glacier/Amery Ice Shelf, Mertz and Ninnis Glaciers, Jutulstraumen Glacier, Fimbul Ice Shelf, Slessor Glacier, Williamson Glacier and others.
NASA Technical Reports Server (NTRS)
Messaro, Semma; Harrison, Phillip
2010-01-01
Ares I zonal random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent and reentry. Random vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelope all the applicable environments where each component is located. Applicable Ares I vehicle drawings and design information needed to be assessed to determine the location(s) of each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel spreadsheet software and documenting them in a report using Microsoft Word word-processing software. Conclusion: Random vibration liftoff, ascent, and green run design and test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelope the zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor, it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.
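The enveloping step described above reduces to a frequency-by-frequency maximum over the applicable zonal power-spectral-density curves. A minimal sketch with hypothetical PSD values (the actual zonal curves are program-specific):

```python
import numpy as np

# Hypothetical acceleration PSD curves for two zones a component spans.
freqs  = np.array([20.0, 100.0, 500.0, 2000.0])   # Hz
zone_a = np.array([0.01, 0.06, 0.06, 0.02])       # g^2/Hz, e.g. liftoff
zone_b = np.array([0.02, 0.04, 0.08, 0.01])       # g^2/Hz, e.g. ascent

# Component-level test criterion: the envelope (pointwise maximum).
envelope = np.maximum(zone_a, zone_b)
print(envelope)                                   # -> [0.02 0.06 0.08 0.02]
```

Testing to the envelope guarantees the component sees at least the severity of every individual environment at every frequency, at the cost of some conservatism where the curves differ.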
NASA Astrophysics Data System (ADS)
Kisaka, M. Oscar; Mucheru-Muna, M.; Ngetich, F. K.; Mugwe, J.; Mugendi, D.; Mairura, F.; Shisanya, C.; Makokha, G. L.
2016-04-01
Drier parts of Kenya's Central Highlands endure persistent crop failure and declining agricultural productivity. These have, in part, been attributed to high temperatures, prolonged dry spells and erratic rainfall. Understanding the spatial-temporal variability of climatic indices such as rainfall at the seasonal level is critical for optimal rain-fed agricultural productivity and natural resource management in the study area. However, the predominant setbacks in analysing hydro-meteorological events are occasioned by lacking, inadequate or inconsistent meteorological data. As in most other places, climatic data sources in the study region are scarce and limited to single stations, with persistent missing/unrecorded data making their utilization a challenge. This study examined seasonal anomalies and variability in rainfall, drought occurrence and the efficacy of interpolation techniques in the drier regions of eastern Kenya. Rainfall data from five stations (Machang'a, Kiritiri, Kiambere, Kindaruma and Embu) were sourced from both the Kenya Meteorology Department and on-site primary recording. Owing to ongoing experimental work, automated recording of primary dailies at Machang'a has been ongoing since the year 2000; thus, Machang'a was treated as the reference (period-of-record) station for the selection of other stations in the region. The other stations had data sets of over 15 years with less than 10 % missing data, as required by the World Meteorological Organization, with quality checks subject to the Centre for Climate Systems Modeling (C2SM) through the MeteoSwiss and EMPA bodies. The dailies were also subjected to homogeneity testing to evaluate whether they came from the same population. The rainfall anomaly index, coefficients of variance and probability were utilized in the analyses of rainfall variability. Spline, kriging and inverse distance weighting interpolation techniques were assessed using daily rainfall data and
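Of the three interpolators assessed, inverse distance weighting is the simplest to sketch. A minimal illustration with hypothetical station coordinates and seasonal totals (not the study's data):

```python
import numpy as np

def idw(coords, values, x0, power=2.0):
    """Inverse-distance-weighting interpolation at point x0: a weighted
    mean of station values with weights 1/d**power."""
    d = np.linalg.norm(coords - x0, axis=1)
    if np.any(d == 0):                 # exact hit on a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(w @ values / w.sum())

# hypothetical seasonal rainfall totals (mm) at four stations (km grid)
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([250.0, 310.0, 270.0, 330.0])
print(round(idw(coords, rain, np.array([5.0, 5.0])), 1))   # -> 290.0
```

Unlike kriging, IDW uses no fitted spatial-structure model, which is one reason the study compares the techniques against station data before choosing one for the region.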
Geostatistical Modeling of the Spatial Variability of Arsenic in Groundwater of Southeast Michigan
NASA Astrophysics Data System (ADS)
Avruskin, G.; Goovaerts, P.; Meliker, J.; Slotnick, M.; Jacquez, G. M.; Nriagu, J. O.
2004-12-01
The last decade has witnessed an increasing interest in assessing health risks caused by exposure to contaminants present in the soil, air, and water. A key component of any exposure study is a reliable model for the space-time distribution of pollutants. This paper compares the performances of multiGaussian and indicator kriging for probabilistically modeling the space-time distribution of arsenic concentrations in groundwater of Southeast Michigan, accounting for information collected at private residential wells and the hydrogeochemistry of the area. This model will later be combined with a space-time information system to assess the risk associated with exposure to low levels of arsenic in drinking water (typically 5-100 μg/L), in particular for the development of bladder cancer. Because of the small changes in concentration observed in time, the study has focused on the spatial variability of arsenic. This study confirmed results in the literature that reported intense spatial non-homogeneity of As concentration, resulting in samples that vary greatly even when located a few meters apart. Indicator semivariograms further showed a better spatial connectivity of low concentrations, while values exceeding 32 μg/L (10% of wells) are spatially uncorrelated. Secondary information, such as proximity to Marshall Sandstone, helped the prediction only at a regional scale (i.e. beyond 15 km), leaving the short-range variability largely unexplained. Several geostatistical tools were tailored to the features of the As dataset: (1) semivariogram values were standardized by the lag variance to correct for the preferential sampling of wells with high concentrations, (2) semivariogram modeling was conducted under the constraint of reproduction of the nugget effect inferred from colocated well measurements, (3) kriging systems were modified to account for repeated measurements at a series of wells while avoiding non-invertible kriging matrices, (4) kriging-based smoothing
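The semivariogram tools described above build on the classical empirical (Matheron) semivariogram estimator. A minimal sketch follows, with invented coordinates and values and without the study's standardization by lag variance:

```python
import math
from itertools import combinations

def empirical_semivariogram(coords, values, lags, tol):
    """Matheron estimator: gamma(h) = (1 / 2N(h)) * sum of squared
    increments over point pairs separated by approximately h."""
    pairs = []
    for i, j in combinations(range(len(values)), 2):
        d = math.dist(coords[i], coords[j])
        pairs.append((d, (values[i] - values[j]) ** 2))
    gamma = []
    for h in lags:
        sq = [s for d, s in pairs if abs(d - h) <= tol]  # lag bin
        gamma.append(0.5 * sum(sq) / len(sq) if sq else float("nan"))
    return gamma

# Three collinear points; gamma at lags 1 and 2:
print(empirical_semivariogram([(0, 0), (1, 0), (2, 0)],
                              [0.0, 1.0, 0.0], [1.0, 2.0], 0.1))
# -> [0.5, 0.0]
```

An indicator semivariogram, as used in the study, would apply the same estimator to values first transformed to 0/1 by thresholding.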
Mapping Oxygen-18 in Meteoric Precipitation over Peninsular Spain using Geostatistical Tools
NASA Astrophysics Data System (ADS)
Capilla, J. E.; Rodriguez Arevalo, J.; Castaño Castaño, S.; Diaz Teijeiro, M.; Heredia Diaz, J.; Sanchez del Moral, R.
2011-12-01
Rainfall isotopic composition is a valuable source of information to understand and model the complexity of natural hydrological systems. The identification of water recharge origins, water flow trajectories, residence times and pollutant movement in the hydrologic cycle can greatly benefit from this information. It is also very useful in other environmental issues associated with water movement in the biosphere. Although the potential of stable isotope data in hydrology and climatic studies is promising, so far it has been strongly limited by the availability of data. A major challenge is to extend sparse measurements of stable isotope data to surrounding geographic areas taking into account secondary variables such as latitude, altitude and climate-related parameters. The current state of the art provides different approaches mainly based on deterministic interpolation techniques. In Spain a network called REVIP, made up of 16 nodes, is maintained by CEDEX (Centro de Estudios y Experimentación de Obras Públicas). At REVIP nodes, stable isotopes in meteoric precipitation (Oxygen-18 and Deuterium) have been continuously recorded since the year 2000. This network provides a rare opportunity to study the patterns of spatial distribution over the whole country. In fact, some accurate regression models have already been proposed that map stable isotopes against latitude and altitude. Yet these regressions leave small residuals at the network nodes that are possibly caused by the local average features of climatic events. There is an ongoing effort to improve these maps that includes the identification of relevant climatic parameters and the application of geostatistical techniques. This paper describes the application of a regression kriging methodology to map oxygen-18 over peninsular Spain using REVIP data. The methodology includes a prior process to obtain normalized stable isotope concentrations that are independent of latitude and altitude, a structural
Karagiannis-Voules, Dimitrios-Alexios; Odermatt, Peter; Biedermann, Patricia; Khieu, Virak; Schär, Fabian; Muth, Sinuon; Utzinger, Jürg; Vounatsou, Penelope
2015-01-01
Soil-transmitted helminth infections are intimately connected with poverty. Yet, socioeconomic proxies have rarely been used in spatially explicit risk profiling. We compiled household-level socioeconomic data pertaining to sanitation, drinking-water, education and nutrition from readily available Demographic and Health Surveys, Multiple Indicator Cluster Surveys and World Health Surveys for Cambodia and aggregated the data at village level. We conducted a systematic review to identify parasitological surveys and made every effort possible to extract, georeference and upload the data in the open-source Global Neglected Tropical Diseases database. Bayesian geostatistical models were employed to spatially align the village-aggregated socioeconomic predictors with the soil-transmitted helminth infection data. The risk of soil-transmitted helminth infection was predicted at a grid of 1 × 1 km covering Cambodia. Additionally, two separate individual-level spatial analyses were carried out, for Takeo and Preah Vihear provinces, to assess and quantify the association between soil-transmitted helminth infection and socioeconomic indicators at an individual level. Overall, we obtained socioeconomic proxies from 1624 locations across the country. Surveys focussing on soil-transmitted helminth infections were extracted from 16 sources reporting data from 238 unique locations. We found that the risk of soil-transmitted helminth infection from 2000 onwards was considerably lower than in surveys conducted earlier. Population-adjusted prevalences for school-aged children from 2000 onwards were 28.7% for hookworm, 1.5% for Ascaris lumbricoides and 0.9% for Trichuris trichiura. Surprisingly, in the country-wide analyses, we did not find any significant association between soil-transmitted helminth infection and village-aggregated socioeconomic proxies. Based also on the individual-level analyses we conclude that socioeconomic proxies might not be good predictors at an
Geostatistics applied to cross-well reflection seismic for imaging carbonate aquifers
NASA Astrophysics Data System (ADS)
Parra, Jorge; Emery, Xavier
2013-05-01
Cross-well seismic reflection data, acquired from a carbonate aquifer at Port Mayaca test site near the eastern boundary of Lake Okeechobee in Martin County, Florida, are used to delineate flow units in the region intercepted by two wells. The interwell impedance determined by inversion from the seismic reflection data allows us to visualize the major boundaries between the hydraulic units. The hydraulic (flow) unit properties are based on the integration of well logs and the carbonate structure, which consists of isolated vuggy carbonate units and interconnected vug systems within the carbonate matrix. The vuggy and matrix porosity logs based on Formation Micro-Imager (FMI) data provide information about highly permeable conduits at well locations. The integration of the inverted impedance and well logs using geostatistics helps us to assess the resolution of the cross-well seismic method for detecting conduits and to determine whether these conduits are continuous or discontinuous between wells. A productive water zone of the aquifer outlined by the well logs was selected for analysis and interpretation. The ELAN (Elemental Log Analysis) porosity from two wells was selected as primary data and the reflection seismic-based impedance as secondary data. The direct and cross variograms along the vertical wells capture nested structures associated with periodic carbonate units, which correspond to connected flow units between the wells. Alternatively, the horizontal variogram of impedance (secondary data) provides scale lengths that correspond to irregular boundary shapes of flow units. The ELAN porosity image obtained by cokriging exhibits three similar flow units at different depths. These units are thin conduits developed in the first well and, at about the middle of the interwell separation region, these conduits connect to thicker flow units that are intercepted by the second well. In addition, a high impedance zone (low porosity) at a depth of about 275 m, after
NASA Astrophysics Data System (ADS)
Comunian, Alessandro; Felletti, Fabrizio; Giacobbo, Francesca; Giudici, Mauro; Bersezio, Riccardo
2015-04-01
When building a geostatistical model of the hydrofacies distribution in a volume block it is important to include all the relevant available information. Localised information about the observed hydrofacies (hard data) is routinely included in the simulation procedures. Non-stationarities in the hydrofacies distribution can be handled by considering auxiliary (soft) data extracted, for example, from the results of geophysical surveys. This piece of information can be included as auxiliary data both in variogram-based methods (i.e. co-kriging) and in multiple-point statistics (MPS) methods. The latter allow one to formalise some soft knowledge about the considered model of heterogeneity using a training image. However, including information related to the stratigraphic hierarchy in the training image is rarely straightforward. In this work, a methodology to include the information about the stratigraphic hierarchy in the simulation process is formalised and implemented in an MPS framework. The methodology is applied and tested by reconstructing two model blocks of alluvial sediments with an approximate volume of a few cubic metres. The external faces of the blocks, exposed in a quarry, were thoroughly mapped and their stratigraphic hierarchy was interpreted in a previous study. The bi-dimensional (2D) maps extracted from the faces, which are used as training images and as hard data, present a vertical trend and complex stratigraphic architectures. The training images and the conditioning data are classified according to the proposed stratigraphic hierarchy, and the hydrofacies codes are grouped to allow a sequence of interleaved binary MPS simulations. Every step of the simulation sequence corresponds to a group of hydrofacies defined in the stratigraphic hierarchy. The blocks simulated with the proposed methodology are compared with blocks simulated with a standard MPS approach. The comparisons are performed on many realisations using connectivity indicators and
NASA Astrophysics Data System (ADS)
Matiatos, Ioannis; Varouhakis, Emmanouil A.; Papadopoulou, Maria P.
2015-04-01
level and nitrate concentrations were produced and compared with those obtained from groundwater and mass transport numerical models. Preliminary results showed that the spatiotemporal geostatistical method was comparable in efficiency to the numerical models. However, the data requirements of the former were significantly lower. The advantages and disadvantages of the methods' performance were analysed and discussed, indicating the characteristics of the different approaches.
Yang, Chieh-Hou; Lee, Wei-Feng
2002-01-01
Ground water reservoirs in the Choshuichi alluvial fan, central western Taiwan, were investigated using direct-current (DC) resistivity soundings at 190 locations, combined with hydrogeological measurements from 37 wells. In addition, attempts were made to calculate aquifer transmissivity from both surface DC resistivity measurements and geostatistically derived predictions of aquifer properties. DC resistivity sounding data are highly correlated to the hydraulic parameters in the Choshuichi alluvial fan. By estimating the spatial distribution of hydraulic conductivity from the kriged well data and the cokriged thickness of the correlative aquifer from both resistivity sounding data and well information, the transmissivity of the aquifer at each location can be obtained from the product of kriged hydraulic conductivity and computed thickness of the geoelectric layer. Thus, the spatial variation of the transmissivities in the study area is obtained. Our work is more comparable to Ahmed et al. (1988) than to the work of Niwas and Singhal (1981). The first "constraint" from Niwas and Singhal's work is a result of their use of linear regression. The geostatistical approach taken here (and by Ahmed et al. [1988]) is a natural improvement on the linear regression approach. PMID:11916121
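The kriging step underlying the transmissivity estimates above (T obtained as the product of kriged hydraulic conductivity and interpolated layer thickness) can be sketched as ordinary kriging. The exponential variogram, its parameters, and all coordinates and values below are invented for illustration; they are not the study's data or model.

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, rng=5.0):
    """Ordinary kriging with an assumed exponential variogram
    gamma(h) = sill * (1 - exp(-3h / rng))."""
    coords = np.asarray(coords, float)
    vals = np.asarray(values, float)
    n = len(vals)
    gam = lambda h: sill * (1.0 - np.exp(-3.0 * h / rng))
    # separation distances between observations, and to the target
    dmat = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
    dvec = np.sqrt(((coords - np.asarray(target, float)) ** 2).sum(-1))
    # kriging system with a Lagrange multiplier enforcing unbiasedness
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gam(dmat)
    A[n, n] = 0.0
    b = np.append(gam(dvec), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ vals)

# Hypothetical wells: transmissivity = kriged K times kriged thickness
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
K = ordinary_kriging(pts, [5.0, 7.0, 6.0], (0.5, 0.5))    # m/day
b = ordinary_kriging(pts, [10.0, 12.0, 11.0], (0.5, 0.5))  # m
T = K * b                                                  # m^2/day
```

Note that kriging is an exact interpolator: at an observed well the estimate reproduces the measured value (with zero nugget), which is one check worth running on any implementation.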
Diaz-Lacava, A. N.; Walier, M.; Holler, D.; Steffens, M.; Gieger, C.; Furlanello, C.; Lamina, C.; Wichmann, H. E.; Becker, T.
2015-01-01
Aiming to investigate fine-scale patterns of genetic heterogeneity in modern humans from a geographic perspective, a genetic geostatistical approach framed within a geographic information system is presented. A sample collected for prospective studies in a small area of southern Germany was analyzed. No indication of genetic heterogeneity was detected in previous analyses. Socio-demographic and genotypic data of German citizens were analyzed (212 SNPs; n = 728). Genetic heterogeneity was evaluated with observed heterozygosity (HO). Best-fitting spatial autoregressive models were identified, using socio-demographic variables as covariates. Spatial analysis included surface interpolation and geostatistics of observed and predicted patterns. Prediction accuracy was quantified. Spatial autocorrelation was detected for both socio-demographic and genetic variables. Augsburg City and eastern suburban areas showed higher HO values. The selected model gave the best predictions in suburban areas. Fine-scale patterns of genetic heterogeneity were observed. In accordance with the literature, more urbanized areas showed higher levels of admixture. This approach showed efficacy for detecting and analyzing subtle patterns of genetic heterogeneity within small areas. It is scalable in number of loci, even up to whole-genome analysis. This approach may be applicable to investigating the underlying genetic history that is, at least partially, embedded in geographic data. PMID:26258132
NASA Astrophysics Data System (ADS)
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-09-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as those in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical-world empirical extractors as measured by standardized tests.
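As a point of contrast with the paper's extractor (whose construction is not reproduced here), the classic von Neumann procedure illustrates the basic idea of distilling unbiased bits from a biased but independent bit stream:

```python
def von_neumann_extract(bits):
    """Classic von Neumann extractor: read the input two bits at a
    time, map 01 -> 0 and 10 -> 1, and discard 00 and 11 pairs.
    If input bits are i.i.d. with any fixed bias, the output bits
    are unbiased (P(01) == P(10))."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([0, 1, 1, 1, 1, 0, 0, 0]))  # -> [0, 1]
```

The price of this simplicity is throughput (many input pairs are discarded) and a strong independence assumption, which is exactly what modern extractors for weak sources avoid.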
SETS. Set Equation Transformation System
Worrell, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
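The reduction of Boolean equations by identities that SETS performs can be illustrated, in highly simplified form, by the absorption law X + X·Y = X applied to a sum-of-products expression. This sketch is not SETS itself, only a demonstration of one identity it exploits:

```python
def absorb(sop):
    """Reduce a sum-of-products Boolean expression by absorption:
    a product term is dropped if a strict subset of its literals
    already appears as another term (X + X.Y = X). Each term is a
    set of literal names."""
    terms = [frozenset(t) for t in sop]
    kept = []
    for t in terms:
        if any(u < t for u in terms):  # strictly smaller term absorbs t
            continue
        if t not in kept:              # also drop exact duplicates
            kept.append(t)
    return kept

# A + A.B + B.C  reduces to  A + B.C
print(absorb([{"A"}, {"A", "B"}, {"B", "C"}]))
```

Full PRA tools also apply idempotence, complementation and prime-implicant computation; absorption alone already shrinks typical fault-tree cut-set lists considerably.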
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in downstream applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values across gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. Because only non-negative interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging, whose weights may be negative, is problematic. Employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The
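The key assumption above, that one set of non-negative weights can be applied across all non-exceedance probabilities, can be sketched as follows. The quantile values are invented, and this is not the random-mixing algorithm itself, only the monotonicity argument it builds on:

```python
def interpolate_quantiles(quantile_curves, weights):
    """Interpolate a quantile function at an ungauged site as a
    weighted average of neighbouring gauges' quantile functions.
    With non-negative weights, each interpolated quantile curve is
    monotonically non-decreasing whenever the input curves are."""
    assert all(w >= 0 for w in weights), "negative weights break monotonicity"
    assert abs(sum(weights) - 1.0) < 1e-9
    n = len(quantile_curves[0])
    return [sum(w * q[i] for w, q in zip(weights, quantile_curves))
            for i in range(n)]

# Hypothetical hourly rainfall quantiles (mm) at p = 0.5, 0.9, 0.99:
gauge_a = [0.25, 1.5, 6.0]
gauge_b = [0.75, 2.5, 8.0]
site = interpolate_quantiles([gauge_a, gauge_b], [0.5, 0.5])
print(site)  # -> [0.5, 2.0, 7.0]
```

A kriging weight set with a negative entry could instead produce an interpolated "distribution function" that decreases over some probability range, which is why the abstract rules out kriging with external drift here.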
Is random access memory random?
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match the processor addressing rate with the memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
Giardina, Federica; Franke, Jonas; Vounatsou, Penelope
2015-11-26
The study of malaria spatial epidemiology has benefited from recent advances in geographic information system and geostatistical modelling. Significant progress in earth observation technologies has led to the development of moderate, high and very high resolution imagery. Extensive literature exists on the relationship between malaria and environmental/climatic factors in different geographical areas, but few studies have linked human malaria parasitemia survey data with remote sensing-derived land cover/land use variables and very few have used Earth Observation products. Comparison among the different resolution products to model parasitemia has not yet been investigated. In this study, we probe a proximity measure to incorporate different land cover classes and assess the effect of the spatial resolution of remotely sensed land cover and elevation on malaria risk estimation in Mozambique after adjusting for other environmental factors at a fixed spatial resolution. We used data from the Demographic and Health survey carried out in 2011, which collected malaria parasitemia data on children from 0 to 5 years old, analysing them with a Bayesian geostatistical model. We compared the risk predicted using land cover and elevation at moderate resolution with the risk obtained employing the same variables at high resolution. We used elevation data at moderate and high resolution and the land cover layer from the Moderate Resolution Imaging Spectroradiometer as well as the one produced by MALAREO, a project covering part of Mozambique during 2010-2012 that was funded by the European Union's 7th Framework Program. Moreover, the number of infected children was predicted at different spatial resolutions using AFRIPOP population data and the enhanced population data generated by the MALAREO project for comparison of estimates. The Bayesian geostatistical model showed that the main determinants of malaria presence are precipitation and day temperature. However, the presence
Ribeiro, Manuel C; Pinho, P; Branquinho, C; Llop, Esteve; Pereira, Maria J
2016-08-15
In most studies correlating health outcomes with air pollution, personal exposure assignments are based on measurements collected at air-quality monitoring stations not coinciding with health data locations. In such cases, interpolators are needed to predict air quality in unsampled locations and to assign personal exposures. Moreover, a measure of the spatial uncertainty of exposures should be incorporated, especially in urban areas where concentrations vary at short distances due to changes in land use and pollution intensity. These studies are limited by the lack of literature comparing exposure uncertainty derived from distinct spatial interpolators. Here, we addressed these issues with two interpolation methods: regression kriging (RK) and ordinary kriging (OK). These methods were used to generate air-quality simulations with a geostatistical algorithm. For each method, the geostatistical uncertainty was drawn from generalized linear model (GLM) analysis. We analyzed the association between air quality and birth weight. Personal health data (n=227) and exposure data were collected in Sines (Portugal) during 2007-2010. Because air-quality monitoring stations in the city do not offer high-spatial-resolution measurements (n=1), we used lichen data as an ecological indicator of air quality (n=83). We found no significant difference in the fit of GLMs with either of the geostatistical methods. With RK, however, the models tended to fit better more often and worse less often. Moreover, the geostatistical uncertainty results showed a marginally higher mean and precision with RK. Combined with lichen data and land-use data of high spatial resolution, RK is a more effective geostatistical method for relating health outcomes with air quality in urban areas. This is particularly important in small cities, which generally do not have expensive air-quality monitoring stations with high spatial resolution. Further, alternative ways of linking human activities with their
NASA Astrophysics Data System (ADS)
Leonetti, Marco; López, Cefe
2012-06-01
A random laser is formed by a haphazard assembly of nondescript optical scatterers with optical gain. Multiple light scattering replaces the optical cavity of traditional lasers, and the interplay between gain, scattering and size determines its unique properties. Random lasers studied until recently consisted of irregularly shaped or polydisperse scatterers, with some average scattering strength constant across the gain frequency band. Photonic glasses can sustain scattering resonances that can be placed in the gain window, since they are formed by monodisperse spheres [1]. The unique resonant scattering of this novel material allows controlling the lasing color via the diameter of the particles and their refractive index. Thus a random laser with an a priori set lasing peak can be designed [2]. A special pumping scheme that enables selection of the number of activated modes in a random laser makes it possible to prepare random lasers in two distinct regimes by controlling directionality through the shape of the pump [3]. When pumping is essentially unidirectional, few (barely interacting) modes are turned on, which show as sharp, uncorrelated peaks in the spectrum. By increasing the angular span of the pump beams, many resonances intervene, generating a smooth emission spectrum with a high degree of correlation and shorter lifetime. These are signs of a phase-locking transition, in which phases are clamped together so that modes oscillate synchronously.
NASA Astrophysics Data System (ADS)
Goovaerts, P.; Jacquez, G. M.; Marcus, A. W.
2004-12-01
Spatial data are periodically collected and processed to monitor, analyze and interpret developments in our changing environment. Remote sensing is a modern way of collecting data and has seen enormous growth since the launching of modern satellites and the development of airborne sensors. In particular, the recent availability of high spatial resolution hyperspectral imagery (spatial resolution of less than 5 meters, with data collected over 64 or more bands of electromagnetic radiation for each pixel) offers a great potential to significantly enhance environmental mapping and our ability to model spatial systems. High spatial resolution imagery contains a remarkable quantity of information that could be used to analyze spatial breaks (boundaries), areas of similarity (clusters), and spatial autocorrelation (associations) across the landscape. This paper addresses the specific issue of soil disturbance detection, which could indicate the presence of land mines or recent troop and heavy equipment movements. A challenge presented by soil detection is to retain the measurement of fine-scale features (i.e. mineral soil changes, organic content changes, vegetation disturbance related changes, aspect changes) while still covering proportionally large spatial areas. An additional difficulty is that no ground data might be available for the calibration of spectral signatures, and little might be known about the size of patches of disturbed soils to be detected. This paper describes a new technique for automatic target detection which capitalizes on both spatial correlation and correlation across spectral bands, and does not require any a priori information on the target spectral signature, but does not allow discrimination between targets. This approach involves successively a multivariate statistical analysis (principal component analysis) of all spectral bands, a geostatistical filtering of noise and regional background in the first principal components using factorial kriging, and
Quasi-Random Sequence Generators.
1994-03-01
Version 00 LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L=2**30 points in the N-dimensional unit cube I**N = [0,1]**N. The sequences are used as nodes for multidimensional integration, as search points in global optimization, as trial points in multicriteria decision making, and as quasi-random points for quasi-Monte Carlo algorithms.
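LPTAU implements Sobol-type LP-tau sequences; reproducing that construction requires direction-number tables, so the simpler Halton construction below is used to illustrate what a quasi-random (low-discrepancy) sequence looks like. This is a sketch of the general idea, not the LPTAU algorithm:

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence: reflect the
    base-`base` digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton(n, dims, primes=(2, 3, 5, 7, 11, 13)):
    """First n points of the Halton sequence in [0,1)**dims:
    one van der Corput sequence per dimension, in coprime bases."""
    return [[van_der_corput(i, primes[d]) for d in range(dims)]
            for i in range(1, n + 1)]

print(halton(4, 2))
# -> [[0.5, 1/3], [0.25, 2/3], [0.75, 1/9], [0.125, 4/9]]
```

Unlike pseudo-random points, these fill the unit cube evenly by construction, which is why such sequences serve as integration nodes and global-optimization trial points.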
Random sequential adsorption on fractals.
Ciesla, Michal; Barbasz, Jakub
2012-07-28
Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski's triangle and carpet-like fractals (1 < d < 2), and on the general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on the measurement of fundamental properties of coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as RSA kinetics. The results obtained allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, simulations show that, in general, most of the known dimensional properties of adsorbed monolayers are valid for non-integer dimensions.
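For the simplest case of equal disks on a flat unit square (rather than the paper's fractal collectors), the RSA algorithm reduces to the following sketch; the radius, attempt count and seed are arbitrary illustration values:

```python
import math
import random

def rsa_disks(radius=0.05, attempts=20000, seed=1):
    """Random sequential adsorption of equal disks on the unit
    square: propose centres uniformly at random and irreversibly
    accept each disk only if it overlaps no previously adsorbed
    disk (no relaxation, no desorption)."""
    rng = random.Random(seed)
    centres = []
    min_sq = (2 * radius) ** 2  # squared centre-to-centre minimum
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        if all((x - u) ** 2 + (y - v) ** 2 >= min_sq for u, v in centres):
            centres.append((x, y))
    coverage = len(centres) * math.pi * radius ** 2
    return centres, coverage

centres, theta = rsa_disks()
# For long runs the coverage approaches the 2-D jamming limit ~0.547
print(len(centres), round(theta, 3))
```

The kinetics studied in the paper correspond to how `coverage` grows with the attempt count; near jamming, acceptances become vanishingly rare.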
Carlson, R A; Osiensky, J L
2001-01-01
Soil water NO3- -N concentrations were monitored for an alfalfa-oat-bean rotation and an alfalfa-bean-bean rotation in the Idaho Snake River Plain as part of the USEPA Section 319 National Monitoring Program. This monitoring study was conducted to evaluate the potential beneficial impacts of a USDA-recommended crop rotation on subsurface NO3- -N concentrations at a 4.9 hectare (ha) farm test field. Soil water monitoring was conducted in cooperation with the USDA Snake River Plain Water Quality Demonstration Project. Geostatistical and statistical analyses of NO3- -N data collected from a network of soil water solution samplers (lysimeters), coupled with hydrogeological characterization of the field, indicated that alfalfa followed by oats reduced soil water NO3- -N concentrations, at least temporarily, compared to alfalfa followed by beans, which is the traditional practice in the area. Soil water NO3- -N sample data showed a unimodal distribution through the first two months of the split-field rotations that changed to a distinct bimodal distribution three months into the rotations. Development of the bimodal distribution of soil water NO3- -N appeared to correspond directly to the rotational split of the field. The median soil water NO3- -N value calculated from the sample data was approximately 50 mg L(-1) greater in the field half planted in beans than in the field half planted in oats. Geostatistical spatial mapping results using sequential Gaussian simulations (SGS) supported these findings. SGS results suggested that elevated concentrations of NO3- -N in the soil water were related to both stratigraphic factors and the rotational split.
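Sequential Gaussian simulation itself is involved, but the experimental semivariogram that underlies this kind of geostatistical mapping is simple to compute. A hedged sketch on synthetic data (the field, coordinates, and binning are illustrative, not the study's lysimeter network):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Classical (Matheron) semivariogram estimator:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs whose
    separation distance falls in each lag bin."""
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    dists = d[iu]
    semis = (0.5 * (values[:, None] - values[None, :]) ** 2)[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(semis[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(200, 2))                 # sample locations
z = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=200)      # correlated field
gam = empirical_variogram(pts, z, bin_edges=np.linspace(0, 5, 11))
```

For a spatially correlated field, the semivariance rises with lag distance; fitting a model to this curve is the input SGS needs.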
Kura, Nura Umar; Ramli, Mohammad Firuz; Ibrahim, Shaharin; Sulaiman, Wan Nur Azmin; Aris, Ahmad Zaharin
2014-01-01
In this study, geophysics, geochemistry, and geostatistical techniques were integrated to assess seawater intrusion in Kapas Island, given its geological complexity and multiple contamination sources. Five resistivity profiles were measured using an electrical resistivity technique. The results reveal very low resistivity (<1 Ωm), suggesting either marine clay deposits or seawater intrusion or both along the majority of the resistivity images. As a result, geochemistry was further employed to verify the resistivity evidence. The Chadha and Stiff diagrams classify the island groundwater into Ca-HCO3, Ca-Na-HCO3, Na-HCO3, and Na-Cl water types, with Ca-HCO3 dominant. The Mg(2+)/Mg(2+)+Ca(2+), HCO3(-)/anion, Cl(-)/HCO3(-), Na(+)/Cl(-), and SO4(2-)/Cl(-) ratios show that some sampling sites are affected by seawater intrusion; these sites fall within the same areas that show low resistivity values. The resulting ratios and resistivity values were then used in a geographical information system (GIS) environment to create geostatistical maps of the individual indicators. These maps were then overlaid to create a final map of seawater-affected areas. The final map successfully delineates the area that is actually undergoing seawater intrusion. The proposed technique is not area specific and hence can work in any place with similar characteristics or under the influence of multiple contaminants, so as to distinguish the area that is truly affected by any targeted pollutant from the rest. This information provides managers and policy makers with knowledge of the current situation and can serve as a guide and standard in water research for sustainable management planning. PMID:24532282
Riva, Monica; Guadagnini, Alberto; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier; Ptak, Thomas
2008-10-23
We analyze the relative importance of the selection of (1) the geostatistical model depicting the structural heterogeneity of an aquifer, and (2) the basic processes to be included in the conceptual model, to describe the main aspects of solute transport at an experimental site. We focus on the results of a forced-gradient tracer test performed at the "Lauswiesen" experimental site, near Tübingen, Germany. In the experiment, NaBr is injected into a well located 52 m from a pumping well. Multilevel breakthrough curves (BTCs) are measured in the latter. We conceptualize the aquifer as a three-dimensional, doubly stochastic composite medium, where distributions of geomaterials and attributes, e.g., hydraulic conductivity (K) and porosity (phi), can be uncertain. Several alternative transport processes are considered: advection, advection-dispersion, and/or mass transfer between mobile and immobile regions. Flow and transport are tackled within a stochastic Monte Carlo framework to describe key features of the experimental BTCs, such as temporal moments, peak time, and pronounced tailing. We find that, regardless of the complexity of the conceptual transport model adopted, an adequate description of heterogeneity is crucial for generating alternative equally likely realizations of the system that are consistent with (a) the statistical description of the heterogeneous system, as inferred from the data, and (b) salient features of the depth-averaged breakthrough curve, including preferential paths, slow release of mass particles, and anomalous spreading. While the available geostatistical characterization of heterogeneity can explain most of the integrated behavior of transport (the depth-averaged breakthrough curve), not all multilevel BTCs are described with equal success. This suggests that transport models based simply on integrated measurements may not ensure an accurate representation of many of the important features required in three-dimensional transport models.
NASA Astrophysics Data System (ADS)
Bellotti, F.; Capra, L.; Sarocchi, D.; D'Antonio, M.
2010-03-01
Grain size analysis of volcaniclastic deposits is mainly used to study flow transport and depositional processes, in most cases by comparing statistical parameters and how they change with distance from the source. In this work, geospatial and multivariate analyses are presented as a flexible geostatistical tool applied to volcaniclastic deposits in order to provide an effective and relatively simple methodology for texture description, deposit discrimination, and interpretation of depositional processes. We chose the case of Nevado de Toluca volcano (Mexico) because of existing knowledge of its geological evolution, stratigraphic succession, and the spatial distribution of its volcaniclastic units. Grain size analyses and frequency distribution curves were carried out to characterize and compare the 28-ka block-and-ash flow deposit associated with a dome destruction episode, and the El Morral debris avalanche deposit originated by the collapse of the south-eastern sector of the volcano. Geostatistical interpolation of the sedimentological data makes it possible to produce two-dimensional maps draped over the volcano topography, showing the granulometric distribution, sorting, and fine material concentration throughout the whole deposit with respect to topographic changes. In this way, it is possible to analyze a continuous surface of the grain size distribution of volcaniclastic deposits and better understand flow transport processes. The application of multivariate statistical analysis (discriminant function) indicates that this methodology could be useful in discriminating deposits with different origins, or different depositional lithofacies within the same deposit. The proposed methodology could be an interesting approach to support more classical analyses of volcaniclastic deposits, especially where a clear field classification appears problematic because of a homogeneous texture of the deposits or their scarce and discontinuous outcrops. Our study is an example of the
Sheikhy Narany, Tahoora; Ramli, Mohammad Firuz; Aris, Ahmad Zaharin; Sulaiman, Wan Nor Azmin; Fakharian, Kazem
2014-09-01
In recent years, groundwater quality has become a global concern due to its effect on human life and natural ecosystems. To assess the groundwater quality in the Amol-Babol Plain, a total of 308 water samples were collected during the wet and dry seasons in 2009. The samples were analysed for their physico-chemical and biological constituents. Multivariate statistical analysis and geostatistical techniques were applied to assess the spatial and temporal variability of groundwater quality and to identify the main factors and sources of contamination. Principal component analysis (PCA) revealed that seven factors explained around 75% of the total variance, which highlighted salinity, hardness and biological pollution as the dominant factors affecting groundwater quality in the Plain. Two-way analysis of variance (ANOVA) was conducted on the dataset to evaluate the spatio-temporal variation. The results showed that there were no significant temporal variations between the two seasons, which explains the similarity between six of the component factors in the dry and wet seasons based on the PCA results. There are, however, significant spatial differences (p < 0.05) in the parameters under study, including salinity, potassium, sulphate and dissolved oxygen, across the plain. The least significant difference (LSD) test revealed that groundwater salinity in the eastern region is significantly different from that in the central and western parts of the study area. Finally, multivariate analysis and geostatistical techniques were combined as an effective method for demonstrating the spatial structure of multivariate spatial data. It was concluded that multiple natural processes and anthropogenic activities were the main sources of groundwater salinization, hardness and microbiological contamination in the study area.
Barra, Marco; Petitgas, Pierre; Bonanno, Angelo; Somarakis, Stylianos; Woillez, Mathieu; Machias, Athanasios; Mazzola, Salvatore; Basilone, Gualtiero; Giannoulaki, Marianna
2015-01-01
Geostatistical techniques were applied and a series of spatial indicators were calculated (occupation, aggregation, location, dispersion, spatial autocorrelation and overlap) to characterize the spatial distributions of European anchovy and sardine during summer. Two ecosystems were compared for this purpose, both located in the Mediterranean Sea: the Strait of Sicily (upwelling area) and the North Aegean Sea (continental shelf area, influenced by freshwater). Although the biomass of anchovy and sardine presented high interannual variability in both areas, the location of the centres of gravity and the main spatial patches of their populations were very similar between years. The size of the patches representing the dominant part of the abundance (80%) was mostly ecosystem- and species-specific. Occupation (area of presence) appears to be shaped by the extent of suitable habitats in each ecosystem whereas aggregation patterns (how the populations are distributed within the area of presence) were species-specific and related to levels of population biomass. In the upwelling area, both species showed consistently higher occupation values compared to the continental shelf area. Certain characteristics of the spatial distribution of sardine (e.g. spreading area, overlapping with anchovy) differed substantially between the two ecosystems. Principal component analysis of geostatistical and spatial indicators revealed that biomass was significantly related to a suite of, rather than single, spatial indicators. At the spatial scale of our study, strong correlations emerged between biomass and the first principal component axis with highly positive loadings for occupation, aggregation and patchiness, independently of species and ecosystem. Overlapping between anchovy and sardine increased with the increase of sardine biomass but decreased with the increase of anchovy. This contrasting pattern was attributed to the location of the respective major patches combined with the
NASA Astrophysics Data System (ADS)
Fabijańczyk, Piotr; Zawadzki, Jarosław
2016-04-01
Field magnetometry is a fast method that has previously been used effectively to assess potential soil pollution. One of the most popular devices used to measure soil magnetic susceptibility at the soil surface is the Bartington MS2D. A single MS2D reading of soil magnetic susceptibility takes little time but is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Consequently, measured values of soil magnetic susceptibility usually have to be validated against more precise, but also much more expensive, chemical measurements. The goal of this study was to analyze methods of validating magnetometric measurements using chemical analyses of element concentrations in soil. Additionally, validation of surface measurements of soil magnetic susceptibility was performed using selected parameters of the distribution of magnetic susceptibility in the soil profile. Validation was performed using selected geostatistical measures of cross-correlation, and the geostatistical approach was compared with validation based on classical statistics. Measurements were performed at selected areas in the Upper Silesian Industrial Area in Poland and in selected parts of Norway. In these areas, soil magnetic susceptibility was measured on the soil surface using an MS2D Bartington device and in the soil profile using an MS2C Bartington device. Additionally, soil samples were taken in order to perform chemical measurements. Acknowledgment: The research leading to these results received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.
NASA Astrophysics Data System (ADS)
Goeckede, M.; Yadav, V.; Mueller, K. L.; Gourdji, S. M.; Michalak, A. M.; Law, B. E.
2011-12-01
We designed a framework to train biogeophysics-biogeochemistry process models using atmospheric inverse modeling, multiple databases characterizing biosphere-atmosphere exchange, and advanced geostatistics. Our main objective is to reduce uncertainties in carbon cycle and climate projections by exploring the full spectrum of process representation, data assimilation and statistical tools currently available. Incorporating multiple high-quality data sources like eddy-covariance flux databases or biometric inventories has the potential to produce a rigorous data-constrained process model implementation. However, representation errors may bias spatially explicit model output when upscaling to regional to global scales. Atmospheric inverse modeling can be used to validate the regional representativeness of the fluxes, but each piece of prior information from the surface databases limits the ability of the inverse model to characterize the carbon cycle from the perspective of the atmospheric observations themselves. The use of geostatistical inverse modeling (GIM) holds the potential to overcome these limitations, replacing rigid prior patterns with information on how flux fields are correlated across time and space, as well as ancillary environmental data related to the carbon fluxes. We present results from a regional scale data assimilation study that focuses on generating terrestrial CO2 fluxes at high spatial and temporal resolution in the Pacific Northwest United States. Our framework couples surface fluxes from different biogeochemistry process models to very high resolution atmospheric transport using mesoscale modeling (WRF) and Lagrangian Particle dispersion (STILT). We use GIM to interpret the spatiotemporal differences between bottom-up and top-down flux fields. GIM results make it possible to link those differences to input parameters and processes, strengthening model parameterization and process understanding. Results are compared against independent
NASA Astrophysics Data System (ADS)
Ritzi, Robert W.; Soltanian, Mohamad Reza
2015-12-01
In the method of deterministic geostatistics (sensu Isaaks and Srivastava, 1988), highly resolved data sets are used to compute sample spatial-bivariate statistics within a deterministic framework. The general goal is to observe what real, highly resolved, sample spatial-bivariate correlation looks like when it is well quantified in naturally occurring sedimentary aquifers, and to understand how this correlation structure (i.e., shape and correlation range) is related to independent and physically quantifiable attributes of the sedimentary architecture. The approach has evolved through work by Rubin (1995, 2003), Barrash and Clemo (2002), Ritzi et al. (2004, 2007, 2013), Dai et al. (2005), and Ramanathan et al. (2010). In this evolution, equations for sample statistics have been developed that allow tracking the facies types at the heads and tails of lag vectors. The goal is to observe, and thereby understand, how aspects of the sedimentary architecture affect the well-supported sample statistics. The approach has been used to study heterogeneity at a number of sites, representing a variety of depositional environments, with highly resolved data sets. What have we learned? We offer and support the opinion that the single most important insight derived from these studies is that the structure of spatial-bivariate correlation is essentially the cross-transition probability structure, determined by the sedimentary architecture. More than one scale of hierarchical sedimentary architecture has been represented in these studies, and a hierarchy of cross-transition probability structures was found to define the correlation structure in all cases. This insight allows decomposing contributions from different scales of the sedimentary architecture, and has led to a more fundamental understanding of mass transport processes, including mechanical dispersion of solutes within aquifers and the time-dependent retardation of reactive solutes. These processes can now be
NASA Astrophysics Data System (ADS)
Gasch, Caley K.; Hengl, Tomislav; Gräler, Benedikt; Meyer, Hanna; Magney, Troy; Brown, David J.
2015-04-01
Dynamic soil data collected using automated sensor networks can facilitate our understanding of soil processes, but such high-dimensional data may be difficult to analyze in a manner that incorporates correlation in properties through three dimensions and time (3D+T). We demonstrate two approaches to making continuous predictions of dynamic soil properties from fixed point observations. For this analysis, we used the Cook Farm data set, which includes hourly measurements of soil volumetric water content, temperature, and electrical conductivity at 42 points and five depths, collected over five years. We compare the performance of two modeling frameworks. In the first, we used random forest algorithms to fit a 3D+T regression model that predicts all three soil variables from 2- and 3-dimensional, temporal, and spatio-temporal covariates. In the second, we developed a 3D+T kriging model after detrending the observations for depth-dependent seasonal effects. The results show that both models accurately predicted soil temperature, but the kriging model outperformed the regression model in cross-validation; it explained 37%, 96%, and 16% of the variability in water content, temperature, and electrical conductivity, respectively, versus 34%, 93%, and 4% explained by the random forest model. The full random forest regression model had high goodness-of-fit for all variables, which was reduced in cross-validation. Temporal model components (i.e., day of the year) explained most of the variability in the observations. The seamless predictions of 3D+T data produced by this analysis can assist in understanding soil processes, how they change through a season and under different land management scenarios, and how they relate to other environmental processes.
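The detrending step of the second framework, removing a seasonal effect before kriging the residuals, can be illustrated with a synthetic hourly temperature series at a single depth. The harmonic-regression form and all values here are assumptions for illustration, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in: hourly soil temperature at one depth over two years.
t = np.arange(2 * 365 * 24) / 24.0                       # time in days
temp = (12 + 8 * np.sin(2 * np.pi * t / 365.25 - 1.3)
        + rng.normal(0, 0.8, t.size))

# Least-squares fit of an annual harmonic (the seasonal trend); the
# residual is what a 3D+T kriging model would then interpolate.
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / 365.25),
                     np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
trend = A @ coef          # fitted seasonal component
residual = temp - trend   # detrended series, passed on to kriging
```

In the study this fit would be done per depth, giving the depth-dependent seasonal effect the abstract describes.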
Cai, Li-mei; Ma, Jin; Zhou, Yong-zhang; Huang, Lan-chun; Dou, Lei; Zhang, Cheng-bo; Fu, Shan-ming
2008-12-01
One hundred and eighteen surface soil samples were collected from Dongguan City and analyzed for concentrations of Cu, Zn, Ni, Cr, Pb, Cd, As, and Hg, as well as pH and OM. The spatial distribution and sources of the soil heavy metals were studied using multivariate geostatistical methods and GIS techniques. The results indicated that concentrations of Cu, Zn, Ni, Pb, Cd and Hg were above the soil background content for Guangdong Province, with Pb, Cd and Hg greatly above it. Factor analysis grouped Cu, Zn, Ni, Cr and As into Factor 1, Pb and Hg into Factor 2, and Cd into Factor 3. The spatial maps based on geostatistical analysis show a definite association of Factor 1 with the soil parent material, while Factor 2 was mainly affected by industry. The spatial distribution of Factor 3 was attributed to anthropogenic influence. PMID:19256391
Common Randomness Principles of Secrecy
ERIC Educational Resources Information Center
Tyagi, Himanshu
2013-01-01
This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…
Quantifying randomness in real networks
NASA Astrophysics Data System (ADS)
Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-10-01
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
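Generating dk-random graphs requires the full dk-series machinery; as a minimal illustration of the simplest member of that family, the sketch below performs 1k-randomization (degree-preserving edge rewiring via double edge swaps) on a small ring graph. The function name and rejection rules are illustrative, not the authors' released software:

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """1k-randomization: repeatedly pick two edges (a,b),(c,d) and replace
    them with (a,d),(c,b), keeping every vertex degree fixed; swaps that
    would create a self-loop or a duplicate edge are rejected."""
    random.seed(seed)
    edges = [tuple(e) for e in edges]
    present = set(frozenset(e) for e in edges)
    done = 0
    while done < n_swaps:
        i, j = random.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:          # shared endpoint: reject
            continue
        e1, e2 = frozenset((a, d)), frozenset((c, b))
        if e1 in present or e2 in present:  # would duplicate an edge
            continue
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {e1, e2}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

# Rewire a 12-node ring while preserving its (all-degree-2) sequence.
ring = [(i, (i + 1) % 12) for i in range(12)]
randomized = degree_preserving_rewire(ring, n_swaps=50)
```

Higher members of the dk-series (2k, clustering-preserving) constrain progressively more structure in the same spirit.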
Quantum entanglement from random measurements
NASA Astrophysics Data System (ADS)
Tran, Minh Cong; Dakić, Borivoje; Arnault, François; Laskowski, Wiesław; Paterek, Tomasz
2015-11-01
We show that the expectation value of squared correlations measured along random local directions is an identifier of quantum entanglement in pure states, which can be directly experimentally assessed if two copies of the state are available. Entanglement can therefore be detected by parties who do not share a common reference frame and whose local reference frames, such as polarizers or Stern-Gerlach magnets, remain unknown. Furthermore, we show that in every experimental run, access to only one qubit from the macroscopic reference is sufficient to identify entanglement, violate a Bell inequality, and, in fact, observe all phenomena observable with macroscopic references. Finally, we provide a state-independent entanglement witness solely in terms of random correlations and emphasize how data gathered for a single random measurement setting per party reliably detects entanglement. This is only possible due to the randomness utilized and should find practical applications in the experimental confirmation of multiphoton entanglement or in space experiments.
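The central quantity, the average of squared correlations over random local measurement directions, can be checked numerically for two qubits: for the singlet this average is 1/3, while for any pure product state it is 1/9. A Monte Carlo sketch (sample count and function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
sig = [np.array([[0, 1], [1, 0]], complex),     # Pauli X
       np.array([[0, -1j], [1j, 0]], complex),  # Pauli Y
       np.array([[1, 0], [0, -1]], complex)]    # Pauli Z

def corr(psi, a, b):
    """Two-qubit correlation E(a,b) = <psi| (a.sigma) x (b.sigma) |psi>."""
    A = sum(ai * s for ai, s in zip(a, sig))
    B = sum(bi * s for bi, s in zip(b, sig))
    return np.real(psi.conj() @ (np.kron(A, B) @ psi))

def mean_sq_corr(psi, samples=20000):
    """Monte Carlo average of E(a,b)^2 over uniformly random local axes."""
    total = 0.0
    for _ in range(samples):
        a = rng.normal(size=3); a /= np.linalg.norm(a)
        b = rng.normal(size=3); b /= np.linalg.norm(b)
        total += corr(psi, a, b) ** 2
    return total / samples

singlet = np.array([0, 1, -1, 0], complex) / np.sqrt(2)   # entangled
product = np.array([1, 0, 0, 0], complex)                 # |00>, separable
# mean_sq_corr(singlet) converges to 1/3; mean_sq_corr(product) to 1/9.
```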
76 FR 79204 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... SECURITY Coast Guard Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2012 minimum random drug testing rate at 50 percent of covered crewmembers. DATES: The minimum random...
76 FR 1448 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
... SECURITY Coast Guard Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2011 minimum random drug testing rate at 50 percent of covered crewmembers. DATES: The minimum random...
NASA Astrophysics Data System (ADS)
Tobar, I.; Lee, J.; Black, F. W.; Babamaaji, R. A.
2014-12-01
The Komadugu-Yobe River Basin in northeastern Nigeria is an important tributary of Lake Chad and has experienced significant changes in population density and land cover in recent decades. The present study focuses on the application of geostatistical methods to examine land cover and population density dynamics in the river basin. The geostatistical methods include spatial autocorrelation, overlapping neighborhood statistics with Pearson's correlation coefficient, Moran's I analysis, and indicator variogram analysis with rose diagrams. The land cover and land use maps were constructed from USGS Landsat images and Globcover images from the European Space Agency. The target years of the analysis are 1970, 1986, 2000, 2005, and 2009. The calculation of net changes in land cover indicates significant variation in the changes of rainfed cropland, mosaic cropland, and grassland. Spatial autocorrelation analysis and Moran's I analysis showed that the distribution of land cover is highly clustered. A new GIS geostatistical tool was designed to calculate the overlapping neighborhood statistics with Pearson's correlation coefficient between the land use/land cover and population density datasets. The 10x10 neighborhood cell unit showed a clear correlation between the variables in certain zones of the study area. The ranges calculated from the indicator variograms of land use/land cover and population density showed that cropland and sparse vegetation are most closely related to the spatial change of population density.
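Of the geostatistical measures listed, global Moran's I has a compact closed form. A sketch on a toy 4x4 grid with rook-contiguity weights (the grid and the clustered pattern are illustrative, not the study's datasets):

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I:
    I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where m is the mean and W the sum of all spatial weights w_ij."""
    x = np.asarray(values, float)
    w = np.asarray(weights, float)
    z = x - x.mean()
    return (x.size / w.sum()) * (z @ w @ z) / (z @ z)

# Rook-contiguity weights on a 4x4 grid (cells are neighbors if they
# share an edge); a clustered pattern (left half 0, right half 1)
# should give a strongly positive I.
side = 4
n = side * side
w = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        if j + 1 < side:
            w[k, k + 1] = w[k + 1, k] = 1
        if i + 1 < side:
            w[k, k + side] = w[k + side, k] = 1
clustered = np.array([[0, 0, 1, 1]] * 4, float).ravel()
I = morans_i(clustered, w)   # 2/3 for this pattern: strong clustering
```

Values near +1 indicate clustering, near 0 spatial randomness, and negative values a dispersed (checkerboard) pattern.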
Allan, M.E.; Wilson, M.L.; Wightman, J.
1996-01-01
The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity and permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data, as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.
How to Do Random Allocation (Randomization)
Shin, Wonshik
2014-01-01
Purpose To explain the concept and procedure of random allocation as used in a randomized controlled study. Methods We explain the general concept of random allocation and demonstrate how to perform the procedure easily and how to report it in a paper. PMID:24605197
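Permuted-block randomization is one common way to carry out the allocation procedure this paper explains; it keeps the arms balanced after every completed block. The sketch below is illustrative only: the arm names, block size, and seed are assumptions, not taken from the paper.

```python
import random

def block_randomize(n_subjects, arms=("treatment", "control"), block_size=4, seed=42):
    """Permuted-block randomization: shuffle fixed-size balanced blocks so the
    arms stay balanced throughout enrolment."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n_subjects:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_subjects]

alloc = block_randomize(20)
print(alloc.count("treatment"), alloc.count("control"))  # balanced: 10 10
```

Because each block of four contains exactly two of each arm, any interim count differs by at most half a block.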
GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA
In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...
NASA Astrophysics Data System (ADS)
Ly, S.; Charles, C.; Degré, A.
2011-07-01
Spatial interpolation of precipitation data is of great importance for hydrological modelling. Geostatistical methods (kriging) are widely applied for spatial interpolation from point measurements to continuous surfaces. The first step in kriging is semi-variogram modelling, which has usually relied on a single variogram model for data from all time steps. The objective of this paper was to develop different algorithms of spatial interpolation for daily rainfall on 1 km2 regular grids in the catchment area and to compare the results of geostatistical and deterministic approaches. This study used 30 years of daily rainfall data from 70 raingages in the hilly landscape of the Ourthe and Ambleve catchments in Belgium (2908 km2). This area lies between 35 and 693 m in elevation and consists of river networks that are tributaries of the Meuse River. For the geostatistical algorithms, seven semi-variogram models (logarithmic, power, exponential, Gaussian, rational quadratic, spherical and penta-spherical) were fitted to the daily sample semi-variogram on a daily basis. These seven variogram models were also adopted to avoid negative interpolated rainfall. The elevation, extracted from a digital elevation model, was incorporated into multivariate geostatistics. Seven validation raingages and cross validation were used to compare the interpolation performance of these algorithms applied to different densities of raingages. We found that, among the seven variogram models used, the Gaussian model most frequently provided the best fit. Using seven variogram models can avoid negative daily rainfall in ordinary kriging. Negative kriging estimates were observed more for convective than for stratiform rain. The performance of the different methods varied only slightly with the density of raingages between 8 and 70 raingages, but differed considerably for interpolation using 4 raingages. Spatial interpolation with the geostatistical and Inverse Distance Weighting (IDW) algorithms
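The variogram-fitting step described above, computing an empirical semivariogram and fitting a parametric model to it, can be sketched as follows. Everything here is synthetic (a hypothetical 1-D transect and a spherical model only), and the crude grid search stands in for the weighted least-squares fit a geostatistics package would perform:

```python
import numpy as np

def empirical_semivariogram(coords, values, edges):
    """Average 0.5*(z_i - z_j)^2 over point pairs, binned by separation distance."""
    d = np.abs(coords[:, None] - coords[None, :])
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    return np.array([sq[(d > lo) & (d <= hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def spherical(h, nugget, sill, rng):
    """Spherical variogram model: rises to nugget + sill at range rng."""
    h = np.asarray(h, float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, nugget + sill, g)

# Synthetic 1-D "raingage" transect (hypothetical values, just to exercise the fit).
rs = np.random.default_rng(0)
coords = np.sort(rs.uniform(0, 100, 80))
values = np.sin(coords / 15.0) + 0.1 * rs.standard_normal(80)

edges = np.linspace(0, 30, 11)
centers = 0.5 * (edges[:-1] + edges[1:])
gamma = empirical_semivariogram(coords, values, edges)

# Crude grid-search fit over (nugget, sill, range).
grid = [(n, s, r) for n in np.linspace(0, 0.2, 5)
                  for s in np.linspace(0.1, 1.0, 10)
                  for r in np.linspace(5, 60, 12)]
nugget, sill, rng_par = min(
    grid, key=lambda p: np.sum((spherical(centers, *p) - gamma) ** 2))
```

Repeating the fit for each candidate model (exponential, Gaussian, ...) and each day, then keeping the lowest-error model, mirrors the day-by-day model selection the paper describes.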
NASA Astrophysics Data System (ADS)
Sarah, S.; Jeelani, Gh.; Ahmed, Shakeel
2011-06-01
This paper presents a study of Manasbal Lake, one of the high-altitude lakes in the Kashmir Valley, India. Eighteen water samples were analysed for major ions and trace elements to assess the variability of water quality of the lake for various purposes. Geostatistics, the theory of regionalized variables, was then used to enhance the dataset and estimate some missing spatial values. Results indicated that the concentration of major ions in the water samples in winter was higher than in summer. The scatter diagrams suggested the dominance of alkaline earths over the alkali elements. Three types of water were identified in the lake, referred to as Ca-HCO3, Mg-HCO3 and hybrid types. The lake water was found to be controlled by rock-water interaction, with carbonate lithology as the dominant source of the solutes. The major ions (Ca2+, Mg2+, Na+, K+, NO3-, HCO3-, CO32- and Cl-) and trace elements of the lake water were within the World Health Organization standards; the lake water was therefore considered chemically safe for drinking purposes. Although the NO3 concentration (ranging from 1.72 to 2 mg/L) is within the permissible limit and not very alarming, its gradually increasing trend is not acceptable. It is, however, important to monitor its spatio-temporal variability, as the water is used for domestic as well as agricultural purposes. This study is significant because hydrogeological information on such high-altitude lakes in India is scanty.
Goovaerts, Pierre
2009-01-01
This paper describes the combination of three-way contingency tables and geostatistics to visualize the non-linear impact of two putative covariates on individual-level health outcomes and test the significance of this impact, accounting for the pattern of spatial correlation and correcting for multiple testing. The methodology is used to explore the influence of distance to mammography clinics and census-tract poverty level on the rate of late-stage breast cancer diagnosis in three Michigan counties. Incidence rates are significantly lower than the area-wide mean (18.04%) mainly in affluent neighbourhoods [0-5% poverty], while higher incidences are mainly controlled by distance to clinics. The new simulation-based multiple testing correction is very flexible and less conservative than the traditional false discovery rate approach that results in a majority of tests becoming non-significant. Classes with significantly higher frequency of late-stage diagnosis often translate into geographic clusters that are not detected by the spatial scan statistic. PMID:19959392
NASA Astrophysics Data System (ADS)
Jahjah, Munzer; Ulivieri, Carlo
2004-02-01
Understanding the dynamics of land cover change has increasingly been recognized as one of the key research imperatives in global environmental change research. Scientists have developed and applied various methods in order to find and propose solutions for many environmental world problems. From 1986 to 1995, changes in Kenya's coastal-zone land cover, derived from post-classification of TM images, were significant: arid areas grew from 3% to 10%, woody areas decreased from 4% to 2%, herbaceous areas decreased from 25% to 20%, and developed land increased from 2% to 3%. To generate the change probability map as a continuous surface with the geostatistical tools in ArcGIS, we used the Generalized Linear Model (GLM) probability result as input. The results reveal the efficiency of the probability-of-change (POC) map, especially when reference data are lacking, in indicating the possibility of a change and its type in a given area, taking advantage of the layer transparency of GIS systems. Thus, the derived information supplies a good tool for interpreting the magnitude of land cover changes and guides the final user directly to the areas of change to understand and derive the possible interactions of human or natural processes.
Liu, Wen-Cheng; Yu, Hwa-Lung; Chung, Chung-En
2011-01-01
Concern about the water quality in Yuan-Yang Lake (YYL), a shallow, subtropical alpine lake located in north-central Taiwan, has been increasing rapidly due to natural and anthropogenic pollution. In order to understand the underlying physical and chemical processes as well as their associated spatial distribution in YYL, this study analyzes fourteen physico-chemical water quality parameters recorded at eight sampling stations during 2008–2010 using multivariate statistical techniques and a geostatistical method. Hierarchical clustering analysis (CA) is first applied to distinguish three general water quality patterns among the stations, followed by the use of principal component analysis (PCA) and factor analysis (FA) to extract and recognize the major underlying factors contributing to the variations among the water quality measures. The spatial distribution of the identified major contributing factors is obtained by using a kriging method. Results show that four principal components, i.e., nitrogen nutrients, meteorological factor, turbidity and nitrate factors, account for 65.52% of the total variance among the water quality parameters. The spatial distribution of the principal components further confirms that nitrogen sources constitute an important pollutant contribution in YYL. PMID:21695032
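The "65.52% of the total variance" figure above is the kind of number PCA reports: the leading eigenvalues of the correlation matrix of the standardized parameters, as fractions of their sum. A minimal sketch with hypothetical station-by-parameter data (not the YYL measurements):

```python
import numpy as np

def pca_variance_explained(X):
    """Fraction of total variance carried by each principal component,
    from an eigen-decomposition of the correlation structure of
    standardized columns."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each parameter
    corr = np.cov(Z, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals / eigvals.sum()

# Hypothetical station-by-parameter matrix (8 stations x 5 parameters)
# with one shared driver plus noise, so PC1 dominates.
rng = np.random.default_rng(7)
common = rng.standard_normal((8, 1))
X = common @ rng.standard_normal((1, 5)) + 0.3 * rng.standard_normal((8, 5))
ratios = pca_variance_explained(X)
```

Summing the first k entries of `ratios` gives the cumulative variance explained by k components, the criterion typically used to decide how many components to retain.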
NASA Astrophysics Data System (ADS)
Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire
2016-04-01
The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error corrupted) data from steady-state flow and transport experiments in categorical 7575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in a local optimum while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).
NASA Astrophysics Data System (ADS)
He, X. L.; Sonnenborg, T. O.; Jørgensen, F.; Jensen, K. H.
2014-08-01
Multiple-point geostatistical simulation (MPS) has recently become popular in stochastic hydrogeology, primarily because of its capability to derive multivariate distributions from a training image (TI). However, its application in three-dimensional (3-D) simulations has been constrained by the difficulty of constructing a 3-D TI. The object-based unconditional simulation program TiGenerator may be a useful tool in this regard; yet the applicability of such parametric training images has not been documented in detail. Another issue in MPS is the integration of multiple geophysical data. The proper way to retrieve and incorporate information from high-resolution geophysical data is still under discussion. In this study, MPS simulation was applied to different scenarios regarding the TI and soft conditioning. By comparing their output from simulations of groundwater flow and probabilistic capture zone, TI from both sources (directly converted from high-resolution geophysical data and generated by TiGenerator) yields comparable results, even for the probabilistic capture zones, which are highly sensitive to the geological architecture. This study also suggests that soft conditioning in MPS is a convenient and efficient way of integrating secondary data such as 3-D airborne electromagnetic data (SkyTEM), but over-conditioning has to be avoided.
NASA Astrophysics Data System (ADS)
He, X.; Sonnenborg, T. O.; Jørgensen, F.; Jensen, K. H.
2013-09-01
Multiple-point geostatistical simulation (MPS) has recently become popular in stochastic hydrogeology, primarily because of its capability to derive multivariate distributions from a training image (TI). However, its application in three-dimensional (3-D) simulations has been constrained by the difficulty of constructing a 3-D TI. The object-based TiGenerator may be a useful tool in this regard; yet the sensitivity of model predictions to the training image has not been documented. Another issue in MPS is the integration of multiple geophysical data. The best way to retrieve and incorporate information from high-resolution geophysical data is still under discussion. This work shows that a TI from TiGenerator delivers acceptable results when used for groundwater modeling, although a TI directly converted from high-resolution geophysical data leads to better simulation. The model results also indicate that soft conditioning in MPS is a convenient and efficient way of integrating secondary data such as 3-D airborne electromagnetic data, but over-conditioning has to be avoided.
NASA Astrophysics Data System (ADS)
Chahal, M. K.; Brown, D. J.; Brooks, E. S.; Campbell, C.; Cobos, D. R.; Vierling, L. A.
2012-12-01
Estimating soil moisture content continuously over space and time using geostatistical techniques supports the refinement of process-based watershed hydrology models and the application of soil process models (e.g. biogeochemical models predicting greenhouse gas fluxes) to complex landscapes. In this study, we model soil profile volumetric moisture content for five agricultural fields with loess soils in the Palouse region of Eastern Washington and Northern Idaho. Using a combination of stratification and space-filling techniques, we selected 42 representative and distributed measurement locations in the Cook Agronomy Farm (Pullman, WA) and 12 locations each in four additional grower fields that span the precipitation gradient across the Palouse. At each measurement location, soil moisture was measured on an hourly basis at five different depths (30, 60, 90, 120, and 150 cm) using Decagon 5-TE/5-TM soil moisture sensors (Decagon Devices, Pullman, WA, USA). These data were collected over three years for the Cook Agronomy Farm and one year for each of the grower fields. In addition to ordinary kriging, we explored the correlation of volumetric water content with external, spatially exhaustive indices derived from terrain models, optical remote sensing imagery, and proximal soil sensing data (electromagnetic induction and VisNIR penetrometer).
Barabás, N; Goovaerts, P; Adriaens, P
2001-08-15
Contaminated sediment management is an urgent environmental and regulatory issue worldwide. Because remediation is expensive, sound quantitative assessments of uncertainty about the spatial distribution of contaminants are critical, but they are hampered by the physical complexity of sediment environments. This paper describes the use of geostatistical modeling approaches to quantify uncertainty of 2,3,7,8-tetrachlorodibenzo-p-dioxin concentrations in Passaic River (New Jersey) sediments and to incorporate this information in decision-making processes, such as delineation of contaminated areas and additional sampling needs. First, coordinate transformation and analysis of three-dimensional semivariograms were used to describe and model the directional variability, accounting for the meandering course of the river. Then, indicator kriging was employed to provide models of local uncertainty at unsampled locations without requiring a prior transform (e.g. log-normal) of concentrations. Cross-validation results show that the use of probability thresholds leads to more efficient delineation of contaminated areas than a classification based on the exceedence of regulatory thresholds by concentration estimates. Depending on whether additional sampling aims at reducing prediction errors or misclassification rates, the variance of local probability distributions or a measure of the expected closeness to the regulatory threshold can be used to locate candidate locations. PMID:11529567
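The indicator-kriging idea above, estimating the probability that a regulatory threshold is exceeded rather than the concentration itself, starts from an indicator transform of the data. The sketch below uses hypothetical concentrations, and inverse-distance weights stand in for the kriging weights a full implementation would solve for:

```python
import numpy as np

# Hypothetical sediment concentrations at sampled locations along a 1-D reach.
coords = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
conc = np.array([12.0, 45.0, 80.0, 300.0, 150.0, 20.0])
regulatory_threshold = 100.0

# Indicator transform: 1 where the threshold is exceeded, 0 otherwise.
indicator = (conc > regulatory_threshold).astype(float)

def idw_probability(x0, coords, indicator, power=2.0):
    """Exceedance probability at x0 as a convex combination of indicators
    (inverse-distance weights; a stand-in for indicator-kriging weights)."""
    d = np.abs(coords - x0)
    if np.any(d == 0):
        return float(indicator[d == 0][0])  # exact at a sampled location
    w = 1.0 / d ** power
    return float(w @ indicator / w.sum())

p = idw_probability(3.5, coords, indicator)
```

Classifying a location as contaminated when `p` exceeds a chosen probability cutoff is the probability-threshold delineation the abstract compares against classifying on the concentration estimate directly.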
Yu, Guirui; Zheng, Zemei; Wang, Qiufeng; Fu, Yuling; Zhuang, Jie; Sun, Xiaomin; Wang, Yuesi
2010-08-15
Quantification of the spatiotemporal pattern of soil respiration (R(s)) at the regional scale can provide a theoretical basis and fundamental data for accurate evaluation of the global carbon budget. This study summarizes the R(s) data measured in China from 1995 to 2004. Based on the data, a new region-scale geostatistical model of soil respiration (GSMSR) was developed by modifying a global scale statistical model. The GSMSR model, which is driven by monthly air temperature, monthly precipitation, and soil organic carbon (SOC) density, can capture 64% of the spatiotemporal variability of soil R(s). We evaluated the spatiotemporal pattern of R(s) in China using the GSMSR model. The estimated results demonstrate that the annual R(s) in China ranged from 3.77 to 4.00 Pg C yr(-1) between 1995 and 2004, with an average value of 3.84 +/- 0.07 Pg C yr(-1), contributing 3.92%-4.87% to the global soil CO(2) emission. Annual R(s) rate of evergreen broadleaved forest ecosystem was 698 +/- 11 g C m(-2) yr(-1), significantly higher than that of grassland (439 +/- 7 g C m(-2) yr(-1)) and cropland (555 +/- 12 g C m(-2) yr(-1)). The contributions of grassland, cropland, and forestland ecosystems to the total R(s) in China were 48.38 +/- 0.35%, 22.19 +/- 0.18%, and 20.84 +/- 0.13%, respectively. PMID:20704202
Kara G. Eby
2010-08-01
At the Idaho National Laboratory (INL) Cs-137 concentrations above the U.S. Environmental Protection Agency risk-based threshold of 0.23 pCi/g may increase the risk of human mortality due to cancer. As a leader in nuclear research, the INL has been conducting nuclear activities for decades. Elevated anthropogenic radionuclide levels including Cs-137 are a result of atmospheric weapons testing, the Chernobyl accident, and nuclear activities occurring at the INL site. Therefore environmental monitoring and long-term surveillance of Cs-137 is required to evaluate risk. However, due to the large land area involved, frequent and comprehensive monitoring is limited. Developing a spatial model that predicts Cs-137 concentrations at unsampled locations will enhance the spatial characterization of Cs-137 in surface soils, provide guidance for an efficient monitoring program, and pinpoint areas requiring mitigation strategies. The predictive model presented herein is based on applied geostatistics using a Bayesian analysis of environmental characteristics across the INL site, which provides kriging spatial maps of both Cs-137 estimates and prediction errors. Comparisons are presented of two different kriging methods, showing that the use of secondary information (i.e., environmental characteristics) can provide improved prediction performance in some areas of the INL site.
NASA Astrophysics Data System (ADS)
Fabijańczyk, Piotr; Zawadzki, Jarosław; Wojtkowska, Małgorzata
2016-07-01
The article presents a detailed geostatistical analysis of the spatial distribution of lead and zinc concentrations in the water, suspension and bottom sediments of a large urban lake exposed to intensive anthropogenic pressure within a large city. Systematic chemical measurements were performed at eleven cross-sections located along Czerniakowskie Lake, the largest lake in Warsaw, the capital of Poland. During the summer, the lake is used as a public bathing area; therefore, to better evaluate human impacts, field measurements were carried out in high-use seasons. It was found that the spatial distributions of aqueous lead and zinc differ between summer and autumn. In summer several Pb and Zn hot-spots were observed, while during autumn the spatial distributions of Pb and Zn were rather homogeneous throughout the entire lake. Large seasonal differences in the spatial distributions of Pb and Zn were found in bottom sediments. Autumn concentrations of both heavy metals were ten times higher than summer values. Clear cross-correlations of Pb and Zn concentrations in water, suspension and bottom sediments suggest that both Pb and Zn entered Czerniakowskie Lake from the same source.
Conducting Randomized Controlled Trials with Offenders in an Administrative Setting
ERIC Educational Resources Information Center
Ahlin, Eileen M.
2015-01-01
Evaluation research conducted in agencies that sanction law violators is often challenging and due process may preclude evaluators from using experimental methods in traditional criminal justice agencies such as police, courts, and corrections. However, administrative agencies often deal with the same population but are not bound by due process…
Randomness and degrees of irregularity.
Pincus, S; Singer, B H
1996-01-01
The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness--e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity: ApEn (approximate entropy) defines maximal randomness for sequences of arbitrary length and is applicable to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory, and the definition of normality in number theory, reduce to limit theorems without rates of convergence; we then utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates. PMID:11607637
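ApEn(m, r) has a short direct implementation following Pincus' definition: ApEn = phi(m) - phi(m+1), where phi averages the log-frequency of template matches within tolerance r. The parameter values m = 2 and r = 0.2 below are conventional defaults, not values prescribed by this abstract:

```python
import math
import random

def apen(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a finite sequence: phi(m) - phi(m+1),
    where phi(m) averages log of the fraction of length-m templates lying
    within Chebyshev distance r of each template (self-matches included)."""
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        count = len(templates)
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(matches / count)
        return total / count

    return phi(m) - phi(m + 1)

regular = [0.0, 1.0] * 30                      # perfectly periodic
rng = random.Random(0)
irregular = [rng.random() for _ in range(60)]  # irregular sequence
print(apen(regular), apen(irregular))
```

A perfectly periodic sequence scores near 0 (highly regular), while an irregular sequence of the same length scores markedly higher, which is the "closeness to randomness" ordering the abstract refers to.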
BIFURCATIONS OF RANDOM DIFFERENTIAL EQUATIONS WITH BOUNDED NOISE ON SURFACES.
Homburg, Ale Jan; Young, Todd R
2010-03-01
In random differential equations with bounded noise minimal forward invariant (MFI) sets play a central role since they support stationary measures. We study the stability and possible bifurcations of MFI sets. In dimensions 1 and 2 we classify all minimal forward invariant sets and their codimension one bifurcations in bounded noise random differential equations. PMID:22211081
"Geo-statistics methods and neural networks in geophysical applications: A case study"
NASA Astrophysics Data System (ADS)
Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.
2008-12-01
The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated by applying multiattribute and neural network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. Multiattribute analysis is a process for predicting a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells which tie a 3-D seismic volume. The target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective is to derive a relationship between the set of attributes and the target log values. The selected set of attributes is determined by a process of forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode the method consists of a series of weights derived by least-squares minimization. In the nonlinear mode, a neural network is trained using the selected attributes as inputs; in this case we used a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization the porosity distribution was estimated using both techniques. The case showed a continuous improvement in the prediction of porosity from the multiattribute analysis to the neural network analysis. The improvement is in both the training and the validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis. The final maps provide more realistic results of the porosity distribution.
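Forward stepwise regression, the attribute-selection process named above, is a greedy search: at each step, add the attribute that most reduces the least-squares error of the linear fit. A sketch on synthetic "attributes" (not the PEMEX data), where only attributes 0 and 3 actually drive the target log:

```python
import numpy as np

def forward_stepwise(X, y, k):
    """Greedy forward selection: repeatedly add the column of X that most
    reduces the residual sum of squares of a linear fit (with intercept)."""
    n, p = X.shape
    chosen = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            cols = np.column_stack([np.ones(n)] + [X[:, c] for c in chosen + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.sum((y - cols @ beta) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        chosen.append(best_j)
    return chosen

# Synthetic "seismic attributes": only attributes 0 and 3 drive the target log.
rng = np.random.default_rng(11)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(200)
selected = forward_stepwise(X, y, 2)
print(sorted(selected))  # -> [0, 3]
```

In the linear mode described by the abstract, the final `beta` weights are the multiattribute transform; in the nonlinear mode the selected columns would instead feed a PNN.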
Random broadcast on random geometric graphs
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
2009-01-01
In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability, (i) the RGG is connected, or (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
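The push algorithm defined above is easy to simulate on an RGG. The graph size, radius, and seeds below are arbitrary illustrative choices; the simulation counts rounds until every node reachable from the start node is informed:

```python
import math
import random

def random_geometric_graph(n, radius, seed=0):
    """RGG on the unit square: nodes are random points, edges join pairs
    within the given radius."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def push_broadcast_rounds(adj, start=0, seed=0, max_rounds=10_000):
    """Rounds until all nodes reachable from `start` are informed, where each
    informed node pushes to one uniformly random neighbour per round."""
    rng = random.Random(seed)
    reachable, stack = {start}, [start]      # broadcast never leaves the component
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in reachable:
                reachable.add(v)
                stack.append(v)
    informed, rounds = {start}, 0
    while informed != reachable and rounds < max_rounds:
        for u in list(informed):             # snapshot: new nodes push next round
            if adj[u]:
                informed.add(rng.choice(adj[u]))
        rounds += 1
    return rounds

adj = random_geometric_graph(200, radius=0.15, seed=1)
r = push_broadcast_rounds(adj, start=0, seed=1)
```

Averaging `r` over many seeds and comparing it with the graph (or giant-component) diameter is the empirical counterpart of the Θ(diam(G)) bound stated in the abstract.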
NASA Astrophysics Data System (ADS)
Paz-Ferreiro, J.; Bertol, I.; Vidal Vázquez, E.
2008-07-01
Changes in soil surface microrelief with cumulative rainfall under different tillage systems and crop cover conditions were investigated in southern Brazil. Surface cover was none (fallow) or the crop succession maize followed by oats. Tillage treatments were: 1) conventional tillage on bare soil (BS), 2) conventional tillage (CT), 3) minimum tillage (MT) and 4) no tillage (NT) under maize and oats. Measurements were taken with a manual relief meter on small rectangular grids of 0.234 and 0.156 m2 throughout the growing seasons of maize and oats, respectively. Each data set consisted of 200 point height readings, the size of the smallest cells being 3×5 cm during the maize and 2×5 cm during the oats growth periods. Random Roughness (RR), Limiting Difference (LD), Limiting Slope (LS) and two fractal parameters, fractal dimension (D) and crossover length (l), were estimated from the measured microtopographic data sets. Indices describing the vertical component of soil roughness such as RR, LD and l generally decreased with cumulative rain in the BS treatment, left fallow, and in the CT and MT treatments under maize and oats canopy. However, these indices were not substantially affected by cumulative rain in the NT treatment, whose surface was protected with previous crop residues. Roughness decay from initial values was larger in the BS treatment than in the CT and MT treatments. Moreover, roughness decay generally tended to be faster under maize than under oats. The RR and LD indices decreased quadratically, while the l index decreased exponentially in the tilled BS, CT and MT treatments. Crossover length was sensitive to differences in soil roughness conditions, allowing a description of microrelief decay due to rainfall in the tilled treatments, although better correlations between cumulative rainfall and the most commonly used indices RR and LD were obtained. At the studied scale, parameters l and D have been found to be useful in interpreting the configuration properties of
Visual Basic programs for one, two or three-dimensional geostatistical analysis
NASA Astrophysics Data System (ADS)
Carr, James R.; Mela, Kenneth
1998-07-01
Two previously published FORTRAN-77 programs, FGAM and JCBLOK, are rewritten in Visual Basic 5.0 for 32-bit Windows 95/NT and educational applications. Each program is applicable to spatial data representing one, two or three dimensions. Graphics are added for displaying computed variograms and color density slices of kriging results within the same windows used to launch the programs. Dynamic array allocation is invoked automatically by the programs without the need for user intervention, thus enabling efficient memory management independent of data set size. When one-dimensional strings of data (profiles) are analyzed, fractal dimensions are computed for four-lag increments of the variogram, thus enabling a scale-dependent analysis. Only the raw spatial data need to be in a separate file, because program options are set interactively using mouse click events in each program's window. These files may use a simplified Geo-EAS input format, or a generic format with record lengths of up to 100 values per record.
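The variogram-based fractal dimension mentioned above rests on a standard relation: for a 1-D profile with semivariogram γ(h) ∝ h^(2H), the log-log slope m of the variogram gives H = m/2 and fractal dimension D = 2 - H. A sketch using the first four lags, mirroring the four-lag increments the programs use (the Brownian test profile is an assumption for illustration):

```python
import numpy as np

def variogram_fractal_dimension(z, max_lag=4):
    """Fractal dimension of a 1-D profile from the log-log slope of its
    semivariogram over the first `max_lag` lags: D = 2 - slope/2."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return 2.0 - slope / 2.0

# Brownian-motion profile: gamma(h) ~ h, so the slope is ~1 and D is ~1.5.
rng = np.random.default_rng(3)
profile = np.cumsum(rng.standard_normal(20000))
D = variogram_fractal_dimension(profile)
```

Rougher profiles (flatter variograms) push D toward 2, smoother ones toward 1, which is what makes the four-lag estimate a scale-dependent roughness descriptor.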
Quantumness, Randomness and Computability
NASA Astrophysics Data System (ADS)
Solis, Aldo; Hirsch, Jorge G.
2015-06-01
Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.
Chen, Xingyuan; Murakami, Haruko; Hahn, Melanie S.; Hammond, Glenn E.; Rockhold, Mark L.; Zachara, John M.; Rubin, Yoram
2012-06-01
Tracer testing under natural or forced gradient flow holds the potential to provide useful information for characterizing subsurface properties, through monitoring, modeling and interpretation of the tracer plume migration in an aquifer. Non-reactive tracer experiments were conducted at the Hanford 300 Area, along with constant-rate injection tests and electromagnetic borehole flowmeter (EBF) profiling. A Bayesian data assimilation technique, the method of anchored distributions (MAD) [Rubin et al., 2010], was applied to assimilate the experimental tracer test data with the other data types and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the Bayesian prior information on the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using the constant-rate injection tests and the EBF data. The posterior distribution of the conductivity field was obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. MAD was implemented with the massively parallel three-dimensional flow and transport code PFLOTRAN to cope with the highly transient flow boundary conditions at the site and to meet the computational demands of MAD. A synthetic study demonstrated that the proposed method can effectively invert tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. Application of MAD to actual field data shows that the hydrogeological model, when conditioned on the tracer test data, reproduces the tracer transport behavior better than the field characterized without the tracer test data. This study successfully demonstrates that MAD can sequentially assimilate multi-scale, multi-type field data through a consistent Bayesian framework.
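The temporal moments on which the conductivity field is conditioned can be computed straightforwardly. A minimal sketch follows; the normalization and tail handling in the actual MAD/PFLOTRAN workflow may differ.

```python
import numpy as np

# NumPy 2.x renamed trapz to trapezoid; accept either
_integrate = getattr(np, "trapezoid", None) or np.trapz

def temporal_moments(t, c):
    """Temporal moments of a tracer breakthrough curve c(t):
    zeroth moment (recovered mass), mean arrival time, and
    variance of arrival times."""
    m0 = _integrate(c, t)
    t_mean = _integrate(t * c, t) / m0
    t_var = _integrate((t - t_mean) ** 2 * c, t) / m0
    return m0, t_mean, t_var

# Synthetic Gaussian-shaped breakthrough curve centered at t = 5, variance 1
t = np.linspace(0.0, 10.0, 2001)
c = np.exp(-(t - 5.0) ** 2 / 2.0)
m0, t_mean, t_var = temporal_moments(t, c)
```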
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
This series addresses a fundamental issue in multiple-point statistical (MPS) simulation for generating realizations of large-scale porous media. Past methods suffer from the fact that they generate discontinuities and patchiness in the realizations that, in turn, affect their flow and transport properties. Part I of this series addressed certain aspects of this fundamental issue and proposed two ways of improving one such MPS method, namely the cross correlation-based simulation (CCSIM) method previously proposed by the authors. In the present paper, a new algorithm is proposed to further improve the quality of the realizations. The method utilizes the realizations generated by the algorithm introduced in Part I, iteratively removes any remaining discontinuities in them, and addresses the problem of honoring hard (quantitative) data, using an error map. The map represents the differences between the patterns in the training image (TI) and the current iteration of a realization. The resulting iterative CCSIM, the iCCSIM algorithm, utilizes a random path and the error map to identify the locations in the current realization that need further "repairing"; that is, those locations at which discontinuities may still exist. The computational time of the new iterative algorithm is considerably lower than that of one in which every cell of the simulation grid is visited in order to repair the discontinuities. Furthermore, several efficient distance functions are introduced by which key information is extracted effectively from the TIs. To increase the quality of the realizations and extract the maximum amount of information from the TIs, the distance functions can be used simultaneously. The performance of the iCCSIM algorithm is studied using very complex 2-D and 3-D examples, including process-based ones. Comparison is made between the quality and accuracy of the results and those generated by the original CCSIM
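A brute-force sketch of the kind of pattern distance function underlying such an error map: scan the training image for the window closest to a realization patch under an L2 (sum-of-squared-differences) metric. The exhaustive scan and the L2 choice are assumptions of this illustration, not the paper's implementation.

```python
import numpy as np

def best_match(ti, patch):
    """Scan a 2-D training image for the window with the smallest
    sum-of-squared-differences distance to `patch`.
    Returns (row, col, distance)."""
    ph, pw = patch.shape
    best = (0, 0, np.inf)
    for i in range(ti.shape[0] - ph + 1):
        for j in range(ti.shape[1] - pw + 1):
            d = np.sum((ti[i:i + ph, j:j + pw] - patch) ** 2)
            if d < best[2]:
                best = (i, j, d)
    return best

# A patch cut directly from the TI must match itself exactly (distance 0)
rng = np.random.default_rng(2)
ti = rng.normal(size=(30, 30))
row, col, dist = best_match(ti, ti[10:14, 5:9])
```

In practice such scans are accelerated (e.g., with FFT-based cross-correlation) rather than run as nested loops; the loop form is kept here for clarity.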
McKay, M.D.; Howell, J.A.; Whiteman, D.E.; Duran, B.A.; Jackson, C.K.; Wecksung, G.W.; Bement, T.R.; Pirkle, F.A.
1982-03-01
During the period covered by this report, we proposed a method of comparing aerial and ground data as a means of assessing the quality of aerial data. We also compared two methods of partitioning count rates among several isotopes. Time-series analysis and analysis of variance were considered as tools for using aerial radiometric data to aid in designing ground-based sampling experiments. Several methods of computing covariance matrices were compared for use with very large data sets. A study showed a potential for using aerial radiometric data to rank quadrangles according to the Department of Energy's estimated uranium inventories. A discriminant analysis code was transferred to Grand Junction, and several statistics short courses were presented there. Recommended cluster analysis procedures were developed and applied to aerial radiometric data. Reports and papers were prepared on topics such as outlier detection, percentile estimation, discriminant analysis, and statistical package comparison.
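The covariance computation for very large data sets mentioned above can be done in chunks, so no full data matrix ever resides in memory; a generic illustration via accumulated sums and cross-products, not the report's specific method:

```python
import numpy as np

def streaming_covariance(chunks):
    """Sample covariance matrix of a data set processed one chunk of
    rows at a time, accumulating sums and cross-products."""
    n, s, ss = 0, None, None
    for x in chunks:
        x = np.asarray(x, dtype=float)
        if s is None:
            s = np.zeros(x.shape[1])
            ss = np.zeros((x.shape[1], x.shape[1]))
        n += x.shape[0]
        s += x.sum(axis=0)
        ss += x.T @ x
    mean = s / n
    # Sum of cross-products minus the mean term, with n-1 normalization
    return (ss - n * np.outer(mean, mean)) / (n - 1)

# Feeding the data in 100-row chunks reproduces the all-at-once result
rng = np.random.default_rng(4)
data = rng.normal(size=(1000, 3))
cov = streaming_covariance(data[i:i + 100] for i in range(0, 1000, 100))
```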
Modis, K.; Sideri, D.
2013-06-15
Integrating geological properties, such as relative positions and proportions of different hydrofacies, is of the highest importance in order to render realistic geological patterns. Sequential indicator simulation (SIS) and plurigaussian simulation (PS) are alternative methods for conceptual and deterministic modeling for the characterization of the hydrofacies distribution. In this work, we studied the spatial differentiation of hydrofacies in the alluvial aquifer system of the West Thessaly basin in Greece. To this end, we applied both the SIS and PS techniques to an extensive set of borehole data from that basin. Histograms of model versus experimental hydrofacies proportions and indicative cross sections were plotted in order to validate the results. The PS technique was shown to be more effective in reproducing the spatial characteristics of the different hydrofacies and their distribution across the study area. In addition, the permeability differentiations reflected in the PS model are in accordance with known heterogeneities of the aquifer capacity.
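The truncation step that characterizes plurigaussian simulation can be sketched in a few lines: a standard Gaussian field is cut at thresholds chosen so that the target facies proportions are honored on average. The single truncated field and the crude moving-average "simulation" below are simplifications of this sketch; PS proper uses two or more properly simulated, cross-correlated Gaussian fields and a lithotype rule.

```python
import numpy as np
from statistics import NormalDist

def truncate_to_facies(gfield, proportions):
    """Map a standard Gaussian field to facies codes 0..F-1 so that
    the target proportions are honored on average (truncation step
    of plurigaussian simulation)."""
    cum = np.cumsum(proportions)[:-1]
    thresholds = [NormalDist().inv_cdf(p) for p in cum]
    return np.digitize(gfield, thresholds)

# A smoothed, re-standardized white-noise field stands in for a
# properly simulated correlated Gaussian field
rng = np.random.default_rng(3)
noise = rng.normal(size=(200, 200))
kernel = np.ones(5) / 5
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, noise)
smooth = (smooth - smooth.mean()) / smooth.std()
facies = truncate_to_facies(smooth, [0.5, 0.3, 0.2])
```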
NASA Astrophysics Data System (ADS)
De Lucia, Marco; Kühn, Michael
2015-04-01
The 3D imaging of porous media through micro-tomography allows the characterization of pore space and mineral abundances with unprecedented resolution. Such images can be used to perform computational determination of permeability and to obtain a realistic measure of the mineral surfaces exposed to fluid flow, and thus to chemical interactions. However, the volume of the plugs that can be analysed in such detail is on the order of 1 cm³, so their representativity at a larger scale, i.e., as needed for reactive transport modelling at the Darcy scale, is questionable at best. In fact, the fine-scale heterogeneity (from plug to plug at a few cm distance within the same core) would yield substantially different readings of the investigated properties. Therefore, a comprehensive approach including the spatial variability and heterogeneity at the micro and plug scales needs to be adopted to take full advantage of the high-resolution images in view of upscaling to the Darcy scale. In the framework of the collaborative project H2STORE, micro-CT imaging of different core samples from potential H2-storage sites has been performed by partners at TU Clausthal and Jena University, before and after treatment with H2/CO2 mixtures in pressurized autoclaves. We present here the workflow implemented to extract the relevant features from the available data concerning the heterogeneity of the medium at the microscopic and plug scales, and to correlate the observed chemical reactions and changes in the porous structure with the geometrical features of the medium. First, a multivariate indicator-based geostatistical model of the microscopic structure of the plugs has been built and fitted to the available images. This involved implementing exploratory analysis algorithms such as experimental indicator variograms and cross-variograms. The implemented methods can efficiently deal with images on the order of 1000³ voxels, making use of parallelization
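An experimental indicator variogram along one axis of a voxel image, the kind of exploratory statistic described above, can be sketched as follows (a serial, single-axis illustration; the implemented methods are parallelized and multivariate):

```python
import numpy as np

def indicator_variogram(img, axis=0, max_lag=10):
    """Experimental indicator variogram of a 0/1 image along one axis:
    gamma(h) = P(I(x) != I(x+h)) / 2, estimated over all voxel pairs
    separated by h along `axis`."""
    g = []
    for h in range(1, max_lag + 1):
        a = np.take(img, range(h, img.shape[axis]), axis=axis)
        b = np.take(img, range(0, img.shape[axis] - h), axis=axis)
        g.append(0.5 * np.mean(a != b))
    return np.array(g)

# Period-2 layering: neighbors always differ at lag 1, never at lag 2
stripes = np.tile(np.array([[0], [1]]), (4, 8))
g = indicator_variogram(stripes, axis=0, max_lag=4)
```

The same function applied along each axis of a segmented 3-D image reveals anisotropy of the pore structure; cross-variograms between mineral-phase indicators follow the same pattern with two images.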
Directed random walk with random restarts: The Sisyphus random walk