Sample records for spatial random fields

  1. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled-wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by thermal annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. Alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  2. Spatial Distribution of Phase Singularities in Optical Random Vector Waves.

    PubMed

    De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L

    2016-08-26

    Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.

  3. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.

  4. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.

  5. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Because detailed information is lacking and the subsurface is inherently heterogeneous, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared to those of three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
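
The pairing of stratified sampling with a triangular factorization of the covariance matrix can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: it draws a Latin Hypercube sample of standard-normal deviates (one jittered stratum per grid cell) and imposes an assumed exponential covariance through a Cholesky factor, which plays the same role as the LU factor of the symmetric positive-definite covariance matrix.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 50                                    # grid cells along a 1-D transect
x = np.arange(n, dtype=float)

# Assumed exponential covariance model (unit variance, correlation length 10)
sigma2, corr_len = 1.0, 10.0
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Latin Hypercube Sample of n standard-normal deviates:
# one jittered stratum per cell, mapped through the inverse normal CDF,
# then randomly paired with the cells.
strata = (np.arange(n) + rng.uniform(1e-6, 1 - 1e-6, n)) / n
z = np.array([NormalDist().inv_cdf(p) for p in rng.permutation(strata)])

# Impose the spatial correlation with a triangular factor of C
# (Cholesky here; an LU factor of the SPD matrix serves the same purpose).
L = np.linalg.cholesky(C)
field = L @ z                             # one unconditional realization
```

Because every stratum of the standard normal is sampled exactly once, the marginal distribution is covered more evenly than with plain Monte Carlo, which is the source of the efficiency gain the abstract reports.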

  6. Random field assessment of nanoscopic inhomogeneity of bone

    PubMed Central

    Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu

    2010-01-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
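
A minimal sketch of the kind of random field the study describes: an exponential covariance function with correlation length b, used to simulate one correlated modulus map over a small lamellar grid. All numbers here (grid size, mean modulus, variance, correlation length) are hypothetical, not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12                                   # 12 x 12 grid of indentation sites
xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Exponential covariance C(r) = sigma^2 * exp(-r / b); b is the correlation
# length. sigma^2 (GPa^2), b (pixels) and the mean modulus are hypothetical.
sigma2, b = 4.0, 3.0
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = sigma2 * np.exp(-r / b)

# One simulated modulus map: mean modulus plus a correlated fluctuation.
mean_modulus = 20.0                      # GPa
L = np.linalg.cholesky(C + 1e-10 * np.eye(n * n))
E_map = mean_modulus + (L @ rng.standard_normal(n * n)).reshape(n, n)
```

A larger correlation length b yields smoother maps; fitting b to AFM/nanoindentation maps is what lets the paper quantify spatial (not just marginal) variation.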

  7. Propagation of terahertz pulses in random media.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2004-02-15

    We describe measurements of single-cycle terahertz pulse propagation in a random medium. The unique capabilities of terahertz time-domain spectroscopy permit the characterization of a multiply scattered field with unprecedented spatial and temporal resolution. With these results, we can develop a framework for understanding the statistics of broadband laser speckle. Also, the ability to extract information on the phase of the field opens up new possibilities for characterizing multiply scattered waves. We illustrate this with a simple example, which involves computing a time-windowed temporal correlation between fields measured at different spatial locations. This enables the identification of individual scattering events, and could lead to a new method for imaging in random media.

  8. Random field assessment of nanoscopic inhomogeneity of bone.

    PubMed

    Dong, X Neil; Luo, Qing; Sparkman, Daniel M; Millwater, Harry R; Wang, Xiaodu

    2010-12-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Clustering, randomness, and regularity in cloud fields. 4. Stratocumulus cloud fields

    NASA Astrophysics Data System (ADS)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-07-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (>900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  10. Clustering, randomness, and regularity in cloud fields. 4: Stratocumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-01-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (more than 900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  11. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators in the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application that characterizes spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# offers two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers, submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster shows how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for 10 million). In the HTCondor evaluation, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).

  12. Visibility graphs of random scalar fields and spatial data

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Iacovacci, Jacopo

    2017-07-01

    We extend the family of visibility algorithms to map scalar fields of arbitrary dimension into graphs, enabling the analysis of spatially extended data structures as networks. We introduce several possible extensions and provide analytical results on the topological properties of the graphs associated with different types of real-valued matrices, which can be understood as the high and low disorder limits of real-valued scalar fields. In particular, we find a closed expression for the degree distribution of these graphs associated with uncorrelated random fields of generic dimension. This result holds independently of the field's marginal distribution and it directly yields a statistical randomness test, applicable in any dimension. We showcase its usefulness by discriminating spatial snapshots of two-dimensional white noise from snapshots of a two-dimensional lattice of diffusively coupled chaotic maps, a system that generates high dimensional spatiotemporal chaos. The range of potential applications of this combinatorial framework includes image processing in engineering, the description of surface growth in material science, soft matter or medicine, and the characterization of potential energy surfaces in chemistry, disordered systems, and high energy physics. An illustration of the applicability of this method for the classification of the different stages involved in carcinogenesis is briefly discussed.
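
The one-dimensional special case conveys the flavour of the randomness test. For an i.i.d. series with any marginal distribution, the horizontal visibility graph has degree distribution P(k) = (1/3)(2/3)^(k-2) with mean degree 4, so the empirical degree statistics can flag departures from randomness. This sketch implements only that 1-D horizontal variant, not the paper's d-dimensional algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def hvg_degrees(x):
    """Degree sequence of the horizontal visibility graph of a 1-D series:
    nodes i and j > i are linked iff every value strictly between them
    lies below min(x[i], x[j])."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n - 1):
        blocker = -np.inf              # running max of values between i and j
        for j in range(i + 1, n):
            if blocker < min(x[i], x[j]):
                deg[i] += 1
                deg[j] += 1
            if x[j] >= x[i]:           # nothing beyond this point is visible
                break
            blocker = max(blocker, x[j])
    return deg

# For i.i.d. data of any marginal, P(k) = (1/3) * (2/3)**(k - 2), mean 4.
x = rng.random(3000)
deg = hvg_degrees(x)
mean_degree = deg.mean()               # close to 4 for a random series
frac_k2 = np.mean(deg == 2)            # close to 1/3 for a random series
```

Correlated or deterministic series shift these statistics away from the i.i.d. values, which is the basis of the test.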

  13. Random scalar fields and hyperuniformity

    NASA Astrophysics Data System (ADS)

    Ma, Zheng; Torquato, Salvatore

    2017-06-01

    Disordered many-particle hyperuniform systems are exotic amorphous states of matter that lie between crystals and liquids. Hyperuniform systems have attracted recent attention because they are endowed with novel transport and optical properties. Recently, the hyperuniformity concept has been generalized to characterize two-phase media, scalar fields, and random vector fields. In this paper, we devise methods to explicitly construct hyperuniform scalar fields. Specifically, we analyze spatial patterns generated from Gaussian random fields, which have been used to model the microwave background radiation and heterogeneous materials, the Cahn-Hilliard equation for spinodal decomposition, and Swift-Hohenberg equations that have been used to model emergent pattern formation, including Rayleigh-Bénard convection. We show that the Gaussian random scalar fields can be constructed to be hyperuniform. We also numerically study the time evolution of spinodal decomposition patterns and demonstrate that they are hyperuniform in the scaling regime. Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg equation are effectively hyperuniform. We show that thresholding (level-cutting) a hyperuniform Gaussian random field to produce a two-phase random medium tends to destroy the hyperuniformity of the progenitor scalar field. We then propose guidelines to achieve effectively hyperuniform two-phase media derived from thresholded non-Gaussian fields. Our investigation paves the way for new research directions to characterize the large-structure spatial patterns that arise in physics, chemistry, biology, and ecology. Moreover, our theoretical results are expected to guide experimentalists to synthesize new classes of hyperuniform materials with novel physical properties via coarsening processes and using state-of-the-art techniques, such as stereolithography and 3D printing.
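
One way to construct a hyperuniform Gaussian scalar field, in the spirit of the paper, is spectral synthesis with a spectral density that vanishes as k → 0. The functional form and parameters below are illustrative choices, not the authors' exact construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)

# Prescribed spectral density: vanishes as k -> 0 (the hyperuniformity
# condition) and decays at high k. The exact form is an assumed example.
S = k ** 2 * np.exp(-((k / 0.15) ** 2))

# Gaussian field by spectral synthesis: filter white noise with sqrt(S).
noise = np.fft.fft2(rng.standard_normal((n, n)))
field = np.real(np.fft.ifft2(np.sqrt(S) * noise))

# The power spectrum of the synthesized field is suppressed near the
# origin relative to intermediate wavenumbers.
P = np.abs(np.fft.fft2(field)) ** 2 / n ** 2
small_k = P[(k > 0) & (k < 0.03)].mean()
large_k = P[(k > 0.1) & (k < 0.2)].mean()
```

Thresholding such a field into a two-phase medium generically lifts the small-k spectrum again, which is the loss of hyperuniformity under level cuts that the paper analyzes.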

  14. A spatial scaling relationship for soil moisture in a semiarid landscape, using spatial scaling relationships for pedology

    NASA Astrophysics Data System (ADS)

    Willgoose, G. R.; Chen, M.; Cohen, S.; Saco, P. M.; Hancock, G. R.

    2013-12-01

    In humid areas it is generally considered that soil moisture scales spatially according to the wetness index of the landscape. This scaling arises from lateral downslope flow of groundwater within the soil zone. However, in semi-arid and drier regions this lateral flow is small, and fluxes are dominated by vertical flows driven by infiltration and evapotranspiration. Thus, in the absence of runon processes, soil moisture at a location is driven more by local factors such as soil and vegetation properties at that location than by upstream processes draining to that point. The 'apparent' spatial randomness of soil and vegetation properties generally suggests that soil moisture in semi-arid regions is spatially random. In this presentation, a new analysis of summer neutron probe data from the Tarrawarra site near Melbourne, Australia, shows persistent spatial organisation of soil moisture over several years. This suggests a link between permanent features of the catchment (e.g. soil properties) and the soil moisture distribution, even though the spatial pattern of soil moisture during the 4 summers monitored appears spatially random. These and other data establish a prima facie case that soil variations drive spatial variation in soil moisture. Accordingly, we used a previously published spatial scaling relationship for soil properties, derived using the mARM pedogenesis model, to simulate the spatial variation of soil grading. This soil grading distribution was used in the Rosetta pedotransfer model to derive a spatial distribution of soil functional properties (e.g. saturated hydraulic conductivity, porosity). These functional properties were then input into the HYDRUS-1D soil moisture model, and soil moisture was simulated for 3 years at daily resolution. The HYDRUS model used had previously been calibrated to field-observed soil moisture data at our SASMAS field site. The scaling behaviour of soil moisture derived from this modelling will be discussed and compared with observed data from our SASMAS field sites.

  15. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
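
The contrast between a Bonferroni threshold and an RFT-style threshold can be sketched with the leading Euler-characteristic term for a 2-D Gaussian field. The image size and smoothness below are hypothetical, and only the dominant EC term is kept (real RFT software includes lower-dimensional boundary terms):

```python
import numpy as np
from statistics import NormalDist

alpha = 0.05
n_pixels = 100 * 100                  # hypothetical registered image size
fwhm = 10.0                           # assumed field smoothness, in pixels

# Bonferroni: every pixel treated as an independent test.
z_bonf = NormalDist().inv_cdf(1 - alpha / n_pixels)

# RFT, leading Euler-characteristic term for a 2-D Gaussian field:
# E[chi(A_u)] ~ R * (4 ln 2) * (2 pi)^(-3/2) * u * exp(-u^2 / 2),
# where R is the resel (resolution element) count.
resels = n_pixels / fwhm ** 2

def expected_ec(u):
    return resels * 4 * np.log(2) * (2 * np.pi) ** -1.5 * u * np.exp(-u * u / 2)

# Threshold where the expected EC (~ family-wise error rate) equals alpha.
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if expected_ec(mid) > alpha else (lo, mid)
z_rft = 0.5 * (lo + hi)
```

Because neighbouring pixels are correlated, the RFT threshold z_rft comes out below the Bonferroni threshold z_bonf, illustrating why Bonferroni is valid but overly conservative for smooth fields.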

  16. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    A quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow-angle scattering, is presented for plane-wave and spherical-wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
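
A single step of the screen-based scheme the paper analyzes (a random phase screen followed by angular-spectrum free-space propagation) might look as follows. The grid, wavelength, screen strength, and power-law exponent are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
n, dx = 256, 0.01            # transverse grid points and spacing (m)
wl, dz = 1e-6, 50.0          # wavelength (m) and screen separation (m)
k0 = 2 * np.pi / wl

# Random phase screen with an assumed power-law amplitude spectrum.
fx = np.fft.fftfreq(n, dx)
f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
amp = np.zeros_like(f)
amp[f > 0] = f[f > 0] ** (-11.0 / 6.0)
screen = np.real(np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((n, n)))))
screen *= 0.5 / screen.std()  # scale to a weak 0.5 rad rms phase (illustrative)

# One split step: apply the screen to a unit plane wave, then propagate
# through free space with the angular-spectrum transfer function.
u = np.exp(1j * screen)
kz = np.sqrt(np.maximum(0.0, k0 ** 2 - (2 * np.pi * f) ** 2))
u = np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * dz))
intensity = np.abs(u) ** 2    # total energy is conserved across the step
```

The errors the paper quantifies enter through exactly these choices: the grid spacing dx, the finite domain n*dx, and the screen separation dz.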

  17. A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.

    PubMed

    Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José

    2016-08-01

    Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named similarity-based GRF, is motivated by disease mapping situations in which the underlying small-area relative risks and the associated determinant factors do not vary systematically in space, and similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF, compared with the adjacency-based GMRF with the determinant risk factors as covariates. This new approach broadens the scope of the existing conditional autocorrelation models. © The Author(s) 2016.
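
The difference between an adjacency-based and a similarity-based weight matrix can be sketched with a proper CAR-type precision. The weight forms and parameters below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20                                 # small areas (illustrative)

# Adjacency-based weights: a chain of areas, each touching its neighbours.
W_adj = np.zeros((n, n))
for i in range(n - 1):
    W_adj[i, i + 1] = W_adj[i + 1, i] = 1.0

# Similarity-based weights: closeness in a disease-determinant covariate
# (hypothetical), regardless of where the areas sit on the map.
covariate = rng.standard_normal(n)
W_sim = np.exp(-np.abs(covariate[:, None] - covariate[None, :]))
np.fill_diagonal(W_sim, 0.0)

def car_precision(W, tau=1.0, rho=0.9):
    """Precision matrix of a proper CAR-type field: Q = tau * (D - rho * W)."""
    return tau * (np.diag(W.sum(axis=1)) - rho * W)

# Sample one field from N(0, Q^{-1}) via the Cholesky factor of Q.
Q = car_precision(W_sim)
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, rng.standard_normal(n))
```

Swapping W_adj for W_sim in the same precision structure is the essential move: smoothing is borrowed across areas with similar risk factors rather than across areas that share a border.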

  18. Determination of the Spatial Distribution in Hydraulic Conductivity Using Genetic Algorithm Optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Lee, J. H.; Kitanidis, P. K.

    2016-12-01

    Heterogeneity in hydraulic conductivity (K) impacts the transport and fate of contaminants in the subsurface, as well as the design and operation of managed aquifer recharge (MAR) systems. Recently, improvements in computational resources and the availability of big data through electrical resistivity tomography (ERT) and remote sensing have provided opportunities to better characterize the subsurface. Yet, there is a need to improve prediction and evaluation methods in order to extract information from field measurements for better field characterization. In this study, genetic algorithm optimization, which has been widely used in optimal aquifer remediation design, was used to determine the spatial distribution of K. A hypothetical 2 km by 2 km aquifer was considered. A genetic algorithm library, PGAPack, was linked with a fast-Fourier-transform-based random field generator as well as a groundwater flow and contaminant transport simulation model (BIO2D-KE). The objective of the optimization model was to minimize the total squared error between measured and predicted field values. It was assumed that measured K values were available through ERT. Performance of the genetic algorithm in predicting the distribution of K was tested for different cases. In the first case, observed K values were evaluated using the random field generator alone as the forward model. In the second case, in addition to the K values obtained through ERT, measured head values were incorporated into the evaluation, with BIO2D-KE and the random field generator used as the forward models. Lastly, tracer concentrations were used as additional information in the optimization model. Initial results indicated enhanced performance when the random field generator and BIO2D-KE were used in combination to predict the spatial distribution of K.
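
A toy version of the inversion loop, with a minimal real-coded genetic algorithm standing in for PGAPack and a synthetic "measured" field standing in for the ERT data; every parameter here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells = 30                                # tiny grid (illustrative)
k_true = rng.normal(0.0, 1.0, n_cells)      # synthetic "measured" log-K values

def fitness(k):                             # total squared error, to minimize
    return np.sum((k - k_true) ** 2)

# Minimal real-coded GA: truncation selection, uniform crossover,
# Gaussian mutation, and one elite individual carried over.
pop = rng.normal(0.0, 1.0, (40, n_cells))
init_best = min(fitness(ind) for ind in pop)
for _ in range(200):
    order = np.argsort([fitness(ind) for ind in pop])
    parents = pop[order[:20]]
    pa = parents[rng.integers(0, 20, 40)]
    pb = parents[rng.integers(0, 20, 40)]
    mask = rng.random((40, n_cells)) < 0.5  # uniform crossover
    pop = np.where(mask, pa, pb)
    pop += rng.normal(0.0, 0.05, pop.shape) # Gaussian mutation
    pop[0] = parents[0]                     # elitism: keep the current best
best = min(pop, key=fitness)
```

In the study the fitness evaluation would call the forward models (the random field generator and BIO2D-KE) rather than compare against k_true directly; the comparison here is collapsed so the sketch stays self-contained.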

  19. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
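
The SPDE route to sampling can be illustrated in one dimension: discretize (kappa^2 - Laplacian) and solve against a white-noise right-hand side, which yields an approximately Matérn-covariance Gaussian field without any dense eigenvalue problem. This finite-difference sketch is only an analogue of the paper's method, which uses a mixed finite element formulation in higher dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, h = 400, 1.0            # 1-D grid (illustrative; the paper uses mixed FEM)
kappa = 0.3                # inverse correlation-length parameter (assumed)

# Discretize the operator (kappa^2 - d^2/dx^2) as a tridiagonal matrix A.
main = np.full(n, kappa ** 2 + 2.0 / h ** 2)
off = np.full(n - 1, -1.0 / h ** 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# SPDE sampling: solve A x = white noise. Each sparse solve yields one
# realization of a spatially correlated Gaussian field.
w = rng.standard_normal(n) / np.sqrt(h)
x = np.linalg.solve(A, w)
```

Because only sparse solves are involved, the cost scales with the grid size rather than with its square or cube, which is what makes the approach viable for the large-scale multilevel Monte Carlo setting of the paper.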

  20. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  1. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, Brian D.; Brothers, Laura L.; Barnhardt, Walter A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m³ of continental shelf erosion, but few numerical analyses of pockmark morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km² of bathymetry collected in the Belfast Bay, Maine, USA pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio for pockmarks field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. The results enable quantitative comparison of pockmarks in fields worldwide, as well as of similar concave features such as impact craters, dolines, or salt pools.
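    The geomorphometric idea (estimate a background surface, flag anomalously deep cells, group them into features) can be sketched on a toy grid. Everything below, the window size, depth threshold, equivalent-circle diameter, and function name, is an illustrative assumption, not the authors' GIS implementation:

```python
import numpy as np
from collections import deque

def extract_depressions(bathy, win=5, min_depth=1.0):
    """Toy pockmark extractor: flag cells at least min_depth below a
    local moving-average background surface, then group flagged cells
    into 4-connected features and report each feature's maximum depth
    and equivalent-circle diameter (in cell units)."""
    n, m = bathy.shape
    r = win // 2
    bg = np.empty_like(bathy, dtype=float)
    for i in range(n):            # crude moving-average background
        for j in range(m):
            bg[i, j] = bathy[max(0, i-r):i+r+1, max(0, j-r):j+r+1].mean()
    mask = (bg - bathy) >= min_depth          # cells well below background
    seen = np.zeros_like(mask, dtype=bool)
    feats = []
    for i in range(n):
        for j in range(m):
            if mask[i, j] and not seen[i, j]:
                cells, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:                       # BFS over one feature
                    a, b = q.popleft()
                    cells.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < n and 0 <= y < m and mask[x, y] and not seen[x, y]:
                            seen[x, y] = True
                            q.append((x, y))
                depth = max(bg[c] - bathy[c] for c in cells)
                diameter = 2.0 * np.sqrt(len(cells) / np.pi)
                feats.append((depth, diameter))
    return feats
```

    With per-feature depth and diameter in hand, the depth-to-diameter relation reported in the abstract becomes a simple regression over the feature list.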

  2. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.

  3. Connectivity ranking of heterogeneous random conductivity models

    NASA Astrophysics Data System (ADS)

    Rizzo, C. B.; de Barros, F.

    2017-12-01

    To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields, and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with the early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
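    The minimum hydraulic resistance can be computed with a standard shortest-path search on the conductivity grid. A minimal Dijkstra sketch, assuming cell resistance 1/K and 4-connected paths from the left face to the right face (the authors' graph construction and weighting may differ):

```python
import heapq
import numpy as np

def min_hydraulic_resistance(K):
    """Least cumulative resistance (sum of 1/K over visited cells)
    over all 4-connected paths from the left column to the right
    column of a 2D conductivity grid, found with Dijkstra's algorithm."""
    n, m = K.shape
    res = 1.0 / K
    dist = np.full((n, m), np.inf)
    pq = []
    for i in range(n):                       # source: every left-column cell
        dist[i, 0] = res[i, 0]
        heapq.heappush(pq, (dist[i, 0], i, 0))
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i, j]:
            continue                         # stale queue entry
        if j == m - 1:
            return d                         # first right-face cell settled
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + di, j + dj
            if 0 <= x < n and 0 <= y < m and d + res[x, y] < dist[x, y]:
                dist[x, y] = d + res[x, y]
                heapq.heappush(pq, (dist[x, y], x, y))
    return np.inf
```

    Running this over many K-field realizations gives the Monte Carlo distribution of the connectivity measure used for the ranking.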

  4. Soil variability in engineering applications

    NASA Astrophysics Data System (ADS)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties, which can be measured by field and laboratory testing. Heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. Variability, by contrast, is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. These spatial variations must be managed by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced Geostatistics as the most comprehensive tool to manage the spatial correlation of parameter measurements in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) made the first pioneering attempts to describe and manage the inherent variability of geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of the physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to Fractal Theory. In the same years, Vanmarcke (1983) proposed the Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation.
Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Later studies collected worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the worldwide measured spatial variability of soils and rocks heavily affects the reliability of geotechnical design, as do the other uncertainties introduced by testing devices and engineering models. So far, several methods have been proposed to deal with these sources of uncertainty in engineering design models (e.g. First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, efforts in this field focus on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take into account spatial variability as an additional physical variable. References: Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W.H. Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random Fields: Analysis and Synthesis. MIT Press, Cambridge.
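    Vanmarcke's random field description, a correlation function with a scale of fluctuation, can be sampled directly. A minimal sketch assuming an exponential correlation ρ(h) = exp(−2|h|/θ) and a Cholesky factorization; the depth grid, mean, standard deviation, and θ below are illustrative, not values from any cited study:

```python
import numpy as np

def sample_soil_profile(z, mean, sd, theta, rng=None):
    """Sample one realization of a soil property along depths z with
    exponential correlation rho(h) = exp(-2|h|/theta), where theta is
    Vanmarcke's scale of fluctuation. Cholesky sketch, not a
    production geostatistics routine."""
    rng = np.random.default_rng(rng)
    h = np.abs(z[:, None] - z[None, :])          # pairwise depth separations
    C = sd**2 * np.exp(-2.0 * h / theta)         # covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))  # jitter for stability
    return mean + L @ rng.standard_normal(len(z))

z = np.linspace(0.0, 10.0, 50)                   # depths in metres
profile = sample_soil_profile(z, mean=30.0, sd=3.0, theta=2.0, rng=1)
```

    A small θ gives rapidly fluctuating profiles; as θ grows the property tends toward a single random offset about the mean trend.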

  5. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  6. Random technique to encode complex valued holograms with on axis reconstruction onto phase-only displays.

    PubMed

    Luis Martínez Fuentes, Jose; Moreno, Ignacio

    2018-03-05

    A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
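    The per-pixel random selection rule can be sketched numerically. In the toy version below, the diverging element is assumed to be a quadratic lens phase and the discrimination threshold is the normalized target amplitude; the lens parameter and function name are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

def encode_random_multiplex(amplitude, phase, f_pix=80.0, rng=None):
    """Phase-only encoding by random spatial multiplexing: at each pixel
    pick either the target phase or a diverging (quadratic) lens phase,
    with the normalized target amplitude acting as the per-pixel
    selection threshold. Sketch of the idea, non-iterative by design."""
    rng = np.random.default_rng(rng)
    ny, nx = amplitude.shape
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    lens = np.pi * (x**2 + y**2) / f_pix         # assumed diverging-lens phase
    a = amplitude / amplitude.max()              # normalized amplitude in [0, 1]
    use_signal = rng.random(amplitude.shape) < a # bright pixels keep signal phase
    return np.where(use_signal, phase, np.mod(lens, 2 * np.pi))
```

    Pixels with high target amplitude almost always carry the signal phase; low-amplitude pixels are mostly diverted by the lens term, which removes their light from the on-axis reconstruction.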

  7. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  8. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  9. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  10. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The current controversy regarding regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, which bias the observed nearest-neighbor statistics towards regularity. The interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data shows that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
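    For a homogeneous Poisson process of intensity λ in the plane, the nearest-neighbor cumulative distribution has the closed form G(r) = 1 − exp(−λπr²). A brief sketch (intensity, domain, and seed are arbitrary) that evaluates this CDF at the empirical median nearest-neighbor distance; a value near 0.5 indicates randomness, while clustering pushes it below and regularity above:

```python
import numpy as np

def nn_distances(pts):
    """Nearest-neighbor distance for each point (brute force)."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(-1))
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1)

rng = np.random.default_rng(42)
lam, side = 200.0, 1.0                   # intensity and unit-square side
n = rng.poisson(lam * side**2)           # Poisson number of points
pts = rng.random((n, 2)) * side          # uniform positions given n
r_med = np.median(nn_distances(pts))
G_theory = 1.0 - np.exp(-lam * np.pi * r_med**2)
```

    Edge effects inflate nearest-neighbor distances slightly for a finite window; the papers' analysis accounts for such biases when comparing simulated and observed cloud fields.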

  11. Compensation of temporal and spatial dispersion for multiphoton acousto-optic laser-scanning microscopy

    NASA Astrophysics Data System (ADS)

    Iyer, Vijay; Saggau, Peter

    2003-10-01

    In laser-scanning microscopy, acousto-optic (AO) deflection provides a means to quickly position a laser beam at random locations throughout the field-of-view. Compared to conventional laser-scanning using galvanometer-driven mirrors, this approach increases the frame rate and signal-to-noise ratio, and reduces time spent illuminating sites of no interest. However, random-access AO scanning has not yet been combined with multi-photon microscopy, primarily because the femtosecond laser pulses employed are subject to significant amounts of both spatial and temporal dispersion upon propagation through common AO materials. Left uncompensated, spatial dispersion reduces the microscope's spatial resolution while temporal dispersion reduces the multi-photon excitation efficacy. In previous work, we have demonstrated (1) the efficacy of a single diffraction grating scheme which reduces the spatial dispersion at least 3-fold throughout the field-of-view, and (2) the use of a novel stacked-prism pre-chirper for compensating the temporal dispersion of a pair of AODs using a shorter mechanical path length (2-4X) than standard prism-pair arrangements. In this work, we demonstrate for the first time the use of these compensation approaches with a custom-made large-area slow-shear TeO2 AOD specifically suited for the development of a high-resolution 2-D random-access AO scanning multi-photon laser-scanning microscope (AO-MPLSM).

  12. Statistical simulation of ensembles of precipitation fields for data assimilation applications

    NASA Astrophysics Data System (ADS)

    Haese, Barbara; Hörning, Sebastian; Chwala, Christian; Bárdossy, András; Schalge, Bernd; Kunstmann, Harald

    2017-04-01

    The simulation of the hydrological cycle by models is an indispensable tool for a variety of environmental challenges such as climate prediction, water resources management, or flood forecasting. One of the crucial variables within the hydrological system, and accordingly one of the main drivers of terrestrial hydrological processes, is precipitation. A correct reproduction of the spatio-temporal distribution of precipitation is crucial for the quality and performance of hydrological applications. In our approach we stochastically generate precipitation fields conditioned on various precipitation observations. Rain gauges provide high-quality information for a specific measurement point, but their spatial representativeness is often limited. Microwave links, e.g. from commercial cellular operators, on the other hand can be used to estimate line integrals of near-surface rainfall, and they provide a very dense observational system compared to rain gauges. A further prevalent source of precipitation information is weather radar, which provides rainfall pattern information. In our approach we derive precipitation fields conditioned on combinations of these different observation types. To generate the precipitation fields we use the random mixing method, in which a precipitation field is obtained as a linear combination of unconditional spatial random fields, with the spatial dependence structure described by copulas. The weights of the linear combination are chosen such that the observations and the spatial structure of precipitation are reproduced. One main advantage of the random mixing method is the ability to incorporate both linear and non-linear constraints. For a demonstration of the method we use virtual observations generated from a virtual reality of the Neckar catchment. These virtual observations mimic the advantages and disadvantages of real observations. This virtual data set allows us to evaluate the simulated precipitation fields in great detail and to quantify the uncertainties introduced by measurement inaccuracies. In a further step we use real observations as the basis for generating precipitation fields. The resulting ensembles of precipitation fields can then be used, for example, for data assimilation or as input for hydrological models.
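    The core of the random mixing step, a linear combination of unconditional fields whose weights reproduce the observations, can be sketched as a least-squares problem. The version below keeps only the linear point constraints and omits the copula-based dependence structure and non-linear constraints; field counts and observation values are arbitrary:

```python
import numpy as np

def random_mixing(fields, obs_idx, obs_val):
    """Toy version of the random-mixing idea: combine K unconditional
    random fields (rows of `fields`, each of length n) linearly so the
    result honors point observations. The weights are the minimum-norm
    solution of the underdetermined observation constraints; the full
    method additionally constrains the weights so that the spatial
    covariance of the result is preserved (omitted here)."""
    A = fields[:, obs_idx].T                 # (n_obs, K) constraint matrix
    w, *_ = np.linalg.lstsq(A, obs_val, rcond=None)
    return w @ fields

rng = np.random.default_rng(3)
fields = rng.standard_normal((20, 100))      # 20 unconditional fields on 100 cells
obs_idx = np.array([10, 50, 90])             # observed cell indices
obs_val = np.array([1.0, -0.5, 2.0])         # observed values to honor
mixed = random_mixing(fields, obs_idx, obs_val)
```

    Because many weight vectors satisfy the constraints, repeating the draw with fresh unconditional fields yields an ensemble of conditioned realizations.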

  13. Characterizing individual scattering events by measuring the amplitude and phase of the electric field diffusing through a random medium.

    PubMed

    Jian, Zhongping; Pearce, Jeremy; Mittleman, Daniel M

    2003-07-18

    We describe observations of the amplitude and phase of an electric field diffusing through a three-dimensional random medium, using terahertz time-domain spectroscopy. These measurements are spatially resolved with a resolution smaller than the speckle spot size and temporally resolved with a resolution better than one optical cycle. By computing correlation functions between fields measured at different positions and with different temporal delays, it is possible to obtain information about individual scattering events experienced by the diffusing field. This represents a new method for characterizing a multiply scattered wave.

  14. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    PubMed

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.

  15. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.

  16. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.

  17. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for the physical sciences is its rationalization of correlations in the spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas.
Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions.
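    The property that makes such network design possible is that the optimal-estimation error variance depends only on the sampling geometry and the spatial covariance model, never on the measured values. A minimal simple-kriging sketch of this fact (exponential covariance; coordinates and parameters are illustrative):

```python
import numpy as np

def kriging_variance(sample_xy, target_xy, sill=1.0, a=3.0):
    """Simple-kriging estimation variance at a target location for a
    given sampling configuration, with exponential covariance
    C(h) = sill * exp(-h/a). The measured values never enter: only
    geometry and the covariance model, which is why a sampling
    network can be ranked before any data are collected."""
    d = lambda p, q: np.sqrt(((p[:, None, :] - q[None, :, :])**2).sum(-1))
    C = sill * np.exp(-d(sample_xy, sample_xy) / a)    # sample-sample covariance
    c0 = sill * np.exp(-d(sample_xy, target_xy) / a)   # sample-target covariance
    w = np.linalg.solve(C, c0)                         # kriging weights
    return float(sill - (w * c0).sum())

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
target = np.array([[0.3, 0.3]])
var = kriging_variance(samples, target)
```

    Adding a sample at the target drives the variance to zero, so candidate networks can be compared purely by how they shrink this quantity over the sites of interest.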

  18. Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs

    NASA Astrophysics Data System (ADS)

    Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur

    2018-03-01

    A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d≥ 2 . Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
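    The accepted fraction studied above is straightforward to estimate by direct Monte Carlo in d = 2. A sketch with congruent disks in the unit square (attempt count, radius, and seed are arbitrary; no periodic boundary handling):

```python
import numpy as np

def rsa_accepted_fraction(attempts=2000, radius=0.02, rng=None):
    """Random sequential adsorption of congruent disks in the unit
    square: disks arrive at uniform positions and are accepted only
    if they overlap no previously accepted disk. Returns the fraction
    of attempts accepted, the quantity the paper characterizes via
    its corrected mean-field random-graph model."""
    rng = np.random.default_rng(rng)
    accepted = []
    for _ in range(attempts):
        p = rng.random(2)
        # Accept only if the center is at least 2*radius from all others
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= (2 * radius)**2
               for q in accepted):
            accepted.append(p)
    return len(accepted) / attempts
```

    As attempts grow, the accepted fraction decays and the coverage approaches the jamming limit; the spatial correlations that make this fraction analytically intractable are exactly what the clustered random-graph model approximates.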

  19. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained compared with conventional multi-channel segmentation algorithms.

  20. Power spectral density of Markov texture fields

    NASA Technical Reports Server (NTRS)

    Shanmugan, K. S.; Holtzman, J. C.

    1984-01-01

    Texture is an important image characteristic. A variety of spatial domain techniques have been proposed for extracting and utilizing textural features for segmenting and classifying images. For the most part, these spatial domain techniques are ad hoc in nature. A Markov random field model for image texture is discussed. A frequency domain description of image texture is derived in terms of the power spectral density. This model is used for designing optimum frequency domain filters for enhancing, restoring and segmenting images based on their textural properties.
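    The frequency-domain view can be sketched by generating a first-order Gauss-Markov (separable AR(1)) texture and estimating its power spectral density with the periodogram; the correlation parameter and grid size below are illustrative, and the field is only one simple member of the Markov class the paper treats:

```python
import numpy as np

def ar1_texture(n=128, rho=0.9, rng=None):
    """Separable first-order Gauss-Markov texture: AR(1) filtering of
    white noise applied along rows and then along columns, keeping
    unit marginal variance."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n, n))
    for axis in (0, 1):
        x = np.moveaxis(x, axis, 0)
        for i in range(1, n):                # causal AR(1) recursion
            x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * x[i]
        x = np.moveaxis(x, 0, axis)
    return x

tex = ar1_texture(rng=5)
psd = np.abs(np.fft.fft2(tex))**2 / tex.size  # periodogram PSD estimate
```

    The estimated PSD is concentrated at low spatial frequencies and falls off as a rational function of frequency, which is the structure the optimum texture filters are designed around.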

  1. Random laser illumination: an ideal source for biomedical polarization imaging?

    NASA Astrophysics Data System (ADS)

    Carvalho, Mariana T.; Lotay, Amrit S.; Kenny, Fiona M.; Girkin, John M.; Gomes, Anderson S. L.

    2016-03-01

    Imaging applications increasingly require light sources with high spectral density (power per unit spectral bandwidth). This has led in many cases to the replacement of conventional thermal light sources with bright light-emitting diodes (LEDs), lasers and superluminescent diodes. Although lasers and superluminescent diodes appear to be ideal light sources due to their narrow bandwidth and power, in the case of full-field imaging their spatial coherence leads to coherent artefacts, such as speckle, that corrupt the image. LEDs, in contrast, have lower spatial coherence and thus seem the natural choice, but they have low spectral density. Random lasers are an unconventional type of laser that can be engineered to provide low spatial coherence with high spectral density. These characteristics make them potential sources for biological imaging applications where specific absorption and reflection are the characteristics required for state-of-the-art imaging. In this work, a random laser (RL) is used to demonstrate speckle-free full-field imaging for polarization-dependent imaging in an epi-illumination configuration. We compare LED and RL illumination by analysing the resulting images, demonstrating that RL illumination produces an imaging system with higher performance (image quality and spectral density) than that provided by LEDs.

  2. Zipf's law from scale-free geometry.

    PubMed

    Lin, Henry W; Loeb, Abraham

    2016-03-01

    The spatial distribution of people exhibits clustering across a wide range of scales, from household (∼10⁻² km) to continental (∼10⁴ km) scales. Empirical data indicate simple power-law scalings for the size distribution of cities (known as Zipf's law) and for the population density fluctuations as a function of scale. Using techniques from random field theory and statistical physics, we show that these power laws are fundamentally a consequence of the scale-free spatial clustering of human populations and of the fact that humans inhabit a two-dimensional surface. In this sense, the symmetries of scale invariance in two spatial dimensions are intimately connected to urban sociology. We test our theory by empirically measuring the power spectrum of population density fluctuations and find a logarithmic slope α = 2.04 ± 0.09, in excellent agreement with our theoretical prediction α = 2. The model enables the analytic computation of many new predictions by importing the mathematical formalism of random fields.
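
    The rank-size form of Zipf's law mentioned above can be checked with a short least-squares fit on log-transformed data. A minimal sketch (the function name and test data are illustrative, not taken from the paper):

```python
import math

def zipf_exponent(sizes):
    """Least-squares slope of log(size) vs. log(rank) for a list of city
    sizes. Zipf's law predicts a slope close to -1."""
    ranked = sorted(sizes, reverse=True)
    xs = [math.log(r) for r in range(1, len(ranked) + 1)]
    ys = [math.log(s) for s in ranked]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    On an ideal rank-size list (size proportional to 1/rank) the fitted slope is exactly -1.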

  3. Spatial characteristics of observed precipitation fields: A catalog of summer storms in Arizona, Volume 1

    NASA Technical Reports Server (NTRS)

    Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodrigues-Iturbe, I.

    1986-01-01

    Eight years of summer raingage observations are analyzed for a dense 93-gage network operated by the U.S. Department of Agriculture, Agricultural Research Service, in their 150 sq km Walnut Gulch catchment near Tucson, Arizona. Storms are defined by the total depths collected at each raingage during any noon-to-noon period for which depth was recorded at any of the gages. For each of the resulting 428 storms, the 93 gage depths are interpolated onto a dense grid and the resulting random field is analyzed. Presented are: storm depth isohyets at 2 mm contour intervals, the first three moments of point storm depth, the spatial correlation function, the spatial variance function, and the spatial distribution of total rainstorm depth.
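
    The catalog statistics listed above (moments of point depth and the spatial correlation function) are standard quantities that are easy to reproduce for a gridded depth field. A minimal sketch, assuming the field is stored as a list of rows (the interface is illustrative, not the authors' code):

```python
import math

def moments(field):
    """First three moments (mean, variance, skewness) of point depths."""
    vals = [v for row in field for v in row]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    sd = math.sqrt(var)
    skew = sum(((v - mean) / sd) ** 3 for v in vals) / n if sd > 0 else 0.0
    return mean, var, skew

def spatial_correlation(field, lag):
    """Correlation between depths at horizontal separation `lag` grid cells."""
    pairs = [(field[i][j], field[i][j + lag])
             for i in range(len(field))
             for j in range(len(field[0]) - lag)]
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs) / n)
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs) / n)
    return cov / (sx * sy)
```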

  4. Random vectors and spatial analysis by geostatistics for geotechnical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.S.

    1987-08-01

    Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectorial rather than scalar, and their structural analysis requires sample variable interpolation to construct and characterize structural models. A better local estimator results in higher-quality input models, and geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to spatial analysis of random vectors in geoscience as well as various geotechnical fields, including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block cavings, petroleum engineering, and hydrologic and hydraulic modeling.

  5. Spatial Temporal Mathematics at Scale: An Innovative and Fully Developed Paradigm to Boost Math Achievement among All Learners

    ERIC Educational Resources Information Center

    Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.

    2010-01-01

    This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…

  6. Correlated randomness: Some examples of exotic statistical physics

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2005-05-01

    One challenge of biology, medicine, and economics is that the systems treated by these sciences have no perfect metronome in time and no perfect spatial architecture -- crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. To understand this 'miracle', one might consider placing aside the human tendency to see the universe as a machine. Instead, one might address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at many spatial and temporal patterns in biology, medicine, and economics. Inspired by principles developed by statistical physics over the past 50 years -- scale invariance and universality -- we review some recent applications of correlated randomness to fields that might startle Boltzmann if he were alive today.

  7. Shaping complex microwave fields in reverberating media with binary tunable metasurfaces

    PubMed Central

    Kaina, Nadège; Dupré, Matthieu; Lerosey, Geoffroy; Fink, Mathias

    2014-01-01

    In this article we propose to use electronically tunable metasurfaces as spatial microwave modulators. We demonstrate that like spatial light modulators, which have been recently proved to be ideal tools for controlling light propagation through multiple scattering media, spatial microwave modulators can efficiently shape in a passive way complex existing microwave fields in reverberating environments with a non-coherent energy feedback. Unlike in free space, we establish that a binary-only phase state tunable metasurface allows a very good control over the waves, owing to the random nature of the electromagnetic fields in these complex media. We prove in an everyday reverberating medium, that is, a typical office room, that a small spatial microwave modulator placed on the walls can passively increase the wireless transmission between two antennas by an order of magnitude, or on the contrary completely cancel it. Interestingly and contrary to free space, we show that this results in an isotropic shaped microwave field around the receiving antenna, which we attribute again to the reverberant nature of the propagation medium. We expect that spatial microwave modulators will be interesting tools for fundamental physics and will have applications in the field of wireless communications. PMID:25331498

  8. A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales

    NASA Astrophysics Data System (ADS)

    Elliott, Frank W.; Majda, Andrew J.

    1995-03-01

    A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
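
    The core idea, a superposition of random one-dimensional plane waves over a fixed finite set of directions with power-law amplitudes, can be sketched directly. In this toy version the wavelet Monte Carlo sampling of each one-dimensional wave (the paper's actual contribution) is replaced by simple random Fourier modes, so only the plane-wave superposition structure is illustrated:

```python
import math, random

def plane_wave_field(n, n_dirs=6, n_modes=16, beta=5.0 / 3.0, seed=0):
    """Toy stationary isotropic random field on an n x n grid, built as a
    superposition of random 1-D plane waves over fixed, evenly spaced
    directions, with power-law amplitudes ~ k**(-beta/2), Gaussian random
    weights, and random phases."""
    rng = random.Random(seed)
    field = [[0.0] * n for _ in range(n)]
    for d in range(n_dirs):
        theta = math.pi * d / n_dirs          # fixed direction of this plane wave
        cx, cy = math.cos(theta), math.sin(theta)
        for m in range(1, n_modes + 1):
            k = 2.0 * math.pi * m / n
            amp = rng.gauss(0.0, 1.0) * k ** (-beta / 2.0)
            phase = rng.uniform(0.0, 2.0 * math.pi)
            for i in range(n):
                for j in range(n):
                    field[i][j] += amp * math.cos(k * (cx * i + cy * j) + phase)
    return field
```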

  9. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    Chi-squared transform (CST), as a statistical method, can describe the difference degree between vectors. The CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the result of change detection. An improved unsupervised change detection method is proposed based on spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter confidence level in the SCCST method, a pseudotraining dataset is constructed to estimate the optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. The experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well in comprehensive indices compared with other methods.
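
    The chi-squared transform itself is a Mahalanobis-type quadratic form on the per-pixel difference vectors, with large values flagging likely change. A minimal two-band sketch (the spatial-constraint and MRF steps of the paper are not reproduced, and the interface is illustrative):

```python
def chi_squared_transform(diff_pixels):
    """Chi-squared transform of a two-band difference image: for each pixel
    difference vector d, return d' * inv(C) * d, where C is the sample
    covariance of the difference image. Large scores indicate likely change."""
    n = len(diff_pixels)
    m1 = sum(p[0] for p in diff_pixels) / n
    m2 = sum(p[1] for p in diff_pixels) / n
    c11 = sum((p[0] - m1) ** 2 for p in diff_pixels) / n
    c22 = sum((p[1] - m2) ** 2 for p in diff_pixels) / n
    c12 = sum((p[0] - m1) * (p[1] - m2) for p in diff_pixels) / n
    det = c11 * c22 - c12 * c12
    i11, i22, i12 = c22 / det, c11 / det, -c12 / det  # inverse of 2x2 covariance
    scores = []
    for p in diff_pixels:
        d1, d2 = p[0] - m1, p[1] - m2
        scores.append(d1 * d1 * i11 + 2.0 * d1 * d2 * i12 + d2 * d2 * i22)
    return scores
```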

  10. Theory and generation of conditional, scalable sub-Gaussian random fields

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2016-03-01

    Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by jointly analyzing spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one- and two-dimensional synthetic test cases.
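
    For context, the generic subordinated form behind such "sub-Gaussian" models (in the Samorodnitsky-Taqqu sense) is Y = sqrt(U) * G, with G Gaussian and U a positive random subordinator; the mixture produces sharper peaks and heavier tails than a Gaussian. A toy sampler of that generic form, not the paper's GSG model (the lognormal subordinator here is an arbitrary illustrative choice):

```python
import math, random

def sub_gaussian_samples(n, seed=0):
    """Draw n samples of Y = sqrt(U) * G with G ~ N(0, 1) and U a positive
    random subordinator (here lognormal, purely illustrative)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = math.exp(rng.gauss(0.0, 0.5))      # lognormal subordinator
        out.append(math.sqrt(u) * rng.gauss(0.0, 1.0))
    return out
```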

  11. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks

    PubMed Central

    Li, Jiayin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-01-01

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signal can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environment under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering application in WSNs. PMID:29117152

  12. Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.

    PubMed

    Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal

    2017-11-08

    Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signal can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environment under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering application in WSNs.
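
    The Kronecker structure referred to above combines a spatial sensing matrix A and a temporal sensing matrix B into a single matrix A ⊗ B acting on the vectorized spatial-temporal signal. The Kronecker product itself is straightforward; a generic sketch (not the paper's specific construction):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows:
    (A kron B)[i*p + k][j*q + l] = A[i][j] * B[k][l] for B of size p x q."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]
```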

  13. Spatial network surrogates for disentangling complex system structure from spatial embedding of nodes

    NASA Astrophysics Data System (ADS)

    Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.

    2016-04-01

    Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
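
    The simplest member of such a hierarchy of null models ignores the embedding entirely: keep the node set and the number of links, and draw the links uniformly at random. Comparing link-length statistics of the original network against this surrogate already quantifies how strongly space constrains the links. A generic sketch (the paper's stronger, embedding-preserving null models are not reproduced):

```python
import itertools, random

def uniform_link_surrogate(n_nodes, n_links, seed=0):
    """Surrogate network: same nodes, same number of links, with links drawn
    uniformly at random among all node pairs (spatial embedding ignored)."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n_nodes), 2))
    return rng.sample(pairs, n_links)
```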

  14. A multi-source precipitation approach to fill gaps over a radar precipitation field

    NASA Astrophysics Data System (ADS)

    Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.

    2012-12-01

    Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps over ground-based radar precipitation fields. To mix different precipitation products, the bias of any of the products relative to each other should be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using the SPE Hydro-Estimator (HE), radar-based Stage-II, the climatological product PRISM and rain gauge datasets for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas that benefits the scientific community.

  15. Analysis of stratocumulus cloud fields using LANDSAT imagery: Size distributions and spatial separations

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1990-01-01

    Stratocumulus cloud fields in the FIRE IFO region are analyzed using LANDSAT Thematic Mapper imagery. Structural properties such as cloud cell size distribution, cell horizontal aspect ratio, fractional coverage and fractal dimension are determined. It is found that stratocumulus cloud number densities are represented by a power law. Cell horizontal aspect ratio has a tendency to increase at large cell sizes, and cells are bi-fractal in nature. Using LANDSAT Multispectral Scanner imagery for twelve selected stratocumulus scenes acquired during previous years, similar structural characteristics are obtained. Cloud field spatial organization also is analyzed. Nearest-neighbor spacings are fit with a number of functions, with Weibull and Gamma distributions providing the best fits. Poisson tests show that the spatial separations are not random. Second order statistics are used to examine clustering.

  16. Bayesian spatial prediction of the site index in the study of the Missouri Ozark Forest Ecosystem Project

    Treesearch

    Xiaoqian Sun; Zhuoqiong He; John Kabrick

    2008-01-01

    This paper presents a Bayesian spatial method for analysing the site index data from the Missouri Ozark Forest Ecosystem Project (MOFEP). Based on ecological background and availability, we select three variables, the aspect class, the soil depth and the land type association as covariates for analysis. To allow great flexibility of the smoothness of the random field,...

  17. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure arising from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular, the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple-point geostatistics has been developed to deal with this problem. Multiple-point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
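
    The simple descriptive statistics used to fit and compare such models rest on the empirical semivariogram; along a single transect it is a one-line computation. A generic sketch (not the authors' code):

```python
def empirical_variogram(values, max_lag):
    """Empirical semivariogram along a 1-D transect:
    gamma(h) = mean of 0.5 * (z_i - z_{i+h})**2 over all pairs at lag h."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [0.5 * (values[i] - values[i + h]) ** 2
                 for i in range(len(values) - h)]
        gamma[h] = sum(diffs) / len(diffs)
    return gamma
```

    For a linear transect z_i = i the semivariance is exactly h²/2, a useful sanity check.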

  18. Experimental demonstration of spatially coherent beam combining using optical parametric amplification.

    PubMed

    Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki

    2010-07-05

    We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at a 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exa-watt level while maintaining diffraction-limited beam quality.

  19. Single laser beam of spatial coherence from an array of GaAs lasers - Free-running mode

    NASA Technical Reports Server (NTRS)

    Philipp-Rutz, E. M.

    1975-01-01

    Spatially coherent radiation from a monolithic array of three GaAs lasers in a free-running mode is reported. The lasers, with their mirror faces antireflection coated, are operated in an external optical cavity built of spherical lenses and plane mirrors. The spatially coherent-beam formation makes use of the Fourier-transformation property of the internal lenses. Transverse mode control is accomplished by a spatial filter. The optical cavity is similar to that used for the phase-controlled mode of spatially coherent-beam formation; only the spatial filters are different. In the far field (when restored by an external lens), the intensities of the lasers in the array are concentrated in a single laser beam of spatial coherence, without any grating lobes. The far-field distribution of the laser array in the free-running mode differs significantly from the interference pattern of the phase-controlled mode. The modulation characteristics of the optical waveforms of the two modes are also quite different because modulation is related to the interaction of the spatial filter with the longitudinal modes of the laser array within the optical cavity. The modulation of the optical waveform of the free-running mode is nonperiodic, confirming that the fluctuations of the optical fields of the lasers are random.

  20. Assessing the significance of global and local correlations under spatial autocorrelation: a nonparametric approach.

    PubMed

    Viladomat, Júlia; Mazumder, Rahul; McInturff, Alex; McCauley, Douglas J; Hastie, Trevor

    2014-06-01

    We propose a method to test the correlation of two random fields when they are both spatially autocorrelated. In this scenario, the assumption of independence for the pairs of observations in the standard test does not hold, and as a result we reject in many cases where there is no effect (the precision of the null distribution is overestimated). Our method recovers the null distribution taking the autocorrelation into account. It uses Monte Carlo methods and consists of permuting, then smoothing and scaling, one of the variables to destroy its correlation with the other while maintaining its initial autocorrelation. With this simulation model, any test based on the independence of two (or more) random fields can be constructed. This research was motivated by a project in biodiversity and conservation in the Biology Department at Stanford University. © 2014, The International Biometric Society.

  1. Landscape-scale consequences of differential tree mortality from catastrophic wind disturbance in the Amazon.

    PubMed

    Rifai, Sami W; Urquiza Muñoz, José D; Negrón-Juárez, Robinson I; Ramírez Arévalo, Fredy R; Tello-Espinoza, Rodil; Vanderwel, Mark C; Lichstein, Jeremy W; Chambers, Jeffrey Q; Bohlman, Stephanie A

    2016-10-01

    Wind disturbance can create large forest blowdowns, which greatly reduce live biomass and add uncertainty to the strength of the Amazon carbon sink. Observational studies from within the central Amazon have quantified blowdown size and estimated total mortality but have not determined which trees are most likely to die from a catastrophic wind disturbance. Also, the impact of spatial dependence upon tree mortality from wind disturbance has seldom been quantified, which is important because wind disturbance often kills clusters of trees through large treefalls killing surrounding neighbors. We examine (1) the causes of differential mortality between adult trees from a 300-ha blowdown event in the Peruvian region of the northwestern Amazon, (2) how accounting for spatial dependence affects mortality predictions, and (3) how incorporating both differential mortality and spatial dependence affects the landscape-level estimation of necromass produced from the blowdown. Standard regression and spatial regression models were used to estimate how stem diameter, wood density, elevation, and a satellite-derived disturbance metric influenced the probability of tree death from the blowdown event. The model parameters regarding tree characteristics, topography, and spatial autocorrelation of the field data were then used to determine the consequences of non-random mortality for landscape production of necromass through a simulation model. Tree mortality was highly non-random within the blowdown: mortality rates were highest for trees that were large, had low wood density, and were located at high elevation. Of the differential mortality models, the non-spatial models overpredicted necromass, whereas the spatial model slightly underpredicted necromass.
When parameterized from the same field data, the spatial regression model with differential mortality estimated only 7.5% more dead trees across the entire blowdown than the random mortality model, yet it estimated 51% greater necromass. We suggest that predictions of forest carbon loss from wind disturbance are sensitive to not only the underlying spatial dependence of observations, but also the biological differences between individuals that promote differential levels of mortality. © 2016 by the Ecological Society of America.
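
    The direction of the reported effects (larger stems, lower wood density, higher elevation imply higher mortality risk) can be expressed in logistic form. The coefficients below are hypothetical placeholders, not the paper's estimates, and the spatial term is omitted:

```python
import math

def mortality_probability(dbh_cm, wood_density, elev_m,
                          coefs=(-2.0, 0.03, -1.5, 0.002)):
    """Logistic sketch of windthrow mortality probability. Coefficient values
    are hypothetical; only their signs follow the reported effect directions
    (positive for stem diameter and elevation, negative for wood density)."""
    b0, b_dbh, b_wd, b_elev = coefs
    z = b0 + b_dbh * dbh_cm + b_wd * wood_density + b_elev * elev_m
    return 1.0 / (1.0 + math.exp(-z))
```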

  2. Binary pseudo-random patterned structures for modulation transfer function calibration and resolution characterization of a full-field transmission soft x-ray microscope

    DOE PAGES

    Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.; ...

    2015-12-09

    We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization, in the spatial frequency domain, of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and, at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine-tune the instrument's focal distance. Finally, our results confirm the universality of the method, which makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
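
    A binary pseudo-random sequence of the kind used for such test patterns is typically a maximum-length sequence from a linear-feedback shift register; its flat, white-noise-like power spectrum is what gives the MTF test uniform sensitivity across spatial frequencies. A minimal sketch (register length and taps are illustrative, not those of the BPR samples in the paper):

```python
def lfsr_sequence(length, nbits=4, taps=(4, 3)):
    """Binary pseudo-random (maximum-length) sequence from a Fibonacci LFSR.
    With nbits=4 and taps (4, 3) the feedback polynomial x^4 + x^3 + 1 is
    primitive, so the output repeats with period 2**4 - 1 = 15."""
    state = [1] * nbits                 # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])           # output bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]          # feedback = XOR of tapped bits
        state = [fb] + state[:-1]       # shift register
    return out
```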

  3. Modal energy analysis for mechanical systems excited by spatially correlated loads

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Fei, Qingguo; Li, Yanbin; Wu, Shaoqing; Chen, Qiang

    2018-10-01

    MODal ENergy Analysis (MODENA) is an energy-based method proposed to deal with vibroacoustic problems. The performance of MODENA on the energy analysis of a mechanical system under spatially correlated excitation is investigated. A plate/cavity coupling system excited by a pressure field is studied in a numerical example involving four kinds of pressure field: the purely random pressure field, the perfectly correlated pressure field, the incident diffuse field, and the turbulent boundary layer pressure fluctuation. The total energies of the subsystems differ from the reference solution only in the case of the purely random pressure field, and only for the non-excited subsystem (the cavity). A deeper analysis at the scale of individual modal energies is further conducted via another numerical example, in which two structural modes excited by correlated forces are coupled with one acoustic mode. A dimensionless correlation strength factor is proposed to quantify the correlation strength between modal forces. Results show that the error on modal energy increases as the correlation strength factor increases. A criterion is proposed to establish a link between the error and the correlation strength factor. According to the criterion, the error is negligible when the correlation strength is weak, that is, when the correlation strength factor is less than a critical value.

  4. Structured Spatial Modeling and Mapping of Domestic Violence Against Women of Reproductive Age in Rwanda.

    PubMed

    Habyarimana, Faustin; Zewotir, Temesgen; Ramroop, Shaun

    2018-03-01

    The main objective of this study was to assess the risk factors and spatial correlates of domestic violence against women of reproductive age in Rwanda. A structured spatial approach was used to account for the nonlinear nature of some covariates and the spatial variability on domestic violence. The nonlinear effect was modeled through second-order random walk, and the structured spatial effect was modeled through Gaussian Markov Random Fields specified as an intrinsic conditional autoregressive model. The data from the Rwanda Demographic and Health Survey 2014/2015 were used as an application. The findings of this study revealed that the risk factors of domestic violence against women are the wealth quintile of the household, the size of the household, the husband or partner's age, the husband or partner's level of education, ownership of the house, polygamy, the alcohol consumption status of the husband or partner, the woman's perception of wife-beating attitude, and the use of contraceptive methods. The study also highlighted the significant spatial variation of domestic violence against women at district level.
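
    The smoothness prior behind the second-order random walk used for the nonlinear effects has a compact matrix form: a precision matrix built from second differences, whose null space (constant and linear trends) is left unpenalized. A minimal sketch of the construction for equally spaced covariate values (our own illustration, not the authors' code):

```python
import numpy as np

def rw2_precision(n):
    """Structure/precision matrix K = D2.T @ D2 of a second-order random
    walk on n equally spaced covariate values, where D2 takes second
    differences of the latent effect."""
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D2.T @ D2

K = rw2_precision(6)
# K annihilates constants and linear trends, i.e. the prior penalizes
# curvature only; this is the intrinsic (rank-deficient) RW2 prior.
```

    The intrinsic conditional autoregressive prior on the district-level spatial effect has the same flavor, with the second-difference penalty replaced by a penalty on differences between neighboring districts.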

  5. Effect of aberration on the acoustic field in tissue harmonic imaging (THI)

    NASA Astrophysics Data System (ADS)

    Jing, Yuan; Cleveland, Robin

    2003-10-01

    A numerical simulation was used to study the impact of an aberrating layer on the generation of the fundamental and second-harmonic (SH) fields in a tissue harmonic imaging scenario. The simulation used a three-dimensional time-domain code for solving the KZK equation and accounted for arbitrary spatial variations in all acoustic properties. The aberration effect was modeled by assuming that the tissue consisted of two layers whose interface has a spatial variation that acted like an effective phase screen. Initial experiments were carried out with sinusoidal-shaped interfaces. The sinusoidal interface produced grating lobes which were at least 6 dB larger for the fundamental signal than for the SH. The energy outside of the main lobe was found to increase linearly as the amplitude of the interface variation increased. The location of the grating lobes was affected by the spatial period of the interface variation. The inhomogeneous nature of tissue was modeled with an interface with a random spatial variation. With the random interface the average sidelobe level for the fundamental was -30 dB whereas the SH had an average sidelobe level of -36 dB. [Work supported by the NSF through the Center for Subsurface Sensing and Imaging Systems.]

  6. A continuous time random walk model for Darcy-scale anomalous transport in heterogeneous porous media.

    NASA Astrophysics Data System (ADS)

    Comolli, Alessandro; Hakoun, Vivien; Dentz, Marco

    2017-04-01

    Understanding solute transport in heterogeneous porous media is of crucial importance for several environmental and social purposes, ranging from aquifer contamination and remediation to risk assessment in nuclear waste repositories. The complexity of this aim is mainly ascribable to the heterogeneity of natural media, which can be observed at all scales of interest, from the pore scale to the catchment scale. In fact, the intrinsic heterogeneity of porous media is responsible for the well-known non-Fickian footprints of transport, including heavy-tailed breakthrough curves, non-Gaussian spatial density profiles and the non-linear growth of the mean squared displacement. Several studies have investigated the processes through which heterogeneity impacts the transport properties; these include local modifications to the advective-dispersive motion of solutes, mass exchanges between mobile and immobile phases (e.g. sorption/desorption reactions or diffusion into the solid matrix) and spatial correlation of the flow field. In the last decades, the continuous time random walk (CTRW) model has often been used to describe solute transport in heterogeneous conditions and to quantify the impact of point heterogeneity, spatial correlation and mass transfer on the average transport properties [1]. Open issues regarding this approach are the possibility of relating measurable properties of the medium to the parameters of the model, as well as its capability to provide predictive information. In a recent work [2] the authors have shed new light on the relationship between Lagrangian and Eulerian dynamics as well as on their evolution from arbitrary initial conditions. On the basis of these results, we derive a CTRW model for the description of Darcy-scale transport in d-dimensional media characterized by spatially random permeability fields.
The CTRW approach models particle velocities as a spatial Markov process, which is characterized by a velocity transition probability and the steady state velocity distribution. These are related to the Eulerian velocity distribution and the distribution and spatial organization of hydraulic conductivity. The CTRW model is used for the prediction of transport data (particle dispersion and breakthrough curves) from direct numerical flow and transport simulations in heterogeneous hydraulic conductivity fields. References: [1] Comolli, A., Hidalgo, J. J., Moussey, C., & Dentz, M. (2016). Non-Fickian Transport Under Heterogeneous Advection and Mobile-Immobile Mass Transfer. Transport in Porous Media, 1-25. [2] Dentz, M., Kang, P. K., Comolli, A., Le Borgne, T., & Lester, D. R. (2016). Continuous time random walks for the evolution of Lagrangian velocities. Physical Review Fluids, 1(7), 074004.
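
    The mechanics of a continuous time random walk are compact enough to sketch. The toy below is a generic uncoupled CTRW (fixed-length jumps, heavy-tailed Pareto waiting times), not the spatial-Markov velocity model derived in the abstract; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw(n_particles=2000, n_steps=200, beta=0.8, ell=1.0):
    """Uncoupled 1-D CTRW: each particle makes n_steps jumps of length ell
    in a random direction, waiting a Pareto-distributed time (>= 1, tail
    exponent beta < 1) between jumps, the classic recipe for non-Fickian
    transport."""
    x = np.zeros(n_particles)
    t = np.zeros(n_particles)
    for _ in range(n_steps):
        x += ell * rng.choice([-1.0, 1.0], size=n_particles)
        t += 1.0 + rng.pareto(beta, size=n_particles)   # waiting times >= 1
    return x, t

x, t = ctrw()
# Heavy-tailed waiting times make the particle clock t wildly dispersed,
# which is what produces heavy-tailed breakthrough curves at a fixed distance.
```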

  7. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    NASA Astrophysics Data System (ADS)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploiting the geometrical properties of the multivariate Gaussian distribution function.
In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit-radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References: [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. 
In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
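
    The difference between SR and LH sampling is easiest to see in one dimension: LH places exactly one draw in each of N equal-probability strata of the target distribution. A minimal sketch for a lognormal conductivity (the unit-lognormal parameters are an arbitrary choice for illustration):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
nd = NormalDist(mu=0.0, sigma=1.0)   # assumed log-conductivity model

def lh_sample(n):
    """Latin hypercube sample: one uniform draw per stratum [i/n, (i+1)/n),
    pushed through the inverse CDF of the target lognormal."""
    u = (np.arange(n) + rng.random(n)) / n
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    vals = np.exp([nd.inv_cdf(p) for p in u])
    rng.shuffle(vals)
    return vals

def sr_sample(n):
    """Plain simple-random (Monte Carlo) sample, for comparison."""
    return np.exp(rng.standard_normal(n))

k_lh = lh_sample(50)
k_sr = sr_sample(50)
```

    Because every stratum is hit exactly once, an LH sample spans the conductivity range by construction, which is the "representative" property the abstract refers to; SR draws can by chance cluster and leave tails unsampled.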

  8. Polarized vortices in optical speckle field: observation of rare polarization singularities.

    PubMed

    Dupont, Jan; Orlik, Xavier

    2015-03-09

    Using a recent method able to characterize the polarimetry of a random field with high polarimetric and spatial accuracy even near places of destructive interference, we study polarized optical vortices at a scale below the transverse correlation width of a speckle field. We perform high-accuracy polarimetric measurements of known singularities described by a half-integer topological index, and we study rare integer-index singularities which have, to our knowledge, never been observed in a speckle field.

  9. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from the numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reduction in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller-scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.

  10. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction, and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.

  11. Spatial analysis of the distribution of Spodoptera frugiperda (J.E. Smith) (Lepidoptera: Noctuidae) and losses in maize crop productivity using geostatistics.

    PubMed

    Farias, Paulo R S; Barbosa, José C; Busoli, Antonio C; Overal, William L; Miranda, Vicente S; Ribeiro, Susane M

    2008-01-01

    The fall armyworm, Spodoptera frugiperda (J.E. Smith), is one of the chief pests of maize in the Americas. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling methods, determining actual and potential crop losses, and adopting precision agriculture techniques. In São Paulo state, Brazil, a maize field was sampled at weekly intervals, from germination through harvest, for caterpillar densities, using quadrats. In each of 200 quadrats, 10 plants were sampled per week. Harvest weights were obtained in the field for each quadrat, and ear diameters and lengths were also sampled (15 ears per quadrat) and used to estimate the potential productivity of the quadrat. Geostatistical analyses of caterpillar densities showed the greatest ranges for small caterpillars when semivariograms were adjusted to a spherical model, which showed the best fit. As the caterpillars developed in the field, their spatial distribution became increasingly random, as shown by a model adjusted to a straight line, indicating a lack of spatial dependence among samples. Harvest weight and ear length followed the spherical model, indicating the existence of spatial variability of the production parameters in the maize field. Geostatistics shows promise for the application of precision methods in the integrated control of pests.

  12. Space-Time Data Fusion

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

    Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, it is used to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.

  13. Influence of Embedded Inhomogeneities on the Spectral Ratio of the Horizontal Components of a Random Field of Rayleigh Waves

    NASA Astrophysics Data System (ADS)

    Tsukanov, A. A.; Gorbatnikov, A. V.

    2018-01-01

    Study of the statistical parameters of the Earth's random microseismic field makes it possible to obtain estimates of the properties and structure of the Earth's crust and upper mantle. Different approaches are used to observe and process the microseismic records, which are divided into several groups of passive seismology methods. Among them are the well-known methods of surface-wave tomography, the spectral H/V ratio of the components in the surface wave, and microseismic sounding, currently under development, which uses the spectral ratio V/V0 of the vertical components between pairs of spatially separated stations. In the course of previous experiments, it became clear that these ratios are stable statistical parameters of the random field that do not depend on the properties of microseism sources. This paper proposes to extend this approach and study the possibilities for using the ratio H1/H2 of the horizontal components of the microseismic field. Numerical simulation was used to study the influence of an embedded velocity inhomogeneity on the spectral ratio of the horizontal components of the random field of fundamental Rayleigh modes, based on the concept that the Earth's microseismic field is represented by these waves in a significant part of the frequency spectrum.

  14. Monte Carlo modeling of spatial coherence: free-space diffraction

    PubMed Central

    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

    2008-01-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335

  15. Gene expression based mouse brain parcellation using Markov random field regularized non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Haynor, David R.; Thompson, Carol L.; Lein, Ed; Hawrylycz, Michael

    2009-02-01

    Understanding the geography of genetic expression in the mouse brain has opened previously unexplored avenues in neuroinformatics. The Allen Brain Atlas (www.brain-map.org) (ABA) provides genome-wide colorimetric in situ hybridization (ISH) gene expression images at high spatial resolution, all mapped to a common three-dimensional 200 μm³ spatial framework defined by the Allen Reference Atlas (ARA), and is a unique data set for studying expression-based structural and functional organization of the brain. The goal of this study was to facilitate an unbiased data-driven structural partitioning of the major structures in the mouse brain. We have developed an algorithm that uses non-negative matrix factorization (NMF) to perform parts-based analysis of ISH gene expression images. The standard NMF approach and its variants are limited in their ability to flexibly integrate prior knowledge in the context of spatial data. In this paper, we introduce spatial connectivity as an additional regularization in the NMF decomposition via the use of Markov Random Fields (mNMF). The mNMF algorithm alternates neighborhood updates with iterations of the standard NMF algorithm to exploit spatial correlations in the data. We present the algorithm and show the subdivisions of the hippocampus and somatosensory cortex obtained via this approach. The results are compared with established neuroanatomic knowledge. We also highlight novel gene-expression-based subdivisions of the hippocampus identified by the mNMF algorithm.
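
    The alternation the abstract describes, standard NMF iterations interleaved with a neighborhood update, can be sketched with Lee-Seung multiplicative updates plus a smoothing step on the spatial factor. This is our own reading of the scheme, not the authors' implementation; the smoothing weight and neighbor operator are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def nmf_mrf(V, k, n_iter=200, smooth=0.1, neighbors=None):
    """NMF via Lee-Seung multiplicative updates; if a (row-stochastic)
    neighbor-averaging matrix is given, each iteration also pulls the
    spatial factor W toward its neighborhood mean (MRF-style smoothing)."""
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
        if neighbors is not None:
            W = (1.0 - smooth) * W + smooth * (neighbors @ W)
    return W, H

V = rng.random((20, 3)) @ rng.random((3, 30))   # exact rank-3 nonnegative data
W, H = nmf_mrf(V, k=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The multiplicative updates keep both factors nonnegative automatically, which is what makes the resulting decomposition parts-based.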

  16. Mapping Health Data: Improved Privacy Protection With Donut Method Geomasking

    PubMed Central

    Hampton, Kristen H.; Fitch, Molly K.; Allshouse, William B.; Doherty, Irene A.; Gesink, Dionne C.; Leone, Peter A.; Serre, Marc L.; Miller, William C.

    2010-01-01

    A major challenge in mapping health data is protecting patient privacy while maintaining the spatial resolution necessary for spatial surveillance and outbreak identification. A new adaptive geomasking technique, referred to as the donut method, extends current methods of random displacement by ensuring a user-defined minimum level of geoprivacy. In donut method geomasking, each geocoded address is relocated in a random direction by at least a minimum distance, but less than a maximum distance. The authors compared the donut method with current methods of random perturbation and aggregation regarding measures of privacy protection and cluster detection performance by masking multiple disease field simulations under a range of parameters. Both the donut method and random perturbation performed better than aggregation in cluster detection measures. The performance of the donut method in geoprivacy measures was at least 42.7% higher and in cluster detection measures was less than 4.8% lower than that of random perturbation. Results show that the donut method provides a consistently higher level of privacy protection with a minimal decrease in cluster detection performance, especially in areas where the risk to individual geoprivacy is greatest. PMID:20817785
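
    The donut displacement itself is a few lines of geometry: pick a random direction, then a displacement distance between the minimum and maximum radii. A minimal planar sketch (projected coordinates assumed; drawing the radius area-uniformly over the annulus is one common choice, not necessarily the authors'):

```python
import numpy as np

rng = np.random.default_rng(7)

def donut_mask(xy, r_min, r_max):
    """Displace each point in a uniformly random direction by a distance in
    [r_min, r_max), guaranteeing the user-defined minimum geoprivacy."""
    xy = np.asarray(xy, dtype=float)
    n = len(xy)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.sqrt(rng.uniform(r_min**2, r_max**2, n))   # area-uniform radius
    return xy + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

pts = rng.uniform(0.0, 1000.0, size=(100, 2))
masked = donut_mask(pts, r_min=50.0, r_max=200.0)
d = np.linalg.norm(masked - pts, axis=1)
# every address moves at least r_min but less than r_max
```

    Plain random perturbation is the special case r_min = 0, which is exactly why it cannot guarantee a minimum level of geoprivacy.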

  17. Mapping health data: improved privacy protection with donut method geomasking.

    PubMed

    Hampton, Kristen H; Fitch, Molly K; Allshouse, William B; Doherty, Irene A; Gesink, Dionne C; Leone, Peter A; Serre, Marc L; Miller, William C

    2010-11-01

    A major challenge in mapping health data is protecting patient privacy while maintaining the spatial resolution necessary for spatial surveillance and outbreak identification. A new adaptive geomasking technique, referred to as the donut method, extends current methods of random displacement by ensuring a user-defined minimum level of geoprivacy. In donut method geomasking, each geocoded address is relocated in a random direction by at least a minimum distance, but less than a maximum distance. The authors compared the donut method with current methods of random perturbation and aggregation regarding measures of privacy protection and cluster detection performance by masking multiple disease field simulations under a range of parameters. Both the donut method and random perturbation performed better than aggregation in cluster detection measures. The performance of the donut method in geoprivacy measures was at least 42.7% higher and in cluster detection measures was less than 4.8% lower than that of random perturbation. Results show that the donut method provides a consistently higher level of privacy protection with a minimal decrease in cluster detection performance, especially in areas where the risk to individual geoprivacy is greatest.

  18. Approximation of Confidence Limits on Sample Semivariograms From Single Realizations of Spatially Correlated Random Fields

    NASA Astrophysics Data System (ADS)

    Shafer, J. M.; Varljen, M. D.

    1990-08-01

    A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique was verified using a Monte Carlo simulation approach which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample variograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi² (~30 km²) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation. 
This capability is quite important as it can indicate when a spatially optimized monitoring network would need to be reevaluated and thus lead to more robust monitoring strategies.
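
    The delete-one jackknife on a semivariogram estimate can be sketched directly from its definition via pseudo-values. The 1-D transect, lag, and white-noise data below are illustrative only; the paper's application treats spatially correlated data with more care:

```python
import numpy as np

rng = np.random.default_rng(11)

def semivariogram(z, lag):
    """Classical sample semivariogram at an integer lag on a 1-D transect."""
    d = z[lag:] - z[:-lag]
    return 0.5 * np.mean(d**2)

def jackknife_ci(z, lag, zcrit=1.96):
    """Delete-one jackknife confidence limits on the sample semivariogram
    computed from a single realization."""
    n = len(z)
    full = semivariogram(z, lag)
    loo = np.array([semivariogram(np.delete(z, i), lag) for i in range(n)])
    pseudo = n * full - (n - 1) * loo          # jackknife pseudo-values
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return est - zcrit * se, est, est + zcrit * se

z = rng.standard_normal(200)   # white noise: true semivariogram is 1 at all lags
lo, est, hi = jackknife_ci(z, lag=3)
```

    Comparing such limits across time periods is what lets one judge whether an apparent change in spatial correlation is significant.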

  19. Stochastical analysis of surfactant-enhanced remediation of denser-than-water nonaqueous phase liquid (DNAPL)-contaminated soils.

    PubMed

    Zhang, Renduo; Wood, A Lynn; Enfield, Carl G; Jeong, Seung-Woo

    2003-01-01

    Stochastical analysis was performed to assess the effect of soil spatial variability and heterogeneity on the recovery of denser-than-water nonaqueous phase liquids (DNAPL) during the process of surfactant-enhanced remediation. UTCHEM, a three-dimensional, multicomponent, multiphase, compositional model, was used to simulate water flow and chemical transport processes in heterogeneous soils. Soil spatial variability and heterogeneity were accounted for by considering the soil permeability as a spatial random variable and a geostatistical method was used to generate random distributions of the permeability. The randomly generated permeability fields were incorporated into UTCHEM to simulate DNAPL transport in heterogeneous media and stochastical analysis was conducted based on the simulated results. From the analysis, an exponential relationship between average DNAPL recovery and soil heterogeneity (defined as the standard deviation of log of permeability) was established with a coefficient of determination (r2) of 0.991, which indicated that DNAPL recovery decreased exponentially with increasing soil heterogeneity. Temporal and spatial distributions of relative saturations in the water phase, DNAPL, and microemulsion in heterogeneous soils were compared with those in homogeneous soils and related to soil heterogeneity. Cleanup time and uncertainty to determine DNAPL distributions in heterogeneous soils were also quantified. The study would provide useful information to design strategies for the characterization and remediation of nonaqueous phase liquid-contaminated soils with spatial variability and heterogeneity.

  20. Critical exponents for diluted resistor networks

    NASA Astrophysics Data System (ADS)

    Stenull, O.; Janssen, H. K.; Oerding, K.

    1999-05-01

    An approach by Stephen [Phys. Rev. B 17, 4444 (1978)] is used to investigate the critical properties of randomly diluted resistor networks near the percolation threshold by means of renormalized field theory. We reformulate an existing field theory by Harris and Lubensky [Phys. Rev. B 35, 6964 (1987)]. By a decomposition of the principal Feynman diagrams, we obtain diagrams which again can be interpreted as resistor networks. This interpretation provides an alternative way of evaluating the Feynman diagrams for random resistor networks. We calculate the resistance crossover exponent φ up to second order in ε = 6 - d, where d is the spatial dimension. Our result φ = 1 + ε/42 + 4ε²/3087 verifies a previous calculation by Lubensky and Wang, which itself was based on the Potts-model formulation of the random resistor network.

  1. SAR Image Change Detection Based on Fuzzy Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Huang, G.; Zhao, Z.

    2018-04-01

    Most existing SAR image change detection algorithms consider only single-pixel information from the different images and do not consider the spatial dependencies of image pixels, so the change detection results are susceptible to image noise and the detection effect is not ideal. A Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at their junction, and it is difficult to clearly distinguish the labels of pixels near the boundaries of the judgment area. In the traditional MRF method, each pixel is given a hard label during each iteration; MRF is thus a hard-decision process, which causes a loss of information. This paper applies a combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.

  2. Local Diversity and Fine-Scale Organization of Receptive Fields in Mouse Visual Cortex

    PubMed Central

    Histed, Mark H.; Yurgenson, Sergey

    2011-01-01

    Many thousands of cortical neurons are activated by any single sensory stimulus, but the organization of these populations is poorly understood. For example, are neurons in mouse visual cortex—whose preferred orientations are arranged randomly—organized with respect to other response properties? Using high-speed in vivo two-photon calcium imaging, we characterized the receptive fields of up to 100 excitatory and inhibitory neurons in a 200 μm imaged plane. Inhibitory neurons had nonlinearly summating, complex-like receptive fields and were weakly tuned for orientation. Excitatory neurons had linear, simple receptive fields that can be studied with noise stimuli and system identification methods. We developed a wavelet stimulus that evoked rich population responses and yielded the detailed spatial receptive fields of most excitatory neurons in a plane. Receptive fields and visual responses were locally highly diverse, with nearby neurons having largely dissimilar receptive fields and response time courses. Receptive-field diversity was consistent with a nearly random sampling of orientation, spatial phase, and retinotopic position. Retinotopic positions varied locally on average by approximately half the receptive-field size. Nonetheless, the retinotopic progression across the cortex could be demonstrated at the scale of 100 μm, with a magnification of ∼10 μm/°. Receptive-field and response similarity were in register, decreasing by 50% over a distance of 200 μm. Together, the results indicate considerable randomness in local populations of mouse visual cortical neurons, with retinotopy as the principal source of organization at the scale of hundreds of micrometers. PMID:22171051

  3. Hierarchical Bayesian spatial models for alcohol availability, drug "hot spots" and violent crime.

    PubMed

    Zhu, Li; Gorman, Dennis M; Horel, Scott

    2006-12-07

    Ecologic studies have shown a relationship between alcohol outlet densities, illicit drug use and violence. The present study examined this relationship in the City of Houston, Texas, using a sample of 439 census tracts. Neighbourhood sociostructural covariates, alcohol outlet density, drug crime density and violent crime data were collected for the year 2000, and analyzed using hierarchical Bayesian models. Model selection was accomplished by applying the Deviance Information Criterion. The counts of violent crime in each census tract were modelled as having a conditional Poisson distribution. Four neighbourhood explanatory variables were identified using principal component analysis. The best-fitting model was the one including both unstructured and spatially dependent random effects. The results showed that drug-law violation explained a greater amount of variance in violent crime rates than alcohol outlet densities. The relative risk for drug-law violation was 2.49 and that for alcohol outlet density was 1.16. Of the neighbourhood sociostructural covariates, the proportion of males aged 15 to 24 showed an effect on violence, with a 16% decrease in relative risk for each increase of one standard deviation. Both the unstructured heterogeneity random effect and the spatial dependence need to be included in the model. The analysis presented suggests that activity around illicit drug markets is more strongly associated with violent crime than is alcohol outlet density. Unique among the ecological studies in this field, the present study not only shows the direction and magnitude of the impact of neighbourhood sociostructural covariates, as well as of alcohol and illicit drug activities in a neighbourhood; it also reveals the importance of applying hierarchical Bayesian models in this research field, as both spatial dependence and heterogeneity random effects need to be considered simultaneously.

  4. Direct Simulation of Extinction in a Slab of Spherical Particles

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Mishchenko, Michael I.

    2013-01-01

    The exact multiple sphere superposition method is used to calculate the coherent and incoherent contributions to the ensemble-averaged electric field amplitude and Poynting vector in systems of randomly positioned nonabsorbing spherical particles. The target systems consist of cylindrical volumes, with radius several times larger than length, containing spheres with positional configurations generated by a Monte Carlo sampling method. Spatially dependent values for coherent electric field amplitude, coherent energy flux, and diffuse energy flux, are calculated by averaging of exact local field and flux values over multiple configurations and over spatially independent directions for fixed target geometry, sphere properties, and sphere volume fraction. Our results reveal exponential attenuation of the coherent field and the coherent energy flux inside the particulate layer and thereby further corroborate the general methodology of the microphysical radiative transfer theory. An effective medium model based on plane wave transmission and reflection by a plane layer is used to model the dependence of the coherent electric field on particle packing density. The effective attenuation coefficient of the random medium, computed from the direct simulations, is found to agree closely with effective medium theories and with measurements. In addition, the simulation results reveal the presence of a counter-propagating component to the coherent field, which arises due to the internal reflection of the main coherent field component by the target boundary. The characteristics of the diffuse flux are compared to, and found to be consistent with, a model based on the diffusion approximation of the radiative transfer theory.
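The exponential attenuation of the coherent field reported above amounts to a Beer-Lambert-type decay of the coherent intensity with depth into the layer. A minimal sketch (entirely synthetic data; `alpha_true` and the noise level are illustrative) of recovering an effective attenuation coefficient from simulated coherent intensities by a straight-line fit to the log-intensity:

```python
import numpy as np

rng = np.random.default_rng(6)

# coherent intensity decaying exponentially with depth z into the slab,
# I(z) = exp(-alpha * z), with a little multiplicative measurement noise
alpha_true = 0.8
z = np.linspace(0.0, 5.0, 40)
intensity = np.exp(-alpha_true * z) * (1.0 + 0.02 * rng.standard_normal(z.size))

# the attenuation coefficient is the negative slope of log I(z)
alpha_fit = -np.polyfit(z, np.log(intensity), 1)[0]
```

The same fit applied to simulated coherent fluxes is how an effective attenuation coefficient would be compared against effective-medium predictions.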

  5. Impact of Uncertainty on the Porous Media Description in the Subsurface Transport Analysis

    NASA Astrophysics Data System (ADS)

    Darvini, G.; Salandin, P.

    2008-12-01

    In the modelling of flow and transport phenomena in naturally heterogeneous media, the spatial variability of hydraulic properties, typically the hydraulic conductivity, is generally described by a variogram of constant sill and spatial correlation. While some analyses reported in the literature discuss spatial inhomogeneity related to a trend in the mean hydraulic conductivity, the effect on flow and transport of an inexact definition of the spatial statistical properties of the media has, to our knowledge, never been taken into account. The relevance of this topic is manifest: it is related to the uncertainty in the definition of the spatial moments of the hydraulic log-conductivity from a (usually) small number of data, as well as to the modelling of flow and transport processes by the Monte Carlo technique, whose numerical fields have poor ergodic properties and are not strictly statistically homogeneous. In this work we investigate the effects of mean log-conductivity (logK) field behaviours that differ from a constant one due to different sources of inhomogeneity: i) a deterministic trend; ii) a deterministic sinusoidal pattern; iii) a random behaviour deriving from the hierarchical sedimentary architecture of porous formations; and iv) a conditioning procedure on available measurements of the hydraulic conductivity. These mean log-conductivity behaviours are superimposed on a correlated, weakly fluctuating logK field. The time evolution of the spatial moments of the plume driven by a statistically inhomogeneous steady-state random velocity field is analyzed in a 2-D finite domain, taking into account different sizes of the injection area. The problem is approached both by a classical Monte Carlo procedure and by the stochastic finite element method (SFEM). In the latter, the moments are obtained by space-time integration of the velocity field covariance structure derived according to a first-order Taylor series expansion.
Two different goals are foreseen: 1) from the results it will be possible to distinguish, in all the cases considered, the contribution to plume dispersion of the uncertainty in the statistics of the medium's hydraulic properties, and 2) we will try to highlight the loss of performance that seems to affect first-order approaches for transport phenomena that take place in hierarchically structured porous formations.

  6. Statistical Analysis of Small-Scale Magnetic Flux Emergence Patterns: A Useful Subsurface Diagnostic?

    NASA Astrophysics Data System (ADS)

    Lamb, Derek A.

    2016-10-01

    While sunspots follow a well-defined pattern of emergence in space and time, small-scale flux emergence is assumed to occur randomly at all times in the quiet Sun. HMI's full-disk coverage, high cadence, spatial resolution, and duty cycle allow us to probe that basic assumption. Some case studies of emergence suggest that temporal clustering on spatial scales of 50-150 Mm may occur. If clustering is present, it could serve as a diagnostic of large-scale subsurface magnetic field structures. We present the results of a manual survey of small-scale flux emergence events over a short time period, and a statistical analysis addressing the question of whether these events show spatio-temporal behavior that is anything other than random.
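One generic way to address the question posed above (not the authors' analysis; a standard nearest-neighbour Monte Carlo test with synthetic event locations) is to compare the observed spatial pattern of emergence events against simulations of complete spatial randomness:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a set of 2-D points."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def csr_test(pts, n_sim=200, size=1.0):
    """Monte Carlo test against complete spatial randomness (CSR).

    Returns the fraction of CSR simulations whose mean nearest-neighbour
    distance is smaller than the observed one; values near 0 indicate
    clustering, values near 1 indicate regularity.
    """
    obs = mean_nn_distance(pts)
    sims = np.array([mean_nn_distance(rng.uniform(0, size, pts.shape))
                     for _ in range(n_sim)])
    return (sims < obs).mean()

# synthetic clustered events: two tight clumps inside the unit square
clustered = np.vstack([rng.normal(0.25, 0.02, (20, 2)),
                       rng.normal(0.75, 0.02, (20, 2))])
uniform = rng.uniform(0, 1, (40, 2))
p_clustered = csr_test(clustered)
p_uniform = csr_test(uniform)
```

A clustered pattern yields a fraction near zero, while a genuinely random pattern yields an unremarkable value; a spatio-temporal version would simulate in the time dimension as well.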

  7. Simulation study of localization of electromagnetic waves in two-dimensional random dipolar systems.

    PubMed

    Wang, Ken Kang-Hsin; Ye, Zhen

    2003-12-01

    We study the propagation and scattering of electromagnetic waves by random arrays of dipolar cylinders in a uniform medium. A set of self-consistent equations, incorporating all orders of multiple scattering of the electromagnetic waves, is derived from first principles and then solved numerically for electromagnetic fields. For certain ranges of frequencies, spatially localized electromagnetic waves appear in such a simple but realistic disordered system. Dependence of localization on the frequency, radiation damping, and filling factor is shown. The spatial behavior of the total, coherent, and diffusive waves is explored in detail, and found to comply with a physical intuitive picture. A phase diagram characterizing localization is presented, in agreement with previous investigations on other systems.

  8. Modelling of Space-Time Soil Moisture in Savannas and its Relation to Vegetation Patterns

    NASA Astrophysics Data System (ADS)

    Rodriguez-Iturbe, I.; Mohanty, B.; Chen, Z.

    2017-12-01

    A physically derived space-time representation of the soil moisture field is presented. It includes the incorporation of a "jitter" process acting over the space-time soil moisture field and accounting for the short-distance heterogeneities in topography, soil, and vegetation characteristics. The modelling scheme allows for the representation of spatial random fluctuations of soil moisture at small spatial scales and reproduces quite well the space-time correlation structure of soil moisture from a field study in Oklahoma. It is shown that the islands of soil moisture above different thresholds have sizes which follow power-law distributions over an extended range of scales. A discussion is provided about the possible links of this feature with the observed power-law distributions of the clusters of trees in savannas.

  9. Experimental Study of the Effect of the Initial Spectrum Width on the Statistics of Random Wave Groups

    NASA Astrophysics Data System (ADS)

    Shemer, L.; Sergeeva, A.

    2009-12-01

    The statistics of a random water wave field determine the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral characteristics of the wave field. Laboratory investigation of the spatial variation of random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, the biggest facility of its kind in Europe. Numerous realizations of a wave field with a prescribed frequency power spectrum, yet randomly distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling over all realizations was long enough to yield a sufficient sample size for reliable statistics. Throughout all experiments, an effort was made to retain the characteristic wave height and thus the degree of nonlinearity of the wave field. The spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape.
The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is highest at a distance of about 100 m. Acknowledgement: This study is carried out in the framework of the EC-supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste" (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for the rectangular initial spectral shape; carrier wave period T0 = 1.5 s.
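The Rayleigh benchmark mentioned above, P(H > h) = exp(-2 (h/Hs)^2) with Hs the significant wave height, can be sketched as follows (illustrative values; the draws come from the Rayleigh model itself, so the empirical exceedance of the conventional freak-wave threshold H > 2Hs should match the formula):

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_exceedance(h, hs):
    """P(H > h) for the narrow-band (Rayleigh) wave-height distribution,
    with hs the significant wave height."""
    return np.exp(-2.0 * (h / hs) ** 2)

hs = 1.0
# inverse-CDF sampling: H = Hs * sqrt(-ln(U) / 2) reproduces the Rayleigh law
heights = hs * np.sqrt(-np.log(rng.uniform(size=200_000)) / 2.0)

p_freak_model = rayleigh_exceedance(2.0 * hs, hs)      # exp(-8), about 3.4e-4
p_freak_empirical = (heights > 2.0 * hs).mean()
```

In the experiments above it is precisely this Rayleigh prediction that the measured exceedance probabilities surpass as the wave field evolves along the tank.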

  10. Application of spatial Poisson process models to air mass thunderstorm rainfall

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.

    1987-01-01

    Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models is evaluated by comparing their prediction of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.
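A minimal sketch of estimating a spatial correlation function from station totals (synthetic station coordinates and depths with an assumed exponential covariance; not the study's actual estimator): pairs of stations are grouped into distance bins and the correlation of the standardised values is estimated per bin.

```python
import numpy as np

rng = np.random.default_rng(2)

def spatial_correlation(coords, values, bins):
    """Empirical spatial correlation function from point observations."""
    z = (values - values.mean()) / values.std()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)       # each station pair once
    d, prod = d[iu], (z[:, None] * z[None, :])[iu]
    return np.array([prod[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

# synthetic 'storm depth' field with exponential correlation, range 3
n = 400
coords = rng.uniform(0, 10, (n, 2))
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-dists / 3.0)
values = np.linalg.cholesky(cov + 1e-8 * np.eye(n)) @ rng.standard_normal(n)

corr = spatial_correlation(coords, values, bins=np.array([0, 1, 2, 4, 8]))
```

The estimated correlation decays with distance, mirroring the true exponential model used to generate the field.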

  11. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
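The first-order grid-point neighbourhood structure mentioned above can be sketched as a sparse precision matrix (one common, here merely illustrative, parameterisation: the graph Laplacian of the grid plus a small diagonal term `kappa` to make it proper):

```python
import numpy as np

def gmrf_precision(nx, ny, kappa=0.1):
    """Precision matrix of a Gaussian Markov random field on an
    nx-by-ny grid with a first-order (4-neighbour) structure."""
    n = nx * ny
    Q = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    Q[k, k] += 1.0               # node degree on the diagonal
                    Q[k, ii * ny + jj] = -1.0    # -1 for each grid neighbour
            Q[k, k] += kappa                     # makes Q positive definite
    return Q

Q = gmrf_precision(4, 5)
# the precision is sparse, symmetric and positive definite; its inverse
# gives dense spatial covariances between all grid points
cov = np.linalg.inv(Q)
```

Conditional independence of non-neighbouring grid points is encoded directly by the zeros of Q, which is what makes GMRF-based statistics computationally attractive.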

  13. Scale-invariant puddles in graphene: Geometric properties of electron-hole distribution at the Dirac point.

    PubMed

    Najafi, M N; Nezhadhaghighi, M Ghasemi

    2017-03-01

    We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interaction and random charged impurities at zero gate voltage. We provide a detailed analysis of the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian with unusual Kondev relations, which can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner evolution (SLE), we numerically demonstrate that the ungated graphene has conformal invariance and the random zero-charge density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.

  14. Collision Models for Particle Orbit Code on SSX

    NASA Astrophysics Data System (ADS)

    Fisher, M. W.; Dandurand, D.; Gray, T.; Brown, M. R.; Lukin, V. S.

    2011-10-01

    Coulomb collision models are being developed and incorporated into the Hamiltonian particle pushing code (PPC) for applications to the Swarthmore Spheromak eXperiment (SSX). A Monte Carlo model based on that of Takizuka and Abe [JCP 25, 205 (1977)] performs binary collisions between test particles and thermal plasma field particles randomly drawn from a stationary Maxwellian distribution. A field-based electrostatic fluctuation model scatters particles from a spatially uniform random distribution of positive and negative spherical potentials generated throughout the plasma volume. The number, radii, and amplitude of these potentials are chosen to mimic the correct particle diffusion statistics without the use of random particle draws or collision frequencies. An electromagnetic fluctuating field model will be presented, if available. These numerical collision models will be benchmarked against known analytical solutions, including beam diffusion rates and Spitzer resistivity, as well as each other. The resulting collisional particle orbit models will be used to simulate particle collection with electrostatic probes in the SSX wind tunnel, as well as particle confinement in typical SSX fields. This work has been supported by US DOE, NSF and ONR.

  15. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    NASA Astrophysics Data System (ADS)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of the uncertainties in the spatial variability of rock mass properties in different areas of the pit.
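A minimal 1-D sketch of the sequential Gaussian simulation idea (unconditional and entirely illustrative: an exponential covariance with assumed sill and range; a full implementation would condition on core-logging data and use a search neighbourhood): nodes are visited in random order and each value is drawn from its simple-kriging conditional distribution given the already simulated nodes.

```python
import numpy as np

rng = np.random.default_rng(3)

def sgs_1d(x, cov, rng):
    """Unconditional sequential Gaussian simulation on 1-D locations x."""
    n = len(x)
    z = np.zeros(n)
    done = []                                   # nodes simulated so far
    for k in rng.permutation(n):
        if done:
            # simple-kriging weights from the covariance model
            C = cov(np.abs(np.subtract.outer(x[done], x[done])))
            c = cov(np.abs(x[done] - x[k]))
            w = np.linalg.solve(C, c)
            mean = w @ z[done]
            var = cov(0.0) - w @ c
        else:
            mean, var = 0.0, cov(0.0)
        z[k] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()
        done.append(k)
    return z

# exponential covariance with unit sill and range 10 (illustrative)
cov = lambda h: np.exp(-np.abs(h) / 10.0)
x = np.arange(50.0)
z = sgs_1d(x, cov, rng)
```

Each realization honours the covariance model, and the spread across many realizations is what quantifies spatial uncertainty in rock mass quality.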

  16. Random field theory to interpret the spatial variability of lacustrine soils

    NASA Astrophysics Data System (ADS)

    Russo, Savino; Vessia, Giovanna

    2015-04-01

    Lacustrine soils are Quaternary soils, dated from the Pleistocene to the Holocene, generated in low-energy depositional environments and characterized by soil mixtures of clays, sands and silts with alternations of finer and coarser grain-size layers. They are often met at shallow depth, filling several tens of meters of tectonic or erosive basins typically located in internal Apennine areas. The lacustrine deposits are often locally interbedded with detritic soils resulting from the failure of surrounding reliefs. Their heterogeneous lithology is associated with high spatial variability of physical and mechanical properties along both horizontal and vertical directions. The deterministic approach is still commonly adopted to accomplish the mechanical characterization of these heterogeneous soils, where undisturbed sampling is practically not feasible (if the incoherent fraction is prevalent) or not spatially representative (if the cohesive fraction prevails). The deterministic approach consists of performing in situ tests, like Standard Penetration Tests (SPT) or Cone Penetration Tests (CPT), and deriving design parameters through "expert judgment" interpretation of the measured profiles. The readings of tip and lateral resistance (Rp and RL, respectively) are almost continuous but imply highly variable soil classifications according to Schmertmann (1978). Thus, neglecting the spatial variability cannot be the best strategy to estimate spatially representative values of the physical and mechanical parameters of lacustrine soils to be used for engineering applications. Hereafter, a method to draw the spatial variability structure of the aforementioned measured profiles is presented. It is based on the theory of Random Fields (Vanmarcke 1984) applied to vertical readings of Rp measures from mechanical CPTs. The proposed method relies on the application of regression analysis, by which the spatial mean trend and the fluctuations about this trend are derived.
Moreover, the scale of fluctuation is calculated to measure the maximum length beyond which profiles of measures are independent. The spatial mean trend can be used to identify "quasi-homogeneous" soil layers, where the standard deviation and the scale of fluctuation can be calculated. In this study, five Rp profiles performed in the lacustrine deposits of the upper Pescara River valley have been analyzed. There, silty clay deposits with thickness ranging from a few meters to about 60 m, locally rich in sands and peats, are investigated. Vertical trends of the Rp profiles have been derived, to be converted into design-parameter mean trends. Furthermore, the variability structure derived from the Rp readings can be propagated to the design parameters to calculate the "characteristic values" requested by the European building codes. References: Schmertmann J.H. 1978. Guidelines for Cone Penetration Test, Performance and Design. Report No. FHWA-TS-78-209, U.S. Department of Transportation, Washington, D.C., pp. 145. Vanmarcke E.H. 1984. Random Fields: Analysis and Synthesis. Cambridge (USA): MIT Press.
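The variance-function route to the scale of fluctuation (Vanmarcke 1984) can be sketched on a synthetic detrended profile (all values illustrative; for an exponential correlation with length b, the scale of fluctuation is theta = 2b): the variance of the moving average over a window of length T decays roughly as theta/T, so theta is read off as T times the variance-reduction factor gamma(T).

```python
import numpy as np

rng = np.random.default_rng(4)

def scale_of_fluctuation(z, dz, windows):
    """Estimate theta = T * gamma(T) from a detrended profile sampled at
    spacing dz, where gamma(T) is the variance reduction of the moving
    average over a window of length T."""
    z = z - z.mean()
    point_var = z.var()
    theta = []
    for T in windows:
        w = int(round(T / dz))
        means = np.convolve(z, np.ones(w) / w, mode="valid")
        theta.append(T * means.var() / point_var)
    return np.array(theta)

# synthetic detrended cone-resistance profile, exponential correlation
# with length 0.5 m, sampled every 2 cm over 100 m; true theta = 1.0 m
dz, n = 0.02, 5000
rho = np.exp(-dz / 0.5)
z = np.empty(n)
z[0] = rng.standard_normal()
for i in range(1, n):          # AR(1) reproduces exponential correlation
    z[i] = rho * z[i - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()

theta_est = scale_of_fluctuation(z, dz, windows=[4.0, 8.0])
```

For windows much longer than theta the estimate stabilises near the true value; in practice it would be computed within each "quasi-homogeneous" layer after removing the mean trend.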

  17. Towards the Development of a More Accurate Monitoring Procedure for Invertebrate Populations, in the Presence of an Unknown Spatial Pattern of Population Distribution in the Field

    PubMed Central

    Petrovskaya, Natalia B.; Forbes, Emily; Petrovskii, Sergei V.; Walters, Keith F. A.

    2018-01-01

    Studies addressing many ecological problems require accurate evaluation of the total population size. In this paper, we revisit a sampling procedure used for the evaluation of the abundance of an invertebrate population from assessment data collected on a spatial grid of sampling locations. We first discuss how insufficient information about the spatial population density obtained on a coarse sampling grid may affect the accuracy of an evaluation of total population size. Such information deficit in field data can arise because of inadequate spatial resolution of the population distribution (spatially variable population density) when coarse grids are used, which is especially true when a strongly heterogeneous spatial population density is sampled. We then argue that the average trap count (the quantity routinely used to quantify abundance), if obtained from a sampling grid that is too coarse, is a random variable because of the uncertainty in sampling spatial data. Finally, we show that a probabilistic approach similar to bootstrapping techniques can be an efficient tool to quantify the uncertainty in the evaluation procedure in the presence of a spatial pattern reflecting a patchy distribution of invertebrates within the sampling grid. PMID:29495513
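The bootstrap-style quantification of evaluation uncertainty can be sketched as follows (hypothetical trap counts and unit number; not the paper's exact procedure): resampling the traps with replacement shows how strongly a patchy spatial pattern inflates the uncertainty of the estimated total.

```python
import numpy as np

rng = np.random.default_rng(5)

def bootstrap_total(counts, n_units, n_boot=2000, rng=rng):
    """Bootstrap uncertainty of a population total estimated from trap counts.

    The point estimate is (mean count per trap) * (number of sampling units);
    resampling the traps quantifies the uncertainty due to the spatial
    pattern behind the counts."""
    n = len(counts)
    totals = np.array([counts[rng.integers(0, n, n)].mean() * n_units
                       for _ in range(n_boot)])
    return totals.mean(), np.percentile(totals, [2.5, 97.5])

# hypothetical patchy counts: most traps empty, a few in a dense patch
counts = np.array([0, 0, 0, 0, 1, 0, 0, 2, 14, 18, 0, 0, 1, 0, 0, 0])
est, (lo, hi) = bootstrap_total(counts, n_units=100)
```

The wide percentile interval for such a patchy sample is exactly the uncertainty that treating the average trap count as a fixed number would hide.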

  18. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  19. Resonance oscillations of nonreciprocal long-range van der Waals forces between atoms in electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Sherkunov, Yury

    2018-03-01

    We study theoretically the van der Waals interaction between two atoms out of equilibrium with an isotropic electromagnetic field. We demonstrate that at large interatomic separations, the van der Waals forces are resonant, spatially oscillating, and nonreciprocal due to resonance absorption and emission of virtual photons. We suggest that the van der Waals forces can be controlled and manipulated by tuning the spectrum of artificially created random light.

  20. Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.

  1. Joint simulation of stationary grade and non-stationary rock type for quantifying geological uncertainty in a copper deposit

    NASA Astrophysics Data System (ADS)

    Maleki, Mohammad; Emery, Xavier

    2017-12-01

    In mineral resources evaluation, the joint simulation of a quantitative variable, such as a metal grade, and a categorical variable, such as a rock type, is challenging when one wants to reproduce spatial trends of the rock type domains, a feature that makes a stationarity assumption questionable. To address this problem, this work presents methodological and practical proposals for jointly simulating a grade and a rock type, when the former is represented by the transform of a stationary Gaussian random field and the latter is obtained by truncating an intrinsic random field of order k with Gaussian generalized increments. The proposals concern both the inference of the model parameters and the construction of realizations conditioned to existing data. The main difficulty is the identification of the spatial correlation structure, for which a semi-automated algorithm is designed, based on a least squares fitting of the data-to-data indicator covariances and grade-indicator cross-covariances. The proposed models and algorithms are applied to jointly simulate the copper grade and the rock type in a Chilean porphyry copper deposit. The results show their ability to reproduce the gradual transitions of the grade when crossing a rock type boundary, as well as the spatial zonation of the rock type.

  2. Bayesian analysis of zero inflated spatiotemporal HIV/TB child mortality data through the INLA and SPDE approaches: Applied to data observed between 1992 and 2010 in rural North East South Africa

    NASA Astrophysics Data System (ADS)

    Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope

    2013-06-01

    Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation and spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF), we transformed the field to a Gaussian Markov random field (GMRF) by triangulation. We then modelled the spatial random effects using Stochastic Partial Differential Equations (SPDEs). Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called Integrated Nested Laplace Approximation (INLA), suited for GMRFs. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality due to greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI)): 0.73(0.53;0.99), 0.18(0.14;0.22) and 0.96(0.94;0.97) respectively. Therefore childhood HIV/TB mortality could be reduced if mothers are better catered for during pregnancy, as this can reduce mother-to-child transmission and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives for modelling large multilevel spatiotemporal GMRF data structures.

  3. Spatial analysis of storm depths from an Arizona raingage network

    NASA Technical Reports Server (NTRS)

    Fennessey, N. M.; Eagleson, P. S.; Qinliang, W.; Rodriguez-Iturbe, I.

    1986-01-01

Eight years of summer rainstorm observations from a dense network of 93 raingages operated by the U.S. Department of Agriculture, Agricultural Research Service, in the 150 km² Walnut Gulch experimental catchment near Tucson, Arizona, are analyzed. Storms are defined by the total depths collected at each raingage during a noon-to-noon period for which depth was recorded at any of the gages. For each of the resulting 428 storm days, the gage depths are interpolated onto a dense grid and the resulting random field is analyzed to obtain moments, isohyetal plots, the spatial correlation function, the variance function, and the spatial distribution of storm depth.
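One of the quantities listed, the spatial correlation function, is straightforward to estimate from a gridded field. A minimal sketch on synthetic data (not the Walnut Gulch observations), estimating isotropic correlation along grid rows:

```python
import numpy as np

def spatial_correlation(field, max_lag):
    """Correlation estimated along grid rows for lags 0..max_lag."""
    z = field - field.mean()
    var = (z**2).mean()
    corr = [1.0]
    for h in range(1, max_lag + 1):
        corr.append((z[:, :-h] * z[:, h:]).mean() / var)
    return np.array(corr)

# Synthetic smooth field: a 9-cell moving average of white noise, so the
# row-wise correlation should decay with lag and vanish beyond the window.
rng = np.random.default_rng(1)
noise = rng.standard_normal((200, 200))
kernel = np.ones(9) / 9
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
rho = spatial_correlation(smooth, 12)
```

For the moving-average field used here the theoretical correlation falls roughly linearly to zero at the averaging-window width, which the estimates reproduce.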

  4. A technique for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone

    NASA Technical Reports Server (NTRS)

    Tolson, R. H.

    1981-01-01

A technique is described for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. The ozone field is represented by a spherical harmonic expansion, and first- and second-order statistics are derived for each term. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate that depends strongly on the spatial sampling and is relatively insensitive to data noise. The statistics are used to estimate systematic and random errors in the estimates of total ozone.

  5. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
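One plausible reading of the weighted kernel construction, shown here as an assumption-laden sketch rather than the authors' exact formulation, is to center the kernel matrix about a weighted mean and solve a symmetrically weighted eigenproblem (the names `weighted_kernel_pca` and `gamma` are illustrative):

```python
import numpy as np

def weighted_kernel_pca(X, weights, gamma=1.0, n_components=2):
    """Kernel PCA of snapshots X with per-snapshot significance weights."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # RBF kernel matrix
    w = weights / weights.sum()
    one = np.ones(n)
    # Center the kernel about the weighted feature-space mean.
    Kc = (K - np.outer(one, w @ K) - np.outer(K @ w, one)
          + (w @ K @ w) * np.outer(one, one))
    # Symmetrically weighted eigenproblem (keeps the matrix PSD).
    M = np.sqrt(w)[:, None] * Kc * np.sqrt(w)[None, :]
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:n_components]
    return vals[idx], vecs[:, idx]
```

With uniform weights this reduces to ordinary centered kernel PCA; nonuniform weights shift the principal directions toward high-significance snapshots.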

  6. Genetic Variability and Distribution of Mating Type Alleles in Field Populations of Leptosphaeria maculans from France

    PubMed Central

    Gout, Lilian; Eckert, Maria; Rouxel, Thierry; Balesdent, Marie-Hélène

    2006-01-01

    Leptosphaeria maculans is the most ubiquitous fungal pathogen of Brassica crops and causes the devastating stem canker disease of oilseed rape worldwide. We used minisatellite markers to determine the genetic structure of L. maculans in four field populations from France. Isolates were collected at three different spatial scales (leaf, 2-m2 field plot, and field) enabling the evaluation of spatial distribution of the mating type alleles and of genetic variability within and among field populations. Within each field population, no gametic disequilibrium between the minisatellite loci was detected and the mating type alleles were present at equal frequencies. Both sexual and asexual reproduction occur in the field, but the genetic structure of these populations is consistent with annual cycles of randomly mating sexual reproduction. All L. maculans field populations had a high level of gene diversity (H = 0.68 to 0.75) and genotypic diversity. Within each field population, the number of genotypes often was very close to the number of isolates. Analysis of molecular variance indicated that >99.5% of the total genetic variability was distributed at a small spatial scale, i.e., within 2-m2 field plots. Population differentiation among the four field populations was low (GST < 0.02), suggesting a high degree of gene exchange between these populations. The high gene flow evidenced here in French populations of L. maculans suggests a rapid countrywide diffusion of novel virulence alleles whenever novel resistance sources are used. PMID:16391041

  7. Using GIS to generate spatially balanced random survey designs for natural resource applications.

    PubMed

    Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B

    2007-07-01

Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demands survey designs that achieve statistical rigor and efficiency yet remain flexible to the inevitable logistical or practical constraints of field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers with a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
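The spatially balanced idea behind quadrant-recursive designs can be sketched generically: order locations by a quadrant-recursive (Morton) address, then take a systematic sample along that ordering with a random start. This is an illustrative simplification, not the Reversed Randomized Quadrant-Recursive Raster algorithm itself, which additionally randomizes the quadrant ordering hierarchically:

```python
import numpy as np

def morton_key(ix, iy, bits=8):
    """Interleave the bits of (ix, iy) -> quadrant-recursive address."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

def balanced_sample(points, n, seed=0):
    """Systematic sample of n points along a Morton-ordered address."""
    rng = np.random.default_rng(seed)
    # Scale coordinates onto a 256 x 256 integer grid.
    xy = ((points - points.min(0)) / np.ptp(points, 0) * 255).astype(int)
    order = np.argsort([morton_key(x, y) for x, y in xy])
    # Systematic selection with a random start spreads the sample.
    start = rng.uniform(0, len(points) / n)
    idx = (start + np.arange(n) * len(points) / n).astype(int)
    return order[idx]
```

Because consecutive positions in the ordering are spatially close, a systematic sample along it spreads the selected sites evenly across the study area.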

  8. Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators

    NASA Astrophysics Data System (ADS)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2015-11-01

A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of the 3D fast Fourier transform (FFT), which does not scale well for RFs larger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step, and is further optimized with loop unrolling and blocking. RAFT can easily generate RFs on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes larger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RFs with the correct statistical behavior and has been tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than traditional methods on regular meshes and has been successfully applied to two real-case scenarios: planetary nebulae and cosmological simulations.
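As a point of comparison for the turning-band idea, a stationary Gaussian RF can be approximated by a sum of random cosines whose frequencies follow the spectral density of the target covariance (here squared-exponential). This is a simplified spectral sketch, not the RAFT algorithm:

```python
import numpy as np

def gaussian_rf_spectral(coords, n_lines=500, length_scale=1.0, seed=0):
    """Approximate a stationary Gaussian RF as a sum of random cosines.

    Frequencies are drawn from the spectral density of a squared-
    exponential covariance; each cosine plays the role of one 'line'.
    coords: (n_points, n_dims) array of arbitrary (mesh-free) locations.
    """
    rng = np.random.default_rng(seed)
    d = coords.shape[1]
    freqs = rng.normal(0.0, 1.0 / length_scale, size=(n_lines, d))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_lines)
    return np.sqrt(2.0 / n_lines) * np.cos(coords @ freqs.T + phases).sum(axis=1)
```

Like the turning band method, this needs no 3D FFT and no regular mesh: the field can be evaluated at any set of coordinates, with unit variance and a covariance that decays with distance.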

  9. THE EFFECTS OF SPATIAL SMOOTHING ON SOLAR MAGNETIC HELICITY PARAMETERS AND THE HEMISPHERIC HELICITY SIGN RULE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ocker, Stella Koch; Petrie, Gordon, E-mail: socker@oberlin.edu, E-mail: gpetrie@nso.edu

The hemispheric preference for negative/positive helicity to occur in the northern/southern solar hemisphere provides clues to the causes of twisted, flaring magnetic fields. Previous studies on the hemisphere rule may have been affected by seeing from atmospheric turbulence. Using Hinode/SOT-SP data spanning 2006–2013, we studied the effects of two spatial smoothing tests that imitate atmospheric seeing: noise reduction by ignoring pixel values weaker than the estimated noise threshold, and Gaussian spatial smoothing. We studied in detail the effects of atmospheric seeing on the helicity distributions across various field strengths for active regions (ARs) NOAA 11158 and NOAA 11243, in addition to studying the average helicities of 179 ARs with and without smoothing. We found that, rather than changing trends in the helicity distributions, spatial smoothing modified existing trends by reducing random noise and by regressing outliers toward the mean, or removing them altogether. Furthermore, the average helicity parameter values of the 179 ARs did not conform to the hemisphere rule: independent of smoothing, the weak-vertical-field values tended to be negative in both hemispheres, and the strong-vertical-field values tended to be positive, especially in the south. We conclude that spatial smoothing does not significantly affect the overall statistics for space-based data, and thus seeing from atmospheric turbulence seems not to have significantly affected previous studies' ground-based results on the hemisphere rule.

  10. A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking

    PubMed Central

    Shafiee, Mohammad Javad; Azimifar, Zohreh; Wong, Alexander

    2015-01-01

In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. PMID:26313943

  11. Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images

    NASA Astrophysics Data System (ADS)

    Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred

    2011-11-01

Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using Markov-random-field-based super-resolution mapping (SRM) in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas, and the method is applied to QuickBird images to produce a 0.6 m tree crown map for a city in the Netherlands. The SRM output shows an identification rate of 66%, with commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines, and SRM at nominal resolution (2.4 m).

  12. Entanglement dynamics in random media

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.; Zarro, C. A. D.

    2017-12-01

    We study how the entanglement dynamics between two-level atoms is impacted by random fluctuations of the light cone. In our model the two-atom system is envisaged as an open system coupled with an electromagnetic field in the vacuum state. We employ the quantum master equation in the Born-Markov approximation in order to describe the completely positive time evolution of the atomic system. We restrict our investigations to the situation in which the atoms are coupled individually to two spatially separated cavities, one of which displays the emergence of light-cone fluctuations. In such a disordered cavity, we assume that the coefficients of the Klein-Gordon equation are random functions of the spatial coordinates. The disordered medium is modeled by a centered, stationary, and Gaussian process. We demonstrate that disorder has the effect of slowing down the entanglement decay. We conjecture that in a strong-disorder environment the mean life of entangled states can be enhanced in such a way as to almost completely suppress quantum nonlocal decoherence.

  13. Channel doping concentration and cell program state dependence on random telegraph noise spatial and statistical distribution in 30 nm NAND flash memory

    NASA Astrophysics Data System (ADS)

    Tomita, Toshihiro; Miyaji, Kousuke

    2015-04-01

    The dependence of spatial and statistical distribution of random telegraph noise (RTN) in a 30 nm NAND flash memory on channel doping concentration NA and cell program state Vth is comprehensively investigated using three-dimensional Monte Carlo device simulation considering random dopant fluctuation (RDF). It is found that single trap RTN amplitude ΔVth is larger at the center of the channel region in the NAND flash memory, which is closer to the jellium (uniform) doping results since NA is relatively low to suppress junction leakage current. In addition, ΔVth peak at the center of the channel decreases in the higher Vth state due to the current concentration at the shallow trench isolation (STI) edges induced by the high vertical electrical field through the fringing capacitance between the channel and control gate. In such cases, ΔVth distribution slope λ cannot be determined by only considering RDF and single trap.

  14. Generation of dense plume fingers in saturated-unsaturated homogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cremer, Clemens J. M.; Graf, Thomas

    2015-02-01

    Flow under variable-density conditions is widespread, occurring in geothermal reservoirs, at waste disposal sites or due to saltwater intrusion. The migration of dense plumes typically results in the formation of vertical plume fingers which are known to be triggered by material heterogeneity or by variations in source concentration that causes the density variation. Using a numerical groundwater model, six perturbation methods are tested under saturated and unsaturated flow conditions to mimic heterogeneity and concentration variations on the pore scale in order to realistically generate dense fingers. A laboratory-scale sand tank experiment is numerically simulated, and the perturbation methods are evaluated by comparing plume fingers obtained from the laboratory experiment with numerically simulated fingers. Dense plume fingering for saturated flow can best be reproduced with a spatially random, time-constant perturbation of the solute source. For unsaturated flow, a spatially and temporally random noise of solute concentration or a random conductivity field adequately simulate plume fingering.

  15. Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.

    PubMed

    Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi

    2016-12-01

Segmentation of infrared (IR) ship images is always a challenging task because of intensity inhomogeneity and noise. Fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has shortcomings, such as not considering spatial information and being sensitive to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvement comprises two parts: 1) adding nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint by a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than existing methods, including the existing FCM methods, for segmentation of IR ship images.
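For reference, baseline FCM, the method the paper improves on, alternates two closed-form updates; the spatial-information variants add neighbourhood terms on top of these steps. A standard textbook sketch:

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # random initial memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))       # closer cluster -> larger weight
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

Hardening the memberships with `U.argmax(axis=1)` gives a conventional segmentation; the paper's improvements regularize `U` using nonlocal and contour-based spatial information before this step.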

  16. Spatial-spectral blood cell classification with microscopic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng

    2017-10-01

Microscopic hyperspectral images provide a new way to examine blood cells, and the hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, microscopic hyperspectral images are acquired by connecting a microscope to a hyperspectral imager and then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is developed from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM) and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and shows more accurate localization of cells.

  17. Characteristic-eddy decomposition of turbulence in a channel

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Moser, Robert D.

    1989-01-01

    Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.

  18. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
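The framework's end use can be illustrated with a toy example: ancillary-derived cell weights (hypothetical here; a dasymetric model would supply them from data such as land cover) define a multinomial distribution of the known total population over cells, and Monte Carlo draws propagate that uncertainty into an output statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

total_pop = 10_000
# Hypothetical dasymetric weights per cell (would come from ancillary data).
weights = np.array([0.5, 0.3, 0.15, 0.05])

def simulate_output(counts):
    # Stand-in response: e.g. demand proportional to population, with noise.
    return (counts * rng.normal(1.0, 0.1, size=counts.shape)).sum()

# Monte Carlo over dasymetric population realizations.
draws = [simulate_output(rng.multinomial(total_pop, weights)) for _ in range(2000)]
mean, sd = np.mean(draws), np.std(draws)
```

The draws give the statistics of the unknown output (here its mean and spread) without ever observing population counts at the cell level.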

  19. Fractional Stochastic Field Theory

    NASA Astrophysics Data System (ADS)

    Honkonen, Juha

    2018-02-01

    Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.

  20. Introducing Perception and Modelling of Spatial Randomness in Classroom

    ERIC Educational Resources Information Center

    De Nóbrega, José Renato

    2017-01-01

    A strategy to facilitate understanding of spatial randomness is described, using student activities developed in sequence: looking at spatial patterns, simulating approximate spatial randomness using a grid of equally-likely squares, using binomial probabilities for approximations and predictions and then comparing with given Poisson…
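The binomial-to-Poisson step in the activity can be checked directly: with n points scattered uniformly over k equal squares, each square's count is Binomial(n, 1/k), which is close to Poisson(n/k):

```python
import numpy as np
from math import comb, exp, factorial

n, k = 100, 25                 # 100 random points, 5x5 grid of squares
lam = n / k                    # expected count per square

# Exact binomial pmf vs. Poisson approximation for counts 0..5.
binom = [comb(n, x) * (1 / k) ** x * (1 - 1 / k) ** (n - x) for x in range(6)]
poisson = [exp(-lam) * lam ** x / factorial(x) for x in range(6)]

# Simulated spatial counts: scatter points, count per square.
pts = np.random.default_rng(0).random((n, 2))
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=5)
```

Comparing the simulated counts against both distributions mirrors the classroom sequence: observe a spatial pattern, model it binomially, then approximate with the Poisson.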

  1. The use of crop rotation for mapping soil organic content in farmland

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Song, Min; Zhu, A.-Xing; Qin, Chengzhi

    2017-04-01

Most current digital soil mapping uses natural environmental covariates. However, human activities have significantly affected the development of soil properties over the past half century and have therefore become an important factor in soil spatial variability. Many studies have used field experiments to show how soil properties are altered by human activities; however, spatial data on human activities have rarely been used as environmental covariates in digital soil mapping. In this paper, we take crop rotation as an example of agricultural activity and explore its effectiveness in characterizing and mapping the spatial variability of soil. The cultivated area of Xuanzhou City and Langxi County in Anhui Province was chosen as the study area. Three main crop rotations, including double rice, wheat-rice, and oilseed rape-cotton, were observed through field investigation in 2010. The spatial distribution of the three crop rotations in the study area was obtained by multi-phase remote sensing image interpretation using a supervised classification method. One-way analysis of variance (ANOVA) of topsoil organic content among the three crop rotation groups was performed. The importance of seven natural environmental covariates, crop rotation, land use and NDVI was assessed using the variable importance criterion of Random Forest. Different combinations of environmental covariates were then selected according to the importance rankings for predicting SOC using Random Forest and the Soil Landscape Inference Model (SOLIM). Cross-validation was used to evaluate the mapping accuracies. The results showed significant differences in topsoil organic content among the three crop rotation groups. Crop rotation is more important than parent material, land use or NDVI according to the importance ranking calculated by Random Forest. In addition, crop rotation improved the mapping accuracy, especially for the flat cultivated area. This study demonstrates the usefulness of human activities as covariates in digital soil mapping and thus indicates the necessity of including human activity factors in digital soil mapping studies.

  2. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

A contextual classifier that can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Gibbs random field. The classification is performed in a recursive manner to allow computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
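Recursive contextual labelling with a Gibbs prior can be illustrated with iterated conditional modes (ICM) under a Potts model on a 4-neighbour grid. This is a generic sketch of the idea, not the paper's classifier:

```python
import numpy as np

def icm(unary, beta=1.0, n_iter=5):
    """Iterated conditional modes with a Potts prior on a 4-neighbour grid.

    unary: (H, W, C) negative log-likelihoods per pixel and class;
    beta:  penalty for disagreeing with a spatial neighbour.
    """
    labels = unary.argmin(axis=-1)          # start from pixelwise decisions
    H, W, C = unary.shape
    classes = np.arange(C)
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (classes != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels
```

Each pixel takes the class minimizing its own evidence cost plus a penalty for disagreeing with its neighbours' current labels, which removes isolated misclassified pixels while preserving genuine class boundaries.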

  3. Spatial heterogeneity study of vegetation coverage at Heihe River Basin

    NASA Astrophysics Data System (ADS)

    Wu, Lijuan; Zhong, Bo; Guo, Liyu; Zhao, Xiangwei

    2014-11-01

Spatial heterogeneity of the animal-landscape system has three major components: heterogeneity of resource distributions in the physical environment, heterogeneity of plant tissue chemistry, and heterogeneity of the animal's movement modes. All three types of heterogeneity interact with each other and can either reinforce or offset one another, thereby affecting system stability and dynamics. In previous studies, study areas were investigated by field sampling, which requires a large amount of manpower. In addition, uncertainty in sampling affects the quality of field data, which leads to unsatisfactory results for the entire study. In this study, remote sensing data are used to guide the sampling for research on heterogeneity of vegetation coverage, to avoid errors caused by the randomness of field sampling. Semi-variance and fractal dimension analysis are used to analyze the spatial heterogeneity of vegetation coverage in the Heihe River Basin. The spherical model with nugget is used to fit the semivariogram of vegetation coverage. From this experiment it is found that: (1) there is a strong correlation between vegetation coverage and the distance between vegetation populations within the range of 0-28051.3188 m in the Heihe River Basin, but the correlation disappears abruptly at distances greater than 28051.3188 m; (2) the degree of spatial heterogeneity of vegetation coverage in the Heihe River Basin is medium; (3) spatial distribution variability of vegetation occurs mainly at small scales; (4) the degree of spatial autocorrelation, 72.29%, lies between 25% and 75%, which means that the spatial correlation of vegetation coverage in the Heihe River Basin is moderately high.
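The spherical-with-nugget semivariogram model used above has a simple closed form; the parameter values below are hypothetical, with the range set to the reported 28051.3188 m:

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical semivariogram with nugget effect; a is the range."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    # Zero at the origin, nugget jump just above it, sill beyond the range.
    return np.where(h == 0.0, 0.0, np.where(h >= a, sill, inside))

lags = np.linspace(0.0, 40000.0, 81)
gamma = spherical(lags, nugget=0.1, sill=1.0, a=28051.3188)
```

The model rises monotonically from the nugget to the sill and is exactly flat past the range, matching the abrupt loss of correlation beyond 28051.3188 m described in the abstract.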

  4. Analysis of the spatio-temporal distribution of Eurygaster integriceps (Hemiptera: Scutelleridae) by using spatial analysis by distance indices and geostatistics.

    PubMed

    Karimzadeh, R; Hejazi, M J; Helali, H; Iranipour, S; Mohammadi, S A

    2011-10-01

    Eurygaster integriceps Puton (Hemiptera: Scutelleridae) is the most serious insect pest of wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.) in Iran. In this study, spatio-temporal distribution of this pest was determined in wheat by using spatial analysis by distance indices (SADIE) and geostatistics. Global positioning and geographic information systems were used for spatial sampling and mapping the distribution of this insect. The study was conducted for three growing seasons in Gharamalek, an agricultural region to the west of Tabriz, Iran. Weekly sampling began when E. integriceps adults migrated to wheat fields from overwintering sites and ended when the new generation adults appeared at the end of season. The adults were sampled using 1- by 1-m quadrat and distance-walk methods. A sweep net was used for sampling the nymphs, and five 180° sweeps were considered as the sampling unit. The results of spatial analyses by using geostatistics and SADIE indicated that E. integriceps adults were clumped after migration to fields and had significant spatial dependency. The second- and third-instar nymphs showed aggregated spatial structure in the middle of growing season. At the end of the season, population distribution changed toward random or regular patterns; and fourth and fifth instars had weaker spatial structure compared with younger nymphs. In Iran, management measures for E. integriceps in wheat fields are mainly applied against overwintering adults, as well as second and third instars. Because of the aggregated distribution of these life stages, site-specific spraying of chemicals is feasible in managing E. integriceps.

  5. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
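The Box-Cox family at the core of the varying-transformation component has a simple closed form for positive data, with the log transform as the λ = 0 limit:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform of positive data; lam = 0 is the log limit."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam
```

In the STM the exponent λ is allowed to vary spatially, so each voxel can use the member of this family that best Gaussianizes its data.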

  6. Numerical Generation of Dense Plume Fingers in Unsaturated Homogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Cremer, C.; Graf, T.

    2012-04-01

    In nature, the migration of dense plumes typically results in the formation of vertical plume fingers. Flow direction in fingers is downwards, which is counterbalanced by upwards flow of less dense fluid between fingers. In heterogeneous media, heterogeneity itself is known to trigger the formation of fingers. In homogeneous media, however, fingers are also created even if all grains had the same diameter. The reason is that pore-scale heterogeneity leading to different flow velocities also exists in homogeneous media due to two effects: (i) Grains of identical size may randomly arrange differently, e.g. forming tetrahedrons, hexahedrons or octahedrons. Each arrangement creates pores of varying diameter, thus resulting in different average flow velocities. (ii) Random variations of solute concentration lead to varying buoyancy effects, thus also resulting in different velocities. As a continuation of previously made efforts to incorporate pore-scale heterogeneity into fully saturated soil such that dense fingers are realistically generated (Cremer and Graf, EGU Assembly, 2011), the current paper extends the research scope from saturated to unsaturated soil. Perturbation methods are evaluated by numerically re-simulating a laboratory-scale experiment of plume transport in homogeneous unsaturated sand (Simmons et al., Transp. Porous Media, 2002). The following 5 methods are being discussed: (i) homogeneous sand, (ii) initial perturbation of solute concentration, (iii) spatially random, time-constant perturbation of solute source, (iv) spatially and temporally random noise of simulated solute concentration, and (v) random K-field that introduces physically insignificant but numerically significant heterogeneity. Results demonstrate that, as opposed to saturated flow, perturbing the solute source will not result in plume fingering. 
This is because the location of the perturbed source (domain top) and the location of finger generation (groundwater surface) do not coincide. Alternatively, similar to saturated flow, applying either a random concentration noise (iv) or a random K-field (v) generates realistic plume fingering. Future work will focus on the generation mechanisms of plume finger splitting.
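Method (iv), which successfully triggers fingering, amounts to adding a small spatially and temporally random perturbation to the simulated concentration field at every time step. A minimal sketch; the amplitude and grid size are illustrative, not the values used in the study:

```python
import numpy as np

def perturb_concentration(c, amplitude=1e-3, rng=None):
    """Method (iv): add small spatially and temporally random noise to
    the simulated solute concentration at each time step, keeping the
    field within physical bounds [0, 1]."""
    rng = np.random.default_rng(rng)
    noise = amplitude * (rng.random(c.shape) - 0.5)  # zero-mean uniform noise
    return np.clip(c + noise, 0.0, 1.0)

# toy concentration field: dense source along the top row
c = np.zeros((4, 4))
c[0, :] = 1.0
c_new = perturb_concentration(c, rng=3)
```

Applied at every time step of the transport solve, perturbations of this size are physically insignificant but numerically sufficient to break the symmetry that otherwise suppresses fingering.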

  7. Tropical forest carbon balance: effects of field- and satellite-based mortality regimes on the dynamics and the spatial structure of Central Amazon forest biomass

    NASA Astrophysics Data System (ADS)

    Di Vittorio, Alan V.; Negrón-Juárez, Robinson I.; Higuchi, Niro; Chambers, Jeffrey Q.

    2014-03-01

    Debate continues over the adequacy of existing field plots to sufficiently capture Amazon forest dynamics to estimate regional forest carbon balance. Tree mortality dynamics are particularly uncertain due to the difficulty of observing large, infrequent disturbances. A recent paper (Chambers et al 2013 Proc. Natl Acad. Sci. 110 3949-54) reported that Central Amazon plots missed 9-17% of tree mortality, and here we address ‘why’ by elucidating two distinct mortality components: (1) variation in annual landscape-scale average mortality and (2) the frequency distribution of the size of clustered mortality events. Using a stochastic-empirical tree growth model we show that a power law distribution of event size (based on merged plot and satellite data) is required to generate spatial clustering of mortality that is consistent with forest gap observations. We conclude that existing plots do not sufficiently capture losses because their placement, size, and longevity assume spatially random mortality, while mortality is actually distributed among differently sized events (clusters of dead trees) that determine the spatial structure of forest canopies.
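A power-law frequency distribution of clustered mortality event sizes, as used in the stochastic-empirical model above, can be sampled by inverse-transform sampling. A sketch with an illustrative exponent; the paper's fitted parameters are not reproduced here:

```python
import numpy as np

def sample_event_sizes(n, alpha=2.2, xmin=1.0, rng=None):
    """Draw n mortality-event sizes (trees per cluster) from a
    continuous power law p(x) ~ x^(-alpha), x >= xmin, by
    inverse-transform sampling of the analytic CDF."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    # CDF inversion: x = xmin * (1 - u)^(-1/(alpha - 1))
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

sizes = sample_event_sizes(10000, alpha=2.2, rng=42)
```

Most draws are single-tree deaths, but the heavy tail occasionally produces large clustered events, which is exactly the behaviour that spatially random mortality assumptions miss.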

  8. Water Body Extraction from Landsat 8 OLI Imagery Using a Water Index-Guided Stochastic Fully-Connected Conditional Random Field Model and the Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, X.; Xu, L.

    2018-04-01

    One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. However, conventional WI methods take into account spectral information from only a limited number of bands, so their accuracy may be constrained in areas covered with snow/ice, clouds, etc. An accurate and robust water extraction method thus remains a key goal. The support vector machine (SVM), which uses the spectral information of all bands, can reduce these classification errors to some extent. Nevertheless, the SVM scarcely considers spatial information and is relatively sensitive to noise in local regions. The conditional random field (CRF), which considers both spatial and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method that exploits the complementarity between the SVM and a water index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 Operational Land Imager (OLI) images of one test site. We assess the method's performance by calculating the following accuracy metrics: omission errors (OE), commission errors (CE), the Kappa coefficient (KP), and total error (TE). Experimental results show that the new method can improve target detection accuracy under complex and changeable environments.
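As an illustration of the water-index idea, a common two-band index such as the modified NDWI can be thresholded to produce a water mask. The band values below are toy numbers, and MNDWI is just one representative WI, not necessarily the index used in the paper:

```python
import numpy as np

def mndwi(green, swir):
    """Modified NDWI (Xu, 2006): (Green - SWIR) / (Green + SWIR).
    Values above zero typically indicate open water."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir + 1e-12)  # guard divide-by-zero

# toy 2x2 reflectance scene: left column water-like, right column land-like
green = np.array([[0.30, 0.10], [0.28, 0.12]])
swir  = np.array([[0.05, 0.25], [0.06, 0.30]])
mask = mndwi(green, swir) > 0.0
```

Because such an index uses only two bands, pixels of snow, ice or cloud can fall on the wrong side of the threshold, which is the limitation the SVM-WIGSFCRF combination is designed to correct.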

  9. Tropical land use land cover mapping in Pará (Brazil) using discriminative Markov random fields and multi-temporal TerraSAR-X data

    NASA Astrophysics Data System (ADS)

    Hagensieker, Ron; Roscher, Ribana; Rosentreter, Johannes; Jakimow, Benjamin; Waske, Björn

    2017-12-01

    Remote sensing satellite data offer the unique possibility to map land use land cover transformations by providing spatially explicit information. However, detection of short-term processes and land use patterns of high spatial-temporal variability is a challenging task. We present a novel framework using multi-temporal TerraSAR-X data and machine learning techniques, namely discriminative Markov random fields with spatio-temporal priors, and import vector machines, in order to advance the mapping of land cover characterized by short-term changes. Our study region covers a current deforestation frontier in the Brazilian state of Pará with land cover dominated by primary forests, different types of pasture land and secondary vegetation, and land use dominated by short-term processes such as slash-and-burn activities. The data set comprises multi-temporal TerraSAR-X imagery acquired over the course of the 2014 dry season, as well as optical data (RapidEye, Landsat) for reference. Results show that land use land cover is reliably mapped, resulting in spatially adjusted overall accuracies of up to 79% in a five class setting, yet limitations for the differentiation of different pasture types remain. The proposed method is applicable on multi-temporal data sets, and constitutes a feasible approach to map land use land cover in regions that are affected by high-frequency temporal changes.

  10. Conjunctions between motion and disparity are encoded with the same spatial resolution as disparity alone.

    PubMed

    Allenmark, Fredrik; Read, Jenny C A

    2012-10-10

    Neurons in cortical area MT respond well to transparent streaming motion in distinct depth planes, such as caused by observer self-motion, but do not contain subregions excited by opposite directions of motion. We therefore predicted that spatial resolution for transparent motion/disparity conjunctions would be limited by the size of MT receptive fields, just as spatial resolution for disparity is limited by the much smaller receptive fields found in primary visual cortex, V1. We measured this using a novel "joint motion/disparity grating," on which human observers detected motion/disparity conjunctions in transparent random-dot patterns containing dots streaming in opposite directions on two depth planes. Surprisingly, observers showed the same spatial resolution for these as for pure disparity gratings. We estimate the limiting receptive field diameter at 11 arcmin, similar to V1 and much smaller than MT. Higher internal noise for detecting joint motion/disparity produces a slightly lower high-frequency cutoff of 2.5 cycles per degree (cpd) versus 3.3 cpd for disparity. This suggests that information on motion/disparity conjunctions is available in the population activity of V1 and that this information can be decoded for perception even when it is invisible to neurons in MT.

  11. The contribution of competition to tree mortality in old-growth coniferous forests

    USGS Publications Warehouse

    Das, A.; Battles, J.; Stephenson, N.L.; van Mantgem, P.J.

    2011-01-01

    Competition is a well-documented contributor to tree mortality in temperate forests, with numerous studies documenting a relationship between tree death and the competitive environment. Models frequently rely on competition as the only non-random mechanism affecting tree mortality. However, for mature forests, competition may cease to be the primary driver of mortality. We use a large, long-term dataset to study the importance of competition in determining tree mortality in old-growth forests on the western slope of the Sierra Nevada of California, U.S.A. We make use of the comparative spatial configuration of dead and live trees, changes in tree spatial pattern through time, and field assessments of contributors to an individual tree's death to quantify competitive effects. Competition was apparently a significant contributor to tree mortality in these forests. Trees that died tended to be in more competitive environments than trees that survived, and suppression frequently appeared as a factor contributing to mortality. On the other hand, based on spatial pattern analyses, only three of 14 plots demonstrated compelling evidence that competition was dominating mortality. Most of the rest of the plots fell within the expectation for random mortality, and three fit neither the random nor the competition model. These results suggest that while competition is often playing a significant role in tree mortality processes in these forests it only infrequently governs those processes. In addition, the field assessments indicated a substantial presence of biotic mortality agents in trees that died. While competition is almost certainly important, demographics in these forests cannot accurately be characterized without a better grasp of other mortality processes. In particular, we likely need a better understanding of biotic agents and their interactions with one another and with competition.

  12. Local Geostatistical Models and Big Data in Hydrological and Ecological Applications

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2015-04-01

    The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution over longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths; the strength of the correlations is determined by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length. The latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation. 
This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross-validation measures. References: [1] T. Hofmann, B. Schölkopf, A. J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton, Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77, Springer, Berlin-Heidelberg (2012).

  13. Application of an Extended Parabolic Equation to the Calculation of the Mean Field and the Transverse and Longitudinal Mutual Coherence Functions Within Atmospheric Turbulence

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2005-01-01

    Solutions are derived for the generalized mutual coherence function (MCF), i.e., the second order moment, of a random wave field propagating through a random medium within the context of the extended parabolic equation. Here, "generalized" connotes the consideration of both the transverse as well as the longitudinal second order moments (with respect to the direction of propagation). Such solutions will afford a comparison between the results of the parabolic equation within the paraxial approximation and those of the wide-angle extended theory. To this end, a statistical operator method is developed which gives a general equation for an arbitrary spatial statistical moment of the wave field. The generality of the operator method allows one to obtain an expression for the second order field moment in the direction longitudinal to the direction of propagation. Analytical solutions to these equations are derived for the Kolmogorov and Tatarskii spectra of atmospheric permittivity fluctuations within the Markov approximation.

  14. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
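Conclusion (3), stratified random sampling with optimal allocation, corresponds to the classical Neyman allocation, in which each stratum receives samples in proportion to its size times its standard deviation. A small sketch with illustrative strata (not the paper's field data):

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Optimal (Neyman) allocation for stratified sampling:
    n_h is proportional to N_h * S_h, where N_h is the stratum
    size and S_h its standard deviation. Rounding may make the
    returned counts sum to slightly more or less than n_total."""
    weights = [N * s for N, s in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# three equally sized moisture strata whose variability differs (illustrative)
alloc = neyman_allocation(30, [100, 100, 100], [2.0, 4.0, 6.0])
```

The more variable a stratum's soil moisture, the more of the fixed sampling budget it receives, which is why stratification pays off most when the sensor cell contains strata of very different variability.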

  15. The Importance of Spatial Reasoning Skills in Undergraduate Geology Students and the Effect of Weekly Spatial Skill Trainings

    NASA Astrophysics Data System (ADS)

    Gold, Anne; Pendergast, Philip; Stempien, Jennifer; Ormand, Carol

    2016-04-01

    Spatial reasoning is a key skill for student success in STEM disciplines in general, and for students in the geosciences in particular. However, spatial reasoning is neither explicitly trained nor evenly distributed among students and by gender. This uneven playing field allows some students to perform geoscience tasks easily while others struggle. A lack of spatial reasoning skills has been shown to be a barrier to success in the geosciences, and for STEM disciplines in general. Addressing spatial abilities early in the college experience might therefore be effective in retaining students, especially females, in STEM disciplines. We have developed and implemented a toolkit for testing and training undergraduate student spatial reasoning skills in the classroom. In the academic year 2014/15, we studied the distribution of spatial abilities in more than 700 undergraduate geology students from four introductory and two upper-level courses. Following random assignment, four treatment groups received weekly online training and intermittent hands-on trainings in spatial thinking, while four control groups only participated in a pre- and a post-test of spatial thinking skills. In this presentation we summarize our results and describe the distribution of spatial skills in undergraduate students enrolled in geology courses. We first discuss the factors that best account for differences in baseline spatial ability levels, including general intelligence (using standardized test scores as a proxy), major, video gaming, and other childhood play experiences, which help to explain the gender gap observed in most research. We found a statistically significant improvement in spatial thinking skills, with large effect sizes, for the students who received the weekly trainings. Self-report data further show that students improve their spatial thinking skills and report that their improved spatial thinking skills increase their performance in geoscience courses. 
We conclude by discussing the effects of the training modules on development of spatial skills, which helps to shed light on what types of interventions may be useful in leveling the playing field for students going into the geosciences and other STEM fields.

  16. Spatial filtering precedes motion detection.

    PubMed

    Morgan, M J

    1992-01-23

    When we perceive motion on a television or cinema screen, there must be some process that allows us to track moving objects over time: if not, the result would be a conflicting mass of motion signals in all directions. A possible mechanism, suggested by studies of motion displacement in spatially random patterns, is that low-level motion detectors have a limited spatial range, which ensures that they tend to be stimulated over time by the same object. This model predicts that the direction of displacement of random patterns cannot be detected reliably above a critical absolute displacement value (Dmax) that is independent of the size or density of elements in the display. It has been inferred that Dmax is a measure of the size of motion detectors in the visual pathway. Other studies, however, have shown that Dmax increases with element size, in which case the most likely interpretation is that Dmax depends on the probability of false matches between pattern elements following a displacement. These conflicting accounts are reconciled here by showing that Dmax is indeed determined by the spacing between the elements in the pattern, but only after fine detail has been removed by a physiological prefiltering stage: the filter required to explain the data has a similar size to the receptive field of neurons in the primate magnocellular pathway. The model explains why Dmax can be increased by removing high spatial frequencies from random patterns, and simplifies our view of early motion detection.
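The two-stage account, spatial low-pass filtering followed by displacement matching, can be sketched as a toy 1-D estimator: both frames are blurred with a Gaussian (the "prefiltering stage") and the displacement is read off the cross-correlation peak. Pattern size, filter width and shift are illustrative values, not the paper's stimulus parameters:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian low-pass kernel."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def estimate_shift(pattern, shifted, sigma=1.5):
    """Low-pass filter both frames (the prefiltering stage), then
    recover the displacement from the cross-correlation peak."""
    k = gaussian_kernel(sigma)
    a = np.convolve(pattern, k, mode='same')
    b = np.convolve(shifted, k, mode='same')
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(b, a, mode='full')
    return np.argmax(corr) - (len(a) - 1)   # lag of the peak

rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)           # 1-D random-dot pattern
frame2 = np.roll(frame, 7)                  # displace by 7 samples
shift = estimate_shift(frame, frame2)
```

In this picture, Dmax is set by how far the pattern can move before spurious matches between filtered elements start to dominate the correlation, so removing fine detail before matching raises Dmax, as the abstract argues.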

  17. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
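The distance-squared law quoted above states that the lateral variance of the fluence at depth z is the angular scattering power accumulated along the path, weighted by the squared remaining distance (as in Fermi-Eyges theory). A numerical sketch with an illustrative constant scattering power, not clinical data:

```python
import numpy as np

def lateral_variance(z_grid, d_theta2_dz):
    """Distance-squared law: the lateral spread sigma^2(z) is the
    angular-scattering power T(z') = d<theta^2>/dz' weighted by
    (z - z')^2 and integrated over the path already travelled."""
    z = np.asarray(z_grid, dtype=float)
    t = np.asarray(d_theta2_dz, dtype=float)
    sig2 = np.empty_like(z)
    for i, zi in enumerate(z):
        w = (zi - z[:i + 1]) ** 2          # squared remaining distance
        sig2[i] = np.trapz(w * t[:i + 1], z[:i + 1])
    return sig2

z = np.linspace(0.0, 10.0, 101)            # depth in cm (illustrative)
T = np.full_like(z, 1e-3)                  # constant scattering power (rad^2/cm)
sig2 = lateral_variance(z, T)
```

For constant scattering power the integral gives sigma^2(z) = T z^3 / 3, the familiar cubic growth of lateral spread with depth; a depth-dependent T(z) for heterogeneous media drops straight into the same integral.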

  18. Remote Sensing, Sampling and Simulation Applications in Analyses of Insect Dispersion and Abundance in Cotton

    Treesearch

    J. L. Willers; J. M. McKinion; J. N. Jenkins

    2006-01-01

    Simulation was employed to create stratified simple random samples of different sample unit sizes to represent tarnished plant bug abundance at different densities within various habitats of simulated cotton fields. These samples were used to investigate dispersion patterns of this cotton insect. It was found that the assessment of spatial pattern varied as a function...

  19. Caustics and Rogue Waves in an Optical Sea.

    PubMed

    Mathis, Amaury; Froehly, Luc; Toenger, Shanti; Dias, Frédéric; Genty, Goëry; Dudley, John M

    2015-08-06

    There are many examples in physics of systems showing rogue wave behaviour, the generation of high amplitude events at low probability. Although initially studied in oceanography, rogue waves have now been seen in many other domains, with particular recent interest in optics. Although most studies in optics have focussed on how nonlinearity can drive rogue wave emergence, purely linear effects have also been shown to induce extreme wave amplitudes. In this paper, we report a detailed experimental study of linear rogue waves in an optical system, using a spatial light modulator to impose random phase structure on a coherent optical field. After free space propagation, different random intensity patterns are generated, including partially-developed speckle, a broadband caustic network, and an intermediate pattern with characteristics of both speckle and caustic structures. Intensity peaks satisfying statistical criteria for rogue waves are seen especially in the case of the caustic network, and are associated with broader spatial spectra. In addition, the electric field statistics of the intermediate pattern shows properties of an "optical sea" with near-Gaussian statistics in elevation amplitude, and trough-to-crest statistics that are near-Rayleigh distributed but with an extended tail where a number of rogue wave events are observed.
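A common statistical criterion for a rogue event, used here only to illustrate the kind of threshold applied to the intensity peaks, is an amplitude exceeding twice the significant wave height H_s (the mean of the highest third of the record):

```python
import numpy as np

def rogue_events(heights):
    """Flag events exceeding twice the significant wave height,
    H_s = mean of the highest third of the record (the usual
    hydrodynamic rogue-wave criterion)."""
    h = np.sort(np.asarray(heights, dtype=float))
    hs = h[-len(h) // 3:].mean()            # significant wave height
    return np.asarray(heights) > 2.0 * hs

# Rayleigh-distributed amplitudes, as for fully developed speckle
rng = np.random.default_rng(1)
sample = rng.rayleigh(scale=1.0, size=10000)
frac = rogue_events(sample).mean()          # fraction of rogue events
```

For a near-Rayleigh record this criterion flags only a small fraction of events; an extended tail beyond the Rayleigh prediction, as observed for the caustic network, produces correspondingly more flagged events.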

  20. Caustics and Rogue Waves in an Optical Sea

    PubMed Central

    Mathis, Amaury; Froehly, Luc; Toenger, Shanti; Dias, Frédéric; Genty, Goëry; Dudley, John M.

    2015-01-01

    There are many examples in physics of systems showing rogue wave behaviour, the generation of high amplitude events at low probability. Although initially studied in oceanography, rogue waves have now been seen in many other domains, with particular recent interest in optics. Although most studies in optics have focussed on how nonlinearity can drive rogue wave emergence, purely linear effects have also been shown to induce extreme wave amplitudes. In this paper, we report a detailed experimental study of linear rogue waves in an optical system, using a spatial light modulator to impose random phase structure on a coherent optical field. After free space propagation, different random intensity patterns are generated, including partially-developed speckle, a broadband caustic network, and an intermediate pattern with characteristics of both speckle and caustic structures. Intensity peaks satisfying statistical criteria for rogue waves are seen especially in the case of the caustic network, and are associated with broader spatial spectra. In addition, the electric field statistics of the intermediate pattern shows properties of an “optical sea” with near-Gaussian statistics in elevation amplitude, and trough-to-crest statistics that are near-Rayleigh distributed but with an extended tail where a number of rogue wave events are observed. PMID:26245864

  1. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories

    NASA Astrophysics Data System (ADS)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuosity of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.
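The Monte Carlo setup requires many realizations of spatially correlated mechanical properties. One standard way to generate such realizations is spectral (FFT) filtering of white noise; the covariance shape, correlation length and Young-modulus statistics below are illustrative, not those of the study:

```python
import numpy as np

def gaussian_random_field(n, corr_len, rng=None):
    """Periodic 2-D Gaussian random field (zero mean, unit variance)
    built by filtering white noise with a Gaussian-shaped spectral
    amplitude; corr_len (in grid cells) sets the spatial covariance."""
    rng = np.random.default_rng(rng)
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    amp = np.exp(-(kx**2 + ky**2) * (np.pi * corr_len) ** 2)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.real(np.fft.ifft2(noise * amp))
    return (field - field.mean()) / field.std()   # standardize

# illustrative heterogeneous Young modulus: mean 30 GPa, 20% variation
E_mean, E_cv = 30e9, 0.2
field = gaussian_random_field(128, corr_len=8, rng=7)
young = E_mean * (1.0 + E_cv * field)
```

Each Monte Carlo sample then consists of one such modulus (and toughness) realization fed to the phase-field fracture solver; varying `corr_len` and `E_cv` probes exactly the variance and covariance dependence the study reports.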

  2. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    PubMed

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuosity of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  3. Bayesian spatial transformation models with applications in neuroimaging data.

    PubMed

    Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G

    2013-12-01

    The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.
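The varying Box-Cox transformation underlying the STM maps non-Gaussian imaging measures toward normality. The basic transform (shown here with a fixed rather than spatially varying exponent) is:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform of positive data y:
    (y^lam - 1)/lam for lam != 0, and log(y) for lam == 0
    (the lam -> 0 limit of the first expression)."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

vals = np.array([1.0, 2.0, 4.0])
```

In the STM the exponent is itself a spatial field, so each voxel can receive the transformation that best normalizes its local distribution before the Gaussian Markov random field prior is applied.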

  4. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyong; Zhou, Feng; Xu, Lin; Meng, Deyu; Xu, Zongben; Paisley, John

    2018-05-01

    This paper presents a new supervised classification algorithm for remotely sensed hyperspectral image (HSI) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions using a patch-wise training strategy to better use the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent (SGD) and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed classification method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
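A hedged sketch of the spatial-smoothness idea: the label-update step can be approximated by iterated conditional modes (ICM) with a Potts prior, which rewards labels that agree with their neighbours. This is a simplified stand-in for illustration, not the authors' alpha-expansion min-cut algorithm:

```python
import numpy as np

def icm_smooth(prob, beta=1.0, iters=5):
    """Refine per-pixel class probabilities (h, w, k) with a Potts
    spatial smoothness prior via iterated conditional modes: each
    pixel picks the class maximizing its log-probability plus a
    bonus beta per agreeing 4-neighbour."""
    labels = prob.argmax(axis=-1)
    h, w, k = prob.shape
    logp = np.log(prob + 1e-12)
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                score = logp[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        score[labels[ni, nj]] += beta   # reward agreement
                labels[i, j] = score.argmax()
    return labels

# noisy 2-class probability map: left half favours class 0, right class 1
rng = np.random.default_rng(5)
p1 = np.clip(np.where(np.arange(8)[None, :] < 4, 0.3, 0.7)
             + 0.2 * rng.standard_normal((8, 8)), 0.01, 0.99)
prob = np.stack([1.0 - p1, p1], axis=-1)
labels = icm_smooth(prob, beta=1.0)
```

ICM only finds a local optimum of the same Potts-regularized objective; the alpha-expansion graph-cut step described in the abstract explores much larger label moves and gives stronger optimality guarantees.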

  5. Stand-alone scattering optical device using holographic photopolymer (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Park, Jongchan; Lee, KyeoReh; Park, YongKeun

    2016-03-01

    When light propagates through a highly disordered medium, its optical parameters such as amplitude, phase and polarization state are completely scrambled because of multiple scattering events. Since multiple scattering is a fundamental optical process with extremely many degrees of freedom, the optical information of the transmitted light is thoroughly mixed. Until recently, the presence of multiple scattering in an inhomogeneous medium was considered a major obstacle to manipulating light transmitted through the medium. However, recent developments in wavefront shaping techniques enable us to control the propagation of light through turbid media; light transmitted through a turbid medium can be effectively controlled by modulating the spatial profile of the incident light using a spatial light modulator. In this work, a stand-alone scattering optical device is proposed; a holographic photopolymer film, which is far more economical than digital spatial light modulators, is used to record and reconstruct a permanent wavefront that generates an optical field behind a scattering medium. With our method, arbitrary optical fields can be generated, since the scattering medium completely mixes all the optical parameters, allowing access to all the optical information solely by modulating the spatial phase profile of the impinging wavefront. The method is experimentally demonstrated in both the far-field and near-field regimes, where it shows promising fidelity and stability. The proposed stand-alone scattering optical device will open up new avenues for exploiting the randomness inherent in disordered media.

  6. Spatial Variability of Snowpack Properties On Small Slopes

    NASA Astrophysics Data System (ADS)

    Pielmeier, C.; Kronholm, K.; Schneebeli, M.; Schweizer, J.

    The spatial variability of alpine snowpacks is created by a variety of processes such as deposition, wind erosion, sublimation, melting, temperature, radiation and metamorphism of the snow. Spatial variability is thought to strongly control the avalanche initiation and failure propagation processes. Local snowpack measurements are currently the basis for avalanche warning services, and there exist contradictory hypotheses about the spatial continuity of avalanche-active snow layers and interfaces. Very little is known so far about the spatial variability of the snowpack; we have therefore developed a systematic and objective method to measure the spatial variability of snowpack properties and layering and its relation to stability. For complete coverage, the analysis of spatial variability has to entail all scales from mm to km. In this study the small to medium scale spatial variability is investigated, i.e. the range from centimeters to tens of meters. During the winter 2000/2001 we took systematic measurements in lines and grids on a flat snow test field with grid distances from 5 cm to 0.5 m. Furthermore, we measured systematic grids with grid distances between 0.5 m and 2 m in undisturbed flat fields and on small slopes above the tree line at the Choerbschhorn, in the region of Davos, Switzerland. On 13 days we measured the spatial pattern of the snowpack stratigraphy with more than 110 snow micro penetrometer measurements on slopes and flat fields. Within this measuring grid we placed 1 rutschblock and 12 stuffblock tests to measure the stability of the snowpack. With the large number of measurements we are able to use geostatistical methods to analyse the spatial variability of the snowpack. Typical correlation lengths are calculated from semivariograms. Systematic trends are discerned from random spatial variability using statistical models. Scale dependencies are shown and recurring scaling patterns are outlined. 
The importance of the small and medium scale spatial variability for the larger (kilometer) scale spatial variability as well as for the avalanche formation are discussed. Finally, an outlook on spatial models for the snowpack variability is given.
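The semivariogram analysis described in this record can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' code: the transect data are synthetic (a moving average of white noise standing in for snow-property measurements at the 5 cm spacing quoted above), and the estimator is the classical Matheron form. The correlation length appears as the lag at which the semivariance levels off at its sill.

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """Classical estimator: gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over all pairs whose separation |x_i - x_j| falls within h +/- tol."""
    d = np.abs(x[:, None] - x[None, :])           # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2     # half squared differences
    return np.array([sq[(d > h - tol) & (d <= h + tol)].mean() for h in lags])

rng = np.random.default_rng(1)
x = np.arange(400) * 0.05                         # transect with 5 cm spacing
noise = rng.normal(size=x.size + 19)
z = np.convolve(noise, np.ones(20) / 20, mode="valid")  # ~1 m correlation length

lags = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
gamma = empirical_semivariogram(x, z, lags, tol=0.05)
# semivariance grows with lag and flattens near the correlation length
```

In practice one would fit a variogram model (spherical, exponential) to `gamma` to read off the range, which is the "typical correlation length" the abstract refers to.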

  7. Determining the soil hydraulic conductivity by means of a field scale internal drainage

    NASA Astrophysics Data System (ADS)

    Severino, Gerardo; Santini, Alessandro; Sommella, Angelo

    2003-03-01

Spatial variations of water content in soils over large extents (the vadose zone) are highly affected by the natural heterogeneity of the porous medium. This implies that the magnitude of the hydraulic properties, especially the conductivity, varies in an irregular manner with scale. Determining mean values of hydraulic properties will not suffice to accurately quantify water flow in the vadose zone. At the field scale, proper field measurements have to be carried out which, analogous to standard laboratory methods, also characterize the spatial variability of the hydraulic properties. Toward this aim an internal drainage test has been conducted at the Ponticelli site near Naples (Italy), where water content and pressure head were monitored at 50 locations of a 2 × 50 m² plot. The present paper illustrates a method to quantify the mean value and the spatial variability of the hydraulic parameters needed to calibrate the soil conductivity curve at field scale (hereafter defined as field scale hydraulic conductivity). A stochastic model that regards the hydraulic parameters as random space functions (RSFs) is derived by adopting the stream tube approach of Dagan and Bresler (1979). Owing to the randomness of the hydraulic parameters, even the water content θ will be a RSF whose mean value (hereafter termed field scale water content) is obtained as an ensemble average over all the realizations of a local analytical solution of Richards' equation. It is shown that the most frequent data collection should be carried out in the initial stage of the internal drainage experiment, when the most significant changes in water content occur. The model parameters are obtained by a standard least-squares optimization procedure using water content data at a certain depth (z=30 cm) for several times (t=5, 24, 48, 96, 144, 216, 312, 408, 576, 744, 912 h).
The reliability of the proposed method is then evaluated by comparing the predicted water content with observations at different depths (z=45, 60, 75, and 90 cm). The calibration procedure is further verified by comparing the cumulative distribution of measured water content at different times with the corresponding distributions obtained from the calibrated model.
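The least-squares calibration step can be illustrated with a toy example. The sketch below is not the paper's model: the power-law drainage curve theta(t) = a·t^(-b), the parameter values, and the noise level are all illustrative assumptions. It only shows how synthetic "observed" water contents at the measurement times quoted above can be fitted by ordinary least squares (here in log-log space, so plain linear regression suffices).

```python
import numpy as np

# measurement times (hours) from the internal drainage test
t = np.array([5, 24, 48, 96, 144, 216, 312, 408, 576, 744, 912], dtype=float)

# synthetic "observed" water contents from an assumed power-law drainage
# curve theta(t) = a * t**(-b), perturbed with multiplicative noise
a_true, b_true = 0.35, 0.08
rng = np.random.default_rng(0)
theta_obs = a_true * t ** (-b_true) * np.exp(rng.normal(0.0, 0.005, t.size))

# ordinary least squares in log-log space recovers the parameters:
# log(theta) = log(a) - b * log(t)
slope, intercept = np.polyfit(np.log(t), np.log(theta_obs), 1)
a_fit, b_fit = np.exp(intercept), -slope
```

The abstract's point that early times carry the most information shows up here too: the largest changes in theta(t), and hence the strongest constraint on the parameters, occur in the first hours of drainage.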

  8. Enhanced backscattering through a deep random phase screen

    NASA Astrophysics Data System (ADS)

    Jakeman, E.

    1988-10-01

    The statistical properties of radiation scattered by a system consisting of a plane mirror placed in the Fresnel region behind a smoothly varying deep random-phase screen with off-axis beam illumination are studied. It is found that two mechanisms cause enhanced scattering around the backward direction, according to the mirror position with respect to the focusing plane of the screen. In all of the plane mirror geometries considered, the scattered field remains a complex Gaussian process with a spatial coherence function identical to that expected for a single screen, and a speckle size smaller than the width of backscatter enhancement.

  9. Fermi problem in disordered systems

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.; de Mello, H. R.; Zarro, C. A. D.

    2017-10-01

We revisit the Fermi two-atom problem in the framework of disordered systems. In our model, we consider a two-qubit system linearly coupled to a quantum massless scalar field. We analyze the energy transfer between the qubits from different experimental perspectives. In addition, we assume that the coefficients of the Klein-Gordon equation are random functions of the spatial coordinates. The disordered medium is modeled by a centered, stationary, and Gaussian process. We demonstrate that the classical notion of causality emerges only in the wave zone in the presence of random fluctuations of the light cone. Possible repercussions are discussed.

  10. Fluid Physics Under a Stochastic Acceleration Field

    NASA Technical Reports Server (NTRS)

    Vinals, Jorge

    2001-01-01

    The research summarized in this report has involved a combined theoretical and computational study of fluid flow that results from the random acceleration environment present onboard space orbiters, also known as g-jitter. We have focused on a statistical description of the observed g-jitter, on the flows that such an acceleration field can induce in a number of experimental configurations of interest, and on extending previously developed methodology to boundary layer flows. Narrow band noise has been shown to describe many of the features of acceleration data collected during space missions. The scale of baroclinically induced flows when the driving acceleration is random is not given by the Rayleigh number. Spatially uniform g-jitter induces additional hydrodynamic forces among suspended particles in incompressible fluids. Stochastic modulation of the control parameter shifts the location of the onset of an oscillatory instability. Random vibration of solid boundaries leads to separation of boundary layers. Steady streaming ahead of a modulated solid-melt interface enhances solute transport, and modifies the stability boundaries of a planar front.

  11. KC-135 aero-optical turbulent boundary layer/shear layer experiment revisited

    NASA Technical Reports Server (NTRS)

    Craig, J.; Allen, C.

    1987-01-01

The aero-optical effects associated with propagating a laser beam through both an aircraft turbulent boundary layer and artificially generated shear layers are examined. The data present comparisons of observed optical performance with that inferred from aerodynamic measurements of unsteady density and correlation lengths within the same random flow fields. Using optical instrumentation with temporal resolution of tens of microseconds through a finite aperture, optical performance degradation was determined and contrasted with the infinite-aperture, time-averaged aerodynamic measurement. In addition, the optical data were artificially clipped for comparison with theoretical scaling calculations. The optical instrumentation consisted of a custom Q-switched, double-pulsed Nd:YAG laser and a holographic camera which recorded the random flow field in a double-pass, double-pulse mode. Aerodynamic parameters were measured using hot-film anemometer probes and a five-hole pressure probe. Each technique is described with its associated theoretical basis for comparison. The effects of finite aperture and of the spatial and temporal frequencies of the random flow are considered.

  12. A tale of two "forests": random forest machine learning aids tropical forest carbon mapping.

    PubMed

    Mascaro, Joseph; Asner, Gregory P; Knapp, David E; Kennedy-Bowdoin, Ty; Martin, Roberta E; Anderson, Christopher; Higgins, Mark; Chadwick, K Dana

    2014-01-01

    Accurate and spatially explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million-hectare focal area of the Western Amazon. We considered two runs of Random Forest, with and without spatial contextual modeling, by including, in the latter case, x and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best-performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% for Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha⁻¹ when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
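The "with spatial context" idea, adding x and y position directly to the predictor set, is easy to reproduce on synthetic data. The sketch below is illustrative only: it uses scikit-learn's RandomForestRegressor and a made-up carbon surface (a smooth spatial trend plus one non-spatial covariate), not the paper's LiDAR data, and the held-out half mimics the paper's validation split.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 2000
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
cov = rng.normal(size=n)                       # one non-spatial covariate
# synthetic carbon stock: smooth spatial trend + covariate effect + noise
carbon = 10 * np.sin(3 * x) * np.cos(2 * y) + 2 * cov + rng.normal(0, 0.5, n)

train = np.arange(n) < n // 2                  # hold out half for validation
X_plain = cov[:, None]                         # without spatial context
X_spatial = np.column_stack([cov, x, y])       # with x, y included

def fit_r2(X):
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X[train], carbon[train])
    return r2_score(carbon[~train], rf.predict(X[~train]))

r2_plain, r2_spatial = fit_r2(X_plain), fit_r2(X_spatial)
# spatial context captures the trend the lone covariate cannot
```

The held-out R² gap here is the same qualitative effect the paper reports (43% vs 59% explained variation); the exact numbers depend entirely on the synthetic surface.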

  14. A comparison of adaptive sampling designs and binary spatial models: A simulation study using a census of Bromus inermis

    USGS Publications Warehouse

    Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa

    2013-01-01

    Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field-based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs, and the choice among three binary models, on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9% prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to unsampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference among choices of neighborhood, although the adaptive king neighborhood was preferred when transects were randomly placed throughout the spatial domain.

  15. Porous media flux sensitivity to pore-scale geostatistics: A bottom-up approach

    NASA Astrophysics Data System (ADS)

    Di Palma, P. R.; Guyennon, N.; Heße, F.; Romano, E.

    2017-04-01

    Macroscopic properties of flow through porous media can be directly computed by solving the Navier-Stokes equations at the scales of the actual flow processes, while representing the porous structures explicitly. The aim of this paper is to investigate the effects of the pore-scale spatial distribution on seepage velocity through numerical simulations of 3D fluid flow performed by the lattice Boltzmann method. To this end, we generate multiple random Gaussian fields whose spatial correlation follows an assigned semi-variogram function. The exponential and Gaussian semi-variograms are chosen as extreme cases of short-distance correlation, and the statistical properties of the resulting porous media (indicator fields) are described using the Matérn covariance model, with characteristic lengths of spatial autocorrelation (pore size) varying from 2% to 13% of the linear domain. To assess the sensitivity of the modeling results to the geostatistical representativeness of the domain as well as to the adopted resolution, porous media have been generated repeatedly with re-initialized random seeds, and three different resolutions have been tested for each realization. The main difference among the results is observed between the two adopted semi-variograms, indicating that the roughness (short-distance autocorrelation) is the property mainly affecting the flux. However, the computed seepage velocities additionally show a wide variability (about three orders of magnitude) for each semi-variogram model in relation to the assigned correlation length, corresponding to pore size. The spatial resolution affects the results more strongly for short correlation lengths (i.e., small pore sizes), resulting in an increasing underestimation of the seepage velocity with decreasing correlation length. On the other hand, the results show an increasing uncertainty as the correlation length approaches the domain size.

  16. High-brightness laser imaging with tunable speckle reduction enabled by electroactive micro-optic diffusers.

    PubMed

    Farrokhi, Hamid; Rohith, Thazhe Madam; Boonruangkan, Jeeranan; Han, Seunghwoi; Kim, Hyunwoong; Kim, Seung-Woo; Kim, Young-Jin

    2017-11-10

    High coherence of lasers is desirable in high-speed, high-resolution, and wide-field imaging. However, it also causes unavoidable background speckle noise and thus degrades image quality in traditional microscopy and, more significantly, in interferometric quantitative phase imaging (QPI). QPI utilizes optical interference for high-precision measurement of optical properties, where speckle can severely distort the information. To overcome this, we demonstrated a light source system with wide tunability in spatial coherence, over 43%, achieved by controlling the illumination angle, the scatterer's size, and the rotational speed of an electroactive-polymer rotational micro-optic diffuser. Spatially random phase modulation was implemented for lower-speckle imaging, achieving over 50% speckle reduction without significant degradation of the temporal coherence. Our coherence control technique will provide a unique solution for low-speckle, full-field, coherent imaging in optically scattering media in the fields of healthcare sciences, material sciences and high-precision engineering.

  17. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintain their identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez and Bras (1990), which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis is also made, which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field.
The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards regularity. For clouds less than 1 km in diameter, the average nearest-neighbor distance is equal to 3-7 cloud diameters. For larger clouds, the ratio of cloud nearest-neighbor distance to cloud diameter increases sharply with increasing cloud diameter. This demonstrates that large clouds inhibit the growth of other large clouds in their vicinity. Nevertheless, this leads to random distributions of large clouds, not regularity.
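The nearest-neighbor statistic at the heart of this analysis can be reproduced in a few lines. The sketch below uses synthetic point patterns, not cloud data; the parent/offspring cluster process and the point counts are illustrative assumptions. It computes the Clark-Evans ratio, i.e. the observed mean nearest-neighbor distance divided by the value expected for complete spatial randomness: values well below 1 signal clustering, values near 1 randomness, and values above 1 regularity.

```python
import numpy as np

def clark_evans(points):
    """Mean nearest-neighbour distance / CSR expectation in the unit square."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # a point is not its own neighbour
    mean_nn = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(len(points))      # 0.5 / sqrt(intensity), unit area
    return mean_nn / expected

rng = np.random.default_rng(7)

# random (Poisson-like) pattern: 200 uniform points
random_pts = rng.uniform(0, 1, size=(200, 2))

# clustered pattern: 10 parents, 20 offspring each, tight scatter
parents = rng.uniform(0, 1, size=(10, 2))
offspring = parents.repeat(20, axis=0) + rng.normal(0, 0.02, size=(200, 2))
clustered_pts = np.clip(offspring, 0, 1)

R_random = clark_evans(random_pts)        # close to 1
R_clustered = clark_evans(clustered_pts)  # well below 1: strong clustering signal
```

The abstract's finer point, that finite cloud sizes forbid overlap and bias such point-process statistics toward regularity, is exactly what this point-based ratio ignores, which is why the authors follow up with a size-aware analysis.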

  18. Fractal planetary rings: Energy inequalities and random field model

    NASA Astrophysics Data System (ADS)

    Malyarenko, Anatoliy; Ostoja-Starzewski, Martin

    2017-12-01

    This study is motivated by a recent observation, based on photographs from the Cassini mission, that Saturn's rings have a fractal structure in the radial direction. Accordingly, two questions are considered: (1) What Newtonian-mechanics argument in support of such a fractal structure of planetary rings is possible? (2) What kinematics model of such fractal rings can be formulated? Both challenges are based on taking the planetary rings' spatial structure as being statistically stationary in time and statistically isotropic in space, but statistically nonstationary in space. An answer to the first challenge is given through an energy analysis of circular rings having a self-generated, noninteger-dimensional mass distribution [V. E. Tarasov, Int. J. Mod. Phys. B 19, 4103 (2005)]. The second issue is approached by taking the random field of the angular velocity vector of a rotating particle of the ring as a random section of a special vector bundle. Using the theory of group representations, we prove that such a field is completely determined by a sequence of continuous positive-definite matrix-valued functions defined on the Cartesian square F² of the radial cross-section F of the rings, where F is a fat fractal.

  19. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high-spatial-resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high-resolution imagery, and ground objects display rich texture, structure, shape and hierarchical semantic characteristics; more landscape elements are represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high-resolution image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method consists of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmented regions; (3) the relations between the hierarchical ground-object semantics and the over-segmented regions are defined within a conditional random fields framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing data (GeoEye) are used to verify the performance of the presented method. The experimental results show the superiority of this method over the eCognition approach in both effectiveness and accuracy, which implies that it is suitable for the classification of high-resolution remote sensing imagery.

  20. Recognizing suspicious activities in infrared imagery using appearance-based features and the theory of hidden conditional random fields for outdoor perimeter surveillance

    NASA Astrophysics Data System (ADS)

    Rogotis, Savvas; Palaskas, Christos; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros

    2015-11-01

    This work presents an extended framework for automatically recognizing suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. By combining size-, speed-, and appearance-based features, such as local phase quantization and histograms of oriented gradients, short-duration actions are recognized and used as input, along with spatial information, for modeling target activities using the theory of hidden conditional random fields (HCRFs). HCRFs are used to classify an observation sequence into the most appropriate activity-label class, thus discriminating high-risk activities such as trespassing from zero-risk activities such as loitering outside the perimeter. The effectiveness of this approach is demonstrated with experimental results in various scenarios that represent suspicious activities in perimeter surveillance systems.

  1. Prediction of hourly PM2.5 using a space-time support vector regression model

    NASA Astrophysics Data System (ADS)

    Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang

    2018-05-01

    Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
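The two spatial ingredients of the proposed pipeline, clustering the study area into quasi-homogeneous subareas and building Gaussian-weighted spatial-autocorrelation features, can be sketched as follows. This is an illustrative reconstruction on synthetic data using scikit-learn: the station layout, the previous-hour feature, the bandwidth, and the SVR settings are assumptions, not the configuration of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))         # monitoring-station locations
pm_prev = 50 + 20 * np.sin(coords[:, 0])         # previous-hour PM2.5 (synthetic)
pm_now = 0.9 * pm_prev + rng.normal(0, 2, n)     # target: current-hour PM2.5

def gauss_weighted_neighbors(coords, values, bandwidth=1.0):
    """Gaussian weight function: a spatially weighted average of neighbouring
    stations' values serves as a spatial-autocorrelation feature."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.exp(-(d / bandwidth) ** 2)
    np.fill_diagonal(w, 0.0)                     # exclude the station itself
    return (w @ values) / w.sum(axis=1)

spatial_feat = gauss_weighted_neighbors(coords, pm_prev)

# divide the study area into quasi-homogeneous subareas, one local SVR each
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)
X = np.column_stack([pm_prev, spatial_feat])
models, preds = {}, np.empty(n)
for k in np.unique(labels):
    m = labels == k
    models[k] = SVR(kernel="rbf", C=100.0).fit(X[m], pm_now[m])
    preds[m] = models[k].predict(X[m])
```

The per-subarea models address spatial heterogeneity, while the weighted-neighbor feature injects spatial dependence, mirroring the two violations of the i.i.d. assumption the abstract identifies.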

  2. Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport

    NASA Astrophysics Data System (ADS)

    Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike

    2017-04-01

    Models of soil-gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this we used a two-dimensional model of soil-gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each type of small-scale variability was tested for (j) isotropic gas diffusivity and (jj) reduced horizontal gas diffusivity (constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. This simple simulation approach clearly showed the relevance of anisotropy and spatial variability given identical central-tendency measures of gas diffusivity. It did not, however, consider spatial dependency of the variability, which could further aggravate the effects.
To consider anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects on gas-exchange models.
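The effect of reduced horizontal diffusivity can be illustrated with a minimal finite-difference model. The sketch below is illustrative only: the grid, the diffusivity values, and the time step are arbitrary assumptions, not the parametrization of the study. It diffuses a point release of gas on a 2-D grid with a smaller horizontal than vertical diffusion coefficient and confirms via second moments that the plume spreads farther along the high-diffusivity axis.

```python
import numpy as np

# anisotropic diffusivities: vertical transport favoured (hypothetical values)
D_vert, D_horiz = 1.0, 0.3
n, dt = 41, 0.2                                  # grid size and stable time step
c = np.zeros((n, n))
c[n // 2, n // 2] = 1.0                          # point release of gas at the centre

# explicit (FTCS) update with direction-dependent diffusion coefficients
for _ in range(150):
    lap_z = np.zeros_like(c)
    lap_x = np.zeros_like(c)
    lap_z[1:-1, :] = c[2:, :] - 2.0 * c[1:-1, :] + c[:-2, :]
    lap_x[:, 1:-1] = c[:, 2:] - 2.0 * c[:, 1:-1] + c[:, :-2]
    c = c + dt * (D_vert * lap_z + D_horiz * lap_x)

# plume second moments: spread is larger along the high-diffusivity axis
z_idx, x_idx = np.indices(c.shape)
var_z = ((z_idx - n // 2) ** 2 * c).sum() / c.sum()
var_x = ((x_idx - n // 2) ** 2 * c).sum() / c.sum()
```

A random-field version of this experiment, as the authors recommend, would replace the two constants with spatially correlated diffusivity fields drawn per stratum.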

  3. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.
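The contrast between the classical estimator and a kernel-based ("tau"-type) estimator of sample-mean standard deviation can be sketched as follows. This is an illustrative reconstruction under assumed notation, not the paper's exact estimator: a 1-D moving-average series stands in for a locally dependent spatial sample, and Bartlett weights are used for the kernel. For positively dependent data the classical estimator is biased low, while summing sample autocovariances up to lag tau recovers much of the missing dependence.

```python
import numpy as np

def classical_se(z):
    """Classical estimate of the standard deviation of the sample mean."""
    return z.std(ddof=1) / np.sqrt(z.size)

def tau_se(z, tau):
    """Kernel estimate: include sample autocovariances up to lag tau,
    with Bartlett weights to keep the variance estimate nonnegative."""
    n = z.size
    zc = z - z.mean()
    var = np.mean(zc ** 2)
    for h in range(1, tau + 1):
        w = 1.0 - h / (tau + 1)                  # Bartlett taper
        var += 2.0 * w * np.mean(zc[h:] * zc[:-h])
    return np.sqrt(var / n)

# locally dependent sample: moving average of white noise (5-point window)
rng = np.random.default_rng(3)
noise = rng.normal(size=404)
z = np.convolve(noise, np.ones(5) / 5, mode="valid")   # n = 400

se_classical = classical_se(z)   # ignores the positive autocovariances: too small
se_tau = tau_se(z, tau=10)       # accounts for local dependence
```

Standardizing the sample mean by `se_tau` rather than `se_classical` is what restores approximate normality of the standardized statistic under local dependence, the property the paper studies by simulation.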

  4. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial cover of the study area, and a lower sampling resolution at small scales, resulting in local data uncertainties, with better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. We built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64.
With increasing numbers of sampling points per field, we averaged the measured abundances within each field to obtain a more representative value of the field average. Doubling the samples per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samples and the performance criteria can be described with a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design, the assessment of model results, and ecological inferences.
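The saturating gain from additional samples per field can be reproduced with a toy calculation. The numbers below are illustrative assumptions, not the Weiherbach data: 65 "fields" echo the 65 field measurements, but the between-field and within-field standard deviations are invented. When within-field variability is large relative to between-field differences, the correlation between estimated and true field means climbs steeply for the first few samples per field and then flattens.

```python
import numpy as np

rng = np.random.default_rng(11)
n_fields = 65
true_means = rng.normal(10, 2, n_fields)       # between-field variation
within_sd = 6.0                                # large small-scale variability

def corr_with_truth(m):
    """Correlate true field means with means of m random samples per field."""
    samples = true_means[:, None] + rng.normal(0, within_sd, (n_fields, m))
    return np.corrcoef(true_means, samples.mean(axis=1))[0, 1]

corrs = {m: corr_with_truth(m) for m in (1, 2, 5, 50)}
# correlation rises quickly with m, then saturates: averaging shrinks the
# within-field noise on the field mean by a factor of sqrt(m)
```

The sqrt(m) noise reduction is the mechanism behind the saturation curve reported in the abstract; the toy correlations are not meant to match the published 0.64/0.73/0.97 values.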

  5. Features of the energy structure of acoustic fields in the ocean with two-dimensional random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Gulin, O. E.; Yaroshchuk, I. O.

    2017-03-01

    The paper is devoted to the analytic study and numerical simulation of mid-frequency acoustic signal propagation in a two-dimensional inhomogeneous random shallow-water medium. The study was carried out by the cross-section method (local modes). We present original theoretical estimates for the behavior of the average acoustic field intensity and show that at different distances, the features of propagation loss behavior are determined by the intensity of the fluctuations and their horizontal scale, and depend on the initial regular parameters, such as the emission frequency and the magnitude of sound losses in the bottom. We establish analytically that, for the considered waveguide and sound frequency parameters, the mode-coupling effect has a local character and weakly influences the statistics. We also establish that the specific form of the spatial spectrum of sound-velocity inhomogeneities has an insignificant effect on the statistical patterns of the field intensity at the shallow-water distances of practical interest.

  6. Reciprocity in spatial evolutionary public goods game on double-layered network

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Yook, Soon-Hyung; Kim, Yup

    2016-08-01

    Spatial evolutionary games have mainly been studied on a single, isolated network. However, in real-world systems, many interaction topologies are not isolated; rather, many different types of networks are inter-connected to each other. In this study, we investigate the spatial evolutionary public goods game (SEPGG) on double-layered random networks (DRN). Based on mean-field-type arguments and numerical simulations, we find that SEPGG on DRN shows very rich phenomena. In particular, depending on the size of each layer, the intra-layer connectivity, and the inter-layer couplings, the network reciprocity of SEPGG on DRN can be drastically enhanced through the inter-layer coupling. Furthermore, SEPGG on DRN provides a more general framework that encompasses the evolutionary dynamics on multiplex networks and on inter-connected networks at the same time.
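
    A minimal sketch of a public goods game on a double-layered random network, assuming Erdős-Rényi layers, one-to-one inter-layer links, and a simple payoff-based imitation rule. The layer size, connection probability, enhancement factor, and update rule below are all illustrative choices, not the paper's exact dynamics.

```python
import random

random.seed(7)

N = 60          # nodes per layer (hypothetical size)
p_intra = 0.08  # Erdos-Renyi intra-layer connection probability
r_factor = 3.5  # public-goods enhancement factor

# build two Erdos-Renyi layers; node i in layer 0 is inter-connected to node i in layer 1
adj = {(L, i): set() for L in (0, 1) for i in range(N)}
for L in (0, 1):
    for i in range(N):
        for j in range(i + 1, N):
            if random.random() < p_intra:
                adj[(L, i)].add((L, j)); adj[(L, j)].add((L, i))
for i in range(N):  # one-to-one inter-layer coupling
    adj[(0, i)].add((1, i)); adj[(1, i)].add((0, i))

strat = {v: random.random() < 0.5 for v in adj}  # True = cooperate

def payoff(v):
    # v plays in the group centred on itself and in each neighbour-centred group;
    # cooperators pay a unit cost per group, the pot is multiplied by r and shared
    total = 0.0
    for centre in [v] + list(adj[v]):
        group = [centre] + list(adj[centre])
        pot = r_factor * sum(strat[m] for m in group)
        total += pot / len(group) - (1.0 if strat[v] else 0.0)
    return total

nodes = list(adj)
for step in range(2000):  # imitation dynamics: copy a random neighbour if it earns more
    v = random.choice(nodes)
    w = random.choice(list(adj[v]))
    if payoff(w) > payoff(v):
        strat[v] = strat[w]

coop_fraction = sum(strat.values()) / len(strat)
```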

  7. Reciprocity in spatial evolutionary public goods game on double-layered network

    PubMed Central

    Kim, Jinho; Yook, Soon-Hyung; Kim, Yup

    2016-01-01

    Spatial evolutionary games have mainly been studied on a single, isolated network. However, in real-world systems, many interaction topologies are not isolated; rather, many different types of networks are inter-connected to each other. In this study, we investigate the spatial evolutionary public goods game (SEPGG) on double-layered random networks (DRN). Based on mean-field-type arguments and numerical simulations, we find that SEPGG on DRN shows very rich phenomena. In particular, depending on the size of each layer, the intra-layer connectivity, and the inter-layer couplings, the network reciprocity of SEPGG on DRN can be drastically enhanced through the inter-layer coupling. Furthermore, SEPGG on DRN provides a more general framework that encompasses the evolutionary dynamics on multiplex networks and on inter-connected networks at the same time. PMID:27503801

  8. Measuring Ethnic Preferences in Bosnia and Herzegovina with Mobile Advertising

    PubMed Central

    Weidmann, Nils B.

    2016-01-01

    We present a field experiment that uses geo-referenced smartphone advertisements to measure ethnic preferences at a highly disaggregated level. Different types of banners advertising a vote matching tool are randomly displayed to mobile Internet users in Bosnia and Herzegovina, while recording their spatial coordinates. Differences in the response (click) rate to different ethnic cues on these banners are used to measure temporal and spatial variation in ethnic preferences among the population of Bosnia and Herzegovina. Our study lays out the theoretical and practical underpinnings of this technology and discusses its potential for future applications, but also highlights limitations of this approach. PMID:28005924

  9. Measuring Ethnic Preferences in Bosnia and Herzegovina with Mobile Advertising.

    PubMed

    Nisser, Annerose; Weidmann, Nils B

    2016-01-01

    We present a field experiment that uses geo-referenced smartphone advertisements to measure ethnic preferences at a highly disaggregated level. Different types of banners advertising a vote matching tool are randomly displayed to mobile Internet users in Bosnia and Herzegovina, while recording their spatial coordinates. Differences in the response (click) rate to different ethnic cues on these banners are used to measure temporal and spatial variation in ethnic preferences among the population of Bosnia and Herzegovina. Our study lays out the theoretical and practical underpinnings of this technology and discusses its potential for future applications, but also highlights limitations of this approach.

  10. Pulsed field gradients in simulations of one- and two-dimensional NMR spectra.

    PubMed

    Meresi, G H; Cuperlovic, M; Palke, W E; Gerig, J T

    1999-03-01

    A method for the inclusion of the effects of z-axis pulsed field gradients in computer simulations of an arbitrary pulsed NMR experiment with spin-1/2 nuclei is described. Recognizing that the phase acquired by a coherence following the application of a z-axis pulsed field gradient bears a fixed relation to its order and the spatial position of the spins in the sample tube, the sample is regarded as a collection of volume elements, each phase-encoded by a characteristic, spatially dependent precession frequency. The evolution of the sample's density matrix is thus obtained by computing the evolution of the density matrix for each volume element. Following the last gradient pulse, these density matrices are combined to form a composite density matrix which evolves through the rest of the experiment to yield the observable signal. This approach is implemented in a program which includes capabilities for rigorous inclusion of spin relaxation by dipole-dipole, chemical shift anisotropy, and random field mechanisms, plus the effects of arbitrary RF fields. Mathematical procedures for accelerating these calculations are described. The approach is illustrated by simulations of representative one- and two-dimensional NMR experiments. Copyright 1999 Academic Press.
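
    The volume-element bookkeeping can be sketched as follows: each element at height z accumulates a phase proportional to the coherence order, the gradient pulse area, and z, and the observed signal is the average over elements. A single gradient dephases the coherence across the tube, while an equal-and-opposite gradient pair refocuses it. The lumped phase factor and voxel count below are arbitrary illustrative values, not parameters of the authors' program.

```python
import cmath

n_voxels = 400
gamma_G_tau = 50.0  # hypothetical lumped phase factor (radians per unit z) per gradient pulse

# sample tube as a stack of volume elements at positions z in [0, 1)
zs = [(k + 0.5) / n_voxels for k in range(n_voxels)]

def net_signal(gradient_areas, coherence_orders):
    # each volume element accumulates phase p * (gamma*G*tau) * z per pulse;
    # the observed signal magnitude is the average over volume elements
    total = 0j
    for z in zs:
        phase = sum(p * g * gamma_G_tau * z
                    for g, p in zip(gradient_areas, coherence_orders))
        total += cmath.exp(1j * phase)
    return abs(total) / n_voxels

dephased = net_signal([+1.0], [1])            # single gradient: coherence spread over z
refocused = net_signal([+1.0, -1.0], [1, 1])  # equal and opposite gradients: echo
```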

  11. Bayesian estimation of the transmissivity spatial structure from pumping test data

    NASA Astrophysics Data System (ADS)

    Demir, Mehmet Taner; Copty, Nadim K.; Trinchero, Paolo; Sanchez-Vila, Xavier

    2017-06-01

    Estimating the statistical parameters (mean, variance, and integral scale) that define the spatial structure of the transmissivity or hydraulic conductivity fields is a fundamental step for the accurate prediction of subsurface flow and contaminant transport. In practice, the determination of the spatial structure is a challenge because of spatial heterogeneity and data scarcity. In this paper, we describe a novel approach that uses time-drawdown data from multiple pumping tests to determine the transmissivity statistical spatial structure. The method builds on the pumping test interpretation procedure of Copty et al. (2011) (Continuous Derivation method, CD), which uses the time-drawdown data and its time derivative to estimate apparent transmissivity values as a function of radial distance from the pumping well. A Bayesian approach is then used to infer the statistical parameters of the transmissivity field by combining prior information about the parameters and the likelihood function expressed in terms of radially-dependent apparent transmissivities determined from pumping tests. A major advantage of the proposed Bayesian approach is that the likelihood function is readily determined from randomly generated multiple realizations of the transmissivity field, without the need to solve the groundwater flow equation. Applying the method to synthetically-generated pumping test data, we demonstrate that, through a relatively simple procedure, information on the spatial structure of the transmissivity may be inferred from pumping tests. It is also shown that the prior parameter distribution has a significant influence on the estimation, given the non-uniqueness of the inverse problem. Results also indicate that the reliability of the estimated transmissivity statistical parameters increases with the number of available pumping tests.
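
    The idea of building a likelihood from randomly generated realizations, rather than solving the flow equation, can be caricatured in one dimension: for each point of a parameter grid, simulate an ensemble of apparent log-transmissivity values and score the observed data against the simulated distribution. The Gaussian data model, grid ranges, and ensemble size below are hypothetical simplifications, not the paper's actual procedure.

```python
import random, math

random.seed(3)

# hypothetical "apparent" log-transmissivity values recovered from several pumping tests
true_mu, true_sd = -4.0, 0.8
data = [random.gauss(true_mu, true_sd) for _ in range(30)]

mus = [-6.0 + 0.2 * k for k in range(21)]  # grid on the mean (flat prior)
sds = [0.2 + 0.2 * k for k in range(10)]   # grid on the standard deviation (flat prior)

def mc_log_likelihood(mu, sd, n_real=2000):
    # estimate the sampling distribution of apparent values from random realizations
    # (here summarized by a Gaussian fit to the simulated ensemble), then score the data
    sims = [random.gauss(mu, sd) for _ in range(n_real)]
    m = sum(sims) / n_real
    v = sum((s - m) ** 2 for s in sims) / n_real
    return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v) for x in data)

# with flat priors the posterior mode is the maximum of the Monte Carlo likelihood
post = {(m, s): mc_log_likelihood(m, s) for m in mus for s in sds}
mu_hat, sd_hat = max(post, key=post.get)
```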

  12. Spatial statistical analysis of basal stem root disease under natural field epidemic of oil palm

    NASA Astrophysics Data System (ADS)

    Kamu, Assis; Phin, Chong Khim; Seman, Idris Abu; Wan, Hoong Hak; Mun, Ho Chong

    2015-02-01

    Oil palm, known scientifically as Elaeis guineensis Jacq., is the most important commodity crop in Malaysia and has greatly contributed to the economic growth of the country. As far as disease is concerned, Basal Stem Rot (BSR), caused by Ganoderma boninense, remains the most important disease in the industry, and it is the most widely studied oil palm disease in Malaysia. However, there is still limited study of the spatial and temporal pattern or distribution of the disease, especially under natural field epidemic conditions in oil palm plantations. The objective of this study is to identify the spatial pattern of BSR disease under a natural field epidemic using two geospatial analytical techniques: quadrat analysis for the first-order properties of point pattern analysis, and nearest-neighbor analysis (NNA) for the second-order properties of point pattern analysis. Two study sites with trees of different ages were selected; both sites are located in Tawau, Sabah and managed by the same company. The results showed that at least one of the point pattern analyses used, namely NNA (i.e. the second-order properties of point pattern analysis), confirmed that the disease pattern exhibits complete spatial randomness. This suggests that the disease does not spread from tree to tree and that the age of the palms does not play a significant role in determining the spatial pattern of the disease. Knowledge of the spatial pattern of the disease should help disease management programs and the industry in the future. The statistical modelling is expected to help in identifying the right model to estimate the yield loss of oil palm due to BSR disease in the future.
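
    A standard way to test for complete spatial randomness (CSR) with nearest-neighbour analysis is the Clark-Evans ratio R: the observed mean nearest-neighbour distance divided by its expectation under CSR, 1/(2*sqrt(density)). The sketch below uses simulated positions in a unit square (edge effects ignored) rather than the study's field data.

```python
import random, math

random.seed(42)

n = 300
pts = [(random.random(), random.random()) for _ in range(n)]  # hypothetical tree positions

def mean_nn_distance(points):
    # brute-force nearest-neighbour distance, averaged over all points
    total = 0.0
    for i, (x, y) in enumerate(points):
        d = min(math.hypot(x - a, y - b)
                for j, (a, b) in enumerate(points) if j != i)
        total += d
    return total / len(points)

density = n / 1.0                                # points per unit area
observed = mean_nn_distance(pts)
expected_csr = 1.0 / (2.0 * math.sqrt(density))  # expected mean NN distance under CSR
R = observed / expected_csr                      # ~1 random, <1 clustered, >1 regular
```

    For field data, R significantly below 1 would indicate tree-to-tree clustering of diseased palms; values near 1, as reported above, are consistent with CSR.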

  13. Reducing Spatial Uncertainty Through Attentional Cueing Improves Contrast Sensitivity in Regions of the Visual Field With Glaucomatous Defects

    PubMed Central

    Phu, Jack; Kalloniatis, Michael; Khuu, Sieu K.

    2018-01-01

    Purpose: Current clinical perimetric test paradigms present stimuli randomly to various locations across the visual field (VF), inherently introducing spatial uncertainty, which reduces contrast sensitivity. In the present study, we determined the extent to which spatial uncertainty affects contrast sensitivity in glaucoma patients by minimizing spatial uncertainty through attentional cueing. Methods: Six patients with open-angle glaucoma and six healthy subjects underwent laboratory-based psychophysical testing to measure contrast sensitivity at preselected locations at two eccentricities (9.5° and 17.5°) with two stimulus sizes (Goldmann sizes III and V) under different cueing conditions: 1, 2, 4, or 8 points verbally cued. Method of Constant Stimuli and a single-interval forced-choice procedure were used to generate frequency of seeing (FOS) curves at locations with and without VF defects. Results: At locations with VF defects, cueing minimizes spatial uncertainty and improves sensitivity under all conditions. The effect of cueing was maximal when one point was cued, and rapidly diminished when more points were cued (no change to baseline with 8 points cued). The slope of the FOS curve steepened with reduced spatial uncertainty. Locations with normal sensitivity in glaucomatous eyes had similar performance to that of healthy subjects. There was a systematic increase in uncertainty with the depth of VF loss. Conclusions: Sensitivity measurements across the VF are negatively affected by spatial uncertainty, which increases with greater VF loss. Minimizing uncertainty can improve sensitivity at locations of deficit. Translational Relevance: Current perimetric techniques introduce spatial uncertainty and may therefore underestimate sensitivity in regions of VF loss. PMID:29600116
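
    Frequency-of-seeing data of this kind are commonly summarized by fitting a psychometric function. The sketch below generates synthetic seen/not-seen counts from a logistic FOS curve and recovers threshold and slope by a grid-search maximum-likelihood fit; the logistic form, units, and trial counts are illustrative assumptions, not the study's exact procedure.

```python
import random, math

random.seed(5)

def p_seen(contrast, thresh, slope):
    # logistic frequency-of-seeing curve (a common choice; the paper's form may differ)
    return 1.0 / (1.0 + math.exp(-(contrast - thresh) / slope))

true_thresh, true_slope = 20.0, 2.0      # hypothetical dB-like units
contrasts = [14 + k for k in range(13)]  # stimulus levels 14..26
trials = 40
data = [(c, sum(random.random() < p_seen(c, true_thresh, true_slope)
                for _ in range(trials)))
        for c in contrasts]

def neg_log_lik(thresh, slope):
    # binomial negative log-likelihood of the seen counts
    nll = 0.0
    for c, seen in data:
        p = min(max(p_seen(c, thresh, slope), 1e-9), 1 - 1e-9)
        nll -= seen * math.log(p) + (trials - seen) * math.log(1 - p)
    return nll

best = min(((t, s) for t in [16 + 0.25 * i for i in range(33)]
                   for s in [0.5 + 0.25 * j for j in range(15)]),
           key=lambda ts: neg_log_lik(*ts))
thresh_hat, slope_hat = best
# a smaller fitted slope parameter corresponds to a steeper FOS curve,
# as reported under reduced spatial uncertainty
```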

  14. Spatial networks

    NASA Astrophysics Data System (ADS)

    Barthélemy, Marc

    2011-02-01

    Complex systems are very often organized under the form of networks where nodes and edges are embedded in space. Transportation and mobility networks, Internet, mobile phone networks, power grids, social and contact networks, and neural networks, are all examples where space is relevant and where topology alone does not contain all the information. Characterizing and understanding the structure and the evolution of spatial networks is thus crucial for many different fields, ranging from urbanism to epidemiology. An important consequence of space on networks is that there is a cost associated with the length of edges which in turn has dramatic effects on the topological structure of these networks. We will thoroughly explain the current state of our understanding of how the spatial constraints affect the structure and properties of these networks. We will review the most recent empirical observations and the most important models of spatial networks. We will also discuss various processes which take place on these spatial networks, such as phase transitions, random walks, synchronization, navigation, resilience, and disease spread.

  15. Effects of ignition location models on the burn patterns of simulated wildfires

    USGS Publications Warehouse

    Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.

    2011-01-01

    Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.

  16. Passive scalar entrainment and mixing in a forced, spatially-developing mixing layer

    NASA Technical Reports Server (NTRS)

    Lowery, P. S.; Reynolds, W. C.; Mansour, N. N.

    1987-01-01

    Numerical simulations are performed for the forced, spatially-developing plane mixing layer in two and three dimensions. Transport of a passive scalar field is included in the computation. This, together with the allowance for spatial development in the simulations, affords the opportunity to study the asymmetric entrainment of irrotational fluid into the layer. The inclusion of a passive scalar field provides a means for simulating the effect of this entrainment asymmetry on the generation of 'products' from a 'fast' chemical reaction. Further, the three-dimensional simulations provide useful insight into the effect of streamwise structures on these entrainment and 'fast' reaction processes. Results from a two-dimensional simulation indicate that 1.22 parts of high-speed fluid are entrained for every one part of low-speed fluid. Inclusion of streamwise vortices at the inlet plane of a three-dimensional simulation indicates a further increase in asymmetric entrainment, to 1.44:1. Results from a final three-dimensional simulation are presented. In this case, a random velocity perturbation is imposed at the inlet plane. The results indicate the 'natural' development of the large spanwise structures characteristic of the mixing layer.

  17. Moving from spatially segregated to transparent motion: a modelling approach

    PubMed Central

    Durant, Szonya; Donoso-Barrera, Alejandra; Tan, Sovira; Johnston, Alan

    2005-01-01

    Motion transparency, in which patterns of moving elements group together to give the impression of lacy overlapping surfaces, provides an important challenge to models of motion perception. It has been suggested that we perceive transparent motion when the shape of the velocity histogram of the stimulus is bimodal. To investigate this further, random-dot kinematogram motion sequences were created to simulate segregated (perceptually spatially separated) and transparent (perceptually overlapping) motion. The motion sequences were analysed using the multi-channel gradient model (McGM) to obtain the speed and direction at every pixel of each frame of the motion sequences. The velocity histograms obtained were found to be quantitatively similar and all were bimodal. However, the spatial and temporal properties of the velocity field differed between segregated and transparent stimuli. Transparent stimuli produced patches of rightward and leftward motion that varied in location over time. This demonstrates that we can successfully differentiate between these two types of motion on the basis of the time varying local velocity field. However, the percept of motion transparency cannot be based simply on the presence of a bimodal velocity histogram. PMID:17148338

  18. A spatial error model with continuous random effects and an application to growth convergence

    NASA Astrophysics Data System (ADS)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
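
    For reference, the Matérn covariance has simple closed forms at the half-integer smoothness values most used in practice. The sketch below implements those forms; the parameter names (sigma2 for variance, rho for range, nu for smoothness) are this sketch's conventions, and defaults are illustrative.

```python
import math

def matern(h, sigma2=1.0, rho=1.0, nu=1.5):
    # Matern covariance at distance h, closed forms for nu in {1/2, 3/2, 5/2}
    if h == 0.0:
        return sigma2
    a = h / rho
    if nu == 0.5:                        # exponential covariance
        return sigma2 * math.exp(-a)
    if nu == 1.5:
        t = math.sqrt(3.0) * a
        return sigma2 * (1.0 + t) * math.exp(-t)
    if nu == 2.5:
        t = math.sqrt(5.0) * a
        return sigma2 * (1.0 + t + t * t / 3.0) * math.exp(-t)
    raise ValueError("only nu in {0.5, 1.5, 2.5} implemented in this sketch")

c0 = matern(0.0)
c_near = matern(0.1)
c_far = matern(3.0)
```

    Larger nu gives smoother random effects; nu = 0.5 recovers the exponential covariance familiar from geostatistics.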

  19. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  20. Random waves in the brain: Symmetries and defect generation in the visual cortex

    NASA Astrophysics Data System (ADS)

    Schnabel, M.; Kaschube, M.; Löwel, S.; Wolf, F.

    2007-06-01

    How orientation maps in the visual cortex of the brain develop is a matter of long standing debate. Experimental and theoretical evidence suggests that their development represents an activity-dependent self-organization process. Theoretical analysis [1] exploring this hypothesis predicted that maps at an early developmental stage are realizations of Gaussian random fields exhibiting a rigorous lower bound for their densities of topological defects, called pinwheels. As a consequence, lower pinwheel densities, if observed in adult animals, are predicted to develop through the motion and annihilation of pinwheel pairs. Despite being valid for a large class of developmental models, this result depends on the symmetries of the models and thus of the predicted random field ensembles. In [1] invariance of the orientation map's statistical properties under independent space rotations and orientation shifts was assumed. However, full rotation symmetry appears to be broken by interactions of cortical neurons, e.g. selective couplings between groups of neurons with collinear orientation preferences [2]. A recently proposed new symmetry, called shift-twist symmetry [3], stating that spatial rotations have to occur together with orientation shifts in order to be an appropriate symmetry transformation, is more consistent with this organization. Here we generalize our random field approach to this important symmetry class. We propose a new class of shift-twist symmetric Gaussian random fields and derive the general correlation functions of this ensemble. It turns out that despite strong effects of the shift-twist symmetry on the structure of the correlation functions and on the map layout, the lower bound on the pinwheel densities remains unaffected, predicting pinwheel annihilation in systems with low pinwheel densities.
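
    The Gaussian-random-field picture can be illustrated numerically: superposing plane waves with wavevectors on a ring and random phases yields a complex field whose argument encodes orientation preference and whose zeros are the pinwheels; for such isotropic ring-spectrum ensembles the expected pinwheel density is pi per squared column spacing. The sketch below uses the plain rotation-symmetric ensemble (not the shift-twist one) and counts zeros by phase winding around grid plaquettes; all sizes are illustrative.

```python
import random, cmath, math

random.seed(11)

k0 = 2 * math.pi  # typical wavenumber, so the column spacing Lambda = 1
n_waves = 40
waves = [(k0 * math.cos(t), k0 * math.sin(t), 2 * math.pi * random.random())
         for t in (2 * math.pi * random.random() for _ in range(n_waves))]

def z(x, y):
    # complex random-wave field; arg(z) plays the role of (twice the) preferred orientation
    return sum(cmath.exp(1j * (kx * x + ky * y + ph)) for kx, ky, ph in waves)

# sample the field on an L x L region and locate pinwheels (zeros of z)
L, ngrid = 6.0, 120
h = L / ngrid
field = [[z(i * h, j * h) for j in range(ngrid + 1)] for i in range(ngrid + 1)]

def winding(i, j):
    # net phase change around one plaquette; +-1 marks an enclosed zero
    corners = [field[i][j], field[i + 1][j], field[i + 1][j + 1],
               field[i][j + 1], field[i][j]]
    tot = sum(cmath.phase(b / a) for a, b in zip(corners, corners[1:]))
    return round(tot / (2 * math.pi))

pinwheels = sum(abs(winding(i, j)) for i in range(ngrid) for j in range(ngrid))
density = pinwheels / (L * L)  # pinwheels per Lambda^2; theory predicts ~pi
```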

  1. Covariance analyses of satellite-derived mesoscale wind fields

    NASA Technical Reports Server (NTRS)

    Maddox, R. A.; Vonder Haar, T. H.

    1979-01-01

    Statistical structure functions have been computed independently for nine satellite-derived mesoscale wind fields that were obtained on two different days. Small cumulus clouds were tracked at 5 min intervals, but since these clouds occurred primarily in the warm sectors of midlatitude cyclones the results cannot be considered representative of the circulations within cyclones in general. The field structure varied considerably with time and was especially affected if mesoscale features were observed. The wind fields on the 2 days studied were highly anisotropic with large gradients in structure occurring approximately normal to the mean flow. Structure function calculations for the combined set of satellite winds were used to estimate random error present in the fields. It is concluded for these data that the random error in vector winds derived from cumulus cloud tracking using high-frequency satellite data is less than 1.75 m/s. Spatial correlation functions were also computed for the nine data sets. Normalized correlation functions were considerably different for u and v components and decreased rapidly as data point separation increased for both components. The correlation functions for transverse and longitudinal components decreased less rapidly as data point separation increased.
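
    Estimating random error from structure functions can be sketched in one dimension: for uncorrelated measurement noise of variance sigma_e^2, the structure function D(r) tends to 2*sigma_e^2 as the separation r goes to zero, so the small-lag intercept recovers the error. The synthetic transect below assumes a smooth wind signal plus 1 m/s tracking error; both the signal shape and the error magnitude are hypothetical, not the satellite data.

```python
import random, math

random.seed(2)

# hypothetical 1-D transect: smoothly varying wind component plus random tracking error
n = 2000
err_sd = 1.0  # m/s, the random error we try to recover
truth = [5.0 + 2.0 * math.sin(0.01 * k) for k in range(n)]
obs = [t + random.gauss(0.0, err_sd) for t in truth]

def structure_function(series, lag):
    # D(lag) = mean squared difference at the given data-point separation
    diffs = [(series[k + lag] - series[k]) ** 2 for k in range(len(series) - lag)]
    return sum(diffs) / len(diffs)

# at the smallest lag the true-signal increment is negligible,
# so D(1) ~ 2 * sigma_e^2 and the intercept yields the error estimate
d1 = structure_function(obs, 1)
est_err = math.sqrt(d1 / 2.0)
```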

  2. Ultrafast random-access scanning in two-photon microscopy using acousto-optic deflectors.

    PubMed

    Salomé, R; Kremer, Y; Dieudonné, S; Léger, J-F; Krichevsky, O; Wyart, C; Chatenay, D; Bourdieu, L

    2006-06-30

    Two-photon scanning microscopy (TPSM) is a powerful tool for imaging deep inside living tissues with sub-cellular resolution. The temporal resolution of TPSM is however strongly limited by the galvanometric mirrors used to steer the laser beam. Fast physiological events can therefore only be followed by scanning repeatedly a single line within the field of view. Because acousto-optic deflectors (AODs) are non-mechanical devices, they allow access to any point within the field of view on a microsecond time scale and are therefore excellent candidates to improve the temporal resolution of TPSM. However, the use of AOD-based scanners with femtosecond pulses raises several technical difficulties. In this paper, we describe an all-digital TPSM setup based on two crossed AODs. It includes in particular an acousto-optic modulator (AOM) placed at 45 degrees with respect to the AODs to pre-compensate for the large spatial distortions of femtosecond pulses occurring in the AODs, in order to optimize the spatial resolution and the fluorescence excitation. Our setup allows recording from freely selectable points of interest at high speed (1 kHz). By maximizing the time spent on points of interest, random-access TPSM (RA-TPSM) constitutes a promising method for multiunit recordings with millisecond resolution in biological tissues.

  3. Modelling past land use using archaeological and pollen data

    NASA Astrophysics Data System (ADS)

    Pirzamanbein, Behnaz; Lindström, Johan; Poska, Anneli; Gaillard-Lemdahl, Marie-José

    2016-04-01

    Accurate maps of past land use are necessary for studying the impact of anthropogenic land-cover changes on climate and biodiversity. We develop a Bayesian hierarchical model to reconstruct land use using Gaussian Markov random fields. The model uses two observation sets: 1) archaeological data, representing human settlements, urbanization and agricultural findings; and 2) pollen-based estimates of the three land-cover types Coniferous forest, Broadleaved forest and Unforested/Open land. The pollen-based estimates are obtained from the REVEALS model, based on pollen counts from lakes and bogs. Our model uses the sparse pollen-based estimates to reconstruct the spatially continuous cover of the three land-cover types. Using the open-land component and the archaeological data, the extent of land use is reconstructed. The model is applied to three time periods, centred around 1900 CE, 1000 BCE and 4000 BCE, over Sweden, for which both pollen-based estimates and archaeological data are available. To estimate the model parameters and land use, a block-updated Markov chain Monte Carlo (MCMC) algorithm is applied. Uncertainties in the land-use predictions are computed from the MCMC posterior samples. Due to the lack of good historical land-use data, model results are evaluated by cross-validation. Keywords: spatial reconstruction, Gaussian Markov random field, fossil pollen records, archaeological data, human land-use, prediction uncertainty.

  4. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a^(-1)∇, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.

  5. Continuous time quantum random walks in free space

    NASA Astrophysics Data System (ADS)

    Eichelkraut, Toni; Vetter, Christian; Perez-Leija, Armando; Christodoulides, Demetrios; Szameit, Alexander

    2014-05-01

    We show theoretically and experimentally that two-dimensional continuous time coherent random walks are possible in free space, that is, in the absence of any external potential, by properly tailoring the associated initial wave function. These effects are experimentally demonstrated using classical paraxial light. Evidently, the usage of classical beams to explore the dynamics of point-like quantum particles is possible since both phenomena are mathematically equivalent. This in turn makes our approach suitable for the realization of random walks using different quantum particles, including electrons and photons. To study the spatial evolution of a wavefunction theoretically, we consider the one-dimensional paraxial wave equation (i∂_z + (1/2)∂_x^2)Ψ = 0. Starting with the initially localized wavefunction Ψ(x, 0) = exp[-x^2/(2σ^2)] J_0(αx), one can show that the evolution of such Gaussian-apodized Bessel envelopes within a region of validity resembles the probability pattern of a quantum walker traversing a uniform lattice. In order to generate the desired input field in our experimental setting we shape the amplitude and phase of a collimated light beam originating from a classical HeNe laser (633 nm) utilizing a spatial light modulator.
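
    The quoted initial-value problem can be integrated numerically with a spectral method: each plane-wave component of the initial Gaussian-apodized Bessel envelope simply acquires a phase exp(-i k^2 z / 2) under the paraxial equation. The grid, sigma, alpha, and propagation distance below are illustrative; the hand-rolled DFT keeps the sketch dependency-free, and the unitarity of the free-space propagator shows up as norm conservation.

```python
import cmath, math

def j0(x):
    # Bessel J0 via its integral representation, midpoint quadrature (sketch accuracy)
    m = 64
    return sum(math.cos(x * math.sin(math.pi * (k + 0.5) / m)) for k in range(m)) / m

# grid and initial envelope Psi(x, 0) = exp(-x^2 / 2 sigma^2) * J0(alpha x)
sigma, alpha = 4.0, 3.0
N, dx = 256, 0.15
xs = [(k - N // 2) * dx for k in range(N)]
psi0 = [cmath.exp(-x * x / (2 * sigma * sigma)) * j0(alpha * x) for x in xs]

def dft(v, sign):
    n = len(v)
    return [sum(v[m] * cmath.exp(sign * 2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def propagate(psi, z):
    # spectral solution of i dPsi/dz + (1/2) d2Psi/dx2 = 0:
    # each plane wave of wavenumber k gains the phase exp(-i k^2 z / 2)
    n = len(psi)
    spec = dft(psi, -1)
    ks = [2 * math.pi * (m if m < n // 2 else m - n) / (n * dx) for m in range(n)]
    spec = [s * cmath.exp(-1j * k * k * z / 2) for s, k in zip(spec, ks)]
    return [p / n for p in dft(spec, +1)]

psi_z = propagate(psi0, 2.0)
norm0 = sum(abs(p) ** 2 for p in psi0) * dx
norm_z = sum(abs(p) ** 2 for p in psi_z) * dx  # conserved by the unitary propagator
```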

  6. Assessing the role of spatial correlations during collective cell spreading

    PubMed Central

    Treloar, Katrina K.; Simpson, Matthew J.; Binder, Benjamin J.; McElwain, D. L. Sean; Baker, Ruth E.

    2014-01-01

    Spreading cell fronts are essential features of development, repair and disease processes. Many mathematical models used to describe the motion of cell fronts, such as Fisher's equation, invoke a mean-field assumption which implies that there is no spatial structure, such as cell clustering, present. Here, we examine the presence of spatial structure using a combination of in vitro circular barrier assays, discrete random walk simulations and pair correlation functions. In particular, we analyse discrete simulation data using pair correlation functions to show that spatial structure can form in a spreading population of cells either through sufficiently strong cell-to-cell adhesion or sufficiently rapid cell proliferation. We analyse images from a circular barrier assay describing the spreading of a population of MM127 melanoma cells using the same pair correlation functions. Our results indicate that the spreading melanoma cell populations remain very close to spatially uniform, suggesting that the strength of cell-to-cell adhesion and the rate of cell proliferation are both sufficiently small so as not to induce any spatial patterning in the spreading populations. PMID:25026987
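
    A pair correlation function of the kind used here compares the number of point pairs at a given separation with the expectation for a spatially uniform (CSR) pattern: g close to 1 indicates no spatial structure, g > 1 indicates clustering. The sketch below evaluates g for a simulated random pattern on a periodic square (toroidal distances make the CSR normalization exact); the positions are synthetic, not the assay data.

```python
import random, math

random.seed(9)

n, L = 400, 10.0
pts = [(L * random.random(), L * random.random()) for _ in range(n)]  # CSR point pattern

def pair_correlation(points, r_lo, r_hi, box):
    # count pairs with separation in [r_lo, r_hi) using toroidal (wrap-around) distances,
    # then normalise by the expected pair count for a uniform pattern
    count = 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = abs(points[i][0] - points[j][0]); dx = min(dx, box - dx)
            dy = abs(points[i][1] - points[j][1]); dy = min(dy, box - dy)
            if r_lo <= math.hypot(dx, dy) < r_hi:
                count += 1
    annulus = math.pi * (r_hi ** 2 - r_lo ** 2)
    expected = 0.5 * len(points) * (len(points) - 1) * annulus / (box * box)
    return count / expected

g_short = pair_correlation(pts, 0.2, 0.6, L)  # short-range structure (adhesion would raise this)
g_long = pair_correlation(pts, 2.0, 2.4, L)   # long-range structure
```

    For the melanoma data described above, g near 1 at short separations is what "very close to spatially uniform" means quantitatively.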

  7. Species extinction thresholds in the face of spatially correlated periodic disturbance.

    PubMed

    Liao, Jinbao; Ying, Zhixia; Hiebeler, David E; Wang, Yeqiao; Takada, Takenori; Nijs, Ivan

    2015-10-20

    The spatial correlation of disturbance is gaining attention in landscape ecology, but knowledge is still lacking on how species traits determine extinction thresholds under spatially correlated disturbance regimes. Here we develop a pair approximation model to explore species extinction risk in a lattice-structured landscape subject to aggregated periodic disturbance. Increasing disturbance extent and frequency accelerated population extinction irrespective of whether dispersal was local or global. Spatial correlation of disturbance likewise increased species extinction risk, but only for local dispersers. This indicates that models based on randomly simulated disturbances (e.g., mean-field or non-spatial models) may underestimate real extinction rates. Compared to local dispersal, species with global dispersal tolerated more severe disturbance, suggesting that the spatial correlation of disturbance favors long-range dispersal from an evolutionary perspective. Following disturbance, intraspecific competition greatly enhanced the extinction risk of distance-limited dispersers, while it surprisingly did not influence the extinction thresholds of global dispersers, apart from decreasing population density to some degree. As species respond differently to disturbance regimes with different spatiotemporal properties, different regimes may accommodate different species.

  8. Identification of Volcanic Landforms and Processes on Earth and Mars using Geospatial Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Fagents, S. A.; Hamilton, C. W.

    2009-12-01

    Nearest neighbor (NN) analysis enables the identification of landforms using non-morphological parameters and can be useful for constraining the geological processes contributing to observed patterns of spatial distribution. Explosive interactions between lava and water can generate volcanic rootless cone (VRC) groups that are well suited to geospatial analyses because they consist of a large number of landforms that share a common formation mechanism. We have applied NN analysis tools to quantitatively compare the spatial distribution of VRCs in the Laki lava flow in Iceland to analogous landforms in the Tartarus Colles Region of eastern Elysium Planitia, Mars. Our results show that rootless eruption sites on both Earth and Mars exhibit systematic variations in spatial organization that are related to variations in the distribution of resources (lava and water) at different scales. Field observations in Iceland reveal that VRC groups are composite structures formed by the emplacement of chronologically and spatially distinct domains. Regionally, rootless cones cluster into groups and domains, but within domains NN distances exhibit random to repelled distributions. This suggests that on regional scales VRCs cluster in locations that contain sufficient resources, whereas on local scales rootless eruption sites tend to self-organize into distributions that maximize the utilization of limited resources (typically groundwater). Within the Laki lava flow, near-surface water is abundant and pre-eruption topography appears to exert the greatest control on both lava inundation regions and clustering of rootless eruption sites. In contrast, lava thickness appears to be the controlling factor in the formation of rootless eruption sites in the Tartarus Colles Region. 
A critical lava thickness may be required to initiate rootless eruptions on Mars because the lava flows must contain sufficient heat for transferred thermal energy to reach the underlying cryosphere and volatilize buried ground ice. In both environments, the spatial distribution of rootless eruption sites on local scales may either be random, which indicates that rootless eruption sites form independently of one another, or repelled, which implies resource limitation. Where competition for limited groundwater causes rootless eruption sites to develop greater-than-random NN separation, rootless eruption sites can be modeled as a system of pumping wells that extract water from a shared aquifer, generating repelled distributions through the non-initiation or early cessation of rootless explosive activity at sites with insufficient access to groundwater. Thus, statistical NN analyses can be combined with field observations and remote sensing to obtain information about self-organization processes within geological systems and the effects of environmental resource limitation on the spatial distribution of volcanic landforms. NN analyses may also be used to quantitatively compare the spatial distribution of landforms in different planetary environments and to supply non-morphological evidence to discriminate between feature identities and geological formation mechanisms.
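
    The random-versus-repelled distinction is commonly quantified with the Clark-Evans nearest-neighbour ratio, which compares the observed mean NN distance to the Poisson (CSR) expectation 1/(2√density). The sketch below applies it to synthetic point sets and omits edge corrections; it illustrates the statistic, not the authors' full pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def clark_evans(pts, area):
    """Clark-Evans ratio R = (observed mean NN distance) / (CSR expectation).
    R ~ 1: spatially random (Poisson); R < 1: clustered; R > 1: repelled.
    Edge corrections are omitted in this sketch."""
    d, _ = cKDTree(pts).query(pts, k=2)        # column 0 is the point itself
    mean_nn = d[:, 1].mean()
    density = len(pts) / area
    return mean_nn * 2.0 * np.sqrt(density)    # CSR mean is 1 / (2 sqrt(density))

random_pts = rng.uniform(0.0, 100.0, size=(400, 2))
xy = np.arange(5.0, 100.0, 5.0)
grid_pts = np.stack(np.meshgrid(xy, xy), axis=-1).reshape(-1, 2)  # repelled

R_random = clark_evans(random_pts, 100.0**2)
R_grid = clark_evans(grid_pts, 100.0**2)
print(round(R_random, 2), round(R_grid, 2))
```

    A regular vent array, like the pumping-well analogue above, pushes R well above 1, while independently formed vents stay near R = 1.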

  9. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum.
For this specific example, relative density, which can be determined through field measurements, was selected as the field quality control parameter for waste placement. This technique can be extended to include a rigorous performance-based methodology using other parameters (void space criteria, debris-soil mix ratio, pre-loading, etc.). As shown in this example, each parameter range, or set of parameter ranges, can be selected such that it results in an acceptable long-term differential settlement according to the probabilistic model. The methodology can also be used to re-evaluate the long-term differential settlement behavior at closed land disposal facilities to identify any problematic facilities so that remedial action (e.g., reinforcement of upper and intermediate waste layers) can be implemented. Considering the inherent spatial variability in waste and earth materials and the need for engineers to apply sound quantitative practices to engineering analysis, it is important to apply the available probabilistic techniques to problems of differential settlement. One such method to implement probability-based differential settlement analyses for the design of landfill final covers has been presented. The design evaluation technique presented is one tool to bridge the gap from deterministic practice to probabilistic practice. (authors)
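
    The random-field idea can be illustrated with a toy model: draw a spatially correlated field of compressibility, convert it to settlement under a uniform fill, and count how often adjacent cells exceed a distortion criterion. Every parameter below (correlation length, log-normal statistics, fill height, criterion) is an assumption for illustration, not the calibrated model of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Correlated Gaussian random field of log-compressibility on a 200 x 200 grid.
# Correlation length, median, and COV below are illustrative assumptions.
n, corr_cells = 200, 10.0
field = gaussian_filter(rng.standard_normal((n, n)), sigma=corr_cells)
field /= field.std()                                   # rescale to unit variance

compressibility = np.exp(np.log(0.05) + 0.3 * field)   # log-normal, median 0.05
fill_height = 10.0                                     # metres, uniform (toy)
settlement = compressibility * fill_height             # one-parameter toy model

# Differential settlement between adjacent cells as an angular-distortion proxy,
# and the fraction of cell pairs exceeding a hypothetical design criterion.
cell_width = 1.0
distortion = np.abs(np.diff(settlement, axis=0)) / cell_width
p_exceed = float((distortion > 0.02).mean())
print(round(p_exceed, 3))
```

    Repeating this over many realizations and over candidate quality-control limits is what turns the simulation into a design guidance chart of the kind described above.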

  10. Sample design effects in landscape genetics

    USGS Publications Warehouse

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), alleles per locus (5 and 10), individuals sampled (10-300), and generations after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  11. Controlling the stability of nonlinear optical modes via electromagnetically induced transparency

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Liang, Yi-zeng; Lin, Ji; Li, Hui-jun

    2018-02-01

    We propose a scheme to generate and stabilize high-dimensional spatial solitons via electromagnetically induced transparency (EIT). The system we consider is a resonant atomic ensemble with a Λ configuration. We illustrate that under EIT conditions the equation satisfied by the probe field envelope reduces to a saturable nonlinear Schrödinger equation with a trapping potential, provided by a far-detuned laser field and a random magnetic field. We present high-dimensional soliton solutions exhibiting many interesting characteristics, including diversity (i.e., many different types of soliton solutions can be found, including bright, ring multipole bright, ring multipole defect mode, multiring bright, multiring defect mode, and vortex solitons), phase transitions between bright solitons and higher-order defect modes (i.e., the phase transition can be realized by controlling the nonlinear coefficient or the intensity of the trapping potential), and stability (i.e., various solitons can be stabilized by the Gaussian potential provided by the far-detuned laser field, or the random potential provided by the magnetic field). We also find that some solitons are extensions of the linear eigenmodes, whereas others derive entirely from the role of nonlinearity. Compared with previous studies, we not only show the diverse soliton solutions in the same system but also find the boundary of the phase transition for the type of solitons. In addition, we present the possibility of using the random potential to stabilize various solitons and vortices.

  12. Crustal evolution inferred from Apollo magnetic measurements

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Daily, W. D.; Vanyan, L. L.

    1978-01-01

    Magnetic field and solar wind plasma density measurements were analyzed to determine the scale size characteristics of remanent fields at the Apollo 12, 15, and 16 landing sites. Theoretical model calculations of the field-plasma interaction, involving diffusion of the remanent field into the solar plasma, were compared to the data. The information provided by all these experiments shows that remanent fields over most of the lunar surface are characterized by spatial variations as small as a few kilometers. Large regions (50 to 100 km) of the lunar crust were probably uniformly magnetized during early crustal evolution. Bombardment and subsequent gardening of the upper layers of these magnetized regions left randomly oriented, smaller scale (5 to 10 km) magnetic sources close to the surface. The larger scale size fields of magnitude approximately 0.1 gammas are measured by the orbiting subsatellite experiments and the small scale sized remanent fields of magnitude approximately 100 gammas are measured by the surface experiments.

  13. Cooperation for volunteering and partially random partnerships

    NASA Astrophysics Data System (ADS)

    Szabó, György; Vukov, Jeromos

    2004-03-01

    Competition among cooperative, defective, and loner strategies is studied by considering an evolutionary prisoner’s dilemma game for different partnerships. In this game each player can adopt the strategy of one of its coplayers with a probability depending on the difference of the payoffs coming from games with the corresponding coplayers. Our attention is focused on the effects of annealed and quenched randomness in the partnership for a fixed number of coplayers. It is shown that only the loners survive if the four coplayers are chosen randomly (mean-field limit). On the contrary, on the square lattice all three strategies are maintained by cyclic invasions resulting in a self-organizing spatial pattern. If the fixed partnership is described by a regular small-world structure, then a homogeneous oscillation occurs in the population dynamics when the measure of quenched randomness exceeds a threshold value. Similar behavior, with higher sensitivity to the randomness, is found if temporary partners are substituted for the standard ones with some probability at each step of iteration.
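
    In this family of models the payoff-difference-dependent adoption rule is commonly the Fermi function W = 1/(1 + exp[(P_x − P_y)/K]). The sketch below assumes that functional form and an illustrative payoff matrix for the cooperator/defector/loner game on a square lattice with fixed partnerships; it is a schematic of the model class, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)
L, K = 50, 0.1                     # lattice size and noise level (illustrative)
C, D, LO = 0, 1, 2                 # cooperator, defector, loner
b, sigma = 1.5, 0.3                # temptation and loner payoff (assumed values)
payoff = np.array([[1.0, 0.0, sigma],    # rows: focal strategy, cols: coplayer
                   [b,   0.0, sigma],
                   [sigma, sigma, sigma]])
NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
grid = rng.integers(0, 3, size=(L, L))

def total_payoff(g, i, j):
    """Payoff of site (i, j) summed over its four fixed lattice coplayers."""
    return sum(payoff[g[i, j], g[(i + di) % L, (j + dj) % L]] for di, dj in NBRS)

def update(g):
    """One imitation event: a random site compares with a random neighbour and
    adopts its strategy with the Fermi probability (assumed functional form)."""
    i, j = rng.integers(0, L, 2)
    di, dj = NBRS[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L
    w = 1.0 / (1.0 + np.exp((total_payoff(g, i, j) - total_payoff(g, ni, nj)) / K))
    if rng.random() < w:
        g[i, j] = g[ni, nj]

for _ in range(20000):
    update(grid)
densities = np.bincount(grid.ravel(), minlength=3) / L**2
print(densities)                   # C, D, loner fractions after relaxation
```

    Replacing the fixed neighbourhood with freshly drawn random coplayers at each step would move the dynamics toward the mean-field limit discussed above, where only loners survive.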

  14. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  15. Spatial Factors in the Integration of Speed Information

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)

    1995-01-01

    We reported that, for a 2IFC task with multiple Gabor patches in each interval, thresholds for speed discrimination decreased with the number of patches, while simply increasing the area of a single patch produced no such effect. This result could be explained by multiple patches reducing spatial uncertainty. However, the fact that thresholds decrease with number even when the patches are in fixed positions argues against this explanation. We therefore performed additional experiments to explore the lack of an area effect. Three observers did a 2IFC speed discrimination task with 6 Gabor patches in each interval, and were asked to pick the interval in which the gratings moved faster. The 50% contrast patches were placed on a circle at 4 deg. eccentricity, either equally spaced and maximally separated (hexagonal array), or closely spaced, in consecutive positions (string of pearls). For the string-of-pearls condition, the grating phases were either random, or consistent with a full-field grating viewed through multiple Gaussian windows. When grating phases were random, the thresholds for the hexagonal and string-of-pearls layouts were indistinguishable. For the string-of-pearls layout, thresholds in the consistent-phase condition were higher by 15 +/- 6% than in the random-phase condition. (Thresholds increased by 57 +/- 7% in going from 6 patches to a single patch of equivalent area.) For random-phase patches, the lower thresholds for 6 patches do not depend on a specific spacing or spatial layout. Multiple, closely spaced, consistent-phase patches that can be interpreted as a single grating result in thresholds closer to that produced by a single patch. Together, our results suggest that object segmentation may play a role in the integration of speed information.

  16. Use of forecasting signatures to help distinguish periodicity, randomness, and chaos in ripples and other spatial patterns

    USGS Publications Warehouse

    Rubin, D.M.

    1992-01-01

    Forecasting of one-dimensional time series previously has been used to help distinguish periodicity, chaos, and noise. This paper presents two-dimensional generalizations for making such distinctions for spatial patterns. The techniques are evaluated using synthetic spatial patterns and then are applied to a natural example: ripples formed in sand by blowing wind. Tests with the synthetic patterns demonstrate that the forecasting techniques can be applied to two-dimensional spatial patterns, with the same utility and limitations as when applied to one-dimensional time series. One limitation is that some combinations of periodicity and randomness exhibit forecasting signatures that mimic those of chaos. For example, sine waves distorted with correlated phase noise have forecasting errors that increase with forecasting distance, errors that are minimized using nonlinear models at moderate embedding dimensions, and forecasting properties that differ significantly between the original and surrogates. Ripples formed in sand by flowing air or water typically vary in geometry from one to another, even when formed in a flow that is uniform on a large scale; each ripple modifies the local flow or sand-transport field, thereby influencing the geometry of the next ripple downcurrent. Spatial forecasting was used to evaluate the hypothesis that such a deterministic process - rather than randomness or quasiperiodicity - is responsible for the variation between successive ripples. This hypothesis is supported by a forecasting error that increases with forecasting distance, a greater accuracy of nonlinear relative to linear models, and significant differences between forecasts made with the original ripples and those made with surrogate patterns.
Forecasting signatures cannot be used to distinguish ripple geometry from sine waves with correlated phase noise, but this kind of structure can be ruled out by two geometric properties of the ripples: successive ripples are highly correlated in wavelength, and ripple crests display dislocations such as branchings and mergers. © 1992 American Institute of Physics.
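
    The error-versus-forecasting-distance signature can be reproduced with simple analogue (nearest-neighbour) prediction on delay vectors, in the spirit of Sugihara-May forecasting. The sketch below uses a 1-D chaotic series (the logistic map) for brevity rather than a 2-D ripple field, and the embedding dimension, horizon, and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def nn_forecast_error(series, m=3, horizon=5, n_test=200):
    """Analogue (nearest-neighbour) forecasting on delay vectors: predict the
    continuation of each test segment from its closest match elsewhere in the
    series, and return the mean absolute error at horizons 1..horizon."""
    X = np.array([series[i:i + m] for i in range(len(series) - m - horizon)])
    errs = []
    for h in range(1, horizon + 1):
        e = []
        for t in rng.choice(len(X) - 1, size=n_test, replace=False):
            d = np.abs(X - X[t]).sum(axis=1)
            d[t] = np.inf                      # exclude the trivial self-match
            nb = int(d.argmin())
            e.append(abs(series[nb + m - 1 + h] - series[t + m - 1 + h]))
        errs.append(np.mean(e))
    return np.array(errs)

# A chaotic test series: the logistic map at r = 3.9
x = np.empty(3000)
x[0] = 0.3
for i in range(1, 3000):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

err = nn_forecast_error(x)
print(err)
```

    For deterministic chaos the error grows with horizon, as asserted below; for uncorrelated noise the curve would be flat, which is the distinction the paper generalizes to two-dimensional patterns.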

  17. The Ciliate Paramecium Shows Higher Motility in Non-Uniform Chemical Landscapes

    PubMed Central

    Giuffre, Carl; Hinow, Peter; Vogel, Ryan; Ahmed, Tanvir; Stocker, Roman; Consi, Thomas R.; Strickler, J. Rudi

    2011-01-01

    We study the motility behavior of the unicellular protozoan Paramecium tetraurelia in a microfluidic device that can be prepared with a landscape of attracting or repelling chemicals. We investigate the spatial distribution of the positions of the individuals at different time points with methods from spatial statistics and Poisson random point fields. This makes quantitative the informal notion of “uniform distribution” (or lack thereof). Our device is characterized by the absence of large systematic biases due to gravitation and fluid flow. It has the potential to be applied to the study of other aquatic chemosensitive organisms as well. This may result in better diagnostic devices for environmental pollutants. PMID:21494596
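
    The notion of "uniform distribution (or lack thereof)" can be made quantitative with, for example, a chi-square quadrat-count test against a homogeneous Poisson point field. The sketch below runs the test on synthetic positions (quadrat count and sample sizes are illustrative; this is not the organism data):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

def quadrat_test(pts, box, k=5):
    """Chi-square quadrat-count test of complete spatial randomness: under a
    homogeneous Poisson point field, the counts in the k*k equal quadrats share
    a common mean, and the dispersion statistic is ~ chi2(k*k - 1)."""
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                  bins=k, range=[[0, box], [0, box]])
    expected = len(pts) / k**2
    stat = ((counts - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=k * k - 1)

uniform_pts = rng.uniform(0.0, 1.0, size=(500, 2))
clustered_pts = rng.normal(0.5, 0.08, size=(500, 2)).clip(0.0, 1.0)

_, p_uniform = quadrat_test(uniform_pts, 1.0)
_, p_clustered = quadrat_test(clustered_pts, 1.0)
print(p_uniform, p_clustered)
```

    A small p-value rejects spatial uniformity, which is how accumulation of individuals around an attractant would show up in position snapshots.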

  18. Spatial and Alignment Analyses for a Field of Small Volcanic Vents South of Pavonis Mons and Implications for the Tharsis Province, Mars

    NASA Technical Reports Server (NTRS)

    Bleacher, Jacob E.; Glaze, Lori S.; Greeley, Ronald; Hauber, Ernst; Baloga, Stephen; Sakimoto, Susan E. H.; Williams, David A.; Glotch, Timothy D.

    2009-01-01

    A field of small volcanic vents south of Pavonis Mons was mapped with each vent assigned a two-dimensional data point. Nearest neighbor and two-point azimuth analyses were applied to the resulting location data. Nearest neighbor results show that vents within this field are spatially random in a Poisson sense, suggesting that the vents formed independently of each other without sharing a centralized magma source at shallow depth. Two-point azimuth results show that the vents display north-trending alignment relationships between one another. This trend corresponds to the trends of faults and fractures of the Noachian-aged Claritas Fossae, which might extend into our study area buried beneath more recently emplaced lava flows. However, individual elongate vent summit structures do not consistently display the same trend. The development of the volcanic field appears to display tectonic control from buried Noachian-aged structural patterns on small, ascending magma bodies while the surface orientations of the linear vents might reflect different, younger tectonic patterns. These results suggest a complex interaction between magma ascension through the crust, and multiple, older, buried Tharsis-related tectonic structures.

  19. Identification and Simulation of Subsurface Soil patterns using hidden Markov random fields and remote sensing and geophysical EMI data sets

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Wellmann, Florian; Verweij, Elizabeth; von Hebel, Christian; van der Kruk, Jan

    2017-04-01

    Lateral and vertical spatial heterogeneity of subsurface properties such as soil texture and structure influences the available water and resource supply for crop growth. High-resolution mapping of subsurface structures using non-invasive geo-referenced geophysical measurements, like electromagnetic induction (EMI), enables a characterization of 3D soil structures, which have shown correlations to remote sensing information of the crop states. The benefit of EMI is that it can return 3D subsurface information; however, the spatial coverage is limited by the labor-intensive measurement procedure. Active and passive sensors mounted on air- or space-borne platforms return only 2D images, but over much larger spatial extents. Combining both approaches provides a potential pathway to extend the detailed 3D geophysical information to a larger area by using remote sensing information. In this study, we aim at extracting and providing insights into the spatial and statistical correlation of the geophysical and remote sensing observations of the soil/vegetation continuum system. To this end, two key points need to be addressed: 1) how to detect and recognize the geometric patterns (i.e., spatial heterogeneity) from multiple data sets, and 2) how to quantitatively describe the statistical correlation between remote sensing information and geophysical measurements. In the current study, the spatial domain is restricted to shallow depths up to 3 meters, and the geostatistical database contains the normalized difference vegetation index (NDVI) derived from RapidEye satellite images and apparent electrical conductivities (ECa) measured with multi-receiver EMI sensors for nine depths of exploration ranging from 0 to 2.7 m. The integrated data sets are mapped into both the physical space (i.e., the spatial domain) and feature space (i.e., a two-dimensional space framed by the NDVI and the ECa data).
Hidden Markov Random Fields (HMRF) are employed to model the underlying heterogeneities in spatial domain and finite Gaussian mixture models are adopted to quantitatively describe the statistical patterns in terms of center vectors and covariance matrices in feature space. A recently developed parallel stochastic clustering algorithm is adopted to implement the HMRF models and the Markov chain Monte Carlo based Bayesian inference. Certain spatial patterns such as buried paleo-river channels covered by shallow sediments are investigated as typical examples. The results indicate that the geometric patterns of the subsurface heterogeneity can be represented and quantitatively characterized by HMRF. Furthermore, the statistical patterns of the NDVI and the EMI data from the soil/vegetation-continuum system can be inferred and analyzed in a quantitative manner.
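
    The finite Gaussian mixture step can be illustrated in isolation (without the HMRF spatial coupling): fit a two-component mixture in the (NDVI, ECa) feature space by EM, recovering the centre vectors of the two regimes. The synthetic data, the component count, and the diagonal-covariance restriction below are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic two-regime feature space (NDVI, ECa); all numbers illustrative
a = rng.normal([0.3, 10.0], [0.05, 2.0], size=(300, 2))
b = rng.normal([0.7, 30.0], [0.05, 4.0], size=(300, 2))
X = np.vstack([a, b])

# EM for a two-component Gaussian mixture with diagonal covariances
mu = np.quantile(X, [0.25, 0.75], axis=0)        # deterministic spread-out init
var = np.tile(X.var(axis=0), (2, 1))
w = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibilities r[i, k] = P(component k | x_i)
    logp = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                   + np.log(2 * np.pi * var)).sum(axis=-1)
    r = w * np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, centres, and per-dimension variances
    nk = r.sum(axis=0)
    w = nk / len(X)
    mu = (r.T @ X) / nk[:, None]
    for kk in range(2):
        var[kk] = (r[:, kk:kk + 1] * (X - mu[kk]) ** 2).sum(axis=0) / nk[kk]

print(np.sort(mu[:, 0]).round(2))                # NDVI centres of the two regimes
```

    The HMRF extension replaces the independent responsibilities with labels that are additionally coupled to their lattice neighbours, which is what lets spatial patterns such as buried channels emerge as contiguous regions.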

  20. Zealots tame oscillations in the spatial rock-paper-scissors game

    NASA Astrophysics Data System (ADS)

    Szolnoki, Attila; Perc, Matjaž

    2016-06-01

    The rock-paper-scissors game is a paradigmatic model for biodiversity, with applications ranging from microbial populations to human societies. Research has shown, however, that mobility jeopardizes biodiversity by promoting the formation of spiral waves, especially if there is no conservation law in place for the total number of competing players. First, we show that even if such a conservation law applies, mobility still jeopardizes biodiversity in the spatial rock-paper-scissors game if only a small fraction of links of the square lattice is randomly rewired. Secondly, we show that zealots are very effective in taming the amplitude of oscillations that emerge due to mobility and/or interaction randomness, regardless of whether the latter is quenched or annealed. While even a tiny fraction of zealots brings significant benefits, at 5% occupancy zealots practically destroy all oscillations regardless of the intensity of mobility, and regardless of the type and strength of randomness in the interaction structure. Interestingly, under annealed randomness the impact of zealots is qualitatively the same as under mobility, which highlights that fast diffusion does not necessarily destroy the coexistence of species, and that zealotry thus helps to recover the stable mean-field solution. Our results strengthen the important role of zealots in models of cyclic dominance, and they reveal fascinating evolutionary outcomes in structured populations that are a unique consequence of such uncompromising behavior.
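
    A minimal Monte Carlo caricature of zealotry in the spatial rock-paper-scissors game: invasion-only dynamics on a square lattice, with 5% of sites pinned to their initial strategy. It omits the mobility and link rewiring studied in the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
L = 60
grid = rng.integers(0, 3, size=(L, L))       # 0 rock, 1 paper, 2 scissors
zealot = rng.random((L, L)) < 0.05           # 5% of sites never change strategy
NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def sweep(g):
    """L*L random invasion events: a neighbour that cyclically dominates the
    focal site replaces its strategy, unless the focal site is a zealot."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        if zealot[i, j]:
            continue
        di, dj = NBRS[rng.integers(4)]
        s, t = g[i, j], g[(i + di) % L, (j + dj) % L]
        if (t - s) % 3 == 1:                 # t beats s in the cyclic ordering
            g[i, j] = t

rock_density = []
for _ in range(200):
    sweep(grid)
    rock_density.append((grid == 0).mean())

# amplitude of the late-time density oscillation
print(round(float(np.std(rock_density[100:])), 3))
```

    Because each strategy is represented among the pinned zealots, global extinction of a species is blocked, which is the mechanism behind the damping of oscillations described above.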

  1. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. 
Monte Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the "true" numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  2. Soil Sampling Techniques For Alabama Grain Fields

    NASA Technical Reports Server (NTRS)

    Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.

    2003-01-01

    Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and, 2) six composited cores collected randomly from a 3 x 3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by the soil survey, yield data, and remote sensing images displayed lower coefficients of variation (CV) for soil test values than overall field values, suggesting these techniques effectively group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using the zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
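
    The semivariogram analysis used above to establish spatial dependence can be sketched as follows, on synthetic coordinates and soil-test values with illustrative lag bins; a semivariance γ(h) that rises with lag distance before leveling off indicates spatial dependence.

```python
import numpy as np

rng = np.random.default_rng(8)

def semivariogram(coords, values, lag_edges):
    """Empirical semivariance gamma(h) = half the mean squared difference of
    values, for point pairs binned by separation distance."""
    d = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(coords), 1)     # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma = [0.5 * sq[(d >= lo) & (d < hi)].mean()
             for lo, hi in zip(lag_edges[:-1], lag_edges[1:])]
    return np.array(gamma)

# Toy spatially dependent field: a smooth east-west trend plus sampling noise
coords = rng.uniform(0.0, 100.0, size=(400, 2))
values = np.sin(coords[:, 0] / 30.0) + 0.1 * rng.standard_normal(400)

g = semivariogram(coords, values, np.array([0.0, 5.0, 10.0, 20.0, 40.0]))
print(g.round(3))
```

    The nugget (intercept), sill, and range read off such a curve are what justify the "moderately to strongly spatially dependent" classification and guide the choice of grid size.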

  3. Syzygies, Pluricanonical Maps, and the Birational Geometry of Varieties of Maximal Albanese Dimension

    NASA Astrophysics Data System (ADS)

    Tesfagiorgis, Kibrewossen B.

    Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products in mountainous regions. The present work develops an approach to seamlessly blend satellite, available radar, climatological and gauge precipitation products to fill gaps in the ground-based radar precipitation field. To mix different precipitation products, the error of any of the products relative to each other should be removed. For bias correction, the study uses a new ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar-gauge precipitation product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. In addition to biases, sometimes there is also spatial error between the radar and satellite precipitation estimates; one of them has to be geometrically corrected with reference to the other. A set of corresponding raining points between SPE and radar products is selected to apply linear registration using a regularized least square technique to minimize the dislocation error in SPEs with respect to available radar products. A weighted Successive Correction Method (SCM) is used to merge the error-corrected satellite and radar precipitation estimates. In addition to SCM, we use a combination of SCM and a Bayesian spatial method for merging the rain gauges and climatological precipitation sources with radar and SPEs. We demonstrated the method using two satellite-based products, CPC Morphing (CMORPH) and Hydro-Estimator (HE), two radar-gauge based products, Stage-II and ST-IV, a climatological product PRISM, and a rain gauge dataset for several rain events from 2006 to 2008 over different geographical locations of the United States.
Results show that: (a) the method of ensembles helped reduce biases in SPEs significantly; (b) the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that using the available radar pixels surrounding the gap area, rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas that benefits the operational meteorology and hydrology community.
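The core of the bias-correction step is a ratio of a reference product to the satellite estimate at sampled rainy pixels, spread into a spatial field. A toy sketch of that idea (not the paper's ensemble algorithm; here the spatial field is built with simple inverse-distance weighting, and all data are synthetic):

```python
import numpy as np

# Synthetic stand-ins: sampled rainy-pixel coordinates, a radar-gauge
# reference, and a multiplicatively biased satellite estimate.
rng = np.random.default_rng(1)
n = 20
xy = rng.uniform(0, 100, (n, 2))              # pixel coordinates (km)
radar = rng.gamma(2.0, 2.0, n) + 0.5          # reference rain (mm)
satellite = radar * rng.uniform(0.6, 1.4, n)  # biased satellite estimate

bias = radar / satellite                      # multiplicative bias at samples

def idw(p, xy, z, power=2.0, eps=1e-6):
    """Inverse-distance-weighted interpolation of bias factors to point p."""
    d = np.linalg.norm(xy - p, axis=1) + eps
    w = 1.0 / d**power
    return np.sum(w * z) / np.sum(w)

# Spatial bias field on a coarse grid, then correction at the sample pixels.
grid_bias = np.array([[idw(np.array([x, y]), xy, bias)
                       for x in range(0, 100, 10)]
                      for y in range(0, 100, 10)])
corrected = satellite * np.array([idw(p, xy, bias) for p in xy])
```

At the sampled pixels the interpolated factor is dominated by the local sample, so the corrected satellite values recover the reference there; between samples the field blends neighboring factors.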

  4. Clustering and Hazard Estimation in the Auckland Volcanic Field, New Zealand

    NASA Astrophysics Data System (ADS)

    Cronin, S. J.; Bebbington, M. S.

    2009-12-01

    The Auckland Volcanic Field (AVF) with its 49 eruptive centres formed over the last c. 250 ka presents several unique challenges to our understanding of distributed volcanic field construction and evolution. Due to the youth of the field, high-resolution stratigraphy of eruption centres and ash-fall sequences is possible, allowing time-breaks, soil and peat formation between eruption units to be identified. Radiocarbon dating of sediments between volcanic deposits shows that at least five of the centres have erupted on more than one occasion, with time breaks of 50-100 years between episodes. In addition, paleomagnetic and ash fall evidence implies that there has been strong clustering of eruption events over time, with a specific “flare-up” event involving possibly up to 19 eruptions between 35 and 25 ka, in spatially disparate locations. An additional complicating factor is that the only centre that shows any major evidence for evolution out of standard alkali basaltic compositions is also the youngest and largest in volume by several orders of magnitude. All of these features of the AVF, along with relatively poor age-control for many of the vents, make spatio-temporal hazard forecasting for the field based on assumptions of past behaviour extremely difficult. Any relationships that take volumetric considerations into account are particularly difficult, since any trend analysis produces unreasonably large future eruptions. The most reasonable model is spatial, via eruption location. We have re-examined the age progression of eruptive events in the AVF, incorporating the most reliable sources of age and stratigraphic data, including developing new correlations between ashfall records in cores and likely vent locations via a probabilistic model of tephra dispersal. A Monte Carlo procedure using the age-progression, stratigraphy and dating constraints can then randomly reproduce likely orderings of events in the field.
These were fitted by a clustering-based model of vent locations as originally applied by Magill et al. (2005, Mathematical Geol. 37: 227-242) to the Allen and Smith (1994, Geosci. Report Shizuoka Univ. 20: 5-14) age ordering of volcanism at the AVF. Applying this model, modified by allowing continuation of activity at or around the youngest event, to sampled age orderings from the Monte Carlo procedure shows a very different spatial forecast to the earlier analysis. It is also different to the distribution from randomly ordered events, implying there is at least some clustering control on the location of eruptions in the field. Further iterations of this modelling approach will be tested in relation to eruptive volume and applied to other comparative volcanic fields.
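The Monte Carlo step described above amounts to sampling event ages from dating uncertainties while honoring stratigraphic order constraints. A toy sketch of that idea (event names, age bounds, and constraints are invented, not the AVF data):

```python
import random
from collections import Counter

# Illustrative events with uniform age bounds (ka) and one stratigraphic
# constraint: deposit A underlies deposit B, so A is older.
events = {"A": (30.0, 35.0), "B": (28.0, 34.0), "C": (25.0, 29.0)}
constraints = [("A", "B")]

def sample_ordering(rng):
    while True:  # rejection sampling against the constraints
        ages = {e: rng.uniform(lo, hi) for e, (lo, hi) in events.items()}
        if all(ages[old] > ages[young] for old, young in constraints):
            # Return the ordering, oldest event first.
            return tuple(e for e, _ in sorted(ages.items(), key=lambda kv: -kv[1]))

rng = random.Random(42)
counts = Counter(sample_ordering(rng) for _ in range(1000))
# The frequency of each admissible ordering approximates its probability.
```

Each retained draw is one "likely ordering of events"; tallying many draws gives the distribution of orderings that downstream spatial models can be fitted to.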

  5. Forced magnetohydrodynamic turbulence in a uniform external magnetic field

    NASA Technical Reports Server (NTRS)

    Hossain, M.; Vahala, G.; Montgomery, D.

    1985-01-01

    Two-dimensional dissipative MHD turbulence is randomly driven at small spatial scales and is studied by numerical simulation in the presence of a strong uniform external magnetic field. A behavior is observed which is apparently distinct from the inverse cascade which prevails in the absence of an external magnetic field. The magnetic spectrum becomes dominated by the three longest wavelength Alfven waves in the system allowed by the boundary conditions: those which, in a box size of edge 2 pi, have wave numbers (kx, ky) = (1, 0), (1, 1), and (1, -1), where the external magnetic field is in the x direction. At any given instant, one of these three modes dominates the vector potential spectrum, but they do not constitute a resonantly coupled triad. Rather, they are apparently coupled by the smaller-scale turbulence.

  6. Forced MHD turbulence in a uniform external magnetic field

    NASA Technical Reports Server (NTRS)

    Hossain, M.; Vahala, G.; Montgomery, D.

    1985-01-01

    Two-dimensional dissipative MHD turbulence is randomly driven at small spatial scales and is studied by numerical simulation in the presence of a strong uniform external magnetic field. A behavior is observed which is apparently distinct from the inverse cascade which prevails in the absence of an external magnetic field. The magnetic spectrum becomes dominated by the three longest wavelength Alfven waves in the system allowed by the boundary conditions: those which, in a box size of edge 2 pi, have wave numbers (kx, ky) = (1, 0), (1, 1), and (1, -1), where the external magnetic field is in the x direction. At any given instant, one of these three modes dominates the vector potential spectrum, but they do not constitute a resonantly coupled triad. Rather, they are apparently coupled by the smaller-scale turbulence.

  7. Beyond time and space: The effect of a lateralized sustained attention task and brain stimulation on spatial and selective attention.

    PubMed

    Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena

    2017-10-03

    The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection; attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. 
To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Alternative scenarios of bioenergy crop production in an agricultural landscape and implications for bird communities.

    PubMed

    Blank, Peter J; Williams, Carol L; Sample, David W; Meehan, Timothy D; Turner, Monica G

    2016-01-01

    Increased demand and government mandates for bioenergy crops in the United States could require a large allocation of agricultural land to bioenergy feedstock production and substantially alter current landscape patterns. Incorporating bioenergy landscape design into land-use decision making could help maximize benefits and minimize trade-offs among alternative land uses. We developed spatially explicit landscape scenarios of increased bioenergy crop production in an 80-km radius agricultural landscape centered on a potential biomass-processing energy facility and evaluated the consequences of each scenario for bird communities. Our scenarios included conversion of existing annual row crops to perennial bioenergy grasslands and conversion of existing grasslands to annual bioenergy row crops. The scenarios explored combinations of four biomass crop types (three potential grassland crops along a gradient of plant diversity and one annual row crop [corn]), three land conversion percentages to bioenergy crops (10%, 20%, or 30% of row crops or grasslands), and three spatial configurations of biomass crop fields (random, clustered near similar field types, or centered on the processing plant), yielding 36 scenarios. For each scenario, we predicted the impact on four bird community metrics: species richness, total bird density, species of greatest conservation need (SGCN) density, and SGCN hotspots (SGCN birds/ha ≥ 2). Bird community metrics consistently increased with conversion of row crops to bioenergy grasslands and consistently decreased with conversion of grasslands to bioenergy row crops. Spatial arrangement of bioenergy fields had strong effects on the bird community and in some cases was more influential than the amount converted to bioenergy crops. Clustering grasslands had a stronger positive influence on the bird community than locating grasslands near the central plant or at random. 
Expansion of bioenergy grasslands onto marginal agricultural lands will likely benefit grassland bird populations, and bioenergy landscapes could be designed to maximize biodiversity benefits while meeting targets for biomass production.

  9. Electrically controllable liquid crystal random lasers below the Fréedericksz transition threshold.

    PubMed

    Lee, Chia-Rong; Lin, Jia-De; Huang, Bo-Yuang; Lin, Shih-Hung; Mo, Ting-Shan; Huang, Shuan-Yu; Kuo, Chie-Tong; Yeh, Hui-Chen

    2011-01-31

    This investigation elucidates for the first time electrically controllable random lasers below the threshold voltage in dye-doped liquid crystal (DDLC) cells with and without an added azo-dye. Experimental results show that the lasing intensities and the energy thresholds of the random lasers can be decreased and increased, respectively, by increasing the applied voltage below the Fréedericksz transition threshold. This below-threshold electrical controllability of the random lasers is attributable to the effective decrease of the spatial fluctuation of the orientational order, and thus of the dielectric tensor of the LCs, by increasing the electric-field-aligned order of the LCs below the threshold, thereby increasing the diffusion constant and decreasing the scattering strength of the fluorescence photons in their recurrent multiple scattering. This can result in the decrease in the lasing intensity of the random lasers and the increase in their energy thresholds. Furthermore, adding an azo-dye to the DDLC cell can reduce the below-threshold working-voltage range over which the random laser is controlled.

  10. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While much effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models.
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  11. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
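The sampling procedure in this record has a compact numerical core: decompose observed error maps with PCA, then draw independent coefficients for the decorrelated modes to synthesize new, spatially correlated error maps. A minimal sketch with synthetic stand-in data (the real inputs would be measured DVF error maps):

```python
import numpy as np

# Synthetic "observed" error maps: a few smooth spatial modes mixed with
# random weights, so the errors are spatially correlated by construction.
rng = np.random.default_rng(7)
n_maps, n_voxels = 40, 500
basis = rng.normal(size=(5, n_voxels))
errors = rng.normal(size=(n_maps, 5)) @ basis

# PCA via SVD of the mean-centered data matrix.
mean = errors.mean(axis=0)
U, s, Vt = np.linalg.svd(errors - mean, full_matrices=False)
coeff_std = s / np.sqrt(n_maps - 1)   # per-mode coefficient std dev

def synthetic_map(rng, n_modes=5):
    # Sample each decorrelated mode independently, then reconstruct.
    z = rng.normal(size=n_modes) * coeff_std[:n_modes]
    return mean + z @ Vt[:n_modes]

sample = synthetic_map(rng)
```

Because the modes are uncorrelated by construction, independent sampling of their coefficients reproduces the second-order spatial statistics of the observed maps, which is what makes the synthetic maps "statistically indistinguishable" in the sense the abstract describes.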

  12. Controls on the spatial variability of key soil properties: comparing field data with a mechanistic soilscape evolution model

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Román, A.; Giraldez, J. V.

    2016-12-01

    There is a need for better understanding the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. Spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks on north-facing slopes that are twice those on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.
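The random forest step in this record is a standard covariate regression. A minimal sketch with invented data (the covariate names mirror the abstract; the coefficients, noise, and sample size are assumptions, not the study's measurements):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic terrain covariates: carbon decreases with solar radiation
# (the north- vs south-facing contrast) and increases with NDVI.
rng = np.random.default_rng(3)
n = 200
radiation = rng.uniform(0.2, 1.0, n)   # normalized annual solar radiation
ndvi = rng.uniform(0.1, 0.8, n)
carbon = 6.0 - 4.0 * radiation + 3.0 * ndvi + rng.normal(0, 0.5, n)

X = np.column_stack([radiation, ndvi])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, carbon)
r2 = model.score(X, carbon)  # in-sample fit; real validation needs held-out data
```

Note the caveat in the final comment: the abstract's reported 25% explained variance came from validation, and an in-sample score like the one above will always look far more optimistic.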

  13. Applications of Geostatistics in Plant Nematology

    PubMed Central

    Wallace, M. K.; Hawkins, D. M.

    1994-01-01

    The application of geostatistics to plant nematology was made by evaluating soil and nematode data acquired from 200 soil samples collected from the Ap horizon of a reed canary-grass field in northern Minnesota. Geostatistical concepts relevant to nematology include semi-variogram modelling, kriging, and change of support calculations. Soil and nematode data generally followed a spherical semi-variogram model, with little random variability associated with soil data and large inherent variability for nematode data. Block kriging of soil and nematode data provided useful contour maps of the data. Change of support calculations indicated that most of the random variation in nematode data was due to short-range spatial variability in the nematode population densities. PMID:19279938
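The semi-variogram central to this record can be computed in a few lines: bin squared half-differences of the variable by pairwise separation distance. A minimal sketch on a synthetic 1-D transect (all values invented):

```python
import numpy as np

# Spatially structured synthetic data along a transect, plus a small nugget.
rng = np.random.default_rng(5)
x = np.arange(0, 100, 2.0)                         # sample locations (m)
z = np.sin(x / 15.0) + rng.normal(0, 0.1, x.size)

def semivariogram(x, z, lags):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs with |x_i - x_j| in bin h."""
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(sq[mask].mean())
    return np.array(gamma)

lags = np.arange(0, 42, 4.0)
gamma = semivariogram(x, z, lags)
# For structured data the semivariance rises with lag toward a sill; fitting
# a spherical model to these points is the step the abstract describes.
```

The intercept near zero lag estimates the nugget (the "random variability" the abstract contrasts between soil and nematode data), while the rise with lag reflects spatial dependence.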

  14. Applications of geostatistics in plant nematology.

    PubMed

    Wallace, M K; Hawkins, D M

    1994-12-01

    The application of geostatistics to plant nematology was made by evaluating soil and nematode data acquired from 200 soil samples collected from the A(p) horizon of a reed canary-grass field in northern Minnesota. Geostatistical concepts relevant to nematology include semi-variogram modelling, kriging, and change of support calculations. Soil and nematode data generally followed a spherical semi-variogram model, with little random variability associated with soil data and large inherent variability for nematode data. Block kriging of soil and nematode data provided useful contour maps of the data. Change of support calculations indicated that most of the random variation in nematode data was due to short-range spatial variability in the nematode population densities.

  15. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling.

    PubMed

    Batool, Nazre; Chellappa, Rama

    2014-09-01

    Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve the desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
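The bimodal-GMM step of this pipeline can be sketched in isolation (the paper's full method adds Gabor features, texture orientations, and an MRF over neighboring pixels, all omitted here; the feature values below are synthetic stand-ins):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D filter-response feature: weak responses on smooth skin,
# strong responses along wrinkles/imperfections.
rng = np.random.default_rng(11)
smooth_skin = rng.normal(0.2, 0.05, 500)
wrinkles = rng.normal(0.8, 0.10, 100)
feature = np.concatenate([smooth_skin, wrinkles]).reshape(-1, 1)

# Fit two Gaussian modes and classify each pixel by its most likely mode.
gmm = GaussianMixture(n_components=2, random_state=0).fit(feature)
labels = gmm.predict(feature)

# The component with the larger mean captured the wrinkle mode.
wrinkle_comp = int(np.argmax(gmm.means_.ravel()))
n_wrinkle_pixels = int(np.sum(labels == wrinkle_comp))
```

In the paper this per-pixel GMM assignment is only a starting point; the MRF then smooths the labels using neighborhood context before EM produces the final skin-versus-wrinkle classification.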

  16. Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.

    PubMed

    Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica

    2018-05-01

    We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. 
Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A novel look at the pulsar force-free magnetosphere

    NASA Astrophysics Data System (ADS)

    Petrova, S. A.; Flanchik, A. B.

    2018-03-01

    The stationary axisymmetric force-free magnetosphere of a pulsar is considered. We present an exact dipolar solution of the pulsar equation, construct the magnetospheric model on its basis and examine its observational support. The new model has toroidal rather than common cylindrical geometry, in line with that of the plasma outflow observed directly as the pulsar wind nebula at much larger spatial scale. In its new configuration, the axisymmetric magnetosphere consumes the neutron star rotational energy much more efficiently, implying re-estimation of the stellar magnetic field, B_0^new = 3.3×10^{-4} B/P, where P is the pulsar period. Then the seven-order-of-magnitude scatter of the magnetic field derived from the rotational characteristics of the observed pulsars appears consistent with a cot χ law, where χ is a random quantity uniformly distributed in the interval [0, π/2]. Our result is suggestive of a unique actual magnetic field strength of the neutron stars along with a random angle between the magnetic and rotational axes and gives insight into the neutron star unification on the geometrical basis.

  18. A random spatial network model based on elementary postulates

    USGS Publications Warehouse

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
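The key property above, that all spanning trees of a grid are equally likely, can be realized with Wilson's algorithm, which samples a uniform spanning tree via loop-erased random walks. A self-contained sketch (a standard algorithm, not necessarily the generation procedure of the USGS model):

```python
import random

def grid_neighbors(v, w, h):
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx_, ny_ = x + dx, y + dy
        if 0 <= nx_ < w and 0 <= ny_ < h:
            yield (nx_, ny_)

def wilson_ust(w, h, rng):
    """Uniform spanning tree of a w x h grid via loop-erased random walks."""
    nodes = [(x, y) for x in range(w) for y in range(h)]
    in_tree = {nodes[0]}          # root the tree at the first node
    parent = {}
    for start in nodes[1:]:
        if start in in_tree:
            continue
        # Random walk from `start` until it hits the tree; overwriting
        # path[v] on revisits performs the loop erasure implicitly.
        path = {}
        v = start
        while v not in in_tree:
            nxt = rng.choice(list(grid_neighbors(v, w, h)))
            path[v] = nxt
            v = nxt
        # Retrace the loop-erased path and attach it to the tree.
        v = start
        while v not in in_tree:
            in_tree.add(v)
            parent[v] = path[v]
            v = path[v]
    return parent  # child -> parent edges of the spanning tree

rng = random.Random(0)
tree = wilson_ust(4, 4, rng)
```

Because every spanning tree is equally probable, link lengths and junction arrangements emerge with the dependence structure the abstract notes, rather than being drawn independently as in the random topology model.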

  19. A Complex Network Theory Approach for the Spatial Distribution of Fire Breaks in Heterogeneous Forest Landscapes for the Control of Wildland Fires

    PubMed Central

    Russo, Lucia; Russo, Paola; Siettos, Constantinos I.

    2016-01-01

    Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire. PMID:27780249
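The two-level idea above, stochastic spread on a weighted directed lattice plus ranking of nodes that favour propagation, can be caricatured in a few lines. This sketch ranks patches by how often they burn across Monte Carlo ignitions rather than by the network centrality statistics the paper uses, and every parameter is invented:

```python
import random

def make_grid(w, h, rng):
    """Directed 4-neighbour lattice; edge probabilities stand in for flammability."""
    adj = {}
    for x in range(w):
        for y in range(h):
            nbrs = []
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx_, ny_ = x + dx, y + dy
                if 0 <= nx_ < w and 0 <= ny_ < h:
                    nbrs.append(((nx_, ny_), rng.uniform(0.1, 0.6)))
            adj[(x, y)] = nbrs
    return adj

def simulate(adj, ignition, rng):
    """One stochastic fire: each burning patch ignites neighbours with edge prob."""
    burning, burned = [ignition], {ignition}
    while burning:
        frontier = []
        for v in burning:
            for nbr, p in adj[v]:
                if nbr not in burned and rng.random() < p:
                    burned.add(nbr)
                    frontier.append(nbr)
        burning = frontier
    return burned

rng = random.Random(2)
adj = make_grid(8, 8, rng)
counts = {}
for _ in range(200):
    ignition = (rng.randrange(8), rng.randrange(8))
    for v in simulate(adj, ignition, rng):
        counts[v] = counts.get(v, 0) + 1

# Patches that burn most often are "hub" candidates for fuel-break placement.
ranked = sorted(counts, key=counts.get, reverse=True)
```

Replacing the burn-frequency ranking with betweenness or other centrality measures on the weighted network recovers the spirit of the paper's node-selection step.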

  20. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
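The additive-log-ratio (alr) transform behind the third approach is simple to state: divide the first D-1 parts of a composition by the last part and take logs, analyze in unconstrained space, then back-transform. A minimal sketch (the example composition is invented):

```python
import numpy as np

def alr(comp):
    """Additive log-ratio: map a D-part composition to R^(D-1), last part as divisor."""
    comp = np.asarray(comp, dtype=float)
    return np.log(comp[..., :-1] / comp[..., -1:])

def alr_inv(y):
    """Inverse alr: exponentiate, append the implicit 1, renormalize to unit sum."""
    y = np.asarray(y, dtype=float)
    expanded = np.concatenate([np.exp(y), np.ones(y.shape[:-1] + (1,))], axis=-1)
    return expanded / expanded.sum(axis=-1, keepdims=True)

comp = np.array([0.6, 0.3, 0.1])  # e.g., sand/silt/clay fractions summing to 1
y = alr(comp)                     # unconstrained coordinates for kriging etc.
back = alr_inv(y)                 # recovers the original composition
```

Working in alr coordinates sidesteps both problems the abstract raises: the spurious correlation from the constant-sum constraint and the singular cokriging matrices.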

  1. A Complex Network Theory Approach for the Spatial Distribution of Fire Breaks in Heterogeneous Forest Landscapes for the Control of Wildland Fires.

    PubMed

    Russo, Lucia; Russo, Paola; Siettos, Constantinos I

    2016-01-01

    Based on complex network theory, we propose a computational methodology that addresses the spatial distribution of fuel breaks for inhibiting the spread of wildland fires on heterogeneous landscapes. This is a two-level approach in which the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest with a randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece, a major part of whose forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse vegetation density may act as hubs for the spread of fire. PMID:27780249
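    The abstract does not specify which centrality statistic is used. As a hypothetical sketch of the idea, one can rank the patches of a toy directed spread network by betweenness centrality (here with networkx; edge weights are treated as traversal costs, and the node names and weights are invented):

    ```python
    import networkx as nx

    # Toy directed network: nodes are land patches, edge weights are
    # traversal costs for fire spread (networkx treats "weight" as distance).
    G = nx.DiGraph()
    edges = [("a", "b", 1.0), ("b", "c", 0.5), ("a", "c", 2.0),
             ("c", "d", 1.0), ("b", "d", 2.0)]
    G.add_weighted_edges_from(edges)

    # Patches with high betweenness lie on many likely spread paths and are
    # therefore candidate fuel-break locations.
    bc = nx.betweenness_centrality(G, weight="weight")
    ranked = sorted(bc, key=bc.get, reverse=True)
    ```

    In a real application the edge weights would come from the Cellular Automata spread model rather than being set by hand.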

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V. V.; Fischer, P. J.; Chan, E. R.

    We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine tune the instrument's focal distance. Finally, our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V. V., E-mail: VVYashchuk@lbl.gov; Chan, E. R.; Lacey, I.

    We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope's MTF, tests with the BPRML sample can be used to fine tune the instrument's focal distance. Our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
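    The "deterministic white-noise-like" power spectral density of a binary pseudo-random sequence can be checked numerically. A minimal sketch (our construction, not the authors'): a maximal-length sequence from a 7-bit linear-feedback shift register, with taps chosen for the primitive polynomial x^7 + x^6 + 1, has an exactly flat spectrum at all nonzero frequencies:

    ```python
    import numpy as np

    def mls(taps=(7, 6), n=7):
        """Maximal-length binary sequence from an n-bit Fibonacci LFSR.
        Default taps correspond to the primitive polynomial x^7 + x^6 + 1."""
        state = [1] * n
        out = []
        for _ in range(2**n - 1):          # one full period: 127 bits
            out.append(state[-1])
            fb = state[taps[0] - 1] ^ state[taps[1] - 1]
            state = [fb] + state[:-1]
        return np.array(out)

    seq = 2.0 * mls() - 1.0                # map {0,1} -> {-1,+1}
    psd = np.abs(np.fft.fft(seq)) ** 2     # power spectrum over one period

    # All nonzero-frequency bins are equal: the flat, "white-noise-like" PSD
    # that makes BPR patterns useful for uniform-sensitivity MTF calibration.
    flat = bool(np.allclose(psd[1:], psd[1], rtol=1e-8))
    ```

    Any roll-off an instrument measures on such a target is therefore attributable to the instrument's MTF rather than to the target itself.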

  4. Incorporating geologic information into hydraulic tomography: A general framework based on geostatistical approach

    NASA Astrophysics Data System (ADS)

    Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Onoe, Hironori; Mok, Chin Man W.; Wen, Jet-Chau; Huang, Shao-Yang; Wang, Wenke

    2017-04-01

    Hydraulic tomography (HT) has become a mature aquifer test technology over the last two decades. It collects nonredundant information of aquifer heterogeneity by sequentially stressing the aquifer at different wells and collecting aquifer responses at other wells during each stress. The collected information is then interpreted by inverse models. Among these models, the geostatistical approaches, built upon the Bayesian framework, first conceptualize hydraulic properties to be estimated as random fields, which are characterized by means and covariance functions. They then use the spatial statistics as prior information with the aquifer response data to estimate the spatial distribution of the hydraulic properties at a site. Since the spatial statistics describe the generic spatial structures of the geologic media at the site rather than site-specific ones (e.g., known spatial distributions of facies, faults, or paleochannels), the estimates are often not optimal. To improve the estimates, we introduce a general statistical framework, which allows the inclusion of site-specific spatial patterns of geologic features. Subsequently, we test this approach with synthetic numerical experiments. Results show that this approach, using conditional mean and covariance that reflect site-specific large-scale geologic features, indeed improves the HT estimates. Afterward, this approach is applied to HT surveys at a kilometer-scale-fractured granite field site with a distinct fault zone. We find that by including fault information from outcrops and boreholes for HT analysis, the estimated hydraulic properties are improved. The improved estimates subsequently lead to better prediction of flow during a different pumping test at the site.
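    The core geostatistical step, conditioning a prior mean and covariance on site-specific point data, can be sketched as a standard Gaussian conditional update. The 1-D grid, exponential covariance model, and "borehole" values below are purely illustrative, not the paper's setup:

    ```python
    import numpy as np

    # Prior random field on a 1-D grid: zero mean, exponential covariance.
    x = np.linspace(0.0, 10.0, 50)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
    mu = np.zeros_like(x)

    # Site-specific point data (e.g. borehole measurements of log-conductivity).
    obs_idx = np.array([5, 25, 45])
    obs_val = np.array([1.0, -0.5, 0.8])

    # Gaussian conditioning (the basis of simple kriging):
    #   mu_post  = mu + C_xo C_oo^{-1} (y - mu_o)
    #   cov_post = C   - C_xo C_oo^{-1} C_ox
    C_oo = cov[np.ix_(obs_idx, obs_idx)]
    C_xo = cov[:, obs_idx]
    w = np.linalg.solve(C_oo, obs_val - mu[obs_idx])

    mu_post = mu + C_xo @ w                                  # conditional mean
    cov_post = cov - C_xo @ np.linalg.solve(C_oo, C_xo.T)    # conditional covariance
    ```

    The paper's contribution is, in effect, to replace the generic prior `mu` and `cov` with conditional statistics that already reflect large-scale geologic features such as faults.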

  5. The where and how of attention-based rehearsal in spatial working memory.

    PubMed

    Postle, B R; Awh, E; Jonides, J; Smith, E E; D'Esposito, M

    2004-07-01

    Rehearsal in human spatial working memory is accomplished, in part, via covert shifts of spatial selective attention to memorized locations ("attention-based rehearsal"). We addressed two outstanding questions about attention-based rehearsal: the topography of the attention-based rehearsal effect, and the mechanism by which it operates. Using event-related fMRI and a procedure that randomized the presentation of trials with delay epochs that were either filled with a flickering checkerboard or unfilled, we localized the effect to extrastriate areas 18 and 19, and confirmed its absence in striate cortex. Delay-epoch activity in these extrastriate regions, as well as in superior parietal lobule and intraparietal sulcus, was also lateralized on unfilled trials, suggesting that attention-based rehearsal produces a baseline shift in areas representing the to-be-remembered location in space. No frontal regions (including frontal eye fields) demonstrated lateralized activity consistent with a role in attention-based rehearsal.

  6. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms from computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms; a spatial regression voting method to integrate the MSER and SIFT results; a Markov Random Field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to the resampling-based approach while avoiding the LUCC distortion incurred by resampling.
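    The entropy-based spatial decomposition presumably scores image regions by intensity entropy so that heterogeneous areas receive finer treatment. A minimal sketch of such a score (our bin count and patch sizes, not the paper's):

    ```python
    import numpy as np

    def patch_entropy(img, bins=16):
        """Shannon entropy (in bits) of a patch's intensity histogram."""
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    uniform_patch = np.full((32, 32), 0.5)   # homogeneous: entropy 0
    textured_patch = rng.random((32, 32))    # heterogeneous: entropy near log2(bins)

    e_lo = patch_entropy(uniform_patch)
    e_hi = patch_entropy(textured_patch)
    ```

    A decomposition would recurse on patches whose entropy exceeds a threshold, leaving homogeneous patches whole.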

  7. Temporal Dynamics and Spatial Variation of Azoxystrobin and Propiconazole Resistance in Zymoseptoria tritici: A Hierarchical Survey of Commercial Winter Wheat Fields in the Willamette Valley, Oregon.

    PubMed

    Hagerty, Christina H; Anderson, Nicole P; Mundt, Christopher C

    2017-03-01

    Fungicide resistance can cause disease control failure in agricultural systems, and is particularly concerning with Zymoseptoria tritici, the causal agent of Septoria tritici blotch of wheat. In North America, the first quinone outside inhibitor resistance in Z. tritici was discovered in the Willamette Valley of Oregon in 2012, which prompted this hierarchical survey of commercial winter wheat fields to monitor azoxystrobin- and propiconazole-resistant Z. tritici. Surveys were conducted in June 2014, January 2015, May 2015, and January 2016. The survey was organized in a hierarchical scheme: regions within the Willamette Valley, fields within the region, transects within the field, and samples within the transect. Overall, frequency of azoxystrobin-resistant isolates increased from 63 to 93% from June 2014 to January 2016. Resistance to azoxystrobin increased over time even within fields receiving no strobilurin applications. Propiconazole sensitivity varied over the course of the study but, overall, did not significantly change. Sensitivity to both fungicides showed no regional aggregation within the Willamette Valley. Greater than 80% of spatial variation in fungicide sensitivity was at the smallest hierarchical scale (within the transect) of the survey for both fungicides, and the resistance phenotypes were randomly distributed within sampled fields. Results suggest a need for a better understanding of the dynamics of fungicide resistance at the landscape level.

  8. Spatial Mapping of the Mobility-Lifetime (microtau) Production in Cadmium Zinc Telluride Nuclear Radiation Detectors Using Transport Imaging

    DTIC Science & Technology

    2013-06-01

    Under the influence of an electrical field, these electrons and holes migrate to their respective electrodes, where they are collected and...an electrical response which translates to an intensity reading on the detector’s readout meter. Since high-resolution detector materials are the...magnitude of three factors: inherent statistical variation of the electric signal measured at the detector’s contacts (Fano noise ∆EF), random electron

  9. Brain MR image segmentation based on an improved active contour model

    PubMed Central

    Meng, Xiangrui; Gu, Wenya; Zhang, Jianwei

    2017-01-01

    It is often a difficult task to accurately segment brain magnetic resonance (MR) images with intensity inhomogeneity and noise. This paper introduces a novel level set method for simultaneous brain MR image segmentation and intensity inhomogeneity correction. To reduce the effect of noise, novel anisotropic spatial information, which can preserve more details of edges and corners, is proposed by incorporating the inner relationships among neighbor pixels. The proposed energy function then uses the multivariate Student's t-distribution to fit the distribution of the intensities of each tissue. Furthermore, the proposed model utilizes hidden Markov random fields to model the spatial correlation between neighboring pixels/voxels. The means of the multivariate Student's t-distribution can be adaptively estimated by multiplying a bias field to reduce the effect of intensity inhomogeneity. Finally, we reconstructed the energy function to be convex and minimized it using the Split Bregman method, which allows random initialization of our framework and thereby fully automated application. Our method obtains the final result in less than 1 second for a 2D image of size 256 × 256 and less than 300 seconds for a 3D image of size 256 × 256 × 171. The proposed method was compared to other state-of-the-art segmentation methods using both synthetic and clinical brain MR images and improved the accuracy of the results by more than 3%. PMID:28854235
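    The hidden-MRF spatial prior penalizes label disagreement between neighboring pixels. The abstract does not give the exact form, so as an assumption we sketch the simplest common choice, a Potts-model energy over 4-neighborhoods:

    ```python
    import numpy as np

    def potts_energy(labels, beta=1.0):
        """Potts-model MRF energy: beta times the number of unlike
        4-neighbor pairs in a 2-D label image (lower = smoother)."""
        v = (labels[1:, :] != labels[:-1, :]).sum()   # vertical neighbor pairs
        h = (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal neighbor pairs
        return beta * float(v + h)

    smooth = np.zeros((8, 8), dtype=int)   # one uniform region
    noisy = smooth.copy()
    noisy[::2, ::2] = 1                    # salt-and-pepper labeling

    e_smooth = potts_energy(smooth)
    e_noisy = potts_energy(noisy)
    ```

    In a segmentation objective this energy is added to the data term, so configurations with isolated mislabeled pixels cost more than spatially coherent ones.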

  10. Fine scale spatial variability of microbial pesticide degradation in soil: scales, controlling factors, and implications

    PubMed Central

    Dechesne, Arnaud; Badawi, Nora; Aamand, Jens; Smets, Barth F.

    2014-01-01

    Pesticide biodegradation is a soil microbial function of critical importance for modern agriculture and its environmental impact. While it was once assumed that this activity was homogeneously distributed at the field scale, mounting evidence indicates that this is rarely the case. Here, we critically examine the literature on spatial variability of pesticide biodegradation in agricultural soil. We discuss the motivations, methods, and main findings of the primary literature. We found significant diversity in the approaches used to describe and quantify spatial heterogeneity, which complicates inter-study comparisons. However, it is clear that the presence and activity of pesticide degraders are often highly spatially variable, with coefficients of variation often exceeding 50%, and frequently display non-random spatial patterns. A few controlling factors have tentatively been identified across pesticide classes: they include some soil characteristics (pH) and some agricultural management practices (pesticide application, tillage), while other potential controlling factors have more conflicting effects depending on the site or the pesticide. Evidence demonstrating the importance of spatial heterogeneity for the fate of pesticides in soil has been difficult to obtain, but modeling and experimental systems that do not include soil's full complexity reveal that this heterogeneity must be considered to improve prediction of pesticide biodegradation rates or of leaching risks. Overall, studying the spatial heterogeneity of pesticide biodegradation is a relatively new field at the interface of agronomy, microbial ecology, and geosciences, and a wealth of novel data is being collected from these different disciplinary perspectives. We make suggestions on possible avenues to take full advantage of these investigations for a better understanding and prediction of the fate of pesticides in soil. PMID:25538691

  11. Damage spreading in spatial and small-world random Boolean networks

    NASA Astrophysics Data System (ADS)

    Lu, Qiming; Teuscher, Christof

    2014-02-01

    The study of the response of complex dynamical social, biological, or technological networks to external perturbations has numerous applications. Random Boolean networks (RBNs) are commonly used as a simple generic model for certain dynamics of complex systems. Traditionally, RBNs are interconnected randomly and without considering any spatial extension and arrangement of the links and nodes. However, most real-world networks are spatially extended and arranged with regular, power-law, small-world, or other nonrandom connections. Here we explore the RBN network topology between extreme local connections, random small-world, and pure random networks, and study the damage spreading with small perturbations. We find that spatially local connections change the scaling of the Hamming distance at very low connectivities (K¯≪1) and that the critical connectivity of stability Ks changes compared to random networks. At higher K¯, this scaling remains unchanged. We also show that the Hamming distance of spatially local networks scales with a power law as the system size N increases, but with a different exponent for local and small-world networks. The scaling arguments for small-world networks are obtained with respect to the system sizes and strength of spatially local connections. We further investigate the wiring cost of the networks. From an engineering perspective, our new findings provide the key design trade-offs between damage spreading (robustness), the network's wiring cost, and the network's communication characteristics.
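    A minimal random-Boolean-network damage-spreading experiment, random wiring, random truth tables, and the Hamming distance between a trajectory and its one-bit-perturbed copy, can be sketched as follows (the network size, connectivity, and horizon are illustrative, and this sketch omits the paper's spatially local wiring):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N, K = 100, 2                              # nodes, inputs per node

    inputs = rng.integers(0, N, size=(N, K))   # random wiring (no spatial structure)
    tables = rng.integers(0, 2, size=(N, 2**K))  # random Boolean truth tables

    def step(state):
        """Synchronous update: each node looks up its K inputs in its table."""
        idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
        return tables[np.arange(N), idx]

    a = rng.integers(0, 2, size=N)
    b = a.copy()
    b[0] ^= 1                                  # the initial one-bit "damage"

    for _ in range(50):                        # evolve both trajectories
        a, b = step(a), step(b)

    hamming = int((a != b).sum())              # how far the damage has spread
    ```

    Spatially local or small-world variants would replace the uniform `inputs` wiring with distance-dependent connection probabilities, which is where the paper finds the scaling of `hamming` changes.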

  12. Hippocampal Remapping Is Constrained by Sparseness rather than Capacity

    PubMed Central

    Kammerer, Axel; Leibold, Christian

    2014-01-01

    Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodical entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570

  13. Spatiotemporal Analysis of Microbiological Contamination in New York State Produce Fields following Extensive Flooding from Hurricane Irene, August 2011.

    PubMed

    Bergholz, Peter W; Strawn, Laura K; Ryan, Gina T; Warchocki, Steven; Wiedmann, Martin

    2016-03-01

    Although flooding introduces microbiological, chemical, and physical hazards onto croplands, few data are available on the spatial extent, patterns, and development of contamination over time postflooding. To address this paucity of information, we conducted a spatially explicit study of Escherichia coli and Salmonella contamination prevalence and genetic diversity in produce fields after the catastrophic flooding that occurred in New England during 2011. Although no significant differences were detected between the two participating farms, both random forest and logistic regression revealed changes in the spatial pattern of E. coli contamination in drag swab samples over time. Analyses also indicated that E. coli detection was associated with changes in farm management to remediate the land after flooding. In particular, E. coli was widespread in drag swab samples at 21 days postflooding, but the spatial pattern changed by 238 days postflooding such that E. coli was then most prevalent in close proximity to surface water features. The combined results of several population genetics analyses indicated that over time postflooding E. coli populations on the farms (i) changed in composition and (ii) declined overall. Salmonella was primarily detected in surface water features, but some Salmonella strains were isolated from soil and drag swab samples at 21 and 44 days postflooding. Although postflood contamination and land management responses should always be evaluated in the context of each unique farm landscape, our results provide quantitative data on the general patterns of contamination after flooding and support the practice of establishing buffer zones between flood-contaminated cropland and harvestable crops in produce fields.

  14. Can we infer plant facilitation from remote sensing? A test across global drylands

    PubMed Central

    Xu, Chi; Holmgren, Milena; Van Nes, Egbert H.; Maestre, Fernando T.; Soliveres, Santiago; Berdugo, Miguel; Kéfi, Sonia; Marquet, Pablo A.; Abades, Sebastian; Scheffer, Marten

    2016-01-01

    Facilitation is a major force shaping the structure and diversity of plant communities in terrestrial ecosystems. Detecting positive plant-plant interactions relies on the combination of field experimentation and the demonstration of spatial association between neighboring plants. This has often restricted the study of facilitation to particular sites, limiting the development of systematic assessments of facilitation over regional and global scales. Here we explore whether the frequency of plant spatial associations detected from high-resolution remotely-sensed images can be used to infer plant facilitation at the community level in drylands around the globe. We correlated the information from remotely-sensed images freely available through Google Earth™ with detailed field assessments, and used a simple individual-based model to generate patch-size distributions using different assumptions about the type and strength of plant-plant interactions. Most of the patterns found from the remotely-sensed images were more right-skewed than the patterns from the null model simulating a random distribution. This suggests that the plants in the studied drylands show stronger spatial clustering than expected by chance. We found that positive plant co-occurrence, as measured in the field, was significantly related to the skewness of vegetation patch-size distribution measured using Google Earth™ images. Our findings suggest that the relative frequency of facilitation may be inferred from spatial pattern signals measured from remotely-sensed images, since facilitation often determines positive co-occurrence among neighboring plants. They pave the road for a systematic global assessment of the role of facilitation in terrestrial ecosystems. PMID:26552256
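    The patch-size skewness signal can be reproduced on synthetic data: label connected vegetation patches in a binary map and take the skewness of their size distribution. The SciPy calls are real; the random map below is a stand-in for a classified remote-sensing image:

    ```python
    import numpy as np
    from scipy import ndimage, stats

    rng = np.random.default_rng(7)
    veg = rng.random((200, 200)) < 0.35        # binary vegetation map (stand-in)

    # Label connected vegetation patches and collect their sizes (pixel counts).
    labels, n_patches = ndimage.label(veg)
    sizes = np.bincount(labels.ravel())[1:]    # drop background (label 0)

    # Right-skew of the patch-size distribution is the clustering signal the
    # authors relate to positive plant co-occurrence.
    skewness = float(stats.skew(sizes))
    ```

    For a facilitation test one would compare this skewness against a null model with random (non-interacting) plant placement, as the paper does with its individual-based simulations.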

  15. Data-driven mapping of the potential mountain permafrost distribution.

    PubMed

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2017-07-15

    Existing mountain permafrost distribution models generally offer a good overview of the potential extent of this phenomenon at a regional scale. They are however not always able to reproduce the high spatial discontinuity of permafrost at the micro-scale (scale of a specific landform; ten to several hundreds of meters). To overcome this lack, we tested an alternative modelling approach using three classification algorithms belonging to statistics and machine learning: Logistic regression, Support Vector Machines and Random forests. These supervised learning techniques infer a classification function from labelled training data (pixels of permafrost absence and presence) with the aim of predicting the permafrost occurrence where it is unknown. The research was carried out in a 588 km² area of the Western Swiss Alps. Permafrost evidences were mapped from ortho-image interpretation (rock glacier inventorying) and field data (mainly geoelectrical and thermal data). The relationship between selected permafrost evidences and permafrost controlling factors was computed with the mentioned techniques. Classification performances, assessed with AUROC, range between 0.81 for Logistic regression, 0.85 with Support Vector Machines and 0.88 with Random forests. The adopted machine learning algorithms have demonstrated to be efficient for permafrost distribution modelling thanks to consistent results compared to the field reality. The high resolution of the input dataset (10 m) allows elaborating maps at the micro-scale with a modelled permafrost spatial distribution less optimistic than classic spatial models. Moreover, the probability output of adopted algorithms offers a more precise overview of the potential distribution of mountain permafrost than proposing simple indexes of the permafrost favorability. These encouraging results also open the way to new possibilities of permafrost data analysis and mapping. Copyright © 2017 Elsevier B.V. All rights reserved.
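    A hedged sketch of the Random forests + AUROC evaluation step with scikit-learn; the synthetic predictors stand in for the paper's topo-climatic controlling factors, and none of the values correspond to the Swiss dataset:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    # Stand-in predictors (think elevation, slope, radiation) and binary labels
    # (permafrost presence/absence); real studies use mapped field evidence.
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # predict_proba gives the probability map the paper prefers over a
    # hard presence/absence index; AUROC scores its ranking quality.
    auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    ```

    The per-pixel probabilities from `predict_proba` are what allow "a more precise overview" than a binary favorability index.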

  16. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension, Markov random field features, and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-10-01

    In this paper, we examine the validity of OCT for identifying tissue changes using a skin cancer texture analysis compiled from Haralick texture features, fractal dimension, the Markov random field method, and complex directional features from different tissues. The described features have been used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity have been calculated in various directions. A box-counting method is performed to evaluate the fractal dimension of skin probes. Markov random fields have been used to enhance the quality of classification. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
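    The box-counting estimate of fractal dimension mentioned above can be sketched in NumPy; the box sizes and the test mask are our choices, not the paper's:

    ```python
    import numpy as np

    def box_count_dim(mask, sizes=(2, 4, 8, 16)):
        """Estimate the box-counting dimension of a binary 2-D mask:
        slope of log(box count) versus log(1/box size)."""
        counts = []
        n = mask.shape[0]
        for s in sizes:
            # Tile the mask into s-by-s boxes and count boxes that contain
            # at least one foreground pixel.
            boxes = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
            counts.append(boxes.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return float(slope)

    # Sanity check: a filled square is 2-dimensional.
    dim = box_count_dim(np.ones((64, 64), dtype=bool))
    ```

    Applied to a binarized OCT texture, a non-integer estimate between 1 and 2 characterizes the roughness that helps separate tumor from healthy tissue.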

  17. Application of an imputation method for geospatial inventory of forest structural attributes across multiple spatial scales in the Lake States, U.S.A

    NASA Astrophysics Data System (ADS)

    Deo, Ram K.

    Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands and threats to forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly and as a consequence spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest based k Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods to generate forest inventory information across large spatial extents. The forest inventory data collected by the FIA program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for a part of the Lake States and species-specific site index maps for the entire Lake States. Targeting small-area application of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, called variable plot sampling, in the Ford Forest of Michigan Tech to derive a standing volume map in a cost-effective way.
The outputs of the RF-kNN imputation were compared with independent validation datasets and extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or the estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.

  18. Research on photodiode detector-based spatial transient light detection and processing system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Yang; Zhao, Hui; Nan, Meng

    2016-10-01

    In order to realize real-time identification and processing of spatial transient light signals, the features and energy of the captured target light signal are first described and quantitatively calculated. Considering that a transient light signal occurs randomly, has a short duration, and has an evident beginning and ending, a photodiode detector-based spatial transient light detection and processing system is proposed and designed in this paper. The system has a large field of view and realizes non-imaging energy detection of random, transient and weak point targets under the complex background of the space environment. Extracting a weak signal from a strong background is difficult; here, because the background signal changes slowly while the target signal changes quickly, a filter is adopted for background subtraction. Variable-speed sampling is realized by sampling data points at gradually increasing intervals, which resolves two difficulties: real-time processing of a large amount of data, and the power consumption required to store it. Test results with a self-made simulated signal demonstrate the effectiveness of the design scheme. The practical system operated reliably, and detection and processing of the target signal under a strong sunlight background was realized. The results indicate that the system can detect the target signal's characteristic waveform in real time and monitor the system's working parameters. The prototype design could be used in a variety of engineering applications.

  19. Structuring Stokes correlation functions using vector-vortex beam

    NASA Astrophysics Data System (ADS)

    Kumar, Vijay; Anwar, Ali; Singh, R. P.

    2018-01-01

    Higher-order statistical correlations of an optical vector speckle field, formed by scattering of a vector-vortex beam, are explored. Here, we report the experimental construction of the Stokes parameter covariance matrix, consisting of all possible spatial Stokes parameter correlation functions. We also propose and experimentally realize a new class of Stokes correlation functions, the Stokes field autocorrelation functions. It is observed that the Stokes correlation functions of the vector-vortex beam are reflected in the respective Stokes correlation functions of the corresponding vector speckle field. The major advantage of the proposed Stokes correlation functions is that they can be easily tuned by manipulating the polarization of the vector-vortex beam used to generate the vector speckle field, and that phase information can be obtained directly from intensity measurements. Moreover, this approach leads to a complete experimental Stokes characterization of a broad range of random fields.
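    The spatial Stokes fields and one autocorrelation function can be computed directly from the two complex field components. The Gaussian speckle below is a numerical stand-in for measured data, and the FFT-based autocorrelation is a standard Wiener-Khinchin sketch rather than the authors' experimental procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    shape = (64, 64)
    # Complex Gaussian speckle fields for the two polarization components.
    Ex = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    Ey = rng.normal(size=shape) + 1j * rng.normal(size=shape)

    # Spatial Stokes parameter fields.
    S0 = np.abs(Ex)**2 + np.abs(Ey)**2
    S1 = np.abs(Ex)**2 - np.abs(Ey)**2
    S2 = 2.0 * np.real(Ex * np.conj(Ey))
    S3 = -2.0 * np.imag(Ex * np.conj(Ey))

    def autocorr(S):
        """Circular spatial autocovariance of a Stokes field via the FFT."""
        f = np.fft.fft2(S - S.mean())
        return np.real(np.fft.ifft2(np.abs(f)**2)) / S.size

    C11 = autocorr(S1)   # one entry of the Stokes covariance matrix; its
                         # zero-lag value equals the variance of S1
    ```

    Cross-correlations between different Stokes fields (e.g. S1 with S2) fill in the remaining entries of the covariance matrix the paper constructs.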

  20. The role of magnetic loops in particle acceleration at nearly perpendicular shocks

    NASA Technical Reports Server (NTRS)

    Decker, R. B.

    1993-01-01

    The acceleration of superthermal ions is investigated when a planar shock that is on average nearly perpendicular propagates through a plasma in which the magnetic field is the superposition of a constant uniform component plus a random field of transverse hydromagnetic fluctuations. The importance of the broadband nature of the transverse magnetic fluctuations in mediating ion acceleration at nearly perpendicular shocks is pointed out. Specifically, the fluctuations are composed of short-wavelength components which scatter ions in pitch angle and long-wavelength components which are responsible for a spatial meandering of field lines about the mean field. At nearly perpendicular shocks the field line meandering produces a distribution of transient loops along the shock. As an application of this model, the acceleration of a superthermal monoenergetic population of seed protons at a perpendicular shock is investigated by integrating along the exact phase-space orbits.

  1. Weak scattering of scalar and electromagnetic random fields

    NASA Astrophysics Data System (ADS)

    Tong, Zhisong

    This dissertation encompasses several studies relating to the theory of weak potential scattering of scalar and electromagnetic random, wide-sense statistically stationary fields from various types of deterministic or random linear media. The proposed theory is largely based on the first Born approximation for potential scattering and on the angular spectrum representation of fields. The scalar part of the theory focuses on calculating the second-order statistics of scattered light fields when the scattering medium consists of several types of discrete particles with deterministic or random potentials. It is shown that knowledge of the correlation properties of particles of the same and different types, described by the newly introduced pair-scattering matrix, is crucial for determining the spectral and coherence states of the scattered radiation. The pair-scattering-matrix approach is then used to solve an inverse problem: determining the location of an "alien" particle within a scattering collection of "normal" particles from several measurements of the spectral density of scattered light. Weak scalar scattering of light from a particulate medium in the presence of optical turbulence between the scattering centers is then approached by combining the Born approximation, for the interaction of light with discrete particles, with the Rytov approximation, for light propagation in an extended turbulent medium. It is demonstrated how the statistics of the scattered radiation depend on the scattering potentials of the particles and on the power spectra of the refractive-index fluctuations of the turbulence. This theory is of particular importance for applications involving atmospheric and oceanic light transmission. 
The second part of the dissertation develops a theoretical procedure for predicting the second-order statistics of electromagnetic random fields, such as polarization and linear momentum, scattered from static media. The spatial distribution of these properties of the scattered fields is shown to depend substantially on the correlation and polarization properties of the incident fields and on the statistics of the refractive-index distribution within the scatterers. Further, an example illustrates the usefulness of the electromagnetic scattering theory of random fields when the scattering medium is a thin bio-tissue layer with a prescribed power spectrum of refractive-index fluctuations. The polarization state of the scattered light is shown to be influenced by the correlation and polarization states of the illumination as well as by the particle-size distribution of the tissue slice.

  2. Chemical Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
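    The generalized Gillespie step described above can be sketched in a few lines. The following is an illustrative stdlib-Python toy (not the authors' implementation) for a single first-order decay reaction, in which the exponential inter-reaction time of the classical algorithm is swapped for an arbitrary waiting-time law; the Weibull variant with shape < 1 stands in for the broadly distributed delays discussed in the abstract:

```python
import random

def generalized_gillespie(n0, t_max, waiting_time, rng):
    """Chemical continuous-time random walk in particle-number space:
    like the Gillespie algorithm, but the inter-reaction time is drawn
    from an arbitrary distribution instead of an exponential."""
    n, t = n0, 0.0
    history = [(t, n)]
    while n > 0:
        dt = waiting_time(n, rng)
        if t + dt > t_max:
            break
        t += dt
        n -= 1                      # one A -> (nothing) reaction fires
        history.append((t, n))
    return history

def exponential_wait(n, rng, k=1.0):
    # Markovian (well-mixed) limit: total reaction rate k * n
    return rng.expovariate(k * n)

def heavy_tailed_wait(n, rng, k=1.0, shape=0.5):
    # Weibull with shape < 1 gives broadly distributed delays while the
    # typical wait still shrinks with the number of particles
    return rng.weibullvariate(1.0 / (k * n), shape)
```

    With `exponential_wait` this reduces to the standard Gillespie algorithm for first-order decay; `heavy_tailed_wait` illustrates how a non-exponential inter-reaction law produces the delayed, non-Markovian kinetics described above.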

  3. Spectral filtering of gradient for l2-norm frequency-domain elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    Oh, Ju-Won; Min, Dong-Joo

    2013-05-01

    To enhance the robustness of l2-norm elastic full-waveform inversion (FWI), we propose a denoise function that is incorporated into single-frequency gradients. Because field data are noisy and modelled data are noise-free, the denoise function is designed based on the ratio of modelled data to field data summed over shots and receivers. We first take the sums of the modelled data and field data over the shots, then take the sums of the absolute values of the resulting data over the receivers. Due to the monochromatic property of the wavefields at each frequency, signals in both the modelled and field data tend to be cancelled out or maintained in the same way, whereas certain types of noise, particularly random noise, can be amplified in the field data. As a result, the spectral distribution of the denoise function is inversely proportional to the noise-to-signal ratio at each frequency, which helps prevent noise-dominated gradients from contributing to model parameter updates. Numerical examples show that the spectral distribution of the denoise function resembles a frequency filter determined by the spectrum of the signal-to-noise (S/N) ratio during the inversion process, with little human intervention. The denoise function is applied to elastic FWI of synthetic data generated from a modified version of the Marmousi-2 model and contaminated with three types of random noise: white, low-frequency and high-frequency. Based on the spectrum of S/N ratios at each frequency, the denoise function mainly suppresses noise-dominated single-frequency gradients, which improves the inversion results at the cost of spatial resolution.
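    The recipe for the denoise weight as described in the abstract (sum over shots, then sum absolute values over receivers, then take the modelled-to-field ratio) can be sketched for a single frequency. This is a minimal stdlib-Python illustration under the assumption that the single-frequency wavefields are stored as shot-by-receiver arrays of complex samples; it is not the authors' code:

```python
def denoise_weight(modelled, observed):
    """Single-frequency denoise weight: stack complex traces over
    shots, take absolute values, sum over receivers, and form the
    modelled/field ratio.

    modelled, observed: list (per shot) of lists (per receiver) of
    complex wavefield samples at one frequency."""
    n_rec = len(modelled[0])
    num = sum(abs(sum(shot[r] for shot in modelled)) for r in range(n_rec))
    den = sum(abs(sum(shot[r] for shot in observed)) for r in range(n_rec))
    return num / den
```

    At a noise-free frequency the weight equals 1; incoherent noise inflates the field-data denominator (the signal stacks coherently over shots while noise does not cancel completely), so noise-dominated frequencies receive small weights.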

  4. Two spatial light modulator system for laboratory simulation of random beam propagation in random media.

    PubMed

    Wang, Fei; Toselli, Italo; Korotkova, Olga

    2016-02-10

    An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics.

  5. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland schistosomiasis-endemic regions, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m experimental quadrat was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for simple random sampling, systematic sampling and stratified random sampling were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratification variable is an efficient approach, with lower cost and higher precision, for the snail survey.
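    For intuition, the textbook variance formulas behind such a comparison can be written down directly. The stdlib-Python sketch below (illustrative only, with made-up densities; not the survey's actual computation) shows why stratification by an informative variable such as altitude cannot do worse than simple random sampling:

```python
import statistics

def variance_srs(pop, n):
    """Sampling variance of the mean under simple random sampling
    (with-replacement approximation): S^2 / n."""
    return statistics.pvariance(pop) / n

def variance_stratified(strata, n):
    """Sampling variance of the stratified-mean estimator with
    proportional allocation: (1/n) * sum_h W_h * S_h^2, where W_h is
    the stratum weight and S_h^2 the within-stratum variance."""
    total = sum(len(s) for s in strata)
    return sum(len(s) / total * statistics.pvariance(s)
               for s in strata) / n
```

    Because the population variance decomposes into within-stratum plus between-stratum parts, the stratified estimator's variance never exceeds the simple-random one, and is strictly smaller whenever the stratum means differ.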

  6. Metal-superconductor transition in low-dimensional superconducting clusters embedded in two-dimensional electron systems

    NASA Astrophysics Data System (ADS)

    Bucheli, D.; Caprara, S.; Castellani, C.; Grilli, M.

    2013-02-01

    Motivated by recent experimental data on thin film superconductors and oxide interfaces, we propose a random-resistor network apt to describe the occurrence of a metal-superconductor transition in a two-dimensional electron system with disorder on the mesoscopic scale. We consider low-dimensional (e.g. filamentary) structures of a superconducting cluster embedded in the two-dimensional network and we explore the separate effects and the interplay of the superconducting structure and of the statistical distribution of local critical temperatures. The thermal evolution of the resistivity is determined by a numerical calculation of the random-resistor network and, for comparison, a mean-field approach called effective medium theory (EMT). Our calculations reveal the relevance of the distribution of critical temperatures for clusters with low connectivity. In addition, we show that the presence of spatial correlations requires a modification of standard EMT to give qualitative agreement with the numerical results. Applying the present approach to an LaTiO3/SrTiO3 oxide interface, we find that the measured resistivity curves are compatible with a network of spatially dense but loosely connected superconducting islands.
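    For comparison with such numerical network calculations, the effective-medium step can be illustrated compactly. The sketch below (a minimal stdlib-Python toy, not the paper's model, which additionally includes a distribution of local critical temperatures and spatial correlations) solves Kirkpatrick's EMT self-consistency condition for a square random-resistor network by bisection:

```python
def emt_conductance(bond_conductances, tol=1e-10):
    """Kirkpatrick effective-medium conductance for a square
    random-resistor network (coordination number z = 4): solve
        sum_i (g_m - g_i) / (g_i + g_m) = 0
    for g_m by bisection (the residual is increasing in g_m)."""
    lo, hi = min(bond_conductances), max(bond_conductances)
    if hi - lo < tol:
        return lo
    def residual(gm):
        return sum((gm - g) / (g + gm) for g in bond_conductances)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    For a 50/50 binary mixture this reproduces the known closed form g_m = sqrt(g1*g2), and broken bonds (zero conductance) drive g_m to zero at the bond-percolation threshold p = 1/2 of the square lattice.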

  7. Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.

    PubMed

    Manjón, José V; Tohka, Jussi; Robles, Montserrat

    2010-11-01

    This paper addresses the problem of accurate voxel-level estimation of tissue proportions in human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov random field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov random field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation.

  8. Effective pore-scale dispersion upscaling with a correlated continuous time random walk approach

    NASA Astrophysics Data System (ADS)

    Le Borgne, T.; Bolster, D.; Dentz, M.; de Anna, P.; Tartakovsky, A.

    2011-12-01

    We investigate the upscaling of dispersion from a pore-scale analysis of Lagrangian velocities. A key challenge in the upscaling procedure is to relate the temporal evolution of spreading to the pore-scale velocity field properties. We test the hypothesis that one can represent Lagrangian velocities at the pore scale as a Markov process in space. The resulting effective transport model is a continuous time random walk (CTRW) characterized by a correlated random time increment, here denoted as correlated CTRW. We consider a simplified sinusoidal wavy channel model as well as a more complex heterogeneous pore space. For both systems, the predictions of the correlated CTRW model, with parameters defined from the velocity field properties (both distribution and correlation), are found to be in good agreement with results from direct pore-scale simulations over preasymptotic and asymptotic times. In this framework, the nontrivial dependence of dispersion on the pore boundary fluctuations is shown to be related to the competition between distribution and correlation effects. In particular, explicit inclusion of spatial velocity correlation in the effective CTRW model is found to be important to represent incomplete mixing in the pore throats.
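    The core idea of a correlated CTRW, velocities forming a Markov chain in space so that successive time increments are correlated, can be sketched as follows. This is an illustrative stdlib-Python toy with a two-state velocity chain (the states and transition probabilities are made up), not the pore-scale model of the paper:

```python
import random

def spatial_markov_walk(n_steps, dx, v_states, trans, rng):
    """Correlated CTRW sketch: the particle velocity is a Markov chain
    in space. Each step covers a fixed distance dx, so the time
    increment dt = dx / v is correlated between successive steps.

    v_states: list of velocities; trans[i][j]: transition probability
    from velocity class i to class j. Returns (positions, times)."""
    i = rng.randrange(len(v_states))
    x, t = 0.0, 0.0
    xs, ts = [x], [t]
    for _ in range(n_steps):
        x += dx
        t += dx / v_states[i]
        xs.append(x)
        ts.append(t)
        # draw the next velocity class from the transition-matrix row
        u, c = rng.random(), 0.0
        for j, p in enumerate(trans[i]):
            c += p
            if u <= c:
                i = j
                break
    return xs, ts
```

    With strongly diagonal transition matrices the walker keeps its velocity class over many steps, mimicking the velocity correlation that the abstract identifies as essential for representing incomplete mixing in pore throats.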

  9. Effect of acute pesticide exposure on bee spatial working memory using an analogue of the radial-arm maze

    NASA Astrophysics Data System (ADS)

    Samuelson, Elizabeth E. W.; Chen-Wishart, Zachary P.; Gill, Richard J.; Leadbeater, Ellouise

    2016-12-01

    Pesticides, including neonicotinoids, typically target pest insects by being neurotoxic. Inadvertent exposure to foraging insect pollinators is usually sub-lethal, but may affect cognition. One cognitive trait, spatial working memory, may be important in avoiding previously visited flowers and in other spatial tasks such as navigation. To test this, we investigated the effect of acute thiamethoxam exposure on spatial working memory in the bumblebee Bombus terrestris, using an adaptation of the radial-arm maze (RAM). We first demonstrated that bumblebees use spatial working memory to solve the RAM by showing that untreated bees performed significantly better than would be expected if choices were random or governed by stereotyped visitation rules. We then exposed bees to either a high sub-lethal positive control thiamethoxam dose (2.5 ng bee-1), or one of two low doses (0.377 or 0.091 ng bee-1) based on estimated field-realistic exposure. The high dose caused bees to make more and earlier spatial memory errors and take longer to complete the task than unexposed bees. For the low doses, the negative effects were smaller but statistically significant, and dependent on bee size. The spatial working memory impairment shown here has the potential to harm bees exposed to thiamethoxam, through possible impacts on foraging efficiency or homing.
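    The "random choice" null model against which the bees' performance was compared has a simple closed form: after n independent uniform choices among k arms, the expected number of distinct arms visited is k(1 - (1 - 1/k)^n). A stdlib-Python sketch (illustrative; not the study's analysis code) checks the formula by simulation:

```python
import random

def expected_distinct_arms(k, n):
    """Expected number of distinct arms visited after n independent
    uniform choices among k arms (the random-choice null model)."""
    return k * (1.0 - (1.0 - 1.0 / k) ** n)

def simulate_distinct_arms(k, n, trials, rng):
    """Monte Carlo estimate of the same quantity."""
    total = 0
    for _ in range(trials):
        total += len({rng.randrange(k) for _ in range(n)})
    return total / trials
```

    A bee with perfect spatial working memory would visit n distinct arms in n choices; the gap between n and the null expectation is what makes the RAM performance statistic informative.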

  10. Assessing the performance of multiple spectral-spatial features of a hyperspectral image for classification of urban land cover classes using support vector machines and artificial neural network

    NASA Astrophysics Data System (ADS)

    Pullanagari, Reddy; Kereszturi, Gábor; Yule, Ian J.; Ghamisi, Pedram

    2017-04-01

    Accurate and spatially detailed mapping of complex urban environments is essential for land managers. Classifying hyperspectral images of high spectral and spatial resolution is a challenging task because of their data abundance and computational complexity. Approaches that combine spectral and spatial information in a single classification framework have attracted special attention because of their potential to improve classification accuracy. We extracted multiple features from the spectral and spatial domains of hyperspectral images and evaluated them with two supervised classification algorithms: support vector machines (SVM) and an artificial neural network. The spatial features considered are produced by a gray-level co-occurrence matrix and extended multiattribute profiles. All of these features were stacked, and the most informative features were selected using a genetic-algorithm-based SVM. After selecting the most informative features, the classification model was integrated with a segmentation map derived using a hidden Markov random field. We tested the proposed method on a real application, a hyperspectral image acquired from AisaFENIX, and on widely used hyperspectral images. From the results, it can be concluded that the proposed framework significantly improves the results at different spectral and spatial resolutions over different instrumentation.

  11. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.

  12. The cosmological principle is not in the sky

    NASA Astrophysics Data System (ADS)

    Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan

    2017-08-01

    The homogeneity of the matter distribution at large scales, known as the cosmological principle, is a central assumption of the standard cosmological model. The assumption is testable, however, and thus need no longer be taken as a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume, with the radius scale varying up to 300 h-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h-1 Mpc scale, and even the average lies far outside the range allowed by the random distribution; the deviations are statistically impossible to realize in a random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from an N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous at that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm and opens a new field of research in theoretical cosmology.
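    The logic of such a counts-in-cells homogeneity test can be illustrated on a toy catalogue. In the sketch below (stdlib Python, with a homogeneous Poisson point set standing in for the galaxy and random catalogues; not the authors' pipeline), the null expectation is that the mean count tracks density times sphere volume and the variance-to-mean ratio stays near 1:

```python
import math

def counts_in_cells(points, centres, r):
    """Number of 3-D points within distance r of each centre
    (no periodic wrapping; centres should sit well inside the box)."""
    return [sum(1 for p in points if math.dist(p, c) <= r)
            for c in centres]
```

    A clustered catalogue such as the LRG sample would show a variance-to-mean ratio well above the Poisson value of 1, which is the excess dispersion the abstract reports.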

  13. Relative distribution of cosmic rays and magnetic fields

    NASA Astrophysics Data System (ADS)

    Seta, Amit; Shukurov, Anvar; Wood, Toby S.; Bushby, Paul J.; Snodin, Andrew P.

    2018-02-01

    Synchrotron radiation from cosmic rays is a key observational probe of the galactic magnetic field. Interpreting synchrotron emission data requires knowledge of the cosmic ray number density, which is often assumed to be in energy equipartition (or otherwise tightly correlated) with the magnetic field energy. However, there is no compelling observational or theoretical reason to expect such a tight correlation to hold across all scales. We use test particle simulations, tracing the propagation of charged particles (protons) through a random magnetic field, to study the cosmic ray distribution at scales comparable to the correlation scale of the turbulent flow in the interstellar medium (≃100 pc in spiral galaxies). In these simulations, we find that there is no spatial correlation between the cosmic ray number density and the magnetic field energy density. In fact, their distributions are approximately statistically independent. We find that low-energy cosmic rays can become trapped between magnetic mirrors, whose location depends more on the structure of the field lines than on the field strength.
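    The magnetic-mirror trapping invoked above follows from conservation of the magnetic moment: a particle escapes if its equatorial pitch angle lies inside the loss cone defined by sin^2(a) = B_min/B_max. A minimal stdlib-Python sketch of that criterion (illustrative only; not the paper's test-particle integrator):

```python
import math

def loss_cone_angle(b_min, b_max):
    """Half-angle of the equatorial loss cone of a magnetic mirror:
    sin^2(a_lc) = B_min / B_max, from conservation of the magnetic
    moment mu = m * v_perp^2 / (2 * B)."""
    return math.asin(math.sqrt(b_min / b_max))

def is_trapped(equatorial_pitch_angle, b_min, b_max):
    """A particle bounces between mirror points if its equatorial
    pitch angle lies outside the loss cone."""
    return equatorial_pitch_angle > loss_cone_angle(b_min, b_max)
```

    Stronger mirrors (larger B_max/B_min) have narrower loss cones and trap a larger fraction of an isotropic particle population, consistent with the trapping of low-energy cosmic rays described above.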

  14. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is used to measure the distance between different histogram distributions. Meanwhile, object heterogeneity is calculated by combining the spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods considered, which confirms its validity and effectiveness in OBCD.
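    The G-statistic used here as a histogram distance is the standard G-test quantity. A minimal stdlib-Python sketch (illustrative; the bin counts below are made up, and expected counts are assumed nonzero wherever observed counts are):

```python
import math

def g_statistic(obs, exp):
    """G-test statistic 2 * sum_i o_i * ln(o_i / e_i) between an
    observed histogram and expected counts; the expected counts are
    first rescaled to the observed total, and empty observed bins
    contribute 0."""
    scale = sum(obs) / sum(exp)
    return 2.0 * sum(o * math.log(o / (e * scale))
                     for o, e in zip(obs, exp) if o > 0)
```

    With matched totals this is 2N times the Kullback-Leibler divergence between the two bin distributions, so it is zero only for identical histograms and grows as they diverge, which is what makes it usable as an object-to-object distance.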

  15. A Heavy Tailed Expectation Maximization Hidden Markov Random Field Model with Applications to Segmentation of MRI

    PubMed Central

    Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego

    2017-01-01

    A wide range of segmentation approaches assumes that the intensity histograms extracted from magnetic resonance images (MRI) have, for each brain tissue, a distribution that can be modeled by a Gaussian or a mixture of Gaussians. Nevertheless, the intensity histograms of white matter and gray matter are not symmetric, and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) that models the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: first, it is more robust to outliers; second, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194

  16. Scale linkage and contingency effects of field-scale and hillslope-scale controls of long-term soil erosion: Anthropogeomorphic sediment flux in agricultural loess watersheds of Southern Germany

    NASA Astrophysics Data System (ADS)

    Houben, Peter

    2008-10-01

    Agricultural landscapes with a millennial-scale history of cultivation are common in many loess areas of central Europe. Over time, patterns of erosion and sedimentation have been continually modified via the variable imposition of anthropogenic discontinuities and linkages on fragmented hillslope sediment cascades, which eventually caused the complicated soilscape pattern. These field records challenge topographically oriented models of hillslope erosion and simple predictions of longer-term spatial soilscape change by cultivation activities. A thorough understanding of how soilscape patterns form over the long term, however, is essential for developing spatial concepts of the sediment budget, particularly for the spatial modeling of anthropogenic hillslope sediment flux using GIS. In this study I used extensive datasets of anthropogenic soil truncation and burial in a typical undulating loess watershed in southern Germany (10 km2, Wetterau Basin, north of Frankfurt a.M.). Spatial soilscape properties and historic sediment flux, as caused by cultivation over seven millennia, were evaluated from these data. The soilscape pattern on the low-gradient hillslopes of the study area was found to be marked by a statistically near-random pattern of varying depth (thickness) of truncation and overthickened burial. Moreover, it was shown that truncation and burial had developed independently of each other and did not correlate with either hillslope gradient or downslope curvature. Hence, in the field any combination of (a few) nearly preserved, severely truncated or completely removed soil profiles with either no, some, or a thick sediment cover is present, lacking an obvious spatial pattern. 
Here, I suggest putting long-term change of the soilscape into a contextual anthropogeomorphic systems perspective that accommodates components of human-induced soil erosion operating at different spatial scales, in order to interpret the longer-term spatial consequences at the hillslope-system level. In the study area, system-scale linkages are marked by the spatial intersection of a finer-scaled managed field system with a broader hillslope-scale framework of 'natural' erosion controls. In the low-gradient study area, field borders exert control over the spatial reference of soil erosion and sedimentation sites. Over time, this introduced growing historical and spatial contingency into the soilscape, because arbitrary spatial changes of the field system are inherent in its socio-agricultural maintenance. Thus, the very low-gradient, low-erosivity setting of the study area has singled out the agency of human-induced spatial and connectivity controls, and of contingency, for long-term spatial hillslope sediment flux. Although these findings may be less applicable to different settings, they allow a generic conceptual model of the linkages between 'natural' and anthropogenic subsystems to be derived for interpreting the effects of long-term human-induced sediment flux. Accordingly, the resulting balance between on-hillslope net storage and net delivery to streams scales with basic physiographic properties of erosivity and sedimentation, as well as with the degree of anthropogenic hillslope fragmentation. For loess areas in Europe, variable fields are fundamental anthropogeomorphic units that determine appropriate system scaling for historic sediment flux analysis and constrain retrodiction and prediction of changing fluxes at a point and a time at watershed scales. 
Methodological implications concern adequate sampling strategies for recording soilscape change; accordingly, a critical review of the applicability of the catena concept to long-cultivated hillslopes in central Europe is included. Finally, the suggested refined generic model of long-term, human-controlled sediment flux opens a number of research opportunities, particularly for linking modeling approaches to long-term field records of cultivation-related change in the soilscape.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu

    In the landscape perspective, our Universe begins with a quantum tunneling from an eternally inflating parent vacuum, followed by a period of slow-roll inflation. We investigate the tunneling process and calculate the probability distribution for the initial conditions and for the number of e-folds of slow-roll inflation, modeling the landscape by a small-field one-dimensional random Gaussian potential. We find that such a landscape is fully consistent with observations, but the probability for future detection of spatial curvature is rather low, P ∼ 10^-3.

  18. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity, and leads to a step-by-step procedure.

  19. Spatial versus sequential correlations for random access coding

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed

    2016-03-01

    Random access codes are important for a wide range of applications in quantum information. However, their implementation with quantum theory can be made in two very different ways: (i) by distributing data with strong spatial correlations violating a Bell inequality or (ii) using quantum communication channels to create stronger-than-classical sequential correlations between state preparation and measurement outcome. Here we study this duality of the quantum realization. We present a family of Bell inequalities tailored to the task at hand and study their quantum violations. Remarkably, we show that the use of spatial and sequential quantum correlations imposes different limitations on the performance of quantum random access codes: Sequential correlations can outperform spatial correlations. We discuss the physics behind the observed discrepancy between spatial and sequential quantum correlations.
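    The gap between the classical and quantum resources for the simplest (2 → 1) random access code can be verified directly: brute force over all deterministic classical strategies gives a success probability of 3/4, while a single qubit achieves cos^2(π/8) ≈ 0.854. A stdlib-Python sketch (illustrative; not the experimental analysis of the paper):

```python
import math
from itertools import product

def best_classical_rac():
    """Brute-force the best deterministic classical 2 -> 1 random
    access code: one bit is sent, and the receiver must output bit a
    or bit b on demand. Returns the best average success probability
    over the 8 equally likely (a, b, query) cases."""
    best = 0.0
    for f in product((0, 1), repeat=4):            # encoder over (a, b)
        enc = {(a, b): f[2 * a + b] for a in (0, 1) for b in (0, 1)}
        for g0 in product((0, 1), repeat=2):       # decoder if a is asked
            for g1 in product((0, 1), repeat=2):   # decoder if b is asked
                ok = sum((g0[enc[a, b]] == a) + (g1[enc[a, b]] == b)
                         for a in (0, 1) for b in (0, 1))
                best = max(best, ok / 8)
    return best

def quantum_rac_success():
    """Optimal one-qubit (sequential-correlation) strategy: encode the
    four inputs as states along the diagonal Bloch directions and
    measure in the Z or X basis; every case succeeds with cos^2(pi/8)."""
    return math.cos(math.pi / 8) ** 2
```

    Shared randomness does not improve the classical bound (it only mixes deterministic strategies), so the quantum advantage shown by the two return values is genuine.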

  20. Assessing the spatial distribution of glyphosate-AMPA in an Argentinian farm field using a pedometric technique

    NASA Astrophysics Data System (ADS)

    Barbera, Agustin; Zamora, Martin; Domenech, Marisa; Vega-Becerra, Andres; Castro-Franco, Mauricio

    2017-04-01

    The cultivation of transgenic glyphosate-resistant crops has been the most rapidly adopted crop technology in Argentina since 1997. Thus, more than 180 million liters of the broad-spectrum herbicide glyphosate (N-phosphonomethylglycine) are applied every year. The intensive use of glyphosate combined with the geomorphometrical characteristics of the Pampa region is a matter of environmental concern. An integral component of assessing the risk of soil contamination in farm fields is to describe the spatial distribution of the levels of the contaminant agent. The application of pedometric techniques for this purpose has scarcely been demonstrated. These techniques could provide an estimate of the concentration at a given unsampled location, as well as the probability that the concentration will exceed a critical threshold. In this work, a pedometric technique for assessing the spatial distribution of glyphosate in farm fields was developed. A field located at INTA Barrow, Argentina (Lat: -38.322844, Lon: -60.25572), which has high spatial soil variability, was divided into soil-specific zones using a pedometric technique. This was developed by integrating INTA Soil Survey information and a digital elevation model (DEM) obtained from a DGPS. Firstly, 10 topographic indices derived from the DEM were used in a Random Forest algorithm to obtain a classification model for soil map units (SMU). Secondly, the classification model was applied to those topographic indices but at a scale larger than 1:1000. Finally, a spatial principal component analysis and clustering with Fuzzy K-means were applied within each SMU. From this clustering, three soil-specific zones were determined, which were also validated through apparent electrical conductivity (ECa) measurements. Three soil sample points were selected per zone. In each one, samples from 0-10, 10-20 and 20-40 cm depth were taken. Glyphosate and AMPA content in each soil sample were analyzed using UPLC-MS/MS ESI (+/-). 
Only AMPA at 10-20 cm depth showed a significant difference among soil-specific zones. However, marked trends for glyphosate and AMPA content were clearly shown among zones. These results suggest that (i) the presence of glyphosate and AMPA shows spatial distribution patterns related to soil properties at the field scale; and (ii) the proposed technique allowed soil-specific zones related to the spatial distribution of glyphosate and AMPA to be determined quickly, cost-effectively and accurately. In further work, we would suggest adding new soil information sources to improve soil-specific zone delimitation.
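The zoning step described above (principal components of per-pixel topographic indices, then clustering into soil-specific zones) can be sketched with standard tools. Below is a minimal illustration using plain k-means on synthetic indices; the paper uses fuzzy k-means within Random-Forest-derived soil map units, and the grid sizes and index values here are invented.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means (a simplification of the paper's fuzzy k-means)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def soil_zones(indices, n_zones=3, n_components=2):
    """indices: (n_pixels, n_indices) matrix of topographic indices."""
    X = (indices - indices.mean(0)) / indices.std(0)   # standardize each index
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    scores = X @ Vt[:n_components].T                   # leading components
    return kmeans(scores, n_zones)

rng = np.random.default_rng(1)
indices = rng.normal(size=(500, 10))   # 500 pixels, 10 synthetic indices
zones = soil_zones(indices)            # zone label (0..2) for every pixel
```

Each pixel receives one of three zone labels, mimicking the three soil-specific zones delineated in the study.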

  1. Transient flow conditions in probabilistic wellhead protection: importance and ways to manage spatial and temporal uncertainty in capture zone delineation

    NASA Astrophysics Data System (ADS)

    Enzenhoefer, R.; Rodriguez-Pretelin, A.; Nowak, W.

    2012-12-01

    "From an engineering standpoint, the quantification of uncertainty is extremely important not only because it allows estimating risk but mostly because it allows taking optimal decisions in an uncertain framework" (Renard, 2007). The most common way to account for uncertainty in the field of subsurface hydrology and wellhead protection is to randomize spatial parameters, e.g. the log-hydraulic conductivity or porosity. This enables water managers to take robust decisions in delineating wellhead protection zones with rationally chosen safety margins in the spirit of probabilistic risk management. Probabilistic wellhead protection zones are commonly based on steady-state flow fields. However, several past studies showed that transient flow conditions may substantially influence the shape and extent of catchments. Therefore, we believe they should be accounted for in the probabilistic assessment and in the delineation process. The aim of our work is to show the significance of flow transients and to investigate the interplay between spatial uncertainty and flow transients in wellhead protection zone delineation. To this end, we advance our concept of probabilistic capture zone delineation (Enzenhoefer et al., 2012) that works with capture probabilities and other probabilistic criteria for delineation. The extended framework is able to evaluate the time fraction that any point on a map falls within a capture zone. In short, we separate capture probabilities into spatial/statistical and time-related frequencies. This will provide water managers with additional information on how to manage a well catchment in the light of possible hazard conditions close to the capture boundary under uncertain and time-variable flow conditions. In order to save computational costs, we take advantage of superposed flow components with time-variable coefficients. 
We assume an instantaneous development of steady-state flow conditions after each temporal change in the driving forces, following an idea by Festger and Walter (2002). These quasi steady-state flow fields are cast into a geostatistical Monte Carlo framework to admit and evaluate the influence of parameter uncertainty on the delineation process. Furthermore, this framework enables conditioning on observed data with any conditioning scheme, such as rejection sampling, Ensemble Kalman Filters, etc. To further reduce the computational load, we use the reverse formulation of advective-dispersive transport. We simulate the reverse transport by particle-tracking random walk, which avoids numerical dispersion when accounting for well arrival times.
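The reverse particle-tracking idea can be sketched as follows: particles released at the well are tracked backwards through a (here uniform) quasi steady-state flow with a dispersive random-walk step, and the fraction of particles that ever visit a grid cell approximates its capture probability. The velocity, dispersion coefficient and grid are invented placeholders, not the authors' model setup.

```python
import numpy as np

def capture_probability(n_particles=2000, n_steps=150, dt=1.0,
                        v=(0.02, 0.0), D=0.005, n_cells=40, seed=0):
    """Backward random-walk particle tracking from a well at the origin."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 2))               # all released at the well
    visited = np.zeros((n_particles, n_cells * n_cells), dtype=bool)
    for _ in range(n_steps):
        # mark the cell each particle currently occupies (domain [-2, 2]^2)
        idx = np.clip(((pos + 2.0) / 4.0 * n_cells).astype(int), 0, n_cells - 1)
        visited[np.arange(n_particles), idx[:, 0] * n_cells + idx[:, 1]] = True
        # backward advection (v is the reversed mean velocity) plus dispersion
        pos += np.array(v) * dt + rng.normal(scale=np.sqrt(2 * D * dt),
                                             size=pos.shape)
    # fraction of particles that ever visited each cell = capture probability
    return visited.mean(axis=0).reshape(n_cells, n_cells)

prob = capture_probability()
```

Repeating this over Monte Carlo realizations of a random conductivity field, as in the paper, would turn the single map above into a probabilistic capture-zone estimate.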

  2. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the statistical properties of the pixel distribution. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. This is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.
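The pixelation and far-field filtering described above can be illustrated with a toy model: a smooth design transmission is converted to a binary (opaque/transparent) pixel map by Floyd-Steinberg error diffusion, and an ideal low-pass filter in the Fourier plane stands in for the far-field filter. The Gaussian design profile, grid size and cutoff are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def error_diffusion(target):
    """Floyd-Steinberg dithering of a [0, 1] transmission map to binary pixels."""
    t = target.astype(float).copy()
    out = np.zeros_like(t)
    h, w = t.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if t[y, x] >= 0.5 else 0.0
            err = t[y, x] - out[y, x]        # push the error onto neighbours
            if x + 1 < w:
                t[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    t[y + 1, x - 1] += err * 3 / 16
                t[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    t[y + 1, x + 1] += err * 1 / 16
    return out

def far_field_filter(binary, cutoff=0.12):
    """Ideal circular low-pass filter in the Fourier (far-field) plane."""
    ny, nx = binary.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    lowpass = (fx ** 2 + fy ** 2) <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.fft2(binary) * lowpass))

y, x = np.mgrid[-1:1:128j, -1:1:128j]
design = 0.2 + 0.6 * np.exp(-(x ** 2 + y ** 2) / 0.3)  # design transmission
pixels = error_diffusion(design)                       # binary pixel map
recovered = far_field_filter(pixels)                   # smoothed transmission
```

Error diffusion conserves the average transmission almost exactly, which is why the filtered binary map tracks the design; the paper's contribution is optimizing the binary pattern beyond this starting point.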

  3. Monitoring Method of Cow Anthrax Based on Gis and Spatial Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Li, Lin; Yang, Yong; Wang, Hongbin; Dong, Jing; Zhao, Yujun; He, Jianbin; Fan, Honggang

    Geographic information system (GIS) is a computer application system which possesses the ability to manipulate spatial information and has been used in many fields related to spatial information management. Many methods and models have been established for analyzing animal disease distributions and temporal-spatial transmission. Great benefits have been gained from the application of GIS in animal disease epidemiology, and GIS is now a very important tool in animal disease epidemiological research. The spatial analysis function of GIS can be widened and strengthened by spatial statistical analysis, allowing for deeper exploration, analysis, manipulation and interpretation of the spatial pattern and spatial correlation of animal disease. In this paper, we analyzed the spatial distribution characteristics of cow anthrax in a target district (called district A here because the epidemic data are confidential) based on an established GIS of cow anthrax in this district, combining spatial statistical analysis and GIS. Cow anthrax is a biogeochemical disease whose geographical distribution is closely related to the environmental factors of its habitats and shows distinct spatial characteristics; correct analysis of the spatial distribution of cow anthrax therefore plays a very important role in the monitoring, prevention and control of the disease. However, the application of classic statistical methods in some areas is very difficult because of the pastoral nomadic context. The high mobility of livestock and the lack of suitable sampling frames are among the difficulties that currently make it nearly impossible to apply rigorous random sampling methods. It is thus necessary to develop an alternative sampling method which could overcome the lack of sampling and meet the requirements for randomness. 
The GIS software ArcGIS 9.1 was used to overcome the lack of sampling-site data. Using ArcGIS 9.1 and GeoDa to analyze the spatial distribution of cow anthrax in district A, we reached two conclusions about its density: (1) it follows a spatial clustering model, and (2) it exhibits intense spatial autocorrelation. We established a prediction model to estimate the anthrax distribution based on the spatial characteristics of the density of cow anthrax. Compared with the true distribution, the prediction model shows good agreement and is feasible in application. The GIS-based method can readily be implemented in cow anthrax monitoring and investigation, and the spatial-statistics-based prediction model provides a foundation for other studies on spatially related animal diseases.
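The spatial-autocorrelation analysis mentioned above can be sketched with global Moran's I on a regular grid with rook (4-neighbour) weights; GeoDa computes the same statistic for irregular reporting units. The two synthetic grids below (white noise versus a box-blurred copy) only illustrate the contrast between no autocorrelation and strong autocorrelation.

```python
import numpy as np

def morans_i(z):
    """Global Moran's I on a grid with rook (4-neighbour) weights."""
    z = z - z.mean()
    ny, nx = z.shape
    num, wsum = 0.0, 0
    for dy, dx in ((0, 1), (1, 0)):      # right and down neighbours
        a = z[:ny - dy, :nx - dx]
        b = z[dy:, dx:]
        num += 2.0 * (a * b).sum()       # count each pair in both directions
        wsum += 2 * a.size
    return (z.size / wsum) * num / (z ** 2).sum()

rng = np.random.default_rng(0)
noise = rng.normal(size=(20, 20))
# box-blurred copy: neighbouring cells share inputs, so I should be large
blurred = sum(np.roll(np.roll(noise, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9
i_noise, i_blur = morans_i(noise), morans_i(blurred)
```

Values of I near zero indicate spatial randomness; strongly positive values, as for the blurred grid, indicate the clustering the abstract reports for the anthrax density.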

  4. Construction of Solar-Wind-Like Magnetic Fields

    NASA Technical Reports Server (NTRS)

    Roberts, Dana Aaron

    2012-01-01

    Fluctuations in the solar wind tend not only to have velocities and magnetic fields correlated in the sense consistent with Alfven waves traveling from the Sun, but they also have the magnitude of the magnetic field remarkably constant despite being broadband. This paper provides, for the first time, a method for constructing fields with nearly constant magnetic-field magnitude, zero divergence, and any specified power spectrum for the fluctuations of the components of the field. Every wave vector, k, is associated with two polarizations; the relative phases of these can be chosen to minimize the variance of the field magnitude while retaining the random character of the fields. The method is applied to a case with one spatial coordinate that demonstrates good agreement with observed time series and power spectra of the magnetic field in the solar wind, as well as with the distribution of the angles of rapid changes (discontinuities), thus showing a deep connection between two seemingly unrelated issues. It is suggested that using this construction will lead to more realistic simulations of solar wind turbulence and of the propagation of energetic particles.

  5. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
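The large-scale step described above, turning a correlated Gaussian field into a binary raining/not-raining mask with a prescribed occupation rate, can be sketched as follows. The anisotropic correlation lengths and the 20% occupation rate are illustrative values, not the ARAMIS-derived statistics.

```python
import numpy as np

def binary_rain_mask(n=128, lx=12.0, ly=4.0, rate=0.2, seed=0):
    """Threshold an anisotropic correlated Gaussian field into a rain mask."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    # anisotropic Gaussian spectral envelope (longer correlation along x)
    amp = np.exp(-((kx * lx) ** 2 + (ky * ly) ** 2))
    field = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * amp))
    # threshold chosen so exactly `rate` of the area is raining
    threshold = np.quantile(field, 1.0 - rate)
    return field > threshold            # True where it is raining

mask = binary_rain_mask()
```

In the paper, each raining midscale subarea of such a mask would then be populated with HYCELL rain cells.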

  6. Triggering extreme events at the nanoscale in photonic seas

    NASA Astrophysics Data System (ADS)

    Liu, C.; van der Wel, R. E. C.; Rotenberg, N.; Kuipers, L.; Krauss, T. F.; di Falco, A.; Fratalocchi, A.

    2015-04-01

    Hurricanes, tsunamis, rogue waves and tornadoes are rare natural phenomena that embed an exceptionally large amount of energy, which appears and quickly disappears in a probabilistic fashion. This makes them difficult to predict and hard to generate on demand. Here we demonstrate that we can trigger the onset of rare events akin to rogue waves controllably, and systematically use their generation to break the diffraction limit of light propagation. We illustrate this phenomenon in the case of a random field, where energy oscillates among incoherent degrees of freedom. Despite the low energy carried by each wave, we illustrate how to control a mechanism of spontaneous synchronization, which constructively builds up the spectral energy available in the whole bandwidth of the field into giant structures whose statistics are predictable. The larger the frequency bandwidth of the random field, the larger the amplitude of the rare events built up by this mechanism. Our system is composed of an integrated optical resonator, realized on a photonic crystal chip. Through near-field imaging experiments, we record confined rogue waves characterized by a spatial localization of 206 nm and an ultrashort duration of 163 fs at a wavelength of 1.55 μm. Such localized energy patterns are formed in a deterministic dielectric structure that does not require nonlinear properties.

  7. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2018-06-01

    In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.

  8. Effect of magnetic field on the flux pinning mechanisms in Al and SiC co-doped MgB2 superconductor

    NASA Astrophysics Data System (ADS)

    Kia, N. S.; Ghorbani, S. R.; Arabi, H.; Hossain, M. S. A.

    2018-07-01

    MgB2 superconductor samples co-doped with 0.02 wt. Al2O3 and 0-0.05 wt. SiC were studied by magnetization-magnetic field (M-H) loop measurements at different temperatures. The critical current density has been calculated by the Bean model, and the irreversibility field, Hirr, has been obtained by the Kramer method. The pinning mechanism of the co-doped sample with 2% Al and 5% SiC was investigated in particular due to its having the highest Hirr. The normalized volume pinning force f = F/Fmax as a function of reduced magnetic field h = H/Hirr has been obtained, and the pinning mechanism was studied by the Dew-Hughes model. It was found that the normal point pinning (NPP), the normal surface pinning (NSP), and the normal volume pinning (NVP) mechanisms play the main roles. The magnetic field and temperature dependence of the contributions of the NPP, NSP, and NVP pinning mechanisms were obtained. The results show that the contributions of the pinning mechanisms depend on the temperature and magnetic field. From the temperature dependence of the critical current density within collective pinning theory, it was found that both δl pinning, due to spatial fluctuations of the charge-carrier mean free path, and δTc pinning, due to randomly distributed spatial variations in the transition temperature, coexist at zero magnetic field in co-doped samples. Yet the charge-carrier mean-free-path fluctuation (δl) pinning is the only important pinning mechanism at non-zero magnetic fields.
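The Dew-Hughes style decomposition used above can be sketched as a linear fit: the normalized pinning force f(h) is written as a weighted sum of the normal point, surface and volume scaling functions, and the weights are recovered by least squares. The "measured" curve below is synthetic, not the MgB2 data, and the basis functions are used unnormalized for simplicity.

```python
import numpy as np

def dew_hughes_basis(h):
    """Normal point, surface and volume pinning scaling functions of h."""
    return np.column_stack([
        h * (1 - h) ** 2,              # normal point pinning (NPP)
        np.sqrt(h) * (1 - h) ** 2,     # normal surface pinning (NSP)
        (1 - h) ** 2,                  # normal volume pinning (NVP)
    ])

h = np.linspace(0.01, 0.99, 60)        # reduced field h = H / Hirr
true_weights = np.array([0.6, 0.3, 0.1])
f = dew_hughes_basis(h) @ true_weights           # synthetic pinning-force curve
fitted = np.linalg.lstsq(dew_hughes_basis(h), f, rcond=None)[0]
```

The fitted weights quantify the relative contribution of each mechanism at one temperature; repeating the fit at several temperatures and fields gives the dependences reported in the abstract.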

  9. Cluster: A New Application for Spatial Analysis of Pixelated Data for Epiphytotics.

    PubMed

    Nelson, Scot C; Corcoja, Iulian; Pethybridge, Sarah J

    2017-12-01

    Spatial analysis of epiphytotics is essential to develop and test hypotheses about pathogen ecology and disease dynamics, and to optimize plant disease management strategies. Data collection for spatial analysis requires substantial investment in time to depict patterns in various frames and hierarchies. We developed a new approach for spatial analysis of pixelated data in digital imagery and incorporated the method in a stand-alone desktop application called Cluster. The user isolates target entities (clusters) by designating up to 24 pixel colors as nontargets and moves a threshold slider to visualize the targets. The app calculates the percent area occupied by targeted pixels, identifies the centroids of targeted clusters, and computes the relative compass angle of orientation for each cluster. Users can deselect anomalous clusters manually and/or automatically by specifying a size threshold value to exclude smaller targets from the analysis. Up to 1,000 stochastic simulations randomly place the centroids of each cluster in ranked order of size (largest to smallest) within each matrix while preserving their calculated angles of orientation for the long axes. A two-tailed t test compares the mean inter-cluster distances of the observed map with those derived from the randomly simulated maps. This is the basis for statistical testing of the null hypothesis that the clusters are randomly distributed within the frame of interest. These frames can assume any shape, from natural (e.g., leaf) to arbitrary (e.g., a rectangular or polygonal field). Cluster summarizes normalized attributes of clusters, including pixel number, axis length, axis width, compass orientation, and the length/width ratio, available to the user as a downloadable spreadsheet. Each simulated map may be saved as an image and inspected. 
Provided examples demonstrate the utility of Cluster to analyze patterns at various spatial scales in plant pathology and ecology and highlight the limitations, trade-offs, and considerations for the sensitivities of variables and the biological interpretations of results. The Cluster app is available as a free download for Apple computers at iTunes, with a link to a user guide website.
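The randomization test at the core of Cluster can be sketched as follows: the mean pairwise distance between observed cluster centroids is compared with the same statistic for centroids placed uniformly at random in the frame. A simple Monte Carlo p-value stands in for the app's two-tailed t test here, and the clustered "observed" centroids are synthetic.

```python
import numpy as np

def mean_pairwise(pts):
    """Mean distance over all unordered pairs of points."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    return d[np.triu_indices(len(pts), 1)].mean()

def randomization_test(obs, frame=(1.0, 1.0), n_sim=1000, seed=0):
    """Compare the observed statistic against uniform-random placements."""
    rng = np.random.default_rng(seed)
    stat = mean_pairwise(obs)
    sims = np.array([mean_pairwise(rng.random((len(obs), 2)) * frame)
                     for _ in range(n_sim)])
    # two-sided Monte Carlo p-value
    p = 2 * min((sims <= stat).mean(), (sims >= stat).mean())
    return stat, p

rng = np.random.default_rng(42)
clustered = 0.1 * rng.random((20, 2)) + 0.2   # 20 centroids packed in a corner
stat, p = randomization_test(clustered)        # small p: reject randomness
```

A small p-value rejects the null hypothesis of randomly distributed clusters, mirroring the app's decision rule.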

  10. Spectral-spatial classification of hyperspectral imagery with cooperative game

    NASA Astrophysics Data System (ADS)

    Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei

    2018-01-01

    Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.

  11. A dynamic spatio-temporal model for spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin; Walsh, Daniel P.

    2017-01-01

    Analyzing spatial data often requires modeling dependencies created by a dynamic spatio-temporal data generating process. In many applications, a generalized linear mixed model (GLMM) is used with a random effect to account for spatial dependence and to provide optimal spatial predictions. Location-specific covariates are often included as fixed effects in a GLMM and may be collinear with the spatial random effect, which can negatively affect inference. We propose a dynamic approach to account for spatial dependence that incorporates scientific knowledge of the spatio-temporal data generating process. Our approach relies on a dynamic spatio-temporal model that explicitly incorporates location-specific covariates. We illustrate our approach with a spatially varying ecological diffusion model implemented using a computationally efficient homogenization technique. We apply our model to understand individual-level and location-specific risk factors associated with chronic wasting disease in white-tailed deer from Wisconsin, USA and estimate the location the disease was first introduced. We compare our approach to several existing methods that are commonly used in spatial statistics. Our spatio-temporal approach resulted in a higher predictive accuracy when compared to methods based on optimal spatial prediction, obviated confounding among the spatially indexed covariates and the spatial random effect, and provided additional information that will be important for containing disease outbreaks.

  12. Decoding complex flow-field patterns in visual working memory.

    PubMed

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Carbon mapping of Argentine savannas: Using fractional tree cover to scale from field to region

    NASA Astrophysics Data System (ADS)

    González-Roglich, M.; Swenson, J. J.

    2015-12-01

    Programs which intend to maintain or enhance carbon (C) stocks in natural ecosystems are promising, but require detailed and spatially explicit C distribution models to monitor the effectiveness of management interventions. Savanna ecosystems are significant components of the global C cycle, covering about one fifth of the global land mass, but they have received less attention in C monitoring protocols. Our goal was to estimate C storage across a broad savanna ecosystem using field surveys and freely available satellite images. We first mapped tree canopies at 2.5 m resolution with a spatial subset of high-resolution panchromatic images, then predicted regional wall-to-wall percent tree cover using 30-m Landsat imagery and the Random Forests algorithm. We found that a model with summer and winter spectral indices from Landsat, climate and topography performed best. Using a linear relationship between C and % tree cover, we then predicted tree C stocks across the gradient of tree cover, explaining 87% of the variability. The spatially explicit validation of the tree C model with field-measured C stocks revealed an RMSE of 8.2 tC/ha, which represented ~30% of the mean C stock for areas with tree cover, comparable to studies based on more advanced remote sensing methods such as LiDAR and RADAR. The spatial distribution of samples strongly affected the performance of the RF models in predicting tree cover, raising concerns regarding the predictive capabilities of the model in areas for which training data are not present. The 50,000 km2 region holds ~41 Tg C, which could be released to the atmosphere if agricultural pressure intensifies in this semiarid savanna.

  14. Semantic, perceptual and number space: relations between category width and spatial processing.

    PubMed

    Brugger, Peter; Loetscher, Tobias; Graves, Roger E; Knoch, Daria

    2007-05-17

    Coarse semantic encoding and broad categorization behavior are the hallmarks of the right cerebral hemisphere's contribution to language processing. We correlated 40 healthy subjects' breadth of categorization as assessed with Pettigrew's category width scale with lateral asymmetries in perceptual and representational space. Specifically, we hypothesized broader category width to be associated with larger leftward spatial biases. For the 20 men, but not the 20 women, this hypothesis was confirmed both in a lateralized tachistoscopic task with chimeric faces and a random digit generation task; the higher a male participant's score on category width, the more pronounced were his left-visual field bias in the judgement of chimeric faces and his small-number preference in digit generation ("small" is to the left of "large" in number space). Subjects' category width was unrelated to lateral displacements in a blindfolded tactile-motor rod centering task. These findings indicate that visual-spatial functions of the right hemisphere should not be considered independent of the same hemisphere's contribution to language. Linguistic and spatial cognition may be more tightly interwoven than is currently assumed.

  15. Magnetoresistance of an Anderson insulator of bosons.

    PubMed

    Gangopadhyay, Anirban; Galitski, Victor; Müller, Markus

    2013-07-12

    We study the magnetoresistance of two-dimensional bosonic Anderson insulators. We describe the change in spatial decay of localized excitations in response to a magnetic field, which is given by an interference sum over alternative tunneling trajectories. The excitations become more localized with increasing field (in sharp contrast to generic fermionic excitations, which get weakly delocalized): the localization length ξ(B) is found to change as ξ^(-1)(B) − ξ^(-1)(0) ∼ B^(4/5). The quantum interference problem maps onto the classical statistical mechanics of directed polymers in random media (DPRM). We explain the observed scaling using a simplified droplet model which incorporates the nontrivial DPRM exponents. Our results have implications for a variety of experiments on magnetic-field-tuned superconductor-to-insulator transitions observed in disordered films, granular superconductors, and Josephson junction arrays, as well as for cold atoms in artificial gauge fields.

  16. REVIEWS OF TOPICAL PROBLEMS: Particle kinetics in highly turbulent plasmas (renormalization and self-consistent field methods)

    NASA Astrophysics Data System (ADS)

    Bykov, Andrei M.; Toptygin, Igor' N.

    1993-11-01

    This review presents methods available for calculating transport coefficients for impurity particles in plasmas with strong long-wave MHD-type velocity and magnetic-field fluctuations, and random ensembles of strong shock fronts. The renormalization of the coefficients of the mean-field equation of turbulent dynamo theory is also considered. Particular attention is devoted to the renormalization method developed by the authors in which the renormalized transport coefficients are calculated from a nonlinear transcendental equation (or a set of such equations) and are expressed in the form of explicit functions of pair correlation tensors describing turbulence. Numerical calculations are reproduced for different turbulence spectra. Spatial transport in a magnetic field and particle acceleration by strong turbulence are investigated. The theory can be used in a wide range of practical problems in plasma physics, atmospheric physics, ocean physics, astrophysics, cosmic-ray physics, and so on.

  17. Spatial pattern of Baccharis platypoda shrub as determined by sex and life stages

    NASA Astrophysics Data System (ADS)

    Fonseca, Darliana da Costa; de Oliveira, Marcio Leles Romarco; Pereira, Israel Marinho; Gonzaga, Anne Priscila Dias; de Moura, Cristiane Coelho; Machado, Evandro Luiz Mendonça

    2017-11-01

    Spatial patterns of dioecious species can be determined by their nutritional requirements and intraspecific competition, apart from being a response to environmental heterogeneity. The aim of the study was to evaluate the spatial pattern of populations of a dioecious shrub with respect to the sex and reproductive stage of individuals. Sampling was carried out in three areas located in the meridional portion of Serra do Espinhaço, wherein individuals of the studied species were mapped. The spatial pattern was determined through O-ring analysis and Ripley's K-function, and the distribution of individuals' frequencies was verified through a χ2 test. Populations in two areas showed an aggregate spatial pattern tending towards random or uniform according to the observed scale. Male and female adults presented an aggregate pattern at smaller scales, while random and uniform patterns were verified above 20 m for individuals of both sexes in areas A2 and A3. Young individuals presented an aggregate pattern in all areas and spatial independence in relation to adult individuals, especially female plants. Interactions between individuals of the two sexes showed spatial independence. Baccharis platypoda showed characteristics in accordance with the spatial distribution of savanna and dioecious species, with the population aggregated but tending towards random at greater spatial scales. Young individuals showed an aggregated pattern at different scales compared to adults, without positive association between them. Female and male adult individuals presented similar characteristics, confirming that adult individuals at greater scales are randomly distributed despite their distinct preferences for environments with moisture variation.
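
    The Ripley's K analysis used in this study has a standard estimator; a minimal, edge-uncorrected version applied to a synthetic complete-spatial-randomness (CSR) pattern might look as follows (the 100 m plot size and point count are illustrative, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(1)

def ripley_k(points, r, area):
    """Naive Ripley's K estimator (no edge correction): average number of
    neighbours within distance r, divided by the point density."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = ((d < r) & (d > 0)).sum()
    lam = n / area                  # point density
    return counts / (n * lam)

# illustrative 100 m x 100 m plot with a CSR (random) pattern
pts = rng.uniform(0, 100, size=(200, 2))
k10 = ripley_k(pts, 10.0, 100.0 * 100.0)
# under CSR, K(r) is close to pi * r**2 (biased low here near the edges)
```

    Values of K(r) above the CSR expectation indicate aggregation at scale r; production analyses add an edge correction term.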

  18. Correlated randomness and switching phenomena

    NASA Astrophysics Data System (ADS)

    Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.

    2010-08-01

    One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with setting aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understand switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon, by discussing data on selected examples, including heart disease and Alzheimer's disease.

  19. Entropy of spatial network ensembles

    NASA Astrophysics Data System (ADS)

    Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis

    2018-04-01

    We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
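
    The soft random geometric graph construction described above is straightforward to simulate. The sketch below assumes a Gaussian-type pair connection function H(d) = exp(-(d/r0)^2), one common illustrative choice; it also computes the conditional entropy of the ensemble given node positions as a sum of independent Bernoulli edge entropies.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_rgg(n, r0, size=1.0):
    """Soft random geometric graph: nodes uniform in a square; each pair is
    linked independently with probability H(d) = exp(-(d / r0)**2),
    an illustrative pair connection function."""
    pos = rng.uniform(0, size, (n, 2))
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    p = np.exp(-(d / r0) ** 2)
    upper = np.triu(rng.random((n, n)) < p, k=1)   # sample each pair once
    adj = upper | upper.T
    return pos, adj, p

pos, adj, p = soft_rgg(100, r0=0.2)

# conditional entropy of the ensemble given node positions: the edges are
# independent Bernoulli(p_ij) variables, so binary entropies add over pairs
q = p[np.triu_indices(100, k=1)]
tiny = 1e-12
H_cond = -(q * np.log2(q + tiny) + (1 - q) * np.log2(1 - q + tiny)).sum()
```

    The hard random geometric graph is recovered as H(d) approaches a step function, in which case every pairwise entropy term, and hence the conditional entropy, goes to zero.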

  20. A study of the breast cancer dynamics in North Carolina.

    PubMed

    Christakos, G; Lai, J J

    1997-11-01

    This work is concerned with the study of breast cancer incidence in the State of North Carolina. Methodologically, the current analysis illustrates the importance of spatiotemporal random field modelling and introduces a mode of reasoning that is based on a combination of inductive and deductive processes. The composite space/time analysis utilizes the variability characteristics of incidence and the mathematical features of the random field model to fit it to the data. The analysis is general and can efficiently represent non-homogeneous and non-stationary characteristics of breast cancer variation. Incidence predictions are produced using data at the same time period as well as data from other time periods and disease registries. The random field provides a rigorous and systematic method for generating detailed maps, which offer a quantitative description of the incidence variation from place to place and from time to time, together with a measure of the accuracy of the incidence maps. Spatiotemporal mapping accounts for the geographical locations and the time instants of the incidence observations, which is not usually the case with most empirical Bayes methods. It is also more accurate than purely spatial statistics methods, and can offer valuable information about the breast cancer risk and dynamics in North Carolina. Field studies could be initiated in high-rate areas identified by the maps in an effort to uncover environmental or life-style factors that might be responsible for the high risk rates. Also, the incidence maps can help elucidate causal mechanisms, explain disease occurrences at a certain scale, and offer guidance in health management and administration.

  1. Striking the balance: Privacy and spatial pattern preservation in masked GPS data

    NASA Astrophysics Data System (ADS)

    Seidl, Dara E.

    Volunteered location and trajectory data are increasingly collected and applied in analysis for a variety of academic fields and recreational pursuits. As access to personal location data increases, issues of privacy arise as individuals become identifiable and linked to other repositories of information. While the quality and precision of data are essential to accurate analysis, there is a tradeoff between privacy and access to data. Obfuscation of point data is a solution that aims to protect privacy and maximize preservation of spatial pattern. This study explores two methods of location obfuscation for volunteered GPS data: grid masking and random perturbation. These methods are applied to travel survey GPS data in the greater metropolitan regions of Chicago and Atlanta in the first large-scale GPS masking study of its kind.
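
    Both masking methods discussed can be sketched in a few lines. The cell size and perturbation radius below are arbitrary illustrative values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def grid_mask(points, cell):
    """Grid masking: snap each point to the centre of its grid cell."""
    return (np.floor(points / cell) + 0.5) * cell

def random_perturbation(points, max_radius):
    """Random perturbation: displace each point uniformly within a disk."""
    n = len(points)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = max_radius * np.sqrt(rng.uniform(0.0, 1.0, n))  # uniform over area
    return points + np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# illustrative planar coordinates in metres (not survey data)
pts = rng.uniform(0.0, 1000.0, (500, 2))
masked_grid = grid_mask(pts, cell=250.0)
masked_rand = random_perturbation(pts, max_radius=250.0)
```

    Grid masking bounds re-identification risk by quantization, while random perturbation preserves more of the local point pattern at the cost of a probabilistic rather than deterministic privacy guarantee.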

  2. Connection between two statistical approaches for the modelling of particle velocity and concentration distributions in turbulent flow: The mesoscopic Eulerian formalism and the two-point probability density function method

    NASA Astrophysics Data System (ADS)

    Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre

    2006-12-01

    The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.

  3. Altered prefrontal function with aging: insights into age-associated performance decline.

    PubMed

    Solbakk, Anne-Kristin; Fuhrmann Alpert, Galit; Furst, Ansgar J; Hale, Laura A; Oga, Tatsuhide; Chetty, Sundari; Pickard, Natasha; Knight, Robert T

    2008-09-26

    We examined the effects of aging on visuo-spatial attention. Participants performed a bi-field visual selective attention task consisting of infrequent target and task-irrelevant novel stimuli randomly embedded among repeated standards in either attended or unattended visual fields. Blood oxygenation level dependent (BOLD) responses to the different classes of stimuli were measured using functional magnetic resonance imaging. The older group had slower reaction times to targets, and committed more false alarms but had comparable detection accuracy to young controls. Attended target and novel stimuli activated comparable widely distributed attention networks, including anterior and posterior association cortex, in both groups. The older group had reduced spatial extent of activation in several regions, including prefrontal, basal ganglia, and visual processing areas. In particular, the anterior cingulate and superior frontal gyrus showed more restricted activation in older compared with young adults across all attentional conditions and stimulus categories. The spatial extent of activations correlated with task performance in both age groups, but the regional pattern of association between hemodynamic responses and behavior differed between the groups. Whereas the young subjects relied on posterior regions, the older subjects engaged frontal areas. The results indicate that aging alters the functioning of neural networks subserving visual attention, and that these changes are related to cognitive performance.

  4. Assessment of Antarctic moss health from multi-sensor UAS imagery with Random Forest Modelling

    NASA Astrophysics Data System (ADS)

    Turner, Darren; Lucieer, Arko; Malenovský, Zbyněk; King, Diana; Robinson, Sharon A.

    2018-06-01

    Moss beds are one of very few terrestrial vegetation types that can be found on the Antarctic continent and as such mapping their extent and monitoring their health is important to environmental managers. Across Antarctica, moss beds are experiencing changes in health as their environment changes. As Antarctic moss beds are spatially fragmented with relatively small extent they require very high resolution remotely sensed imagery to monitor their distribution and dynamics. This study demonstrates that multi-sensor imagery collected by an Unmanned Aircraft System (UAS) provides a novel data source for assessment of moss health. In this study, we train a Random Forest Regression Model (RFM) with long-term field quadrats at a study site in the Windmill Islands, East Antarctica and apply it to UAS RGB and 6-band multispectral imagery, derived vegetation indices, 3D topographic data, and thermal imagery to predict moss health. Our results suggest that moss health, expressed as a percentage between 0 and 100% healthy, can be estimated with a root mean squared error (RMSE) between 7 and 12%. The RFM also quantifies the importance of input variables for moss health estimation showing the multispectral sensor data was important for accurate health prediction, such information being essential for planning future field investigations. The RFM was applied to the entire moss bed, providing an extrapolation of the health assessment across a larger spatial area. With further validation the resulting maps could be used for change detection of moss health across multiple sites and seasons.
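
    As an illustration of the regression setup (not the authors' model or data), the sketch below trains a bagged ensemble of single-split regression stumps, a minimal dependency-light stand-in for a Random Forest, on synthetic "spectral" features to predict a 0-100% health score.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_stump(X, y):
    """Best single-split regression stump over all features and thresholds."""
    best = (np.inf, 0, 0.0, y.mean(), y.mean())
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        xs, ys = X[order, j], y[order]
        for i in range(1, len(ys)):
            left, right = ys[:i], ys[i:]
            sse = ((left - left.mean()) ** 2).sum() + \
                  ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, xs[i - 1], left.mean(), right.mean())
    return best[1:]

def predict_stump(stump, X):
    j, t, lo, hi = stump
    return np.where(X[:, j] <= t, lo, hi)

def bagged_stumps(X, y, n_trees=50):
    """Bootstrap-aggregated stumps, a minimal Random Forest stand-in."""
    idx = [rng.integers(0, len(y), len(y)) for _ in range(n_trees)]
    return [fit_stump(X[i], y[i]) for i in idx]

# synthetic "spectral" features and a 0-100% health score (illustrative)
n = 200
X = rng.uniform(0, 1, (n, 2))
y = np.clip(20 + 80 * X[:, 0] - 30 * X[:, 1] + rng.normal(0, 5, n), 0, 100)
model = bagged_stumps(X[:150], y[:150])
pred = np.mean([predict_stump(s, X[150:]) for s in model], axis=0)
rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
```

    A production workflow would use full-depth trees (e.g. a library Random Forest) and report variable importances as in the study; the bagging-plus-averaging structure is the same.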

  5. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    NASA Astrophysics Data System (ADS)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.
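
    Isotropic Gaussian random fields of the kind used here can be synthesized spectrally, after which the mean-square slope, the key local terrain parameter above, follows from the gradients. This is a generic sketch (Gaussian covariance and periodic domain assumed), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(5)

def gaussian_field(n, corr_len, sigma=1.0):
    """Isotropic Gaussian random field on a periodic n x n grid via spectral
    synthesis (Gaussian covariance assumed; renormalized to variance sigma^2)."""
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    S = np.exp(-0.5 * (kx**2 + ky**2) * corr_len**2)   # spectrum shape
    noise = np.fft.fft2(rng.normal(size=(n, n)))
    field = np.real(np.fft.ifft2(noise * np.sqrt(S)))
    return sigma * field / field.std()

z = gaussian_field(128, corr_len=8.0)
zx, zy = np.gradient(z)
mu2 = np.mean(zx**2 + zy**2)        # mean-square slope of the terrain
```

    In the quasi-analytical treatment above, all nonlocal horizon effects of the subgrid topography are parameterized through this single mean-square slope statistic.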

  6. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random fields, and explore them on one- and two-dimensional synthetic test cases.
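
    A minimal caricature of a sub-Gaussian construction is the product Y = U * G of a Gaussian field G with a random scale factor U; the lognormal subordinator below is an illustrative choice, not necessarily the GSG model's. Pooled increments of such a field are symmetric, sharply peaked, and heavy-tailed, as described above.

```python
import numpy as np

rng = np.random.default_rng(6)

def gaussian_field_1d(n, corr_len):
    """Unit-variance 1-D Gaussian field via a moving-average (kernel) filter."""
    half = int(4 * corr_len)
    w = np.exp(-0.5 * (np.arange(-half, half + 1) / corr_len) ** 2)
    g = np.convolve(rng.normal(size=n + len(w) - 1), w, mode="valid")
    return g / g.std()

def sub_gaussian_field(n, corr_len, sigma_u=1.5):
    """Sub-Gaussian field Y = U * G: one heavy-tailed random scale U per
    realization (lognormal here, an illustrative subordinator choice)."""
    U = rng.lognormal(mean=0.0, sigma=sigma_u)
    return U * gaussian_field_1d(n, corr_len)

# pooled increments over many realizations: symmetric, peaked, heavy-tailed
incr = np.concatenate([np.diff(sub_gaussian_field(512, 5.0))
                       for _ in range(200)])
excess_kurtosis = np.mean(incr**4) / np.mean(incr**2) ** 2 - 3.0
```

    The positive excess kurtosis of the pooled increments is the signature of the variance mixing: conditionally Gaussian realizations with random scales combine into a peaked, heavy-tailed marginal.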

  7. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    NASA Astrophysics Data System (ADS)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem considering the complex dependence and data heterogeneities. State-space models provide a unifying and intuitive framework for dynamic systems modeling. In this paper, we aim to extend the conventional state-space models for uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models basically assume that a geo-referenced variable primarily depends on its neighborhood (Markov property), and the spatial dependence structure is described via a precision matrix. Recent studies have shown that GMRFs are an efficient approximation to the commonly used Gaussian fields (e.g., kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accommodation of heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count, or continuous) can be accounted for through appropriate link functions. A fast alternative to the MCMC framework, the so-called Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to modeling the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the drought impacts in Texas counties in the past years, where the spatiotemporal dynamics are represented in areal data.

  8. Field strategies for the calibration and validation of high-resolution forest carbon maps: Scaling from plots to a three state region MD, DE, & PA, USA.

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Huang, W.; Johnson, K. D.; Birdsey, R.; Finley, A. O.; Dubayah, R.; Hurtt, G. C.

    2016-12-01

    In 2010 Congress directed NASA to initiate research towards the development of Carbon Monitoring Systems (CMS). In response, our team has worked to develop a robust, replicable framework to quantify and map aboveground forest biomass at high spatial resolutions. Crucial to this framework has been the collection of field-based estimates of aboveground tree biomass, combined with remotely detected canopy and structural attributes, for calibration and validation. Here we evaluate the field-based calibration and validation strategies within this carbon monitoring framework and discuss the implications for local to national monitoring systems. Through project development, the domain of this research has expanded from two counties in MD (2,181 km2), to the entire state of MD (32,133 km2), and most recently the tri-state region of MD, PA, and DE (157,868 km2), and covers forests in four major USDA ecological provinces. While there are approximately 1000 Forest Inventory and Analysis (FIA) plots distributed across the state of MD, 60% fell in areas considered non-forest or had conditions that precluded them from being measured in the last forest inventory. Across the two pilot counties, where population and land-use competition is high, that proportion rose to 70%. Thus, during the initial phases of this project, 850 independent field plots were established for model calibration following a stratified random design to ensure adequate representation of the height and vegetation classes found across the state, while FIA data were used as an independent data source for validation. As the project expanded to cover the larger tri-state spatial domain, the strategy was flipped to base calibration on more than 3,300 measured FIA plots, as they provide a standardized, consistent, and available data source across the nation. An additional 350 stratified random plots were deployed in the Northern Mixed forests of PA and the Coastal Plains forests of DE for validation.

  9. High-Resolution Regional Biomass Map of Siberia from Glas, Palsar L-Band Radar and Landsat Vcf Data

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ranson, K.; Montesano, P.; Zhang, Z.; Kharuk, V.

    2015-12-01

    The Arctic-Boreal zone is known to be warming at an accelerated rate relative to other biomes. The taiga or boreal forest covers over 16 x 10^6 km2 of Arctic North America, Scandinavia, and Eurasia. A large part of the northern boreal forests are in Russia's Siberia, an area with recent accelerated climate warming. During the last two decades we have been working on characterization of boreal forests in north-central Siberia using field and satellite measurements. We have published results of circumpolar biomass using field plots, airborne (PALS, ACTM) and spaceborne (GLAS) lidar data with the ASTER DEM, LANDSAT and MODIS land cover classifications, MODIS burned area, and WWF's ecoregion map. Researchers from ESA and Russia have also been working on biomass (or growing stock) mapping in Siberia. For example, they developed a pan-boreal growing stock volume map at 1-kilometer scale using hyper-temporal ENVISAT ASAR ScanSAR backscatter data. Using the annual PALSAR mosaics from 2007 to 2010, growing stock volume maps were retrieved based on a supervised random forest regression approach. This method is being used in the ESA/Russia ZAPAS project for Central Siberia biomass mapping. Spatially specific biomass maps of this region at higher resolution are desired for carbon cycle and climate change studies. In this study, our work focused on improving the resolution (50 m) of a biomass map based on PALSAR L-band data and Landsat Vegetation Canopy Fraction products. GLAS data were carefully processed and screened using land cover classification, local slope, and acquisition dates. The biomass at remaining footprints was estimated using a model developed from field measurements at GLAS footprints. The GLAS biomass samples were then aggregated into 1 Mg/ha bins of biomass, and mean VCF and PALSAR backscatter and textures were calculated for each of these biomass bins. The resulting biomass/signature data were used to train a random forest model for biomass mapping of the entire region from 50°N to 75°N and 80°E to 145°E. The spatial patterns of the new biomass map are much better than those of the previous maps due to spatially specific mapping at high resolution. The uncertainties of the field/GLAS and GLAS/imagery models were investigated using a bootstrap procedure, and the final biomass map was compared with previous maps.

  10. Observed Trend in Surface Wind Speed Over the Conterminous USA and CMIP5 Simulations

    NASA Technical Reports Server (NTRS)

    Hashimoto, Hirofumi; Nemani, Ramakrishna R.

    2016-01-01

    There has been no spatially continuous surface wind map, even over the conterminous USA, due to the difficulty of spatially interpolating the wind field. As a result, reanalysis data were often used to analyze the statistics of spatial patterns in surface wind speed. Unfortunately, no consistent trend in the wind field was found among the available reanalysis data, and that obstructed further analysis or projection of the spatial pattern of wind speed. In this study, we developed a methodology to interpolate the observed wind speed data at weather stations using a random forest algorithm. We produced 1-km daily climate variables over the conterminous USA from 1979 to 2015. Validation using Ameriflux daily data showed an R^2 of 0.59. Existing studies have found a negative trend over the eastern US, and our study showed the same result. However, our new dataset also revealed a significant increasing trend over the southwestern US, especially from April to June. The trend in the southwestern US represents a change or seasonal shift in the North American Monsoon. Global analysis of CMIP5 data projected a decreasing trend at mid-latitudes and an increasing trend in tropical regions over land. Most likely because of the low resolution of the GCMs, the CMIP5 data failed to simulate the increasing trend in the southwestern US, even though a poleward shift of the anticyclone, which would aid the North American Monsoon, was qualitatively predicted.

  11. Spatial Transport of Magnetic Flux Surfaces in Strongly Anisotropic Turbulence

    NASA Astrophysics Data System (ADS)

    Matthaeus, W. H.; Servidio, S.; Wan, M.; Ruffolo, D. J.; Rappazzo, A. F.; Oughton, S.

    2013-12-01

    Magnetic flux surfaces afford familiar descriptions of the spatial structure, dynamics, and connectivity of magnetic fields, with particular relevance in contexts such as solar coronal flux tubes, magnetic field connectivity in the interplanetary and interstellar medium, as well as in laboratory plasmas and dynamo problems [1-4]. Typical models assume that field lines are orderly and flux tubes remain identifiable over macroscopic distances; however, a previous study has shown that flux tubes shred in the presence of fluctuations, typically losing identity after several correlation scales [5]. Here, the structure of magnetic flux surfaces is numerically investigated in a reduced magnetohydrodynamic (RMHD) model of homogeneous turbulence. Short- and long-wavelength behavior is studied statistically by propagating magnetic surfaces along the mean field. At small scales magnetic surfaces become complex, experiencing an exponential thinning. At large scales, instead, the magnetic flux undergoes a diffusive behavior. The link between the diffusion of the coarse-grained flux and field-line random walk is established by means of a multiple-scale analysis. Both the large- and small-scale limits are controlled by the Kubo number. These results have consequences for understanding and interpreting processes such as magnetic reconnection and field-line diffusion in plasmas [6]. [1] E. N. Parker, Cosmical Magnetic Fields (Oxford Univ. Press, New York, 1979). [2] J. R. Jokipii and E. N. Parker, Phys. Rev. Lett. 21, 44 (1968). [3] R. Bruno et al., Planet. Space Sci. 49, 1201 (2001). [4] M. N. Rosenbluth et al., Nuclear Fusion 6, 297 (1966). [5] W. H. Matthaeus et al., Phys. Rev. Lett. 75, 2136 (1995). [6] S. Servidio et al., submitted (2013).

  12. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residual, a frequent consequence of signal suppression under regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
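
    The effect of regularization on an ill-conditioned inverse problem of this type can be demonstrated with generic Tikhonov regularization on a synthetic system (the GRACE processing itself is far more elaborate; the rapidly decaying singular spectrum below merely mimics the noise-amplifying instability).

```python
import numpy as np

rng = np.random.default_rng(8)

def tikhonov(A, b, lam):
    """Regularized least squares: argmin ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# ill-conditioned synthetic system standing in for the gravity inversion
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)                   # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = rng.normal(size=n)
b = A @ x_true + 1e-6 * rng.normal(size=n)  # small observation noise

x_naive = np.linalg.solve(A, b)             # noise blown up by 1/s_min
x_reg = tikhonov(A, b, lam=1e-4)            # small components damped
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

    Choosing lam is the delicate part; L-curve-type criteria (the "L-ribbon" above is one inexpensive variant) balance the residual norm against the solution norm.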

  13. Upscaling of dilution and mixing using a trajectory based Spatial Markov random walk model in a periodic flow domain

    NASA Astrophysics Data System (ADS)

    Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo

    2017-05-01

    The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories, rather than travel times. By applying the proposed SMM to a simple benchmark problem, we demonstrate that it can predict mean effective transport when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution that do not depend solely on mean concentrations but are strongly impacted by pore-scale concentration fluctuations. We use information from particle trajectories to downscale and reconstruct approximate pore-scale concentration fields, from which mixing and dilution measures are then calculated. The comparison between measurements from fully resolved simulations and predictions with the SMM agrees very favorably.
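
    A travel-time-based SMM (the classical variant, as distinct from the trajectory-based one proposed here) can be sketched with a two-state transition matrix; the states, crossing times, and matrix below are illustrative, not fitted to any simulation.

```python
import numpy as np

rng = np.random.default_rng(9)

def smm_travel_times(n_particles, n_cells, T, dt_states):
    """Travel-time Spatial Markov Model sketch: each particle crosses
    n_cells periodic cells; its state (fast/slow velocity class) evolves by
    the transition matrix T, and each state has a fixed crossing time."""
    states = rng.integers(0, len(dt_states), n_particles)
    total = np.zeros(n_particles)
    cum = T.cumsum(axis=1)
    for _ in range(n_cells):
        total += dt_states[states]
        u = rng.random(n_particles)
        states = (u[:, None] > cum[states]).sum(axis=1)  # inverse-CDF step
    return total

# illustrative two-state transition matrix with velocity persistence
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])
dt = np.array([1.0, 4.0])        # crossing times: fast, slow (arbitrary units)
arrival = smm_travel_times(5000, 20, T, dt)
```

    The correlation between successive cell crossings, encoded in the off-diagonal elements of T, is what distinguishes the SMM from an uncorrelated continuous-time random walk.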

  14. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches, which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. 
This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
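The circle-interpolation idea above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the forward model is a cheap placeholder function, and the fields, observations, and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(field):
    # Placeholder for an expensive groundwater solver: a cheap nonlinear
    # functional of the field, evaluated at three "observation" points.
    return np.tanh(field[:3]) + 0.1 * field[:3] ** 2

obs = np.array([0.2, -0.1, 0.4])     # hypothetical observed heads
z0 = rng.standard_normal(50)         # current conditional field
z1 = rng.standard_normal(50)         # new field satisfying homogeneous conditions

# Evaluate the forward model for n equally spaced mixing angles on the
# unit circle; cos^2 + sin^2 = 1 keeps the mixture on the circle, which
# is what preserves the covariance structure.
n = 8
thetas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
solutions = np.array([forward_model(np.cos(t) * z0 + np.sin(t) * z1)
                      for t in thetas])            # shape (n, n_obs)

# Interpolate each observation's solution around the circle, then search
# a fine grid of mixing angles at negligible cost (no extra model runs).
fine = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
interp = np.array([np.interp(fine, thetas, solutions[:, j], period=2 * np.pi)
                   for j in range(solutions.shape[1])]).T

objective = np.sum((interp - obs) ** 2, axis=1)
best_theta = fine[np.argmin(objective)]
z_next = np.cos(best_theta) * z0 + np.sin(best_theta) * z1  # next iterate
```

In a real application the interpolated minimizer would be re-checked with one actual forward-model run before accepting it.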

  15. Large-area imaging reveals biologically driven non-random spatial patterns of corals at a remote reef

    NASA Astrophysics Data System (ADS)

    Edwards, Clinton B.; Eynaud, Yoan; Williams, Gareth J.; Pedersen, Nicole E.; Zgliczynski, Brian J.; Gleason, Arthur C. R.; Smith, Jennifer E.; Sandin, Stuart A.

    2017-12-01

    For sessile organisms such as reef-building corals, differences in the degree of dispersion of individuals across a landscape may result from important differences in life-history strategies or may reflect patterns of habitat availability. Descriptions of spatial patterns can thus be useful not only for the identification of key biological and physical mechanisms structuring an ecosystem, but also by providing the data necessary to generate and test ecological theory. Here, we used an in situ imaging technique to create large-area photomosaics of 16 plots at Palmyra Atoll, central Pacific, each covering 100 m2 of benthic habitat. We mapped the location of 44,008 coral colonies and identified each to the lowest taxonomic level possible. Using metrics of spatial dispersion, we tested for departures from spatial randomness. We also used targeted model fitting to explore candidate processes leading to differences in spatial patterns among taxa. Most taxa were clustered and the degree of clustering varied by taxon. A small number of taxa did not significantly depart from randomness and none revealed evidence of spatial uniformity. Importantly, taxa that readily fragment or tolerate stress through partial mortality were more clustered. With little exception, clustering patterns were consistent with models of fragmentation and dispersal limitation. In some taxa, dispersion was linearly related to abundance, suggesting density dependence of spatial patterning. The spatial patterns of stony corals are non-random and reflect fundamental life-history characteristics of the taxa, suggesting that the reef landscape may, in many cases, have important elements of spatial predictability.

  16. Optimal sampling design for estimating spatial distribution and abundance of a freshwater mussel population

    USGS Publications Warehouse

    Pooler, P.S.; Smith, D.R.

    2005-01-01

    We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25 m² gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m² sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: the Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and up stream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.

  17. Elementary Theoretical Forms for the Spatial Power Spectrum of Earth's Crustal Magnetic Field

    NASA Technical Reports Server (NTRS)

    Voorhies, C.

    1998-01-01

    The magnetic field produced by magnetization in Earth's crust and lithosphere can be distinguished from the field produced by electric currents in Earth's core because the spatial magnetic power spectrum of the crustal field differs from that of the core field. Theoretical forms for the spectrum of the crustal field are derived by treating each magnetic domain in the crust as the point source of a dipole field. The geologic null-hypothesis that such moments are uncorrelated is used to obtain the magnetic spectrum expected from a randomly magnetized, or unstructured, spherical crust of negligible thickness. This simplest spectral form is modified to allow for uniform crustal thickness, ellipsoidality, and the polarization of domains by a periodically reversing, geocentric axial dipole field from Earth's core. Such spectra are intended to describe the background crustal field. Magnetic anomalies due to correlated magnetization within coherent geologic structures may well be superimposed upon this background; yet representing each such anomaly with a single point dipole may lead to similar spectral forms. Results from attempts to fit these forms to observational spectra, determined via spherical harmonic analysis of MAGSAT data, are summarized in terms of amplitude, source depth, and misfit. Each theoretical spectrum reduces to a source factor multiplied by the usual exponential function of spherical harmonic degree n due to geometric attenuation with altitude above the source layer. The source factors always vary with n and are approximately proportional to n(exp 3) for degrees 12 through 120. The theoretical spectra are therefore not directly proportional to an exponential function of spherical harmonic degree n. There is no radius at which these spectra are flat, level, or otherwise independent of n.
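The structure described above (an n³-like source factor times an exponential geometric-attenuation factor) can be written schematically. This is our sketch, not a formula from the abstract; the symbols a (observation radius) and c (source-layer radius) are labels we introduce:

```latex
R_n \;\approx\; S(n)\left(\frac{c}{a}\right)^{2n+4}
    \;=\; S(n)\, e^{-(2n+4)\ln(a/c)},
\qquad S(n) \propto n^{3}\quad (12 \lesssim n \lesssim 120),
```

where the factor (c/a)^(2n+4) is the usual geometric attenuation of a source spectrum on a sphere of radius c observed at radius a, and the n-dependence of S(n) is what prevents the full spectrum from being a pure exponential in n.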

  18. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. We also evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators (ν), the two that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimator tested was free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
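The core precision result (systematic grids beat simple random sampling when organisms are restricted to habitat patches) is easy to reproduce in a toy simulation. This sketch uses a simple rectangular patch rather than the paper's Matérn cluster process, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clustered population on a 100 x 100 region of cells: a dense
# habitat patch over a sparse background (illustrative only).
pop = rng.poisson(0.2, size=(100, 100)).astype(float)
pop[20:50, 30:70] += rng.poisson(5, size=(30, 40))
flat = pop.ravel()

n_reps = 2000
rand_means, sys_means = [], []
for _ in range(n_reps):
    # Simple random sampling: 100 of the 10 000 cells.
    rand_means.append(flat[rng.choice(flat.size, 100, replace=False)].mean())
    # One-start aligned systematic 10 x 10 grid with a random offset:
    # every sample covers the patch in the same proportion.
    dx, dy = rng.integers(10, size=2)
    sys_means.append(pop[dx::10, dy::10].mean())

print(np.var(rand_means), np.var(sys_means))
```

Because each systematic grid intersects the patch with a fixed fraction of its 100 cells, the variance of the systematic mean comes only from within-cell noise, while the random design also suffers variation in how many patch cells it happens to hit.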

  19. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
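The method-of-moments (Matheron) variogram estimator discussed above can be sketched directly. The sampling layout and "throughfall" values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def mom_variogram(coords, values, bins):
    """Matheron's method-of-moments estimator: for each lag bin,
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over point pairs whose
    separation distance falls in the bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)       # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = np.empty(len(bins) - 1)
    for k in range(len(bins) - 1):
        sel = (d >= bins[k]) & (d < bins[k + 1])
        gamma[k] = sq[sel].mean() if sel.any() else np.nan
    return gamma

# Hypothetical sample: 150 random locations on a 50 m plot, with a
# smooth trend plus noise standing in for throughfall measurements.
coords = rng.uniform(0, 50, size=(150, 2))
values = np.sin(coords[:, 0] / 8) + 0.3 * rng.standard_normal(150)
bins = np.linspace(0, 25, 11)
gamma_hat = mom_variogram(coords, values, bins)
print(gamma_hat)
```

This estimator's sensitivity to outliers is one reason the study recommends residual maximum likelihood for heavy-tailed throughfall data.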

  20. Volumetric Light-field Encryption at the Microscopic Scale

    PubMed Central

    Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu

    2017-01-01

    We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale. PMID:28059149

  2. Nanometer scale composition study of MBE grown BGaN performed by atom probe tomography

    NASA Astrophysics Data System (ADS)

    Bonef, Bastien; Cramer, Richard; Speck, James S.

    2017-06-01

    Laser-assisted atom probe tomography is used to characterize the alloy distribution in BGaN. The effect of the evaporation conditions applied to the atom probe specimens on the mass spectrum and on the quantification of the group-III site atoms is first evaluated. The evolution of the Ga++/Ga+ charge-state ratio is used to monitor the strength of the applied field. Experiments revealed that applying high electric fields to the specimen results in the loss of gallium atoms, leading to overestimation of the boron concentration. Moreover, spatial analysis of the surface field revealed a significant loss of atoms at the center of the specimen, where high fields are applied. A good agreement between X-ray diffraction and atom probe tomography concentration measurements is obtained when low fields are applied to the tip. A random distribution of boron in the BGaN layer grown by molecular beam epitaxy is obtained by performing accurate and site-specific statistical distribution analysis.

  3. An imaging colorimeter for noncontact tissue color mapping.

    PubMed

    Balas, C

    1997-06-01

    There has been considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of conventional colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling independent capture of the color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurement and mapping free of the errors and limitations of conventional colorimeters. This system was used to monitor blood supply changes in psoriatic plaques that had undergone psoralen and ultraviolet-A (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and the objective evaluation of treatment effectiveness.

  4. Airborne hyperspectral imaging for the detection of powdery mildew in wheat

    NASA Astrophysics Data System (ADS)

    Franke, Jonas; Mewes, Thorsten; Menz, Gunter

    2008-08-01

    Plant stresses, in particular fungal diseases, show high variability in the spatial and temporal dimensions with respect to their impact on the host. Recent "Precision Agriculture" techniques allow for spatially and temporally adjusted pest control that might reduce the amount of cost-intensive and ecologically harmful agrochemicals. Conventional stress-detection techniques such as random monitoring do not meet the demands of such optimally placed management actions. The prerequisite is an accurate sensor-based detection of stress symptoms. The present study focuses on remotely sensed detection of the fungal disease powdery mildew (Blumeria graminis) in wheat, Europe's main crop. In a field experiment, the potential of hyperspectral data for early detection of stress symptoms was tested. A sophisticated endmember selection procedure was used and, additionally, a linear spectral mixture model was applied to a pixel spectrum with known characteristics in order to derive an endmember representing 100% powdery mildew-infected wheat. Regression analyses of matched fraction estimates of this endmember against in-field-observed powdery mildew severities showed promising results (r = 0.82 and r² = 0.67).
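The linear spectral mixture model used above treats each pixel spectrum as a weighted sum of endmember spectra and solves for the weights (abundance fractions). A minimal unconstrained sketch with invented 5-band reflectances, not the study's actual endmembers:

```python
import numpy as np

# Hypothetical endmember spectra, one column per endmember (5 bands).
endmembers = np.array([
    [0.10, 0.12, 0.35, 0.60, 0.55],   # "healthy wheat" (made-up values)
    [0.20, 0.25, 0.40, 0.45, 0.42],   # "mildew-infected wheat" (made-up)
]).T                                   # shape (bands, endmembers)

# Synthesize a mixed pixel: 30% healthy, 70% infected, plus sensor noise.
true_fractions = np.array([0.3, 0.7])
noise = 0.005 * np.random.default_rng(3).standard_normal(5)
pixel = endmembers @ true_fractions + noise

# Unconstrained least-squares unmixing; operational workflows usually add
# sum-to-one and non-negativity constraints on the fractions.
fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(fractions)   # close to [0.3, 0.7]
```

The recovered fractions play the role of the "matched fraction estimates" that were regressed against observed disease severities.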

  5. Consequences of using nonlinear particle trajectories to compute spatial diffusion coefficients. [for charged particles in interplanetary space

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.

    1976-01-01

    The propagation of charged particles through interstellar and interplanetary space has often been described as a random process in which the particles are scattered by ambient electromagnetic turbulence. In general, this changes both the magnitude and direction of the particles' momentum. Some situations for which scattering in direction (pitch angle) is of primary interest were studied. A perturbed-orbit, resonant scattering theory for pitch-angle diffusion in magnetostatic turbulence was slightly generalized and then utilized to compute the diffusion coefficient κ for spatial propagation parallel to the mean magnetic field. All divergences inherent in the quasilinear formalism when the power spectrum of the fluctuation field falls off as k^(-q) (q < 2) were removed. Various methods of computing κ were compared and limits on the validity of the theory discussed. For q less than 1 or 2, the various methods give roughly comparable values of κ, but use of perturbed orbits systematically results in a somewhat smaller κ than can be obtained from quasilinear theory.

  6. On the emergence of an ‘intention field’ for socially cohesive agents

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Borghesi, Christian; Jensen, Pablo

    2014-03-01

    We argue that when a social convergence mechanism exists and is strong enough, one should expect the emergence of a well-defined ‘field’, i.e. a slowly evolving, local quantity around which individual attributes fluctuate in a finite range. This condensation phenomenon is well illustrated by the Deffuant-Weisbuch opinion model for which we provide a natural extension to allow for spatial heterogeneities. We show analytically and numerically that the resulting dynamics of the emergent field is a noisy diffusion equation that has a slow dynamics. This random diffusion equation reproduces the long-ranged, logarithmic decrease of the correlation of spatial voting patterns empirically found in Borghesi and Bouchaud (2010 Eur. Phys. J. B 75 395) and Borghesi et al (2012 PLoS One 7 e36289). Interestingly enough, we find that when the social cohesion mechanism becomes too weak, cultural cohesion breaks down completely, in the sense that the distribution of intentions/opinions becomes infinitely broad. No emerging field exists in this case. All these analytical findings are confirmed by numerical simulations of an agent-based model.
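The Deffuant-Weisbuch bounded-confidence dynamics underlying the model above are compact enough to sketch. This is the basic non-spatial version (the paper's contribution is the spatially heterogeneous extension); the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal Deffuant-Weisbuch model: random pairs interact only when their
# opinions differ by less than the confidence bound d, and then move
# toward each other by a fraction mu of their difference.
n, d, mu, steps = 200, 0.6, 0.5, 30000
x = rng.uniform(0, 1, n)              # initial opinions/intentions
for _ in range(steps):
    i, j = rng.choice(n, 2, replace=False)
    if abs(x[i] - x[j]) < d:          # within the confidence bound
        shift = mu * (x[j] - x[i])
        x[i] += shift
        x[j] -= shift                 # pairwise update conserves the mean
print(x.std())                         # condensation: opinions collapse
```

With a confidence bound this large the population condenses into a single narrow cluster, the "well-defined field" regime; shrinking d below a critical value fragments the population, the analogue of the broken-cohesion regime discussed above.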

  7. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than the levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.

  8. Spatial coherence resonance and spatial pattern transition induced by the decrease of inhibitory effect in a neuronal network

    NASA Astrophysics Data System (ADS)

    Tao, Ye; Gu, Huaguang; Ding, Xueli

    2017-10-01

    Spiral waves were observed in a biological experiment on rat brain cortex with the application of carbachol and bicuculline, which can block inhibitory coupling from interneurons to pyramidal neurons. To simulate the experimental spiral waves, a two-dimensional neuronal network composed of pyramidal neurons and inhibitory interneurons was built. As the percentage of active inhibitory interneurons decreases, the spatial patterns change from random-like patterns to spiral waves and then back to random-like patterns or nearly synchronous behaviors. The spiral waves appear at a low percentage of inhibitory interneurons, which matches the experimental condition in which inhibitory couplings of the interneurons were blocked. The spiral waves exhibit a higher order, or signal-to-noise ratio (SNR), characterized by the spatial structure function, than both the random-like spatial patterns and the nearly synchronous behaviors, which shows that changes in the percentage of active inhibitory interneurons can induce spatial coherence resonance-like behaviors. In addition, the relationship between the coherence degree and the spatial structures of the spiral waves is identified. The results not only present a possible and reasonable interpretation of the spiral waves observed in the biological experiment on the brain cortex with disinhibition, but also reveal that the spiral waves exhibit a higher degree of order in their spatial patterns.

  9. Comparing spatial regression to random forests for large environmental data sets

    EPA Science Inventory

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputatio...

  10. Application of information theory to the design of line-scan imaging systems

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Park, S. K.; Halyo, N.; Stallman, S.

    1981-01-01

    Information theory is used to formulate a single figure of merit for assessing the performance of line scan imaging systems as a function of their spatial response (point spread function or modulation transfer function), sensitivity, sampling and quantization intervals, and the statistical properties of a random radiance field. Computational results for the information density and efficiency (i.e., the ratio of information density to data density) are intuitively satisfying and compare well with experimental and theoretical results obtained by earlier investigators concerned with the performance of TV systems.

  11. Quantum chaos on a critical Fermi surface.

    PubMed

    Patel, Aavishkar A; Sachdev, Subir

    2017-02-21

    We compute parameters characterizing many-body quantum chaos for a critical Fermi surface without quasiparticle excitations. We examine a theory of [Formula: see text] species of fermions at nonzero density coupled to a [Formula: see text] gauge field in two spatial dimensions and determine the Lyapunov rate and the butterfly velocity in an extended random-phase approximation. The thermal diffusivity is found to be universally related to these chaos parameters; i.e., the relationship is independent of [Formula: see text], the gauge-coupling constant, the Fermi velocity, the Fermi surface curvature, and high-energy details.

  12. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  13. Onset of natural convection in a continuously perturbed system

    NASA Astrophysics Data System (ADS)

    Ghorbani, Zohreh; Riaz, Amir

    2017-11-01

    The convective mixing triggered by gravitational instability plays an important role in CO2 sequestration in saline aquifers. Linear stability analysis and numerical simulation of convective mixing in porous media require perturbations of small amplitude to be imposed on the concentration field in the form of an initial shape function. In aquifers, however, the instability is triggered by local variations in porosity and permeability. In this work, we consider a canonical 2D homogeneous system where perturbations arise due to spatial variation of porosity in the system. The advantage of this approach is not only the elimination of the required initial shape function; it also serves as a more realistic approach. Using a reduced nonlinear method, we first explore the effect of harmonic variations of porosity in the transverse and streamwise directions on the onset time of convection and on the late-time behavior. We then obtain the optimal porosity structure that minimizes the convection onset. We further examine the effect of a random porosity distribution, independent of the spatial mode of the porosity structure, on the convection onset. Using high-order pseudospectral DNS, we explore how the random distribution differs from the modal approach in predicting the onset time.

  14. Perceived change in orientation from optic flow in the central visual field

    NASA Technical Reports Server (NTRS)

    Dyre, Brian P.; Andersen, George J.

    1988-01-01

    The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.

  15. Spectral statistics of random geometric graphs

    NASA Astrophysics Data System (ADS)

    Dettmann, C. P.; Georgiou, O.; Knight, G.

    2017-04-01

    We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, the level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
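The spacing analysis above can be sketched with a small ensemble member. This toy version uses the adjacency spectrum and the consecutive-spacing ratio statistic (a common unfolding-free alternative to the spacing distribution; the spacing-ratio reference values are roughly 0.386 for Poisson and 0.53 for GOE statistics); node count and radius are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

# Random geometric graph: n nodes uniform in the unit square, connected
# whenever their Euclidean distance is below the radius r.
n, r = 300, 0.15
pts = rng.uniform(size=(n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
A = ((dist < r) & (dist > 0)).astype(float)   # symmetric adjacency matrix

# Sorted adjacency eigenvalues and their consecutive spacings.
eig = np.sort(np.linalg.eigvalsh(A))
s = np.diff(eig)
s = s[s > 1e-12]                               # drop degenerate spacings

# Consecutive-spacing ratios r_k = min(s_k, s_{k+1}) / max(s_k, s_{k+1}).
ratios = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
print(ratios.mean())
```

A proper comparison to Poisson/GOE predictions would average such ratio statistics over many graph realizations and sweep the connection radius, which is the parameter driving the transition reported above.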

  16. Random Wiring, Ganglion Cell Mosaics, and the Functional Architecture of the Visual Cortex

    PubMed Central

    Coppola, David; White, Leonard E.; Wolf, Fred

    2015-01-01

    The architecture of iso-orientation domains in the primary visual cortex (V1) of placental carnivores and primates apparently follows species invariant quantitative laws. Dynamical optimization models assuming that neurons coordinate their stimulus preferences throughout cortical circuits linking millions of cells specifically predict these invariants. This might indicate that V1’s intrinsic connectome and its functional architecture adhere to a single optimization principle with high precision and robustness. To validate this hypothesis, it is critical to closely examine the quantitative predictions of alternative candidate theories. Random feedforward wiring within the retino-cortical pathway represents a conceptually appealing alternative to dynamical circuit optimization because random dimension-expanding projections are believed to generically exhibit computationally favorable properties for stimulus representations. Here, we ask whether the quantitative invariants of V1 architecture can be explained as a generic emergent property of random wiring. We generalize and examine the stochastic wiring model proposed by Ringach and coworkers, in which iso-orientation domains in the visual cortex arise through random feedforward connections between semi-regular mosaics of retinal ganglion cells (RGCs) and visual cortical neurons. We derive closed-form expressions for cortical receptive fields and domain layouts predicted by the model for perfectly hexagonal RGC mosaics. Including spatial disorder in the RGC positions considerably changes the domain layout properties as a function of disorder parameters such as position scatter and its correlations across the retina. However, independent of parameter choice, we find that the model predictions substantially deviate from the layout laws of iso-orientation domains observed experimentally. 
Considering random wiring with the currently most realistic model of RGC mosaic layouts, a pairwise interacting point process, the predicted layouts remain distinct from experimental observations and resemble Gaussian random fields. We conclude that V1 layout invariants are specific quantitative signatures of visual cortical optimization, which cannot be explained by generic random feedforward-wiring models. PMID:26575467

  17. Spatial two-photon coherence of the entangled field produced by down-conversion using a partially spatially coherent pump beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Anand Kumar; Boyd, Robert W.

    2010-01-15

    We study the spatial coherence properties of the entangled two-photon field produced by parametric down-conversion (PDC) when the pump field is, spatially, a partially coherent beam. By explicitly treating the case of a pump beam of the Gaussian Schell-model type, we show that in PDC the spatial coherence properties of the pump field are entirely transferred to the spatial coherence properties of the down-converted two-photon field. As one important consequence of this study, we find that, for two-qubit states based on the position correlations of the two-photon field, the maximum achievable entanglement, as quantified by concurrence, is bounded by the degree of spatial coherence of the pump field. These results could be important in providing a means of controlling the entanglement of down-converted photons by tailoring the degree of coherence of the pump field.

  18. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non-symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated, and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095

  19. Spatial analysis of "crazy quilts", a class of potentially random aesthetic artefacts.

    PubMed

    Westphal-Fitch, Gesche; Fitch, W Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non-symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. "Crazy quilts" represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated, and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures.

  20. Uncertainty in Random Forests: What does it mean in a spatial context?

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fouedjio, Francky

    2017-04-01

    Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well-established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well-understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different from the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model-driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation, Random Forest, as a non-parametric method, may give better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
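
    The contrast this abstract draws between ensemble-spread uncertainty and kriging variance can be illustrated with a minimal numpy sketch. This is not the authors' implementation: a bagged ensemble of 1-nearest-neighbour predictors stands in for Random Forest, and the 1-D "survey" data are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "survey": element concentration sampled at a few locations.
x_obs = rng.uniform(0, 10, 40)
y_obs = np.sin(x_obs) + 0.1 * rng.standard_normal(40)
x_new = np.linspace(0, 10, 101)

def nn_predict(xo, yo, xq):
    """1-nearest-neighbour prediction, a toy stand-in for a single tree."""
    idx = np.abs(xq[:, None] - xo[None, :]).argmin(axis=1)
    return yo[idx]

# Bagged ensemble: each "tree" sees a bootstrap resample of the data.
preds = []
for _ in range(200):
    b = rng.integers(0, len(x_obs), len(x_obs))
    preds.append(nn_predict(x_obs[b], y_obs[b], x_new))
preds = np.array(preds)

estimate = preds.mean(axis=0)  # ensemble mean prediction
spread = preds.std(axis=0)     # ensemble spread as an "uncertainty" measure
```

    Unlike a kriging variance, `spread` does not grow with distance from the nearest observation, which is one way to see the "lack of spatial context" the abstract notes.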

  1. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only observed temporal variability on a point-by-point basis, not spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias-corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserved the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatiotemporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
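
    The random-field step can be sketched in numpy under simple assumptions: an exponential spatial correlation model, a hypothetical correlation range, and a hypothetical coarse-cell daily rainfall value (none of these numbers are from the study).

```python
import numpy as np

rng = np.random.default_rng(42)

# Grid of fine-scale cells within one coarse GCM cell.
n = 16
xy = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1).reshape(-1, 2)

# Assumed exponential correlation model with an illustrative range parameter.
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
corr_range = 5.0
C = np.exp(-dist / corr_range) + 1e-8 * np.eye(n * n)  # jitter for Cholesky

# Correlated Gaussian field via the Cholesky factor of the correlation matrix.
field = np.linalg.cholesky(C) @ rng.standard_normal(n * n)

# Shift so the field mean matches a hypothetical coarse-cell GCM daily value.
gcm_mean = 3.2  # illustrative daily rainfall (mm) for the coarse cell
field = field - field.mean() + gcm_mean
```

    Each daily field drawn this way honours the prescribed spatial correlation structure while its spatial mean reproduces the coarse-scale value, which is the essence of the bias-correction step described above.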

  2. Spatial distribution of solute leaching with snowmelt and irrigation: measurements and simulations

    NASA Astrophysics Data System (ADS)

    Schotanus, D.; van der Ploeg, M. J.; van der Zee, S. E. A. T. M.

    2013-04-01

    Transport of a tracer and a degradable solute in a heterogeneous soil was measured in the field, and simulated with several transient and steady state infiltration rates. Leaching surfaces were used to investigate the solute leaching in space and time simultaneously. In the simulations, a random field for the scaling factor in the retention curve was used for the heterogeneous soil, which was based on the spatial distribution of drainage in an experiment with a multi-compartment sampler. As a criterion to compare the results from simulations and observations, the sorted and cumulative total drainage in a cell was used. The effect of the ratio of the infiltration rate over the degradation rate on leaching of degradable solutes was investigated. Furthermore, the spatial distribution of the leaching of degradable and non-degradable solutes was compared. The infiltration rate determines the amount of leaching of the degradable solute. This can be partly explained by a decreasing travel time with an increasing infiltration rate. The spatial distribution of the leaching also depends on the infiltration rate. When the infiltration rate is high compared to the degradation rate, the leaching of the degradable solute is similar to that of the tracer. The fraction of the pore space of the soil that contributes to solute leaching increases with an increasing infiltration rate. This fraction is similar for a tracer and a degradable solute. With increasing depth, the leaching becomes more homogeneous, as a result of dispersion. The spatial distribution of the solute leaching is different under different transient infiltration rates, and therefore the amount of leaching also differs. With independent stream tube approaches, this effect would be ignored.

  3. Spatial distribution of solute leaching with snowmelt and irrigation: measurements and simulations

    NASA Astrophysics Data System (ADS)

    Schotanus, D.; van der Ploeg, M. J.; van der Zee, S. E. A. T. M.

    2012-12-01

    Transport of a tracer and a degradable solute in a heterogeneous soil was measured in the field, and simulated with several transient and steady state infiltration rates. Leaching surfaces were used to investigate the solute leaching in space and time simultaneously. In the simulations, a random field for the scaling factor in the retention curve was used for the heterogeneous soil, which was based on the spatial distribution of drainage in an experiment with a multi-compartment sampler. As a criterion to compare the results from simulations and observations, the sorted and cumulative total drainage in a cell was used. The effect of the ratio of the infiltration rate over the degradation rate on leaching of degradable solutes was investigated. Furthermore, the spatial distribution of the leaching of degradable and non-degradable solutes was compared. The infiltration rate determines the amount of leaching of the degradable solute. This can be partly explained by a decreasing travel time with an increasing infiltration rate. The spatial distribution of the leaching also depends on the infiltration rate. When the infiltration rate is high compared to the degradation rate, the leaching of the degradable solute is similar to that of the tracer. The fraction of the soil that contributes to solute leaching increases with an increasing infiltration rate. This fraction is similar for a tracer and a degradable solute. With increasing depth, the leaching becomes more homogeneous, as a result of dispersion. The spatial distribution of the solute leaching is different under different transient infiltration rates, and therefore the amount of leaching also differs. With independent stream tube approaches, this effect would be ignored.

  4. The Schaake shuffle: A method for reconstructing space-time variability in forecasted precipitation and temperature fields

    USGS Publications Warehouse

    Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.

    2004-01-01

    A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow. © 2004 American Meteorological Society.
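
    The rank-matching reordering described above (the Schaake shuffle) can be sketched in numpy for a single forecast day and variable. The gamma marginals, station count, and member count here are illustrative only, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: 100 ensemble members at 3 stations for one forecast day.
ensemble = rng.gamma(2.0, 2.0, size=(100, 3))    # downscaled forecasts
historical = rng.gamma(2.0, 2.0, size=(100, 3))  # same dates, past years
# Give the historical draws some intersite correlation to recover.
historical[:, 1] = 0.7 * historical[:, 0] + 0.3 * historical[:, 1]

shuffled = np.empty_like(ensemble)
for j in range(ensemble.shape[1]):
    # Rank of each historical value at station j ...
    order = historical[:, j].argsort().argsort()
    # ... assigns the sorted ensemble values so ranks match the history.
    shuffled[:, j] = np.sort(ensemble[:, j])[order]
```

    The marginal distribution at each station is untouched (the same 100 values, reordered), while the rank correlation between stations now matches that of the historical record.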

  5. Role of turbulence fluctuations on uncertainties of acoustic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  6. Restricted spatial regression in practice: Geostatistical models, confounding, and robustness under model misspecification

    USGS Publications Warehouse

    Hanks, Ephraim M.; Schliep, Erin M.; Hooten, Mevin B.; Hoeting, Jennifer A.

    2015-01-01

    In spatial generalized linear mixed models (SGLMMs), covariates that are spatially smooth are often collinear with spatially smooth random effects. This phenomenon is known as spatial confounding and has been studied primarily in the case where the spatial support of the process being studied is discrete (e.g., areal spatial data). In this case, the most common approach suggested is restricted spatial regression (RSR) in which the spatial random effects are constrained to be orthogonal to the fixed effects. We consider spatial confounding and RSR in the geostatistical (continuous spatial support) setting. We show that RSR provides computational benefits relative to the confounded SGLMM, but that Bayesian credible intervals under RSR can be inappropriately narrow under model misspecification. We propose a posterior predictive approach to alleviating this potential problem and discuss the appropriateness of RSR in a variety of situations. We illustrate RSR and SGLMM approaches through simulation studies and an analysis of malaria frequencies in The Gambia, Africa.
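
    The orthogonality constraint at the heart of RSR can be sketched in numpy: the spatial random effects are projected onto the orthogonal complement of the column space of the fixed-effect design matrix. This toy sketch omits the SGLMM machinery entirely and uses an arbitrary draw for the random effect.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # fixed effects
w = rng.standard_normal(n)  # toy draw standing in for a spatial random effect

# Projection onto the orthogonal complement of the column space of X:
# P_perp = I - X (X'X)^{-1} X'
P_perp = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

w_restricted = P_perp @ w  # random effects now orthogonal to the fixed effects
```

    After projection, `X.T @ w_restricted` is zero, so the random effects can no longer be collinear with the covariates, which is the deconfounding that RSR enforces.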

  7. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
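
    Discretised on a grid, the KL expansion described above amounts to an eigendecomposition of the covariance matrix of the random field. A minimal numpy sketch, with an assumed exponential covariance kernel and an illustrative correlation length (not the paper's beam model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretised beam: correlated random field for, e.g., bending rigidity.
n = 100
x = np.linspace(0.0, 1.0, n)
corr_len = 0.2
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance kernel

# KL modes: eigenpairs of the covariance matrix, in descending order.
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Truncated KL expansion with m terms and standard-normal coefficients xi_k.
m = 10
xi = rng.standard_normal(m)
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
```

    In the model-updating setting, the coefficients `xi` become the discretized parameters to be estimated, so the truncation order `m` controls how many unknowns enter the sensitivity-based updating.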

  8. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    NASA Astrophysics Data System (ADS)

    Löwe, H.; Helbig, N.

    2012-04-01

    We provide a new quasi-analytical method to compute the topographic influence on the effective albedo of complex topography as required for meteorological, land-surface or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain averages of direct, diffuse and terrain radiation and the sky view factor. Domain averaged quantities are related to a type of level-crossing probability of the random field which is approximated by longstanding results developed for acoustic scattering at ocean boundaries. This allows us to express all non-local horizon effects in terms of a local terrain parameter, namely the mean squared slope. Emerging integrals are computed numerically and fit formulas are given for practical purposes. As an implication of our approach we provide an expression for the effective albedo of complex terrain in terms of the sun elevation angle, mean squared slope, the area averaged surface albedo, and the direct-to-diffuse ratio of solar radiation. As an application, we compute the effective albedo for the Swiss Alps and discuss possible generalizations of the method.

  9. Cluster mass inference via random field theory.

    PubMed

    Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D

    2009-01-01

    Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
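
    The cluster mass statistic itself is straightforward to compute from a thresholded statistic image: label the suprathreshold connected components and sum the excess over the threshold within each. A toy sketch with scipy (the image and threshold are synthetic, and this does not implement the paper's RFT-based p-values):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)

# Toy 2-D statistic image: noise plus one focal "activation".
stat = rng.standard_normal((32, 32))
stat[10:14, 10:14] += 4.0

threshold = 2.0
supra = stat > threshold

# Label connected suprathreshold clusters; the mass of a cluster is the
# sum of (statistic - threshold) over its voxels.
labels, n_clusters = ndimage.label(supra)
masses = ndimage.sum(stat - threshold, labels, index=range(1, n_clusters + 1))
max_mass = masses.max() if n_clusters else 0.0
```

    Cluster mass combines extent and intensity in one number, which is why it tends to be sensitive to both focal and spatially extended signals.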

  10. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Gul, M. Shahzeb Khan; Gunturk, Bahadir K.

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.

  11. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks.

    PubMed

    Gul, M Shahzeb Khan; Gunturk, Bahadir K

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.

  12. Visualizing single molecules interacting with nuclear pore complexes by narrow-field epifluorescence microscopy

    PubMed Central

    Yang, Weidong; Musser, Siegfried M.

    2008-01-01

    The utility of single molecule fluorescence (SMF) for understanding biological reactions has been amply demonstrated by a diverse series of studies over the last decade. In large part, the molecules of interest have been limited to those within a small focal volume or near a surface to achieve the high sensitivity required for detecting the inherently weak signals arising from individual molecules. Consequently, the investigation of molecular behavior with high time and spatial resolution deep within cells using SMF has remained challenging. Recently, we demonstrated that narrow-field epifluorescence microscopy allows visualization of nucleocytoplasmic transport at the single cargo level. We describe here the methodological approach that yields 2 ms and ∼15 nm resolution for a stationary particle. The spatial resolution for a mobile particle is inherently worse, and depends on how fast the particle is moving. The signal-to-noise ratio is sufficiently high to directly measure the time a single cargo molecule spends interacting with the nuclear pore complex. Particle tracking analysis revealed that cargo molecules randomly diffuse within the nuclear pore complex, exiting as a result of a single rate-limiting step. We expect that narrow-field epifluorescence microscopy will be useful for elucidating other binding and trafficking events within cells. PMID:16879979

  13. Chandra ACIS Sub-pixel Resolution

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.

    2011-05-01

    We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources when they are embedded in extended, diffuse emission in a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy.

  14. Spatially-Distributed Cost–Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution

    PubMed Central

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N.; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a Phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a “best approach” depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario to decrease P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales, from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated model approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds. PMID:26313561

  15. Spatially-Distributed Cost-Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution.

    PubMed

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales is an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint-source controls across a range of scales from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated modeling approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.
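
    The contrast between targeted and random BMP placement can be illustrated with a minimal greedy-allocation sketch (hypothetical field data and function name; the paper's actual optimization uses the P index and HSPF simulations):

```python
def greedy_bmp_allocation(fields, budget):
    """Targeted strategy: treat fields in order of P-loss reduction per
    unit cost until the budget is exhausted.

    fields -- list of (reduction, cost) tuples, one per candidate field
    budget -- total funds available
    Returns (chosen_indices, total_reduction).
    """
    order = sorted(range(len(fields)),
                   key=lambda i: fields[i][0] / fields[i][1],
                   reverse=True)
    chosen, total, remaining = [], 0.0, budget
    for i in order:
        reduction, cost = fields[i]
        if cost <= remaining:
            chosen.append(i)
            total += reduction
            remaining -= cost
    return chosen, total

# Hypothetical candidates: (P-loss reduction, implementation cost)
fields = [(10.0, 1.0), (10.0, 2.0), (3.0, 1.0)]
chosen, total = greedy_bmp_allocation(fields, budget=2.0)
```

    Targeting the highest reduction-per-cost fields first is the intuition behind the targeted strategies the paper compares with random placement across the watershed.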

  16. Three-dimensional Bayesian geostatistical aquifer characterization at the Hanford 300 Area using tracer test data

    NASA Astrophysics Data System (ADS)

    Chen, Xingyuan; Murakami, Haruko; Hahn, Melanie S.; Hammond, Glenn E.; Rockhold, Mark L.; Zachara, John M.; Rubin, Yoram

    2012-06-01

    Tracer tests performed under natural or forced gradient flow conditions can provide useful information for characterizing subsurface properties, through monitoring, modeling, and interpretation of the tracer plume migration in an aquifer. Nonreactive tracer experiments were conducted at the Hanford 300 Area, along with constant-rate injection tests and electromagnetic borehole flowmeter tests. A Bayesian data assimilation technique, the method of anchored distributions (MAD) (Rubin et al., 2010), was applied to assimilate the experimental tracer test data with the other types of data and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the Bayesian prior information on the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using constant-rate injection and borehole flowmeter test data. The posterior distribution of the conductivity field was obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. MAD was implemented with the massively parallel three-dimensional flow and transport code PFLOTRAN to cope with the highly transient flow boundary conditions at the site and to meet the computational demands of MAD. A synthetic study proved that the proposed method could effectively invert tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. Application of MAD to actual field tracer data at the Hanford 300 Area demonstrates that inverting for spatial heterogeneity of hydraulic conductivity under transient flow conditions is challenging and more work is needed.
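
    The temporal moments used to condition the conductivity field can be computed directly from a breakthrough curve; a minimal sketch with a synthetic Gaussian curve and trapezoidal integration (our illustration, not the paper's code):

```python
import numpy as np

def temporal_moments(t, c):
    """Zeroth moment (recovered-mass proxy) and normalized first moment
    (mean arrival time) of a breakthrough curve c(t), trapezoid rule."""
    dt = np.diff(t)
    m0 = float(np.sum(0.5 * (c[1:] + c[:-1]) * dt))
    m1 = float(np.sum(0.5 * (t[1:] * c[1:] + t[:-1] * c[:-1]) * dt)) / m0
    return m0, m1

# Synthetic breakthrough curve centered at t = 5 (arbitrary units)
t = np.linspace(0.0, 10.0, 1001)
c = np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)
m0, m1 = temporal_moments(t, c)   # m1 is close to 5.0
```

    Low-order moments like these compress each observation well's full breakthrough history into a few numbers, which is what makes conditioning the inverse problem on them tractable.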

  17. Magnetic microstructures for regulating Brownian motion

    NASA Astrophysics Data System (ADS)

    Sooryakumar, Ratnasingham

    2013-03-01

    Nature has proven that it is possible to engineer complex nanoscale machines in the presence of thermal fluctuations. These biological complexes, which harness random thermal energy to provide functionality, yield a framework to develop related artificial, i.e., nonbiological, phenomena and devices. A major challenge to achieving positional control of fluid-borne submicron sized objects is regulating their Brownian fluctuations. In this talk a magnetic-field-based trap that regulates the thermal fluctuations of superparamagnetic beads in suspension will be presented. Local domain-wall fields originating from patterned magnetic wires, whose strength and profile are tuned by weak external fields, enable bead trajectories within the trap to be managed and easily varied between strong confinements and delocalized spatial excursions. Moreover, the frequency spectrum of the trapped bead responds to fields as a power-law function with a tunable, non-integer exponent. When extended to a cluster of particles, the trapping landscape preferentially stabilizes them into formations of 5-fold symmetry, while their Brownian fluctuations result in frequent transitions between different cluster configurations. The quantitative understanding of the Brownian dynamics together with the ability to tune the extent of the fluctuations enables the wire-based platform to serve as a model system to investigate the competition between random and deterministic forces. Funding from the U.S. Army Research Office under contract W911NF-10-1-0353 is acknowledged.

  18. Using Perturbation Theory to Reduce Noise in Diffusion Tensor Fields

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Liu, Jun; Peterson, Bradley S.

    2009-01-01

    We propose the use of Perturbation theory to reduce noise in Diffusion Tensor (DT) fields. Diffusion Tensor Imaging (DTI) encodes the diffusion of water molecules along different spatial directions in a positive-definite, 3 × 3 symmetric tensor. Eigenvectors and eigenvalues of DTs allow the in vivo visualization and quantitative analysis of white matter fiber bundles across the brain. The validity and reliability of these analyses are limited, however, by the low spatial resolution and low Signal-to-Noise Ratio (SNR) in DTI datasets. Our procedures can be applied to improve the validity and reliability of these quantitative analyses by reducing noise in the tensor fields. We model a tensor field as a three-dimensional Markov Random Field and then compute the likelihood and the prior terms of this model using Perturbation theory. The prior term constrains the tensor field to be smooth, whereas the likelihood term constrains the smoothed tensor field to be similar to the original field. Thus, the proposed method generates a smoothed field that is close in structure to the original tensor field. We evaluate the performance of our method both visually and quantitatively using synthetic and real-world datasets. We quantitatively assess the performance of our method by computing the SNR for eigenvalues and the coherence measures for eigenvectors of DTs across tensor fields. In addition, we quantitatively compare the performance of our procedures with the performance of one method that uses a Riemannian distance to compute the similarity between two tensors, and with another method that reduces noise in tensor fields by anisotropically filtering the diffusion weighted images that are used to estimate diffusion tensors. 
These experiments demonstrate that our method significantly increases the coherence of the eigenvectors and the SNR of the eigenvalues, while simultaneously preserving the fine structure and boundaries between homogeneous regions, in the smoothed tensor field. PMID:19540791
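
    The eigenvector coherence idea referred to above can be sketched as a sign-invariant alignment statistic (our own minimal formulation via the dyadic average; the paper's exact measure may differ):

```python
import numpy as np

def principal_direction(D):
    """Unit eigenvector of the largest eigenvalue of a symmetric tensor."""
    w, v = np.linalg.eigh(D)          # eigenvalues in ascending order
    return v[:, -1]

def coherence(vectors):
    """Largest eigenvalue of the mean dyadic product: 1.0 for a
    perfectly aligned field, about 1/3 for isotropic random directions.
    Sign-invariant, since eigenvectors are defined only up to sign."""
    M = np.mean([np.outer(u, u) for u in vectors], axis=0)
    return float(np.linalg.eigvalsh(M)[-1])

# Principal direction of a diagonal tensor lies along the z axis (up to sign)
d = principal_direction(np.diag([1.0, 2.0, 5.0]))

# A perfectly aligned field of principal directions has coherence 1.0
aligned = [np.array([1.0, 0.0, 0.0])] * 10
coh = coherence(aligned)
```

    Noise scatters the principal directions and lowers this statistic, which is why smoothing that preserves structure should raise coherence without blurring boundaries.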

  19. Reducing uncertainty in dust monitoring to detect aeolian sediment transport responses to land cover change

    NASA Astrophysics Data System (ADS)

    Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.

    2017-12-01

    Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
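
    The dependence of the minimum detectable change on sample size and spatial variability can be sketched with a standard two-sample normal-approximation power calculation (our illustration, not the Network's exact procedure):

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_change(cv, n, alpha=0.05, power=0.80):
    """Smallest relative change in mean sediment mass flux detectable by
    a two-sample comparison with n samplers per group, given the
    coefficient of variation cv of the flux measurements."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * cv * sqrt(2.0 / n)

# With large spatial variability (cv = 1.0) and only 3 samplers per site,
# only a change of roughly 230% of the mean flux is detectable,
# consistent with the 200%-800% range reported above.
mdc = min_detectable_change(cv=1.0, n=3)
```

    Increasing the number of samplers (or reducing spatial variability through stratified designs) shrinks the detectable change roughly as 1/sqrt(n).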

  20. Scattering effects and high-spatial-frequency nanostructures on ultrafast laser irradiated surfaces of zirconium metallic alloys with nano-scaled topographies.

    PubMed

    Li, Chen; Cheng, Guanghua; Sedao, Xxx; Zhang, Wei; Zhang, Hao; Faure, Nicolas; Jamon, Damien; Colombier, Jean-Philippe; Stoian, Razvan

    2016-05-30

    The origin of high-spatial-frequency laser-induced periodic surface structures (HSFL) driven by incident ultrafast laser fields, with their ability to achieve structure resolutions below λ/2, is often obscured by the overlap with regular ripple patterns at quasi-wavelength periodicities. We demonstrate experimentally, employing defined surface topographies, that these structures are intrinsically related to surface roughness in the nano-scale domain. Using Zr-based bulk metallic glass (Zr-BMG) and its crystalline alloy (Zr-CA) counterpart formed by thermal annealing from its glassy precursor, we prepared surfaces showing either smooth appearances on thermoplastic BMG or high-density nano-protuberances from randomly distributed embedded nano-crystallites with average sizes below 200 nm on the recrystallized alloy. Upon ultrashort pulse irradiation employing linearly polarized 50 fs, 800 nm laser pulses, the surfaces show a range of nanoscale organized features. The change of topology was then followed under multiple pulse irradiation at fluences around and below the single-pulse threshold. While the former material (Zr-BMG) shows a specific high-quality arrangement of standard ripples around the laser wavelength, the latter (Zr-CA) demonstrates a strong predisposition to form high-spatial-frequency rippled structures (HSFL). We discuss electromagnetic scenarios assisting their formation based on near-field interaction between particles and field enhancement leading to linear growth of the structures. Finite-difference time-domain simulations outline individual and collective effects of nanoparticles on electromagnetic energy modulation and the feedback processes in the formation of HSFL structures, with correlation to regular ripples (LSFL).

  1. Measuring forest landscape patterns in the Cascade Range of Oregon, USA

    NASA Technical Reports Server (NTRS)

    Ripple, William J.; Bradshaw, G. A.; Spies, Thomas A.

    1995-01-01

    This paper describes the use of a set of spatial statistics to quantify the landscape pattern caused by the patchwork of clearcuts made over a 15-year period in the western Cascades of Oregon. Fifteen areas were selected at random to represent a diversity of landscape fragmentation patterns. Managed forest stands (patches) were digitized and analyzed to produce both tabular and mapped information describing patch size, shape, abundance and spacing, and matrix characteristics of a given area. In addition, a GIS fragmentation index was developed which was found to be sensitive to patch abundance and to the spatial distribution of patches. Use of the GIS-derived index provides an automated method of determining the level of forest fragmentation and can be used to facilitate spatial analysis of the landscape for later coordination with field and remotely sensed data. A comparison of the spatial statistics calculated for the two years indicates an increase in forest fragmentation as characterized by an increase in mean patch abundance and a decrease in interpatch distance, amount of interior natural forest habitat, and the GIS fragmentation index. Such statistics capable of quantifying patch shape and spatial distribution may prove important in the evaluation of the changing character of interior and edge habitats for wildlife.
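
    Patch abundance and mean patch size of the kind tabulated here can be computed from a binary forest/clearcut raster by connected-component labeling; a minimal 4-connected flood-fill sketch (our illustration, independent of the GIS used in the study):

```python
from collections import deque

def patch_stats(mask):
    """Number of 4-connected patches and mean patch size (in cells)
    for a binary raster given as a list of lists of 0/1."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                size, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:                      # breadth-first flood fill
                    cy, cx = q.popleft()
                    size += 1
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                sizes.append(size)
    return len(sizes), (sum(sizes) / len(sizes) if sizes else 0.0)

# Two patches: an L-shaped patch of 3 cells and an isolated cell
grid = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 1]]
n, mean_size = patch_stats(grid)
```

    Interpatch distance and a fragmentation index can be layered on top of the same labeling by taking nearest-neighbor distances between patch centroids.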

  2. A Scalable Field Study Protocol and Rationale for Passive Ambient Air Sampling: A Spatial Phytosampling for Leaf Data Collection

    PubMed Central

    Oyana, Tonny J.; Lomnicki, Slawomir M.; Guo, Chuqi; Cormier, Stephania A.

    2018-01-01

    Stable, bioreactive radicals known as environmentally persistent free radicals (EPFRs) have been found to exist on the surface of airborne PM2.5. These EPFRs form during many combustion processes, are present in vehicular exhaust, and persist in the environment for weeks and in biological systems for up to 12 h. To measure EPFRs in PM samples, high-volume samplers are required and measurements are less representative of community exposure; therefore, we developed a novel spatial phytosampling methodology to study the spatial patterns of EPFR concentrations using plants. Leaf samples for laboratory PM analysis were collected from 188 randomly drawn sampling sites within a 500-m buffer zone of pollution sources across a sampling grid measuring 32.9 × 28.4 km in Memphis, Tennessee. PM was isolated from the intact leaves and size fractionated, and EPFRs on PM were quantified by electron paramagnetic resonance spectroscopy. The radical concentration was found to correlate positively with the EPFR g-value, indicating cumulative content of oxygen-centered radicals in PM with higher EPFR load. Our spatial phytosampling approach reveals spatial variations and potential EPFR exposure risk “hotspots” across Memphis and provides valuable insights for identifying exposure and demographic differences for health studies. PMID:28805054

  3. A Scalable Field Study Protocol and Rationale for Passive Ambient Air Sampling: A Spatial Phytosampling for Leaf Data Collection.

    PubMed

    Oyana, Tonny J; Lomnicki, Slawomir M; Guo, Chuqi; Cormier, Stephania A

    2017-09-19

    Stable, bioreactive radicals known as environmentally persistent free radicals (EPFRs) have been found to exist on the surface of airborne PM2.5. These EPFRs form during many combustion processes, are present in vehicular exhaust, and persist in the environment for weeks and in biological systems for up to 12 h. To measure EPFRs in PM samples, high-volume samplers are required and measurements are less representative of community exposure; therefore, we developed a novel spatial phytosampling methodology to study the spatial patterns of EPFR concentrations using plants. Leaf samples for laboratory PM analysis were collected from 188 randomly drawn sampling sites within a 500-m buffer zone of pollution sources across a sampling grid measuring 32.9 × 28.4 km in Memphis, Tennessee. PM was isolated from the intact leaves and size fractionated, and EPFRs on PM were quantified by electron paramagnetic resonance spectroscopy. The radical concentration was found to correlate positively with the EPFR g-value, indicating cumulative content of oxygen-centered radicals in PM with higher EPFR load. Our spatial phytosampling approach reveals spatial variations and potential EPFR exposure risk "hotspots" across Memphis and provides valuable insights for identifying exposure and demographic differences for health studies.

  4. Unsupervised classification of multivariate geostatistical data: Two algorithms

    NASA Astrophysics Data System (ADS)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming larger, include a growing number of variables, and cover ever wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on, e.g., Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
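
    The proximity-constrained agglomerative idea can be sketched in a few lines: only clusters that are neighbors in a spatial adjacency graph may merge. This is a deliberately simplified univariate version with a mean-difference merge cost (function name and cost are our assumptions; the paper handles the multivariate case):

```python
def spatial_agglomerative(values, edges, k):
    """Greedy agglomerative clustering in which only clusters linked in
    the spatial adjacency graph may merge; the merge cost is the
    absolute difference of cluster means."""
    n = len(values)
    members = {i: {i} for i in range(n)}
    means = {i: float(values[i]) for i in range(n)}
    link = {i: set() for i in range(n)}
    for a, b in edges:
        link[a].add(b)
        link[b].add(a)
    while len(members) > k:
        # cheapest merge among spatially adjacent cluster pairs
        cost, c, d = min((abs(means[c] - means[d]), c, d)
                         for c in members for d in link[c] if d > c)
        members[c] |= members[d]
        means[c] = sum(values[i] for i in members[c]) / len(members[c])
        for e in link.pop(d):            # rewire d's neighbors to c
            link[e].discard(d)
            if e != c:
                link[e].add(c)
                link[c].add(e)
        link[c].discard(c)
        del members[d], means[d]
    return [sorted(m) for m in members.values()]

# Six samples on a transect (chain adjacency) with two value regimes
values = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
clusters = spatial_agglomerative(values, edges, k=2)
```

    Because merges are restricted to graph neighbors, the resulting classes are spatially contiguous by construction, which ordinary k-means or hierarchical clustering cannot guarantee.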

  5. Using Structured Additive Regression Models to Estimate Risk Factors of Malaria: Analysis of 2010 Malawi Malaria Indicator Survey Data

    PubMed Central

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

    Background: After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. Methods: We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child- and household-level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second-order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian, resulting in an under-five malaria risk map for Malawi. Results: Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. Conclusions: The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors at the individual, household and community levels, and that risk factors are still relevant many years after extensive implementation of RBM activities. PMID:24991915
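
    The second-order random walk prior used here for smooth covariate effects penalizes curvature; its structure (precision) matrix is built from the second-difference operator. A minimal sketch of the standard construction (function name is ours):

```python
import numpy as np

def rw2_structure(n):
    """Structure matrix Q = D.T @ D of a second-order random walk prior
    on n equally spaced values, with D the (n-2) x n second-difference
    operator. Q has rank n - 2: constants and linear trends are in its
    null space, i.e. unpenalized."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = (1.0, -2.0, 1.0)
    return D.T @ D

Q = rw2_structure(6)
# A straight-line effect f(x) = a + b*x incurs zero roughness penalty
f = 2.0 + 3.0 * np.arange(6)
penalty = float(f @ Q @ f)
```

    In the Bayesian model the smooth effect f then receives a Gaussian prior with precision proportional to Q, so the data decide how far the fitted effect bends away from a straight line.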

  6. Using structured additive regression models to estimate risk factors of malaria: analysis of 2010 Malawi malaria indicator survey data.

    PubMed

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

    After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child- and household-level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second-order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian, resulting in an under-five malaria risk map for Malawi. Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors at the individual, household and community levels, and that risk factors are still relevant many years after extensive implementation of RBM activities.

  7. Influence of Correspondence Noise and Spatial Scaling on the Upper Limit for Spatial Displacement in Fully-Coherent Random-Dot Kinematogram Stimuli

    PubMed Central

    Tripathy, Srimant P.; Shafiullah, Syed N.; Cox, Michael J.

    2012-01-01

    Correspondence noise is a major factor limiting direction discrimination performance in random-dot kinematograms [1]. In the current study we investigated the influence of correspondence noise on Dmax, which is the upper limit for the spatial displacement of the dots for which coherent motion is still perceived. Human direction discrimination performance was measured, using 2-frame kinematograms having leftward/rightward motion, over a 200-fold range of dot-densities and a four-fold range of dot displacements. From this data Dmax was estimated for the different dot densities tested. A model was proposed to evaluate the correspondence noise in the stimulus. This model summed the outputs of a set of elementary Reichardt-type local detectors that had receptive fields tiling the stimulus and were tuned to the two directions of motion in the stimulus. A key assumption of the model was that the local detectors would have the radius of their catchment areas scaled with the displacement that they were tuned to detect; the scaling factor k linking the radius to the displacement was the only free parameter in the model and a single value of k was used to fit all of the psychophysical data collected. This minimal, correspondence-noise based model was able to account for 91% of the variability in the human performance across all of the conditions tested. The results highlight the importance of correspondence noise in constraining the largest displacement that can be detected. PMID:23056172

  8. Influence of correspondence noise and spatial scaling on the upper limit for spatial displacement in fully-coherent random-dot kinematogram stimuli.

    PubMed

    Tripathy, Srimant P; Shafiullah, Syed N; Cox, Michael J

    2012-01-01

    Correspondence noise is a major factor limiting direction discrimination performance in random-dot kinematograms. In the current study we investigated the influence of correspondence noise on Dmax, which is the upper limit for the spatial displacement of the dots for which coherent motion is still perceived. Human direction discrimination performance was measured, using 2-frame kinematograms having leftward/rightward motion, over a 200-fold range of dot-densities and a four-fold range of dot displacements. From this data Dmax was estimated for the different dot densities tested. A model was proposed to evaluate the correspondence noise in the stimulus. This model summed the outputs of a set of elementary Reichardt-type local detectors that had receptive fields tiling the stimulus and were tuned to the two directions of motion in the stimulus. A key assumption of the model was that the local detectors would have the radius of their catchment areas scaled with the displacement that they were tuned to detect; the scaling factor k linking the radius to the displacement was the only free parameter in the model and a single value of k was used to fit all of the psychophysical data collected. This minimal, correspondence-noise based model was able to account for 91% of the variability in the human performance across all of the conditions tested. The results highlight the importance of correspondence noise in constraining the largest displacement that can be detected.

  9. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability - achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion - achieved through the regularization of semantic segmentation to a multimodal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image and pixel levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.

  10. A Markov random field based approach to the identification of meat and bone meal in feed by near-infrared spectroscopic imaging.

    PubMed

    Jiang, Xunpeng; Yang, Zengling; Han, Lujia

    2014-07-01

    Contaminated meat and bone meal (MBM) in animal feedstuff has been the source of bovine spongiform encephalopathy (BSE) disease in cattle, leading to a ban in its use, so methods for its detection are essential. In this study, five pure feed and five pure MBM samples were used to prepare two sets of sample arrangements: set A for investigating the discrimination of individual feed/MBM particles and set B for larger numbers of overlapping particles. The two sets were used to test a Markov random field (MRF)-based approach. A Fourier transform infrared (FT-IR) imaging system was used for data acquisition. The spatial resolution of the near-infrared (NIR) spectroscopic image was 25 μm × 25 μm. Each spectrum was the average of 16 scans across the wavenumber range 7,000-4,000 cm(-1), at intervals of 8 cm(-1). This study introduces an innovative approach to analyzing NIR spectroscopic images: an MRF-based approach has been developed using the iterated conditional mode (ICM) algorithm, integrating initial labeling-derived results from support vector machine discriminant analysis (SVMDA) and observation data derived from the results of principal component analysis (PCA). The results showed that MBM covered by feed could be successfully recognized with an overall accuracy of 86.59% and a Kappa coefficient of 0.68. Compared with conventional methods, the MRF-based approach is capable of extracting spectral information combined with spatial information from NIR spectroscopic images. This new approach enhances the identification of MBM using NIR spectroscopic imaging.
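
    The iterated conditional mode (ICM) update at the core of the approach can be sketched for a generic Potts-smoothed MRF on a grid (a minimal version of our own; the paper initializes labels from SVMDA and takes observation terms from PCA scores):

```python
import numpy as np

def icm(unary, beta, sweeps=5):
    """Iterated conditional modes for an MRF with Potts smoothness on a
    2-D grid. unary: (H, W, L) per-pixel label costs; beta: penalty for
    each 4-neighbor carrying a different label. Returns (H, W) labels."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)          # initial labeling: data term only
    for _ in range(sweeps):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts penalty against each neighbor's label
                        costs += beta * (np.arange(L) != labels[ny, nx])
                labels[y, x] = int(np.argmin(costs))
    return labels

# A single mislabeled pixel in an otherwise uniform 3x3 image is
# corrected by the smoothness term
obs = np.zeros((3, 3), dtype=int)
obs[1, 1] = 1
unary = np.stack([(obs != l).astype(float) for l in (0, 1)], axis=2)
labels = icm(unary, beta=1.0)
```

    This greedy per-pixel update is what lets spatial context override isolated spectral misclassifications, e.g. MBM particles partly covered by feed.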

  11. Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-05-01

    The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for ground-truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
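
    The Eulerian material acceleration feeding these pressure estimators is typically built from central differences over consecutive velocity snapshots; a minimal 2-D sketch on a uniform grid (our illustration, with a steady analytic test field):

```python
import numpy as np

def material_acceleration_x(u_prev, u, u_next, v, dt, dx, dy):
    """x-component of the Eulerian material acceleration
    Du/Dt = du/dt + u du/dx + v du/dy, with a central difference in time
    across three consecutive PIV snapshots and np.gradient in space."""
    dudt = (u_next - u_prev) / (2.0 * dt)
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dudt + u * dudx + v * dudy

# Steady test field u = x, v = 0, for which Du/Dt = u du/dx = x exactly
x = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(x, x)
a = material_acceleration_x(X, X, X, np.zeros_like(X), dt=0.1, dx=0.1, dy=0.1)
```

    The central-difference stencil is exactly where the temporal and spatial resolution trade-off studied in the paper enters: coarser sampling raises truncation error, finer sampling amplifies measurement noise.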

  12. Modeling Coupled Processes for Multiphase Fluid Flow in Mechanically Deforming Faults

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Pike, D. Q.

    2011-12-01

    Modeling of coupled hydrological-mechanical processes in fault zones is critical for understanding the long-term behavior of fluids within the shallow crust. Here we utilize a previously developed cellular-automata (CA) model to define the evolution of permeability within a 2-D fault zone under compressive stress. At each time step, the CA model calculates the increase in fluid pressure within the fault at every grid cell. Pressure surpassing a critical threshold (e.g., lithostatic stress) causes a rupture in that cell, and pressure is then redistributed across the neighboring cells. The rupture can cascade through the spatial domain and continue across multiple time steps. Stress continues to increase and the size and location of rupture events are recorded until a percolating backbone of ruptured cells exists across the fault. Previous applications of this model consider uncorrelated random fields for the compressibility of the fault material. The prior focus on uncorrelated property fields is consistent with the development of a number of statistical physics models, including percolation processes and fracture propagation. However, geologic materials typically express spatial correlation, and this can have a significant impact on the results of the pressure and permeability distributions. We model correlation of the fault material compressibility as a multi-Gaussian random field with a correlation length defined as the full width at half maximum (FWHM) of the kernel used to create the field. The FWHM is varied from < 0.001 to approximately 0.47 of the domain size.
The addition of spatial correlation to the compressibility significantly alters the model results, including: 1) Accumulation of larger amounts of strain prior to the first rupture event; 2) Initiation of the percolating backbone at lower amounts of cumulative strain; 3) Changes in the event size distribution to a combined power-law and exponential distribution with a smaller power; and 4) Evolution of the spatial-temporal distribution of rupture event locations from a purely Poisson process to a complex pattern of clustered events with periodic patterns indicative of emergent phenomena. Switching the stress field from compressive to quiescent, or extensional, during the CA simulation results in a fault zone with a complex permeability pattern and disconnected zones of over-pressured fluid that serve as the initial conditions for simulation of capillary invasion of a separate fluid phase. We use Modified Invasion Percolation to simulate the invasion of a less dense fluid into the fault zone. Results show that the variability in fluid displacement measures caused by the heterogeneous permeability field and initial pressure conditions is significant. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
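The correlated compressibility fields described in this record can be sketched numerically. Below is a minimal example, assuming a Gaussian smoothing kernel on a periodic grid (details the abstract does not specify), that generates a multi-Gaussian field whose correlation length is set by the kernel FWHM as a fraction of the domain size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correlated_field(n, fwhm_frac, seed=0):
    """Multi-Gaussian random field on an n x n periodic grid; the
    correlation length is controlled by the FWHM of a Gaussian smoothing
    kernel, expressed as a fraction of the domain size."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    # FWHM and standard deviation of a Gaussian: FWHM = 2*sqrt(2*ln 2)*sigma
    sigma = fwhm_frac * n / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = gaussian_filter(white, sigma=sigma, mode="wrap")
    # rescale to zero mean and unit variance
    return (smoothed - smoothed.mean()) / smoothed.std()

f = correlated_field(128, 0.1)
```

Sweeping `fwhm_frac` from near 0 up to ~0.47 would reproduce the range of correlation lengths explored in the study.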

  13. Correspondence between sound propagation in discrete and continuous random media with application to forest acoustics.

    PubMed

    Ostashev, Vladimir E; Wilson, D Keith; Muhlestein, Michael B; Attenborough, Keith

    2018-02-01

    Although sound propagation in a forest is important in several applications, there are currently no rigorous yet computationally tractable prediction methods. Due to the complexity of sound scattering in a forest, it is natural to formulate the problem stochastically. In this paper, it is demonstrated that the equations for the statistical moments of the sound field propagating in a forest have the same form as those for sound propagation in a turbulent atmosphere if the scattering properties of the two media are expressed in terms of the differential scattering and total cross sections. Using the existing theories for sound propagation in a turbulent atmosphere, this analogy enables the derivation of several results for predicting forest acoustics. In particular, the second-moment parabolic equation is formulated for the spatial correlation function of the sound field propagating above an impedance ground in a forest with micrometeorology. Effective numerical techniques for solving this equation have been developed in atmospheric acoustics. In another example, formulas are obtained that describe the effect of a forest on the interference between the direct and ground-reflected waves. The formulated correspondence between wave propagation in discrete and continuous random media can also be used in other fields of physics.

  14. The Detection of Clusters with Spatial Heterogeneity

    ERIC Educational Resources Information Center

    Zhang, Zuoyi

    2011-01-01

    This thesis consists of two parts. In Chapter 2, we focus on the spatial scan statistics with overdispersion and Chapter 3 is devoted to the randomized permutation test for identifying local patterns of spatial association. The spatial scan statistic has been widely used in spatial disease surveillance and spatial cluster detection. To apply it, a…

  15. Progress on Discrete Fracture Network models with implications on the predictions of permeability and flow channeling structure

    NASA Astrophysics Data System (ADS)

    Darcel, C.; Davy, P.; Le Goc, R.; Maillot, J.; Selroos, J. O.

    2017-12-01

We present progress on Discrete Fracture Network (DFN) flow modeling, including realistic advanced DFN spatial structures and local fracture transmissivity properties, through an application to the Forsmark site in Sweden. DFN models are a framework for combining fracture datasets from different sources and scales and for interpolating them using statistical distributions and stereological relations. The resulting DFN upscaling function - the size density distribution - is a key model component for extrapolating fracture size densities between data gaps, from borehole core up to the site scale. Another important feature of DFN models lies in the spatial correlations between fractures, with consequences for flow predictions that remain unevaluated. Indeed, although common Poisson (i.e. spatially random) models are widely used, they do not reflect the geological evidence for more complex structures. To model them, we define a DFN growth process from kinematic rules for nucleation, growth and stopping conditions. It mimics in a simplified way the geological fracturing processes and produces DFN characteristics - both the upscaling function and spatial correlations - fully consistent with field observations. DFN structures are first compared for constant transmissivities. Flow simulations for the kinematic and equivalent Poisson DFN models show striking differences: with the kinematic DFN, connectivity and permeability are significantly smaller, down to a difference of one order of magnitude, and flow is much more channelized. Further flow analyses are performed with more realistic transmissivity distribution conditions (sealed parts, relations to fracture sizes, orientations and in-situ stress field). The relative importance of the overall DFN structure in the final flow predictions is discussed.

  16. Hybrid modeling of spatial continuity for application to numerical inverse problems

    USGS Publications Warehouse

    Friedel, Michael J.; Iwashita, Fabio

    2013-01-01

A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates, producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in a 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
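The second step in this record, fitting analytical functions to experimental variograms, can be illustrated with a hedged sketch. The spherical model and the synthetic lag/semivariance values below are illustrative assumptions, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, rng_):
    """Spherical variogram model, one of the analytical functions
    commonly fitted to experimental variograms."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, sill)

# hypothetical experimental variogram: (lag, semivariance) pairs with
# small perturbations standing in for sampling noise
lags = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
gamma = spherical(lags, 0.1, 1.0, 20.0) + 0.01 * np.array([1, -1, 1, -1, 1, -1])

# least-squares fit of the model variogram to the experimental one
params, _ = curve_fit(spherical, lags, gamma, p0=[0.0, 1.0, 10.0])
```

The fitted `params` (nugget, sill, range) would then parameterize sequential Gaussian simulation in the study's workflow.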

  17. Explaining TeV cosmic-ray anisotropies with non-diffusive cosmic-ray propagation

    DOE PAGES

    Harding, James Patrick; Fryer, Chris Lee; Mendel, Susan Marie

    2016-05-11

Constraining the behavior of cosmic ray data observed at Earth requires a precise understanding of how the cosmic rays propagate in the interstellar medium. The interstellar medium is not homogeneous; although turbulent magnetic fields dominate over large scales, small coherent regions of magnetic field exist on scales relevant to particle propagation in the nearby Galaxy. Guided propagation through a coherent field is significantly different from random particle diffusion and could be the explanation of spatial anisotropies in the observed cosmic rays. We present a Monte Carlo code to propagate cosmic particles through realistic magnetic field structures. We discuss the details of the model as well as some preliminary studies which indicate that coherent magnetic structures are important effects in local cosmic-ray propagation, increasing the flux of cosmic rays by over two orders of magnitude at anisotropic locations on the sky. Furthermore, the features induced by coherent magnetic structure could be the cause of the observed TeV cosmic-ray anisotropy.

  18. EXPLAINING TEV COSMIC-RAY ANISOTROPIES WITH NON-DIFFUSIVE COSMIC-RAY PROPAGATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, J. Patrick; Fryer, Chris L.; Mendel, Susan, E-mail: jpharding@lanl.gov, E-mail: fryer@lanl.gov, E-mail: smendel@lanl.gov

    2016-05-10

Constraining the behavior of cosmic ray data observed at Earth requires a precise understanding of how the cosmic rays propagate in the interstellar medium. The interstellar medium is not homogeneous; although turbulent magnetic fields dominate over large scales, small coherent regions of magnetic field exist on scales relevant to particle propagation in the nearby Galaxy. Guided propagation through a coherent field is significantly different from random particle diffusion and could be the explanation of spatial anisotropies in the observed cosmic rays. We present a Monte Carlo code to propagate cosmic particles through realistic magnetic field structures. We discuss the details of the model as well as some preliminary studies which indicate that coherent magnetic structures are important effects in local cosmic-ray propagation, increasing the flux of cosmic rays by over two orders of magnitude at anisotropic locations on the sky. The features induced by coherent magnetic structure could be the cause of the observed TeV cosmic-ray anisotropy.

  19. Mapping the distribution of the main host for plague in a complex landscape in Kazakhstan: An object-based approach using SPOT-5 XS, Landsat 7 ETM+, SRTM and multiple Random Forests

    NASA Astrophysics Data System (ADS)

    Wilschut, L. I.; Addink, E. A.; Heesterbeek, J. A. P.; Dubyanskiy, V. M.; Davis, S. A.; Laudisoit, A.; Begon, M.; Burdelov, L. A.; Atshabar, B. B.; de Jong, S. M.

    2013-08-01

Plague is a zoonotic infectious disease present in great gerbil populations in Kazakhstan. Infectious disease dynamics are influenced by the spatial distribution of the carriers (hosts) of the disease. The great gerbil, the main host in our study area, lives in burrows, which can be recognized on high resolution satellite imagery. In this study, using earth observation data at various spatial scales, we map the spatial distribution of burrows in a semi-desert landscape. The study area consists of various landscape types. To evaluate whether identification of burrows by classification is possible in these landscape types, the study area was subdivided into eight landscape units, on the basis of Landsat 7 ETM+ derived Tasselled Cap Greenness and Brightness, and SRTM derived standard deviation in elevation. In the field, 904 burrows were mapped. Using two segmented 2.5 m resolution SPOT-5 XS satellite scenes, reference object sets were created. Random Forests were built for both SPOT scenes and used to classify the images. Additionally, a stratified classification was carried out, by building separate Random Forests per landscape unit. Burrows were successfully classified in all landscape units. In the ‘steppe on floodplain’ areas, classification worked best: producer's and user's accuracy in those areas reached 88% and 100%, respectively. In the ‘floodplain’ areas with a more heterogeneous vegetation cover, classification worked least well; there, accuracies were 86% and 58%, respectively. Stratified classification improved the results in all landscape units where comparison was possible (four), increasing kappa coefficients by 13, 10, 9 and 1%, respectively. In this study, an innovative stratification method using high- and medium-resolution imagery was applied in order to map host distribution on a large spatial scale.
The burrow maps we developed will help to detect changes in the distribution of great gerbil populations and, moreover, serve as a unique empirical data set which can be used as input for epidemiological plague models. This is an important step in understanding the dynamics of plague.
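The stratified classification idea in this record, one Random Forest per landscape unit rather than a single global forest, can be sketched as follows. The features, labels, and unit names are synthetic placeholders, and scikit-learn's `RandomForestClassifier` stands in for the Random Forests used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train one forest per landscape unit (stratified classification).
# Features and labels below are synthetic stand-ins for segment-level
# image objects (e.g. spectral/texture statistics per segment).
rng = np.random.default_rng(0)
forests = {}
for unit in ("steppe_on_floodplain", "floodplain"):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)  # burrow / no burrow
    forests[unit] = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def classify(unit, X):
    """Route each segment to the forest trained for its landscape unit."""
    return forests[unit].predict(X)

pred = classify("floodplain", rng.normal(size=(10, 4)))
```

Compared with a single global forest, the per-unit forests can exploit unit-specific feature/label relations, which is consistent with the kappa improvements the study reports.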

  20. Accounting for rate instability and spatial patterns in the boundary analysis of cancer mortality maps

    PubMed Central

    Goovaerts, Pierre

    2006-01-01

Boundary analysis of cancer maps may highlight areas where causative exposures change through geographic space, the presence of local populations with distinct cancer incidences, or the impact of different cancer control methods. Too often, such analysis ignores the spatial pattern of incidence or mortality rates and overlooks the fact that rates computed from sparsely populated geographic entities can be very unreliable. This paper proposes a new methodology that accounts for the uncertainty and spatial correlation of rate data in the detection of significant edges between adjacent entities or polygons. Poisson kriging is first used to estimate the risk value and the associated standard error within each polygon, accounting for the population size and the risk semivariogram computed from raw rates. The boundary statistic is then defined as half the absolute difference between kriged risks. Its reference distribution, under the null hypothesis of no boundary, is derived through the generation of multiple realizations of the spatial distribution of cancer risk values. This paper presents three types of neutral models generated using methods of increasing complexity: the common random shuffle of estimated risk values, a spatial re-ordering of these risks, or p-field simulation that accounts for the population size within each polygon. The approach is illustrated using age-adjusted pancreatic cancer mortality rates for white females in 295 US counties of the Northeast (1970–1994). Simulation studies demonstrate that Poisson kriging yields more accurate estimates of the cancer risk and how its value changes between polygons (i.e. boundary statistic), relative to the use of raw rates or local empirical Bayes smoother. When used in conjunction with spatial neutral models generated by p-field simulation, the boundary analysis based on Poisson kriging estimates minimizes the proportion of type I errors (i.e. edges wrongly declared significant), while the frequency of these errors is predicted well by the p-value of the statistical test. PMID:19023455
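The boundary statistic and the simplest of the three neutral models in this record (a random shuffle of estimated risks) can be sketched as follows; the Poisson kriging step is omitted, and plain risk values stand in for kriged estimates:

```python
import numpy as np

def boundary_test(risk, adjacency, n_sim=999, seed=0):
    """Boundary statistic between adjacent polygons (half the absolute
    difference between risk estimates), with a Monte Carlo p-value under
    the simplest neutral model: a random shuffle of the risk values."""
    rng = np.random.default_rng(seed)
    risk = np.asarray(risk, dtype=float)
    pairs = np.asarray(adjacency)
    stat = 0.5 * np.abs(risk[pairs[:, 0]] - risk[pairs[:, 1]])
    exceed = np.zeros_like(stat)
    for _ in range(n_sim):
        s = rng.permutation(risk)  # neutral model: shuffle risks over polygons
        null = 0.5 * np.abs(s[pairs[:, 0]] - s[pairs[:, 1]])
        exceed += (null >= stat)
    # add-one correction keeps p-values strictly positive
    return stat, (exceed + 1.0) / (n_sim + 1.0)

# three polygons, two shared edges (hypothetical values)
stat, pvals = boundary_test([1.0, 1.1, 5.0], [(0, 1), (1, 2)], n_sim=199)
```

The paper's stronger neutral models (spatial re-ordering, p-field simulation) would replace the `rng.permutation` line with realizations that preserve spatial correlation and population size.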

  1. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    PubMed

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Persistence in a Random Bond Ising Model of Socio-Econo Dynamics

    NASA Astrophysics Data System (ADS)

    Jain, S.; Yamano, T.

    We study the persistence phenomenon in a socio-econo dynamics model using computer simulations at a finite temperature on hypercubic lattices in dimensions up to five. The model includes a "social" local field which contains the magnetization at time t. The nearest neighbour quenched interactions are drawn from a binary distribution which is a function of the bond concentration, p. The decay of the persistence probability in the model depends on both the spatial dimension and p. We find no evidence of "blocking" in this model. We also discuss the implications of our results for possible applications in the social and economic fields. It is suggested that the absence, or otherwise, of blocking could be used as a criterion to decide on the validity of a given model in different scenarios.

  3. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic; in particular, kriging with external drift cannot be employed to incorporate secondary information. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions.
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  4. Correcting Biases in a lower resolution global circulation model with data assimilation

    NASA Astrophysics Data System (ADS)

    Canter, Martin; Barth, Alexander

    2016-04-01

With this work, we aim to develop a new method of bias correction using data assimilation, based on the stochastic forcing of a model. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global and low resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim to correct this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct these random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields.
We also constrain the stream function along the coasts so as not to generate currents perpendicular to the coast. The randomly generated stochastic forcings are then directly injected into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations of the sea surface height available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
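The stream-function construction described in this record can be sketched in a few lines: taking u = -∂ψ/∂y and v = ∂ψ/∂x guarantees a non-divergent perturbation field. The moving-average smoothing below is an illustrative stand-in for the Diva-based field generation and filtering:

```python
import numpy as np

def velocities_from_streamfunction(psi, dy=1.0, dx=1.0):
    """Velocity perturbations from a random stream function:
    u = -dpsi/dy, v = dpsi/dx, so the perturbation field is
    non-divergent by construction."""
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)
    return -dpsi_dy, dpsi_dx

rng = np.random.default_rng(1)
psi = rng.standard_normal((40, 40))
# crude smoothing to remove small-scale variations (the study uses Diva
# plus filtering; a 5-point moving average stands in here)
kernel = np.ones(5) / 5.0
for axis in (0, 1):
    psi = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), axis, psi
    )
u, v = velocities_from_streamfunction(psi)
```

Because the same discrete derivative operators are applied along independent axes, the discrete divergence of (u, v) vanishes to machine precision, which is the "respect the physical properties of the ocean" constraint for this part of the construction.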

  5. Spatial variability in the soil water content of a Mediterranean agroforestry system with high soil heterogeneity

    NASA Astrophysics Data System (ADS)

    Molina, Antonio Jaime; Llorens, Pilar; Aranda, Xavier; Savé, Robert; Biel, Carmen

    2013-04-01

Variability of soil water content is known to increase with the size of the spatial domain in which measurements are taken. At the field scale, heterogeneity in soil, vegetation, topography, water input volume and management affects, among other factors, hydrologic plot behaviour under different mean soil water contents. The present work studies how the spatial variability of soil water content (SWC) is affected by soil type (texture, percentage of stones and the combination of the two) in a timber-orientated plantation of cherry tree (Prunus avium) under Mediterranean climatic conditions. The experimental design is a randomized block design with 3 blocks × 4 treatments, based on two factors: irrigation (6 plots irrigated versus 6 plots not irrigated) and soil management (6 plots tilled versus 6 plots not tilled). SWC is continuously measured at 25, 50 and 100 cm depth with FDR sensors, located at two positions in each treatment: under tree influence and 2.5 m apart. This study presents the results of the monitoring during 2012 of the 24 sensors located at 25 cm depth. At each measurement point, texture and percentage of stones were measured. Sandy-loam, sandy-clay-loam and loam textures were found, together with a percentage of stones ranging from 20 to 70%. The results indicated that the relationship between the daily mean SWC and its standard deviation, a common procedure used to study spatial variability, changed with texture, percentage of stones and the estimate of field capacity derived from the combination of both. Temporal stability analysis of SWC showed a clear pattern related to field capacity, with the measurement points of sandy-loam texture and a high percentage of stones showing the maximum negative difference from the global mean. The high range in the mean relative difference observed (± 75%) could indicate that the studied plot may be considered a good field laboratory for extrapolating results to larger spatial scales.
Furthermore, the pattern in the temporal stability of tree growth was clearly related to that of SWC. Nevertheless, the treatments that represent the mean conditions in growth were not exactly the same as those for SWC, which could be attributable to characteristics other than soil.
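The temporal stability analysis referred to in this record is commonly computed as a mean relative difference from the daily spatial mean (the ± 75% range above is a range of such values). A minimal sketch for a time-by-location array of SWC values:

```python
import numpy as np

def mean_relative_difference(swc):
    """Temporal stability analysis of soil water content: for a
    (time x location) array, average each location's relative departure
    from the daily spatial mean. Persistently wet points come out
    positive, persistently dry points negative."""
    spatial_mean = swc.mean(axis=1, keepdims=True)  # mean over locations per day
    return ((swc - spatial_mean) / spatial_mean).mean(axis=0)

# two days, two locations: location 0 is persistently 10% wetter
mrd = mean_relative_difference(np.array([[0.22, 0.18], [0.33, 0.27]]))
```

A location with a mean relative difference near zero is "temporally stable" at the field mean and can stand in for the whole plot, which is what makes the plot useful for extrapolation.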

  6. The Bayesian group lasso for confounded spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  7. High-resolution stochastic downscaling of climate models: simulating wind advection, cloud cover and precipitation

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Fatichi, Simone; Burlando, Paolo

    2015-04-01

A new stochastic approach to generate wind advection, cloud cover and precipitation fields is presented with the aim of formulating a space-time weather generator characterized by fields with high spatial and temporal resolution (e.g., 1 km x 1 km and 5 min). Its use is suitable for stochastic downscaling of climate scenarios in the context of hydrological, ecological and geomorphological applications. The approach is based on concepts from the Advanced WEather GENerator (AWE-GEN) presented by Fatichi et al. (2011, Adv. Water Resour.), the Space-Time Realizations of Areal Precipitation model (STREAP) introduced by Paschalis et al. (2013, Water Resour. Res.), and the High-Resolution Synoptically conditioned Weather Generator (HiReS-WG) presented by Peleg and Morin (2014, Water Resour. Res.). Advection fields are generated on the basis of the 500 hPa u and v wind components derived from global or regional climate models. The advection velocity and direction are parameterized using Kappa and von Mises distributions, respectively. A random Gaussian field is generated using a fast Fourier transform to preserve the spatial correlation of advection. The cloud cover area, total precipitation area and mean advection of the field are coupled using a multi-autoregressive model. The approach is relatively parsimonious in terms of computational demand and, in the context of climate change, allows generating many stochastic realizations of current and projected climate in a fast and efficient way. A preliminary test of the approach is presented with reference to a case study in complex orographic terrain in the Swiss Alps.
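The FFT step in this record, generating a spatially correlated Gaussian field by shaping white noise in Fourier space, can be sketched as follows. The Gaussian-shaped spectrum is an illustrative assumption, since the abstract does not specify the correlation model:

```python
import numpy as np

def gaussian_random_field(n, corr_len, seed=0):
    """Stationary Gaussian random field on an n x n grid via FFT:
    white noise is multiplied in Fourier space by the square root of a
    target spectrum, which imposes the prescribed spatial correlation."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # spectrum of an (assumed) Gaussian covariance with scale corr_len
    spectrum = np.exp(-0.5 * (2.0 * np.pi * corr_len) ** 2 * (kx**2 + ky**2))
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.fft.ifft2(noise * np.sqrt(spectrum)).real
    # rescale to zero mean and unit variance
    return (field - field.mean()) / field.std()

adv = gaussian_random_field(64, 4.0)
```

Drawing many seeds gives the cheap ensemble of correlated advection fields that makes this kind of weather generator computationally parsimonious.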

  8. Electrostatic Solitary Waves in the Solar Wind: Evidence for Instability at Solar Wind Current Sheets

    NASA Technical Reports Server (NTRS)

    Malaspina, David M.; Newman, David L.; Wilson, Lynn Bruce; Goetz, Keith; Kellogg, Paul J.; Kerstin, Kris

    2013-01-01

    A strong spatial association between bipolar electrostatic solitary waves (ESWs) and magnetic current sheets (CSs) in the solar wind is reported here for the first time. This association requires that the plasma instabilities (e.g., Buneman, electron two stream) which generate ESWs are preferentially localized to solar wind CSs. Distributions of CS properties (including shear angle, thickness, solar wind speed, and vector magnetic field change) are examined for differences between CSs associated with ESWs and randomly chosen CSs. Possible mechanisms for producing ESW-generating instabilities at solar wind CSs are considered, including magnetic reconnection.

  9. Microscopic Interpretation and Generalization of the Bloch-Torrey Equation for Diffusion Magnetic Resonance

    PubMed Central

    Seroussi, Inbar; Grebenkov, Denis S.; Pasternak, Ofer; Sochen, Nir

    2017-01-01

    In order to bridge microscopic molecular motion with macroscopic diffusion MR signal in complex structures, we propose a general stochastic model for molecular motion in a magnetic field. The Fokker-Planck equation of this model governs the probability density function describing the diffusion-magnetization propagator. From the propagator we derive a generalized version of the Bloch-Torrey equation and the relation to the random phase approach. This derivation does not require assumptions such as a spatially constant diffusion coefficient, or ad-hoc selection of a propagator. In particular, the boundary conditions that implicitly incorporate the microstructure into the diffusion MR signal can now be included explicitly through a spatially varying diffusion coefficient. While our generalization is reduced to the conventional Bloch-Torrey equation for piecewise constant diffusion coefficients, it also predicts scenarios in which an additional term to the equation is required to fully describe the MR signal. PMID:28242566

  10. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  11. Rapid mapping of polarization switching through complete information acquisition

    NASA Astrophysics Data System (ADS)

    Somnath, Suhas; Belianinov, Alex; Kalinin, Sergei V.; Jesse, Stephen

    2016-12-01

    Polarization switching in ferroelectric and multiferroic materials underpins a broad range of current and emergent applications, ranging from random access memories to field-effect transistors, and tunnelling devices. Switching in these materials is exquisitely sensitive to local defects and microstructure on the nanometre scale, necessitating spatially resolved high-resolution studies of these phenomena. Classical piezoresponse force microscopy and spectroscopy, although providing necessary spatial resolution, are fundamentally limited in data acquisition rates and energy resolution. This limitation stems from their two-tiered measurement protocol that combines slow (~1 s) switching and fast (~10 kHz-1 MHz) detection waveforms. Here we develop an approach for rapid probing of ferroelectric switching using direct strain detection of material response to probe bias. This approach, facilitated by high-sensitivity electronics and adaptive filtering, enables spectroscopic imaging at a rate 3,504 times faster than the current state of the art, achieving high-veracity imaging of polarization dynamics in complex microstructures.

  12. Turbulent mass inhomogeneities induced by a point-source

    NASA Astrophysics Data System (ADS)

    Thalabard, Simon

    2018-03-01

    We describe how turbulence distributes tracers away from a localized source of injection, and analyze how the spatial inhomogeneities of the concentration field depend on the amount of randomness in the injection mechanism. For that purpose, we contrast the mass correlations induced by purely random injections with those induced by continuous injections in the environment. Using the Kraichnan model of turbulent advection, whereby the underlying velocity field is assumed to be shortly correlated in time, we explicitly identify scaling regions for the statistics of the mass contained within a shell of radius r and located at a distance ρ away from the source. The two key parameters are found to be (i) the ratio s² between the absolute and the relative timescales of dispersion and (ii) the ratio Λ between the size of the cloud and its distance away from the source. When the injection is random, only the former is relevant, as previously shown by Celani et al (2007 J. Fluid Mech. 583 189–98) in the case of an incompressible fluid. It is argued that the space partition in terms of s² and Λ is a robust feature of the injection mechanism itself, which should remain relevant beyond the Kraichnan model. This is for instance the case in a generalized version of the model, where the absolute dispersion is prescribed to be ballistic rather than diffusive.

  13. Use of LANDSAT imagery for wildlife habitat mapping in northeast and eastcentral Alaska

    NASA Technical Reports Server (NTRS)

    Lent, P. C. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. There is strong indication that spatially rare feature classes may be missed in clustering classifications based on 2% random sampling. Therefore, it seems advisable to augment random sampling for cluster analysis with directed sampling of any spatially rare features which are relevant to the analysis.
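The risk the report describes can be quantified with a simple calculation: under independent random sampling, a feature class covering a fraction f of the scene is missed entirely with probability (1 - f)^n. A minimal sketch, with illustrative numbers (not figures from the report):

```python
# Probability that a spatially rare feature class is entirely missed when
# n pixels are drawn at random from a scene in which the class covers a
# fraction f of the area. Numbers below are illustrative only.

def miss_probability(f, n):
    """P(class absent from all n independent random samples)."""
    return (1.0 - f) ** n

# A class covering 0.1% of the scene, sampled at 2% of a 100,000-pixel scene:
n_samples = int(0.02 * 100_000)          # 2,000 random pixels
print(f"chance the rare class is never sampled: "
      f"{miss_probability(0.001, n_samples):.3f}")   # ≈ 0.135
```

Even at a 2% sampling rate, a class covering one pixel in a thousand escapes the sample about one time in seven, which is why the report recommends supplementing random sampling with directed sampling of known rare features.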

  14. A Randomized Trial of an Elementary School Mathematics Software Intervention: Spatial-Temporal Math

    ERIC Educational Resources Information Center

    Rutherford, Teomara; Farkas, George; Duncan, Greg; Burchinal, Margaret; Kibrick, Melissa; Graham, Jeneen; Richland, Lindsey; Tran, Natalie; Schneider, Stephanie; Duran, Lauren; Martinez, Michael E.

    2014-01-01

    Fifty-two low performing schools were randomly assigned to receive Spatial-Temporal (ST) Math, a supplemental mathematics software and instructional program, in second/third or fourth/fifth grades or to a business-as-usual control. Analyses reveal a negligible effect of ST Math on mathematics scores, which did not differ significantly across…

  15. Hyperspectral Image Classification via Multitask Joint Sparse Representation and Stepwise MRF Optimization.

    PubMed

    Yuan, Yuan; Lin, Jianzhe; Wang, Qi

    2016-12-01

    Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. But high data correlation makes reliable classification difficult, especially for HSI with abundant spectral information. Furthermore, traditional methods often fail to account for the spatial coherency of HSI, which also limits classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method mainly focuses on multitask joint sparse representation (MJSR) and a stepwise Markov random field framework, which are the two main contributions of this procedure. First, the MJSR not only reduces the spectral redundancy, but also retains necessary correlation in the spectral field during classification. Second, the stepwise optimization further explores the spatial correlation that significantly enhances the classification accuracy and robustness. As far as several universal quality evaluation indexes are concerned, the experimental results on Indian Pines and Pavia University demonstrate the superiority of our method compared with the state-of-the-art competitors.
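To illustrate the kind of spatial regularization an MRF step provides, here is a generic iterated-conditional-modes (ICM) pass over per-pixel class costs with a Potts neighbourhood penalty. This is a much simpler stand-in for the paper's stepwise MRF optimization, not the authors' algorithm; all names and parameters are illustrative:

```python
import numpy as np

def icm_smooth(unary, beta=1.0, n_iter=5):
    """Generic MRF-style label smoothing by iterated conditional modes.

    unary: (H, W, K) array of per-pixel, per-class costs (lower = better).
    beta:  Potts penalty for disagreeing with a 4-neighbour.
    Returns an (H, W) array of class labels.
    """
    labels = unary.argmin(axis=2)            # start from the per-pixel optimum
    H, W, K = unary.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_cost = labels[i, j], np.inf
                for k in range(K):
                    cost = unary[i, j, k]
                    # add a penalty for each 4-neighbour with a different label
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            cost += beta
                    if cost < best_cost:
                        best, best_cost = k, cost
                labels[i, j] = best
    return labels
```

An isolated pixel whose spectral evidence weakly favours a different class from all of its neighbours gets flipped to the surrounding label, which is the spatial-coherency effect the abstract refers to.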

  16. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    PubMed

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real-time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real-time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this to be done in constant time with parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.

  17. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    USGS Publications Warehouse

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.

  18. Revisiting crash spatial heterogeneity: A Bayesian spatially varying coefficients approach.

    PubMed

    Xu, Pengpeng; Huang, Helai; Dong, Ni; Wong, S C

    2017-01-01

    This study was performed to investigate the spatially varying relationships between crash frequency and related risk factors. A Bayesian spatially varying coefficients model was elaborately introduced as a methodological alternative to simultaneously account for the unstructured and spatially structured heterogeneity of the regression coefficients in predicting crash frequencies. The proposed method was appealing in that the parameters were modeled via a conditional autoregressive prior distribution, which involved a single set of random effects and a spatial correlation parameter with extreme values corresponding to pure unstructured or pure spatially correlated random effects. A case study using a three-year crash dataset from Hillsborough County, Florida, was conducted to illustrate the proposed model. Empirical analysis confirmed the presence of both unstructured and spatially correlated variations in the effects of contributory factors on severe crash occurrences. The findings also suggested that ignoring spatially structured heterogeneity may result in biased parameter estimates and incorrect inferences, while assuming the regression coefficients to be spatially clustered only is probably subject to the issue of over-smoothness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Potential and flux field landscape theory. I. Global stability and dynamics of spatially dependent non-equilibrium systems.

    PubMed

    Wu, Wei; Wang, Jin

    2013-09-28

    We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium conditions for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. In the small fluctuation limit, the intrinsic potential field, defined as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems and closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers an alternative general approach to other field-theoretic techniques for studying the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.

  20. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
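The percolation model described above (each link failing with probability proportional to its length) can be sketched in a few lines. The network construction, failure rule, and parameter names below are illustrative assumptions, not the authors' exact model:

```python
import math
import random

def percolate(nodes, edges, strength, rng):
    """Remove each edge (i, j) with probability length/strength (capped at 1);
    return the number of connected components that survive (union-find)."""
    parent = list(range(len(nodes)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    for i, j in edges:
        (xi, yi), (xj, yj) = nodes[i], nodes[j]
        length = math.hypot(xi - xj, yi - yj)
        if rng.random() >= min(1.0, length / strength):
            parent[find(i)] = find(j)       # link survives: merge components
        # else: link fails -- longer links fail more often
    return len({find(k) for k in range(len(nodes))})

# A toy "micronetwork": a ring of 30 random points plus a few random chords.
rng = random.Random(0)
nodes = [(rng.random(), rng.random()) for _ in range(30)]
edges = [(k, (k + 1) % 30) for k in range(30)]
edges += [(rng.randrange(30), rng.randrange(30)) for _ in range(15)]
print(percolate(nodes, edges, strength=5.0, rng=rng))
```

Sweeping `strength` (the inverse failure rate) and counting surviving components shows the paper's qualitative point: when failure probability grows with spatial length, fragmentation sets in sooner than length-blind percolation would predict.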

  1. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation.

    PubMed

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical plant populations. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N - 1)/(π ∑ R²) but not 28N/(π ∑ R²), and of PCQM3 is 4(12N - 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 can be applied. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' but not 'the summation of inverse squared distances' as erroneously published.
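The corrected PCQM1 estimator quoted in the abstract, density = 4(4N - 1)/(π ∑ R²), is easy to compute from field data. A minimal sketch (function name and the example distances are made up for illustration):

```python
import math

def pcqm1_density(distances):
    """Corrected PCQM1 plant-density estimator: 4(4N - 1) / (pi * sum(R^2)).

    distances: flat list of the 4*N nearest-plant distances, one per quadrant
    at each of N random sample points. Note the 'inverse summation of squared
    distances' -- divide by sum(R^2), not sum the inverse squares.
    """
    n_points = len(distances) // 4            # N sample points
    return 4 * (4 * n_points - 1) / (math.pi * sum(r * r for r in distances))

# Two sample points, four quadrant distances each (metres, invented numbers):
print(pcqm1_density([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]))
```

With all eight distances equal to 1 m, the estimator gives 4(8 - 1)/(8π) ≈ 1.11 plants per square metre, against 12·2/(8π) ≈ 0.95 for the uncorrected published form.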

  2. Active and reactive behaviour in human mobility: the influence of attraction points on pedestrians

    NASA Astrophysics Data System (ADS)

    Gutiérrez-Roig, M.; Sagarra, O.; Oltra, A.; Palmer, J. R. B.; Bartumeus, F.; Díaz-Guilera, A.; Perelló, J.

    2016-07-01

    Human mobility is becoming an accessible field of study, thanks to the progress and availability of tracking technologies as a common feature of smart phones. We describe an example of a scalable experiment exploiting these circumstances at a public, outdoor fair in Barcelona (Spain). Participants were tracked while wandering through an open space with activity stands attracting their attention. We develop a general modelling framework based on Langevin dynamics, which allows us to test the influence of two distinct types of ingredients on mobility: reactive or context-dependent factors, modelled by means of a force field generated by attraction points in a given spatial configuration and active or inherent factors, modelled from intrinsic movement patterns of the subjects. The additive and constructive framework model accounts for some observed features. Starting with the simplest model (purely random walkers) as a reference, we progressively introduce different ingredients such as persistence, memory and perceptual landscape, aiming to untangle active and reactive contributions and quantify their respective relevance. The proposed approach may help in anticipating the spatial distribution of citizens in alternative scenarios and in improving the design of public events based on a facts-based approach.
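A minimal Langevin-style walker of the kind used as the modelling baseline above can be sketched as follows: damped inertia (persistence), white noise (the active component), and a force field pulling towards attraction points (the reactive component). All coefficients and the single "stand" location are illustrative, not fitted values from the study:

```python
import math
import random

def step(pos, vel, attractors, dt=0.1, gamma=1.0, k=0.5, noise=0.3, rng=random):
    """One Euler step of a damped, noisy walker attracted to fixed points."""
    fx = fy = 0.0
    for ax, ay in attractors:                 # attraction-point force field
        fx += k * (ax - pos[0])
        fy += k * (ay - pos[1])
    vx = vel[0] + dt * (-gamma * vel[0] + fx) + noise * math.sqrt(dt) * rng.gauss(0, 1)
    vy = vel[1] + dt * (-gamma * vel[1] + fy) + noise * math.sqrt(dt) * rng.gauss(0, 1)
    return (pos[0] + dt * vx, pos[1] + dt * vy), (vx, vy)

pos, vel = (0.0, 0.0), (0.0, 0.0)
stand = [(5.0, 5.0)]                          # one activity stand
for _ in range(500):
    pos, vel = step(pos, vel, stand)
print(pos)   # the walker drifts toward the stand and fluctuates around it
```

Dropping the force term recovers the purely random (persistent) walker used as the reference model; adding memory or a perceptual landscape corresponds to the further ingredients the abstract mentions.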

  3. Active and reactive behaviour in human mobility: the influence of attraction points on pedestrians

    PubMed Central

    Sagarra, O.; Oltra, A.; Palmer, J. R. B.; Bartumeus, F.; Díaz-Guilera, A.; Perelló, J.

    2016-01-01

    Human mobility is becoming an accessible field of study, thanks to the progress and availability of tracking technologies as a common feature of smart phones. We describe an example of a scalable experiment exploiting these circumstances at a public, outdoor fair in Barcelona (Spain). Participants were tracked while wandering through an open space with activity stands attracting their attention. We develop a general modelling framework based on Langevin dynamics, which allows us to test the influence of two distinct types of ingredients on mobility: reactive or context-dependent factors, modelled by means of a force field generated by attraction points in a given spatial configuration and active or inherent factors, modelled from intrinsic movement patterns of the subjects. The additive and constructive framework model accounts for some observed features. Starting with the simplest model (purely random walkers) as a reference, we progressively introduce different ingredients such as persistence, memory and perceptual landscape, aiming to untangle active and reactive contributions and quantify their respective relevance. The proposed approach may help in anticipating the spatial distribution of citizens in alternative scenarios and in improving the design of public events based on a facts-based approach. PMID:27493774

  4. Speckless head-up display on two spatial light modulators

    NASA Astrophysics Data System (ADS)

    Siemion, Andrzej; Ducin, Izabela; Kakarenko, Karol; Makowski, Michał; Siemion, Agnieszka; Suszek, Jarosław; Sypek, Maciej; Wojnowski, Dariusz; Jaroszewicz, Zbigniew; Kołodziejczyk, Andrzej

    2010-12-01

    There is a continuous demand for computer generated holograms that give an almost perfect reconstruction at a reasonable manufacturing cost. One method of improving the image quality is to illuminate a Fourier hologram with a quasi-random, but well known, light field phase distribution. It can be achieved with a lithographically produced phase mask. To date, however, the lithographic technique has been relatively complex, time-consuming, and expensive, which is why we have decided to use two Spatial Light Modulators (SLMs). For correctly adjusted light polarization, an SLM acts as a pure phase modulator with 256 adjustable phase levels between 0 and 2π. The two modulators give us the opportunity to use the whole surface of the device and to reduce the size of the experimental system. An optical system with one SLM can also be used, but it requires dividing the active surface into halves (one for the Fourier hologram and the other for the quasi-random diffuser), which implies a more complicated optical setup. A larger surface allows three Fourier holograms to be displayed, one for each primary colour: red, green and blue. This makes it possible to reconstruct almost noiseless colourful dynamic images. In this work we present the results of numerical simulations of image reconstructions with the use of two SLM displays.

  5. Methodological Caveats in the Detection of Coordinated Replay between Place Cells and Grid Cells.

    PubMed

    Trimper, John B; Trettel, Sean G; Hwaun, Ernie; Colgin, Laura Lee

    2017-01-01

    At rest, hippocampal "place cells," neurons with receptive fields corresponding to specific spatial locations, reactivate in a manner that reflects recently traveled trajectories. These "replay" events have been proposed as a mechanism underlying memory consolidation, or the transfer of a memory representation from the hippocampus to neocortical regions associated with the original sensory experience. Accordingly, it has been hypothesized that hippocampal replay of a particular experience should be accompanied by simultaneous reactivation of corresponding representations in the neocortex and in the entorhinal cortex, the primary interface between the hippocampus and the neocortex. Recent studies have reported that coordinated replay may occur between hippocampal place cells and medial entorhinal cortex grid cells, cells with multiple spatial receptive fields. Assessing replay in grid cells is problematic, however, as the cells exhibit regularly spaced spatial receptive fields in all environments and, therefore, coordinated replay between place cells and grid cells may be detected by chance. In the present report, we adapted analytical approaches utilized in recent studies of grid cell and place cell replay to determine the extent to which coordinated replay is spuriously detected between grid cells and place cells recorded from separate rats. For a subset of the employed analytical methods, coordinated replay was detected spuriously in a significant proportion of cases in which place cell replay events were randomly matched with grid cell firing epochs of equal duration. More rigorous replay evaluation procedures and minimum spike count requirements greatly reduced the amount of spurious findings. These results provide insights into aspects of place cell and grid cell activity during rest that contribute to false detection of coordinated replay. The results further emphasize the need for careful controls and rigorous methods when testing the hypothesis that place cells and grid cells exhibit coordinated replay.

  6. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
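The recursive update at the heart of the Kalman filter, and its ability to track abrupt baseline shifts of the kind the paper attributes to subject motion, can be shown with a scalar random-walk state model. The state model and noise levels below are illustrative, not those used in the paper:

```python
def kalman_track(measurements, q=0.1, r=1.0):
    """Scalar Kalman filter for a random-walk state:
    x_t = x_{t-1} + w  (process noise variance q),
    z_t = x_t + v      (measurement noise variance r).
    Returns the sequence of state estimates."""
    x, p = 0.0, 1.0                 # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                      # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the innovation z - x
        p *= (1 - k)
        estimates.append(x)
    return estimates

# A baseline that jumps abruptly mid-run (e.g. subject motion):
zs = [0.0] * 50 + [10.0] * 50
est = kalman_track(zs)
print(est[49], est[-1])   # near 0 before the jump, near 10 after it
```

Because each update uses only the previous estimate and the new measurement, the cost per time point is constant, which is what makes the filter suitable for the real-time analysis described above; an adaptive variant additionally inflates q when the innovations grow, speeding recovery after a jump.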

  7. Direct observation of turbulent magnetic fields in hot, dense laser produced plasmas

    PubMed Central

    Mondal, Sudipta; Narayanan, V.; Ding, Wen Jun; Lad, Amit D.; Hao, Biao; Ahmad, Saima; Wang, Wei Min; Sheng, Zheng Ming; Sengupta, Sudip; Kaw, Predhiman; Das, Amita; Kumar, G. Ravindra

    2012-01-01

    Turbulence in fluids is a ubiquitous, fascinating, and complex natural phenomenon that is not yet fully understood. Unraveling turbulence in high density, high temperature plasmas is an even bigger challenge because of the importance of electromagnetic forces and the typically violent environments. Fascinating and novel behavior of hot dense matter has so far been only indirectly inferred because of the enormous difficulties of making observations on such matter. Here, we present direct evidence of turbulence in giant magnetic fields created in an overdense, hot plasma by relativistic intensity (10¹⁸ W/cm²) femtosecond laser pulses. We have obtained magneto-optic polarigrams at femtosecond time intervals, simultaneously with micrometer spatial resolution. The spatial profiles of the magnetic field show randomness and their k spectra exhibit a power law along with certain well defined peaks at scales shorter than skin depth. Detailed two-dimensional particle-in-cell simulations delineate the underlying interaction between forward currents of relativistic energy “hot” electrons created by the laser pulse and “cold” return currents of thermal electrons induced in the target. Our results are not only fundamentally interesting but should also arouse interest on the role of magnetic turbulence induced resistivity in the context of fast ignition of laser fusion, and the possibility of experimentally simulating such structures with respect to the sun and other stellar environments. PMID:22566660

  8. Direct observation of turbulent magnetic fields in hot, dense laser produced plasmas.

    PubMed

    Mondal, Sudipta; Narayanan, V; Ding, Wen Jun; Lad, Amit D; Hao, Biao; Ahmad, Saima; Wang, Wei Min; Sheng, Zheng Ming; Sengupta, Sudip; Kaw, Predhiman; Das, Amita; Kumar, G Ravindra

    2012-05-22

    Turbulence in fluids is a ubiquitous, fascinating, and complex natural phenomenon that is not yet fully understood. Unraveling turbulence in high density, high temperature plasmas is an even bigger challenge because of the importance of electromagnetic forces and the typically violent environments. Fascinating and novel behavior of hot dense matter has so far been only indirectly inferred because of the enormous difficulties of making observations on such matter. Here, we present direct evidence of turbulence in giant magnetic fields created in an overdense, hot plasma by relativistic intensity (10¹⁸ W/cm²) femtosecond laser pulses. We have obtained magneto-optic polarigrams at femtosecond time intervals, simultaneously with micrometer spatial resolution. The spatial profiles of the magnetic field show randomness and their k spectra exhibit a power law along with certain well defined peaks at scales shorter than skin depth. Detailed two-dimensional particle-in-cell simulations delineate the underlying interaction between forward currents of relativistic energy "hot" electrons created by the laser pulse and "cold" return currents of thermal electrons induced in the target. Our results are not only fundamentally interesting but should also arouse interest on the role of magnetic turbulence induced resistivity in the context of fast ignition of laser fusion, and the possibility of experimentally simulating such structures with respect to the sun and other stellar environments.

  9. Unveiling the Role of the Magnetic Field at the Smallest Scales of Star Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, Charles L. H.; Mocz, Philip; Burkhart, Blakesley

    We report Atacama Large Millimeter/submillimeter Array (ALMA) observations of polarized dust emission from the protostellar source Ser-emb 8 at a linear resolution of 140 au. Assuming models of dust-grain alignment hold, the observed polarization pattern gives a projected view of the magnetic field structure in this source. Contrary to expectations based on models of strongly magnetized star formation, the magnetic field in Ser-emb 8 does not exhibit an hourglass morphology. Combining the new ALMA data with previous observational studies, we can connect magnetic field structure from protostellar core (∼80,000 au) to disk (∼100 au) scales. We compare our observations with four magnetohydrodynamic gravo-turbulence simulations made with the AREPO code that have initial conditions ranging from super-Alfvénic (weakly magnetized) to sub-Alfvénic (strongly magnetized). These simulations achieve the spatial dynamic range necessary to resolve the collapse of protostars from the parsec scale of star-forming clouds down to the ∼100 au scale probed by ALMA. Only in the very strongly magnetized simulation do we see both the preservation of the field direction from cloud to disk scales and an hourglass-shaped field at <1000 au scales. We conduct an analysis of the relative orientation of the magnetic field and the density structure in both the Ser-emb 8 ALMA observations and the synthetic observations of the four AREPO simulations. We conclude that the Ser-emb 8 data are most similar to the weakly magnetized simulations, which exhibit random alignment, in contrast to the strongly magnetized simulation, where the magnetic field plays a role in shaping the density structure in the source. In the weak-field case, it is turbulence—not the magnetic field—that shapes the material that forms the protostar, highlighting the dominant role that turbulence can play across many orders of magnitude in spatial scale.

  10. Impurity Induced Phase Competition and Supersolidity

    NASA Astrophysics Data System (ADS)

    Karmakar, Madhuparna; Ganesh, R.

    2017-12-01

    Several material families show competition between superconductivity and other orders. When such competition is driven by doping, it invariably involves spatial inhomogeneities which can seed competing orders. We study impurity-induced charge order in the attractive Hubbard model, a prototypical model for competition between superconductivity and charge density wave (CDW) order. We show that a single impurity induces a charge-ordered texture over a length scale set by the energy cost of the competing phase. Our results are consistent with a strong-coupling field theory proposed earlier in which superconducting and charge order parameters form components of an SO(3) vector field. To discuss the effects of multiple impurities, we focus on two cases: correlated and random distributions. In the correlated case, the CDW puddles around each impurity overlap coherently leading to a "supersolid" phase with coexisting pairing and charge order. In contrast, a random distribution of impurities does not lead to coherent CDW formation. We argue that the energy lowering from coherent ordering can have a feedback effect, driving correlations between impurities. This can be understood as arising from an RKKY-like interaction, mediated by impurity textures. We discuss implications for charge order in the cuprates and doped CDW materials such as NbSe2.

  11. Background fluorescence estimation and vesicle segmentation in live cell imaging with conditional random fields.

    PubMed

    Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles

    2015-02-01

    Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology since it makes it possible to characterize biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Furthermore, segmentation of vesicles and background estimation are performed alternately by energy minimization using a min cut-max flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images. This approach permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with the state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.
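
    The CRF energy-minimization step described above can be sketched in miniature. The fragment below is a hedged illustration, not the paper's C-CRAFT implementation: it replaces the min cut-max flow solver with Iterated Conditional Modes (ICM), a simpler local optimizer, and all intensities, model values, and parameter names (`bg`, `fg`, `beta`) are hypothetical.

```python
import numpy as np

def icm_segment(image, bg, fg, beta=1.0, n_iter=10):
    """Toy CRF-style binary segmentation via Iterated Conditional Modes.

    Unary term: squared distance of each pixel to a background (bg) or
    foreground (fg) intensity model.  Pairwise term: Potts smoothness
    penalty of strength beta between 4-connected neighbours.
    (The paper uses min cut-max flow; ICM is a simpler local optimizer.)
    """
    labels = (np.abs(image - fg) < np.abs(image - bg)).astype(int)
    H, W = image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                costs = []
                for lab, model in ((0, bg), (1, fg)):
                    unary = (image[i, j] - model) ** 2
                    pair = 0.0
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            pair += beta * (labels[ni, nj] != lab)
                    costs.append(unary + pair)
                labels[i, j] = int(np.argmin(costs))
    return labels

# Bright "vesicle" blob on a dark noisy background (synthetic data).
rng = np.random.default_rng(0)
img = np.zeros((12, 12)) + rng.normal(0, 0.1, (12, 12))
img[4:8, 4:8] += 1.0
seg = icm_segment(img, bg=0.0, fg=1.0, beta=0.3)
```

    The smoothness weight `beta` plays the role of the pairwise CRF potential: larger values suppress isolated mislabeled pixels at the cost of rounding object boundaries.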

  12. Evaluation of high-resolution sea ice models on the basis of statistical and scaling properties of Arctic sea ice drift and deformation

    NASA Astrophysics Data System (ADS)

    Girard, L.; Weiss, J.; Molines, J. M.; Barnier, B.; Bouillon, S.

    2009-08-01

    Sea ice drift and deformation from models are evaluated on the basis of statistical and scaling properties. These properties are derived from two observation data sets: the RADARSAT Geophysical Processor System (RGPS) and buoy trajectories from the International Arctic Buoy Program (IABP). Two simulations obtained with the Louvain-la-Neuve Ice Model (LIM) coupled to a high-resolution ocean model and a simulation obtained with the Los Alamos Sea Ice Model (CICE) were analyzed. Model ice drift compares well with observations in terms of large-scale velocity field and distributions of velocity fluctuations although a significant bias in the mean ice speed is noted. On the other hand, the statistical properties of ice deformation are not well simulated by the models: (1) The distributions of strain rates are incorrect: RGPS distributions of strain rates are power law tailed, i.e., exhibit "wild randomness," whereas model distributions remain in the Gaussian attraction basin, i.e., exhibit "mild randomness." (2) The models are unable to reproduce the spatial and temporal correlations of the deformation fields: In the observations, ice deformation follows spatial and temporal scaling laws that express the heterogeneity and the intermittency of deformation. These relations do not appear in simulated ice deformation. Mean deformation in models is almost scale independent. The statistical properties of ice deformation are a signature of the ice mechanical behavior. The present work therefore suggests that the mechanical framework currently used by models is inappropriate. A different modeling framework based on elastic interactions could improve the representation of the statistical and scaling properties of ice deformation.
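
    The contrast drawn above between power-law-tailed ("wild") and Gaussian ("mild") strain-rate distributions can be illustrated numerically. The sketch below compares purely synthetic samples, not RGPS or model output; the Pareto exponent is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# "Mild" randomness: Gaussian fluctuation magnitudes (model-like behaviour).
gauss = np.abs(rng.normal(0.0, 1.0, n))
# "Wild" randomness: Pareto (power-law) tail (RGPS-like behaviour).
pareto = rng.pareto(2.5, n) + 1.0

def tail_fraction(x, k):
    """Fraction of samples exceeding the mean by k standard deviations."""
    return np.mean(x > x.mean() + k * x.std())

# Power-law tails put far more probability mass on extreme events.
print(tail_fraction(gauss, 5), tail_fraction(pareto, 5))
```

    The same diagnostic applied to observed versus simulated strain rates would quantify how strongly the models underestimate extreme deformation events.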

  13. Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features.

    PubMed

    Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J

    2014-03-01

    Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results in larger data sets, and the methods based on generative or discriminative models have intrinsic limitations during application, such as small sample set learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images are segmented into superpixels using algorithms to alleviate the sampling issue and to improve the sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on the features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random fields theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture notes in computer science. vol 7512, pp 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates comparable performance with other state-of-the-art algorithms.

  14. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.

  15. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
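
    The idea of adding a spatial random effect to a regression can be sketched minimally: fit ordinary least squares on a covariate, then krige the residuals with an exponential covariance. This stands in for the full Bayesian hierarchical model described above; the plot locations, LiDAR covariate, and covariance parameters below are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plot locations, a LiDAR-derived covariate, and biomass.
coords = rng.uniform(0, 10, (80, 2))
lidar_height = rng.uniform(5, 30, 80)
spatial_effect = np.sin(coords[:, 0]) + np.cos(coords[:, 1])  # unknown truth
biomass = 2.0 + 1.5 * lidar_height + spatial_effect + rng.normal(0, 0.1, 80)

# Step 1: ordinary least squares on the covariate alone.
X = np.column_stack([np.ones(80), lidar_height])
beta, *_ = np.linalg.lstsq(X, biomass, rcond=None)
resid = biomass - X @ beta

# Step 2: simple kriging of the residuals with an exponential covariance,
# a stand-in for the model's spatially varying random intercept.
def exp_cov(a, b, sigma2=1.0, phi=1.0):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return sigma2 * np.exp(-d / phi)

tau2 = 0.01  # nugget variance
K = exp_cov(coords, coords) + tau2 * np.eye(80)
new = np.array([[5.0, 5.0]])
k = exp_cov(new, coords)
w_hat = (k @ np.linalg.solve(K, resid))[0]
pred = beta[0] + beta[1] * 20.0 + w_hat  # prediction at a new site, height 20
```

    In the full hierarchical model the covariance parameters would be given priors and sampled rather than fixed, and the slope could also be allowed to vary spatially.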

  16. Evaluating uncertainty in predicting spatially variable representative elementary scales in fractured aquifers, with application to Turkey Creek Basin, Colorado

    USGS Publications Warehouse

    Wellman, Tristan P.; Poeter, Eileen P.

    2006-01-01

    Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
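
    The nonparametric bootstrap component of CRWN can be illustrated in isolation. The sketch below computes a percentile confidence interval for a statistic from resamples; the data and statistic are hypothetical, and the conditioned-random-walk part of the method is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sparse fracture-aperture measurements along a borehole.
data = rng.lognormal(mean=0.0, sigma=0.5, size=30)

def bootstrap_ci(sample, stat, n_boot=5000, alpha=0.05, rng=rng):
    """Nonparametric bootstrap percentile interval for a statistic."""
    stats = np.array([
        stat(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci(data, np.mean)
```

    Applied to RES estimates, the width of such intervals is what lets the study claim that RES magnitudes are "reasonably bounded" even under sparse data.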

  17. Geo-additive modelling of malaria in Burundi

    PubMed Central

    2011-01-01

    Background Malaria is a major public health issue in Burundi in terms of both morbidity and mortality, with around 2.5 million clinical cases and more than 15,000 deaths each year. It is still the single main cause of mortality in pregnant women and children below five years of age. Because of the severe health and economic burden of malaria, there is still a growing need for methods that will help to understand the influencing factors. Several studies have been carried out on the subject, yielding different results as to which factors are most responsible for the increase in malaria transmission. This paper considers the modelling of the dependence of malaria cases on spatial determinants and climatic covariates including rainfall, temperature and humidity in Burundi. Methods The analysis carried out in this work exploits real monthly data collected in the area of Burundi over 12 years (1996-2007). Semi-parametric regression models are used. The spatial analysis is based on a geo-additive model using provinces as the geographic units of study. The spatial effect is split into structured (correlated) and unstructured (uncorrelated) components. Inference is fully Bayesian and uses Markov chain Monte Carlo techniques. The effects of the continuous covariates are modelled by cubic p-splines with 20 equidistant knots and a second-order random walk penalty. For the spatially correlated effect, a Markov random field prior is chosen. The spatially uncorrelated effects are assumed to be i.i.d. Gaussian. The effects of climatic covariates and the effects of other spatial determinants are estimated simultaneously in a unified regression framework.
Results The results obtained from the proposed model suggest that although malaria incidence in a given month is strongly positively associated with the minimum temperature of the previous months, regional patterns of malaria that are related to factors other than climatic variables have been identified, although the model cannot explain them. Conclusions In this paper, semiparametric models are used to model the effects of both climatic covariates and spatial effects on malaria distribution in Burundi. The results obtained from the proposed models suggest a strong positive association between malaria incidence in a given month and the minimum temperature of the previous month. From the spatial effects, important spatial patterns of malaria that are related to factors other than climatic variables are identified. Potential explanations (factors) could be related to socio-economic conditions, food shortage, limited access to health care services, precarious housing, promiscuity, poor hygienic conditions, limited access to drinking water, land use (rice paddies for example), displacement of the population (due to armed conflicts). PMID:21835010
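
    The penalized-spline smoother with a second-order random-walk penalty mentioned in the Methods can be sketched as a penalized basis regression. The fragment below is a hedged toy version: it uses Gaussian bump basis functions rather than cubic B-splines, and the data, knot count, and smoothing parameter are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy covariate-response pair (e.g. scaled temperature vs incidence).
x = np.linspace(0, 1, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# P-spline-style smoother: equidistant bump basis + second-order
# (random-walk) difference penalty on the basis coefficients.
knots = np.linspace(0, 1, 20)
B = np.exp(-0.5 * ((x[:, None] - knots[None, :]) / 0.05) ** 2)
D = np.diff(np.eye(knots.size), n=2, axis=0)   # second-difference matrix
lam = 1.0                                      # smoothing parameter
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

    In the fully Bayesian version, the difference penalty corresponds to a second-order random-walk prior on the coefficients and `lam` to a ratio of variance hyperparameters sampled by MCMC.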

  18. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    PubMed

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  19. Modeling Forest Biomass and Growth: Coupling Long-Term Inventory and Lidar Data

    NASA Technical Reports Server (NTRS)

    Babcock, Chad; Finley, Andrew O.; Cook, Bruce D.; Weiskittel, Andrew; Woodall, Christopher W.

    2016-01-01

    Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB growth using LiDAR data. The proposed model accommodates temporal misalignment between field measurements and remotely sensed data, a problem pervasive in such settings, by including multiple time-indexed measurements at plot locations to estimate AGB growth. We pursue a Bayesian modeling framework that allows for appropriately complex parameter associations and uncertainty propagation through to prediction. Specifically, we identify a space-varying coefficients model to predict and map AGB and its associated growth simultaneously. The proposed model is assessed using LiDAR data acquired from NASA Goddard's LiDAR, Hyper-spectral & Thermal imager and field inventory data from the Penobscot Experimental Forest in Bradley, Maine. The proposed model outperformed the time-invariant counterpart models in predictive performance as indicated by a substantial reduction in root mean squared error. The proposed model adequately accounts for temporal misalignment through the estimation of forest AGB growth and accommodates residual spatial dependence. Results from this analysis suggest that future AGB models informed using remotely sensed data, such as LiDAR, may be improved by adapting traditional modeling frameworks to account for temporal misalignment and spatial dependence using random effects.

  20. Spatial relationships between above-ground biomass and bird species biodiversity in Palawan, Philippines.

    PubMed

    Singh, Minerva; Friess, Daniel A; Vilela, Bruno; Alban, Jose Don T De; Monzon, Angelica Kristina V; Veridiano, Rizza Karen A; Tumaneng, Roven D

    2017-01-01

    This study maps the distribution and spatial congruence between Above-Ground Biomass (AGB) and species richness of IUCN-listed conservation-dependent and endemic avian fauna in Palawan, Philippines. Grey Level Co-Occurrence Texture Matrices (GLCMs) extracted from Landsat and ALOS-PALSAR were used in conjunction with local field data to model and map local-scale field AGB using the Random Forest algorithm (r = 0.92 and RMSE = 31.33 Mg·ha⁻¹). A support vector regression (SVR) model was used to identify the factors influencing variation in avian species richness at a 1-km scale. AGB is one of the most important determinants of avian species richness for the study area. Topographic factors and anthropogenic factors such as distance from the roads were also found to strongly influence avian species richness. Hotspots of high AGB and high species richness concentration were mapped using hotspot analysis and the overlaps between areas of high AGB and avian species richness were calculated. Results show that the overlaps of high AGB with high IUCN red-listed avian species richness and with endemic avian species richness were fairly limited, at 13% and 8%, at the 1-km scale. The overlaps between 1) low AGB and low IUCN richness, and 2) low AGB and low endemic avian species richness were higher, at 36% and 12% respectively. The enhanced capacity to spatially map the correlation between AGB and avian species richness distribution will further assist the conservation and protection of forest areas and threatened avian species.
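
    A minimal sketch of Random Forest regression for AGB from texture predictors, assuming scikit-learn is available. This is not the study's GLCM pipeline: the features, coefficients, and sample sizes below are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Hypothetical GLCM-style texture features (e.g. contrast, entropy, mean)
# as predictors of field-measured AGB (Mg/ha); all values synthetic.
n = 300
texture = rng.uniform(0, 1, (n, 3))
agb = 100 * texture[:, 0] + 50 * texture[:, 1] ** 2 + rng.normal(0, 5, n)

# Fit on 200 plots, evaluate on the remaining 100.
model = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
model.fit(texture[:200], agb[:200])
pred = model.predict(texture[200:])
rmse = float(np.sqrt(np.mean((pred - agb[200:]) ** 2)))
```

    The out-of-bag score (`model.oob_score_`) gives an internal estimate of fit without a separate validation set, which is one reason Random Forests are popular for plot-sparse biomass mapping.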

  1. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating-point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
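
    The NNGP idea, conditioning each location only on a small set of nearest neighbours so that no n x n solve is ever needed, can be sketched in one dimension, where the exponential covariance is Markovian and the approximation recovers the exact Gaussian log-determinant. The locations and covariance parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical 1-D locations; exponential covariance (Markovian in 1-D).
s = np.sort(rng.uniform(0, 100, 200))

def cov(a, b, sigma2=1.0, phi=10.0):
    return sigma2 * np.exp(-np.abs(a[:, None] - b[None, :]) / phi)

def nngp_logdet(s, m=10):
    """Log-determinant of the covariance under the NNGP factorization.

    Each location is conditioned only on its m nearest preceding
    neighbours, so every factor needs at most an m x m solve instead
    of one n x n computation.
    """
    logdet = 0.0
    for i in range(len(s)):
        nb = np.arange(max(0, i - m), i)            # previous m points
        c_ii = cov(s[i:i + 1], s[i:i + 1])[0, 0]
        if nb.size:
            C_nb = cov(s[nb], s[nb])
            c = cov(s[nb], s[i:i + 1])[:, 0]
            c_ii -= c @ np.linalg.solve(C_nb, c)    # conditional variance
        logdet += np.log(c_ii)
    return logdet

exact = np.linalg.slogdet(cov(s, s))[1]             # full n x n log-det
approx = nngp_logdet(s)
```

    In two dimensions the neighbour sets are chosen by distance rather than ordering index, and the same factorization yields the sparse precision matrix the abstract refers to.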

  2. Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies

    NASA Astrophysics Data System (ADS)

    Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu

    2015-09-01

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.

  3. A tesselated probabilistic representation for spatial robot perception and navigation

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto

    1989-01-01

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation are illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
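
    The incremental Bayesian updating of cell occupancy described above is commonly carried out in log-odds form; the sketch below fuses three range readings into a single cell. The inverse sensor probability (0.7) is a hypothetical value, not taken from the paper.

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update(cell_logodds, p_obs):
    """Fuse one reading into a cell (uniform prior of 0.5 assumed)."""
    return cell_logodds + logodds(p_obs)

def prob(cell_logodds):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-cell_logodds))

cell = 0.0                      # unknown: P(occupied) = 0.5
for _ in range(3):              # three consistent "occupied" readings
    cell = update(cell, 0.7)    # sensor says occupied with p = 0.7
p_occ = prob(cell)
```

    Working in log-odds turns the Bayesian product of evidence into a sum, which is why multi-sensor, multi-viewpoint fusion over a whole grid stays cheap.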

  4. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image

    PubMed Central

    Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping

    2016-01-01

    Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach. PMID:26907289

  5. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating-point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.

  6. Effects of Heterogeneity and Uncertainties in Sources and Initial and Boundary Conditions on Spatiotemporal Variations of Groundwater Levels

    NASA Astrophysics Data System (ADS)

    Zhang, Y. K.; Liang, X.

    2014-12-01

    Effects of aquifer heterogeneity and uncertainties in source/sink, and initial and boundary conditions in a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) in early time is mainly caused by the random initial condition, and the error decreases over time, approaching a constant value at later times. The duration during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited in a narrow zone near the boundary but it remains more or less constant over time. The effect of the heterogeneity is to increase the variation of groundwater level and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with temporal interval and spatial distance. In addition, the heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.
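
    A toy Monte Carlo analogue of the behaviour described above: a linear reservoir driven by white-noise recharge, starting from an uncertain initial condition whose influence decays toward a constant late-time variance. All parameters are synthetic; this is not the paper's analytical solution.

```python
import numpy as np

rng = np.random.default_rng(9)

# Linear-reservoir analogue: dh = -a*h*dt + sigma*dW (white-noise recharge).
a, sigma, dt, n_steps, n_real = 0.05, 1.0, 0.1, 4000, 2000
h = rng.normal(0.0, 6.0, n_real)          # uncertain initial condition
var = []
for _ in range(n_steps):
    h = h - a * h * dt + sigma * np.sqrt(dt) * rng.normal(0.0, 1.0, n_real)
    var.append(h.var())

early = var[10]                            # still dominated by initial error
late = float(np.mean(var[-200:]))          # settled, constant error
stationary = sigma ** 2 / (2 * a)          # theoretical late-time variance
```

    The ensemble variance decays from the initial-condition value toward sigma^2/(2a), mirroring the early-time-versus-late-time error structure the analytical solutions predict.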

  7. Molecular-dynamics simulation of polymethylene chain confined in cylindrical potentials. I. Nature of the conformational defects

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takashi; Kimikawa, Yuichi

    1992-10-01

    The conformational motion of a polymethylene molecule constrained by a cylindrical potential is simulated up to 100 ps. The molecule consists of 60 CH2 groups and has variable bond lengths, bond angles, and dihedral angles. Our main concern here is the excitation and the dynamics of the conformational defects: kinks, jogs, etc. Under weaker constraint a number of gauche bonds are excited; they mostly form pairs such as gtḡ kinks or gtttḡ jogs. These conformational defects show no continuous drift in space. Instead they often annihilate and then recreate at different sites showing apparently random positional changes. The conformational defects produce characteristic strain fields around them. It seems that the conformational defects interact attractively through these strain fields. This is evidenced by remarkably correlated spatial distributions of the gauche bonds.

  8. Detecting spatio-temporal changes in agricultural land use in Heilongjiang province, China using MODIS time-series data and a random forest regression model

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Friedl, M. A.; Wu, W.

    2017-12-01

    Accurate and timely information regarding the spatial distribution of crop types and their changes is essential for acreage surveys, yield estimation, water management, and agricultural production decision-making. In recent years, increasing population, dietary shifts and climate change have driven drastic changes in China's agricultural land use. However, no maps are currently available that document the spatial and temporal patterns of these agricultural land use changes. Because of their short revisit period, rich spectral bands and global coverage, MODIS time-series data have been shown to have great potential for detecting the seasonal dynamics of different crop types. However, their inherently coarse spatial resolution limits the accuracy with which crops can be identified from MODIS in regions with small fields or complex agricultural landscapes. To evaluate this more carefully and specifically understand the strengths and weaknesses of MODIS data for crop-type mapping, we used MODIS time-series imagery to map the sub-pixel fractional crop area for four major crop types (rice, corn, soybean and wheat) at 500-m spatial resolution for Heilongjiang province, one of the most important grain-production regions in China, where recent agricultural land use change has been rapid and pronounced. To do this, a random forest regression (RF-g) model was constructed to estimate the percentage of each sub-pixel crop type in 2006, 2011 and 2016. Crop type maps generated through expert visual interpretation of high spatial resolution images (i.e., Landsat and SPOT data) were used to calibrate the regression model. Five different time series of vegetation indices (155 features) derived from different spectral channels of MODIS land surface reflectance (MOD09A1) data were used as candidate features for the RF-g model. An out-of-bag strategy and a backward elimination approach were applied to select the optimal spectral-temporal feature subset for each crop type.
The resulting crop maps were assessed in two ways: (1) wall-to-wall pixel comparison with corresponding high spatial resolution reference maps; and (2) county-level comparison with census data. Based on these derived maps, changes in crop type, total area, and spatial patterns of change in Heilongjiang province during 2006-2016 were analyzed.
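
The out-of-bag backward-elimination idea used for feature selection can be sketched with scikit-learn's RandomForestRegressor on synthetic data. This is an illustrative stand-in, not the authors' RF-g code; the feature counts, tolerance, and synthetic fractions are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_features_oob(X, y, min_features=3, tol=0.01, seed=0):
    """Backward elimination guided by out-of-bag R^2: repeatedly drop the
    least important feature while the OOB score does not fall by > tol."""
    keep = list(range(X.shape[1]))

    def fit(cols):
        rf = RandomForestRegressor(n_estimators=200, oob_score=True,
                                   random_state=seed)
        rf.fit(X[:, cols], y)
        return rf

    rf = fit(keep)
    while len(keep) > min_features:
        worst = keep[int(np.argmin(rf.feature_importances_))]
        trial = [c for c in keep if c != worst]
        rf_trial = fit(trial)
        if rf_trial.oob_score_ < rf.oob_score_ - tol:
            break                      # dropping this feature hurts: stop
        keep, rf = trial, rf_trial
    return keep, rf

# Synthetic stand-in for per-pixel vegetation-index features and a
# sub-pixel crop fraction target in [0, 1]:
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = np.clip(0.5 + 0.3 * X[:, 0] - 0.2 * X[:, 1], 0.0, 1.0)
cols, model = select_features_oob(X, y)
pred = model.predict(X[:, cols])
```

Because a regression forest predicts averages of training targets, the estimated fractions stay within the range of the calibration fractions, which is convenient for sub-pixel area mapping.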

  9. Three-Dimensional Bayesian Geostatistical Aquifer Characterization at the Hanford 300 Area using Tracer Test Data

    NASA Astrophysics Data System (ADS)

    Chen, X.; Murakami, H.; Hahn, M. S.; Hammond, G. E.; Rockhold, M. L.; Rubin, Y.

    2010-12-01

    Tracer testing under natural or forced gradient flow provides useful information for characterizing subsurface properties, by monitoring and modeling the tracer plume migration in a heterogeneous aquifer. At the Hanford 300 Area, non-reactive tracer experiments, in addition to constant-rate injection tests and electromagnetic borehole flowmeter (EBF) profiling, were conducted to characterize the heterogeneous hydraulic conductivity field. A Bayesian data assimilation technique, method of anchored distributions (MAD), is applied to assimilate the experimental tracer test data and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the prior information of the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using the constant-rate injection tests and the EBF data. The posterior distribution of the random field is obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. The parallel three-dimensional flow and transport code PFLOTRAN is implemented to cope with the highly transient flow boundary conditions at the site and to meet the computational demand of the proposed method. The validation results show that the field conditioned on the tracer test data better reproduces the tracer transport behavior compared to the field characterized previously without the tracer test data. A synthetic study proves that the proposed method can effectively assimilate tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. These characterization results will improve conceptual models developed for the site, including reactive transport models. The study successfully demonstrates the capability of MAD to assimilate multi-scale multi-type field data within a consistent Bayesian framework. 
The MAD framework can potentially be applied to combine geophysical data with other types of data in site characterization.

  10. A micromechanical approach for homogenization of elastic metamaterials with dynamic microstructure.

    PubMed

    Muhlestein, Michael B; Haberman, Michael R

    2016-08-01

    An approximate homogenization technique is presented for generally anisotropic elastic metamaterials consisting of an elastic host material containing randomly distributed heterogeneities displaying frequency-dependent material properties. The dynamic response may arise from relaxation processes such as viscoelasticity or from dynamic microstructure. A Green's function approach is used to model elastic inhomogeneities embedded within a uniform elastic matrix as force sources that are excited by a time-varying, spatially uniform displacement field. Assuming dynamic subwavelength inhomogeneities only interact through their volume-averaged fields implies the macroscopic stress and momentum density fields are functions of both the microscopic strain and velocity fields, and may be related to the macroscopic strain and velocity fields through localization tensors. The macroscopic and microscopic fields are combined to yield a homogenization scheme that predicts the local effective stiffness, density and coupling tensors for an effective Willis-type constitutive equation. It is shown that when internal degrees of freedom of the inhomogeneities are present, Willis-type coupling becomes necessary on the macroscale. To demonstrate the utility of the homogenization technique, the effective properties of an isotropic elastic matrix material containing isotropic and anisotropic spherical inhomogeneities, isotropic spheroidal inhomogeneities and isotropic dynamic spherical inhomogeneities are presented and discussed.

  11. A micromechanical approach for homogenization of elastic metamaterials with dynamic microstructure

    PubMed Central

    Haberman, Michael R.

    2016-01-01

    An approximate homogenization technique is presented for generally anisotropic elastic metamaterials consisting of an elastic host material containing randomly distributed heterogeneities displaying frequency-dependent material properties. The dynamic response may arise from relaxation processes such as viscoelasticity or from dynamic microstructure. A Green's function approach is used to model elastic inhomogeneities embedded within a uniform elastic matrix as force sources that are excited by a time-varying, spatially uniform displacement field. Assuming dynamic subwavelength inhomogeneities only interact through their volume-averaged fields implies the macroscopic stress and momentum density fields are functions of both the microscopic strain and velocity fields, and may be related to the macroscopic strain and velocity fields through localization tensors. The macroscopic and microscopic fields are combined to yield a homogenization scheme that predicts the local effective stiffness, density and coupling tensors for an effective Willis-type constitutive equation. It is shown that when internal degrees of freedom of the inhomogeneities are present, Willis-type coupling becomes necessary on the macroscale. To demonstrate the utility of the homogenization technique, the effective properties of an isotropic elastic matrix material containing isotropic and anisotropic spherical inhomogeneities, isotropic spheroidal inhomogeneities and isotropic dynamic spherical inhomogeneities are presented and discussed. PMID:27616932

  12. A micromechanical approach for homogenization of elastic metamaterials with dynamic microstructure

    NASA Astrophysics Data System (ADS)

    Muhlestein, Michael B.; Haberman, Michael R.

    2016-08-01

    An approximate homogenization technique is presented for generally anisotropic elastic metamaterials consisting of an elastic host material containing randomly distributed heterogeneities displaying frequency-dependent material properties. The dynamic response may arise from relaxation processes such as viscoelasticity or from dynamic microstructure. A Green's function approach is used to model elastic inhomogeneities embedded within a uniform elastic matrix as force sources that are excited by a time-varying, spatially uniform displacement field. Assuming dynamic subwavelength inhomogeneities only interact through their volume-averaged fields implies the macroscopic stress and momentum density fields are functions of both the microscopic strain and velocity fields, and may be related to the macroscopic strain and velocity fields through localization tensors. The macroscopic and microscopic fields are combined to yield a homogenization scheme that predicts the local effective stiffness, density and coupling tensors for an effective Willis-type constitutive equation. It is shown that when internal degrees of freedom of the inhomogeneities are present, Willis-type coupling becomes necessary on the macroscale. To demonstrate the utility of the homogenization technique, the effective properties of an isotropic elastic matrix material containing isotropic and anisotropic spherical inhomogeneities, isotropic spheroidal inhomogeneities and isotropic dynamic spherical inhomogeneities are presented and discussed.

  13. High Field In Vivo 13C Magnetic Resonance Spectroscopy of Brain by Random Radiofrequency Heteronuclear Decoupling and Data Sampling

    NASA Astrophysics Data System (ADS)

    Li, Ningzhi; Li, Shizhe; Shen, Jun

    2017-06-01

    In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radio frequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectral range of 10-65 ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH = 125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under inhomogeneous B1 fields. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields (<= 4 Tesla (T)) to keep the local and averaged specific absorption rate (SAR) under the safety guidelines established by the International Electrotechnical Commission (IEC) and the US Food and Drug Administration (FDA). Alternatively, carboxylic/amide carbons are coupled to protons via weak long-range 1H-13C scalar couplings, which can be decoupled using low RF power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low power random RF heteronuclear decoupling was safely applied to human brain studies at 7T. Here, we review the two major decoupling methods and the carboxylic/amide 13C-MRS with low power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. 
Low RF power decoupling opens the possibility of performing in vivo 13C experiments of human brain at very high magnetic fields (such as 11.7T), where signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than lower fields.

  14. Quantifying the roles of random motility and directed motility using advection-diffusion theory for a 3T3 fibroblast cell migration assay stimulated with an electric field.

    PubMed

    Simpson, Matthew J; Lo, Kai-Yin; Sun, Yung-Shin

    2017-03-17

    Directed cell migration can be driven by a range of external stimuli, such as spatial gradients of chemical signals (chemotaxis), adhesion sites (haptotaxis), or temperature (thermotaxis). Continuum models of cell migration typically include a diffusion term to capture the undirected component of cell motility and an advection term to capture the directed component of cell motility. However, there is no consensus in the literature about the form that the advection term takes. Some theoretical studies suggest that the advection term ought to include receptor saturation effects, while others adopt a much simpler constant coefficient. One of the limitations of including receptor saturation effects is that it introduces several additional unknown parameters into the model. Therefore, a relevant research question is to investigate whether directed cell migration is best described by a simple constant tactic coefficient or a more complicated model incorporating saturation effects. We study directed cell migration using an experimental device in which the directed component of the cell motility is driven by a spatial gradient of electric potential, which is known as electrotaxis. The electric field (EF) is proportional to the spatial gradient of the electric potential. The spatial variation of electric potential across the experimental device is such that there are several subregions on the device in which the EF takes on different values that are approximately constant within those subregions. We use cell trajectory data to quantify the motion of 3T3 fibroblast cells at different locations on the device to examine how different values of the EF influence cell motility. The undirected (random) motility of the cells is quantified in terms of the cell diffusivity, D, and the directed motility is quantified in terms of a cell drift velocity, v. 
Estimates of D and v are obtained under four different EF conditions, which correspond to normal physiological conditions. Our results suggest that there is no anisotropy in D, and that D appears to be approximately independent of the EF and the electric potential. The drift velocity increases approximately linearly with the EF, suggesting that the simplest linear advection term, with no additional saturation parameters, provides a good explanation of these physiologically relevant data. We find that the simplest linear advection term in a continuum model of directed cell motility is sufficient to describe a range of different electrotaxis experiments for 3T3 fibroblast cells subject to normal physiological values of the electric field. This is useful information because alternative models that include saturation effects involve additional parameters that need to be estimated before a partial differential equation model can be applied to interpret or predict a cell migration experiment.
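
The moment estimators behind D and v can be illustrated on simulated drift-diffusion trajectories. This is a generic sketch of the estimation idea, not the paper's analysis pipeline; the values of D, v, and the time step are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 1-D drift-diffusion steps: dx ~ N(v*dt, 2*D*dt).
D_true, v_true, dt = 1.0, 2.0, 0.1
n_cells, n_steps = 100, 200
dx = rng.normal(v_true * dt, np.sqrt(2.0 * D_true * dt),
                size=(n_cells, n_steps))

# Moment estimators applied to the pooled displacements:
v_hat = dx.mean() / dt                 # drift velocity from the mean step
D_hat = dx.var(ddof=1) / (2.0 * dt)    # diffusivity from the step variance
```

The same two statistics, applied per EF subregion, separate the directed (advective) and undirected (diffusive) components of the motion.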

  15. A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.

    PubMed

    Lione, G; Gonthier, P

    2016-01-01

    The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. User-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; type I error < 0.05). A biological validation on the spatial distribution of spores of two fungal pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between the plantation density and the distribution of the infection of Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to the plantation density. The MDT could be used to analyze the spatial distribution of plant diseases both in agricultural and natural ecosystems.
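
The general idea of a mean-distance permutation test can be sketched in a few lines of numpy. This is a generic illustration of permutation testing on mean pairwise distance under random relabeling, not the published MDT software; the field layout and counts are invented for the demo:

```python
import numpy as np

def mean_distance_test(coords, labels, n_perm=999, rng=None):
    """One-sided permutation test: is the mean pairwise distance among
    diseased plants smaller than expected under random labeling?"""
    rng = np.random.default_rng(rng)

    def mean_dist(pts):
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
        iu = np.triu_indices(len(pts), k=1)
        return d[iu].mean()

    obs = mean_dist(coords[labels])
    k = int(labels.sum())
    count = 0
    for _ in range(n_perm):
        perm = rng.choice(len(coords), size=k, replace=False)
        if mean_dist(coords[perm]) <= obs:
            count += 1
    return (count + 1) / (n_perm + 1)   # permutation p-value

# Demo: 10 "diseased" plants clustered in one corner of a 1 x 1 field
rng = np.random.default_rng(3)
coords = rng.uniform(0, 1, size=(100, 2))
coords[:10] = rng.uniform(0.0, 0.1, size=(10, 2))   # tight cluster
labels = np.zeros(100, dtype=bool)
labels[:10] = True
p = mean_distance_test(coords, labels, n_perm=499, rng=4)
```

A small p-value indicates aggregation of the diseased plants; for randomly scattered infections (as found for G. castanea) the test would not reject.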

  16. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    The complex structure of soil and its random character make soil modeling a cumbersome task. The heterogeneity of soil has to be considered even within a nominally homogeneous layer. Estimating the shear strength parameters of soil for the purposes of a geotechnical analysis therefore causes many problems. The applicable standard (Eurocode 7) presents no explicit method for evaluating characteristic values of soil parameters; only general guidelines on how these values should be estimated can be found. Hence many approaches to assessing characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to estimating characteristic values of soil properties were compared by evaluating the values of the reliability index β that each of them achieves. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were compared with the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of the foundation in conjunction with the design approaches (DA) defined in the Eurocode. RFEM, introduced by Griffiths and Fenton (1993), combines the deterministic finite element method, random field theory and Monte Carlo simulation. Random field theory makes it possible to account for the random character of soil parameters within a homogeneous layer: a soil property is treated as a separate random variable in every element of the finite element mesh, with an appropriate correlation structure between points of the given area. 
RFEM was applied to determine which theoretical probability distribution best fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The fitted distribution was then used to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and the cohesion were treated as random parameters characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, since it varies within a limited range, while a lognormal distribution was applied to the cohesion. The other properties (Young's modulus, Poisson's ratio and unit weight) were assumed to be deterministic, because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
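
The random-field-plus-Monte-Carlo ingredient of RFEM can be sketched without the finite element solver. The toy below generates a 1-D correlated lognormal cohesion field and replaces the FEM solve with the elementary strip-footing formula q_u = c·N_c for an undrained (φ = 0) soil; all parameter values and the bearing-capacity proxy are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)

n_cells, n_sim = 50, 2000
mu_c, cov_c, theta = 20.0, 0.3, 5.0   # mean cohesion [kPa], CoV, fluctuation scale

# Correlated standard-normal field via Cholesky of an exponential correlation
x = np.arange(n_cells, dtype=float)
corr = np.exp(-2.0 * np.abs(x[:, None] - x[None, :]) / theta)
L = np.linalg.cholesky(corr + 1e-10 * np.eye(n_cells))

# Lognormal marginal with the target mean and coefficient of variation
sig_ln = np.sqrt(np.log(1.0 + cov_c**2))
mu_ln = np.log(mu_c) - 0.5 * sig_ln**2

Nc = 5.14                              # bearing-capacity factor for phi = 0
q = np.empty(n_sim)
for i in range(n_sim):
    c_field = np.exp(mu_ln + sig_ln * (L @ rng.standard_normal(n_cells)))
    q[i] = Nc * c_field.mean()         # crude stand-in for the FEM result

q_design = 60.0                        # illustrative design bearing capacity [kPa]
beta = (q.mean() - q_design) / q.std(ddof=1)   # simple reliability index
```

In the actual RFEM the per-realization bearing capacity comes from a nonlinear finite element analysis, and β is obtained from the probability distribution fitted to the realizations rather than from the crude first-order formula above.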

  17. Dynamic placement of plasmonic hotspots for super-resolution surface-enhanced Raman scattering.

    PubMed

    Ertsgaard, Christopher T; McKoskey, Rachel M; Rich, Isabel S; Lindquist, Nathan C

    2014-10-28

    In this paper, we demonstrate dynamic placement of locally enhanced plasmonic fields using holographic laser illumination of a silver nanohole array. To visualize these focused "hotspots", the silver surface was coated with various biological samples for surface-enhanced Raman spectroscopy (SERS) imaging. Due to the large field enhancements, blinking behavior of the SERS hotspots was observed and processed using a stochastic optical reconstruction microscopy algorithm enabling super-resolution localization of the hotspots to within 10 nm. These hotspots were then shifted across the surface in subwavelength (<100 nm for a wavelength of 660 nm) steps using holographic illumination from a spatial light modulator. This created a dynamic imaging and sensing surface, whereas static illumination would only have produced stationary hotspots. Using this technique, we also show that such subwavelength shifting and localization of plasmonic hotspots has potential for imaging applications. Interestingly, illuminating the surface with randomly shifting SERS hotspots was sufficient to completely fill in a wide field of view for super-resolution chemical imaging.

  18. Analytical and numerical studies of photo-injected charge transport in molecularly-doped polymers

    NASA Astrophysics Data System (ADS)

    Roy Chowdhury, Amrita

    The mobility of photo-injected charge carriers in molecularly-doped polymers (MDPs) exhibits a commonly observed and nearly universal Poole-Frenkel field dependence, μ ∝ exp(β₀√E), that has been shown to arise from the correlated Gaussian energy distribution of transport sites encountered by charges undergoing hopping transport through the material. Analytical and numerical studies of photo-injected charge transport in these materials are presented here in an attempt to understand how specific features of the various models developed to describe these systems depend on the microscopic parameters that define them. Specifically, previously published time-of-flight mobility data for the molecularly doped polymer 30% DEH:PC (polycarbonate doped with 30 wt.% of the aromatic hydrazone DEH) are compared with direct analytical and numerical predictions of five disorder-based models: the Gaussian disorder model (GDM) of Bassler, and four correlated disorder models introduced by Novikov, et al., and by Parris, et al. In these numerical studies, the disorder parameters describing each model were varied from reasonable starting conditions in order to give the best overall fit. The uncorrelated GDM describes the Poole-Frenkel field dependence of the mobility only at very high fields, but fails for fields lower than about 64 V/μm. The correlated disorder models with small amounts of geometrical disorder do a good overall job of reproducing a robust Poole-Frenkel field dependence, with correlated disorder theories that employ polaron transition rates showing qualitatively better agreement with experiment than those that employ Miller-Abrahams rates. In a separate study, the heuristic treatment of spatial or geometric disorder incorporated in existing theories is critiqued, and a randomly-diluted lattice-gas model is developed to describe the spatial disorder of the transport sites in a more realistic way.
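
The Poole-Frenkel dependence μ ∝ exp(β₀√E) is usually verified by plotting ln μ against √E and checking linearity. The sketch below generates synthetic mobility data obeying the law exactly and recovers β₀ by least squares; the values of μ₀ and β₀ are illustrative, not fitted to the DEH:PC data:

```python
import numpy as np

# Synthetic Poole-Frenkel data: mu(E) = mu0 * exp(beta0 * sqrt(E)).
mu0, beta0 = 1e-6, 3.0e-3            # illustrative units: cm^2/Vs, (V/cm)^(-1/2)
E = np.linspace(1e4, 1e6, 40)        # field strength in V/cm
mu = mu0 * np.exp(beta0 * np.sqrt(E))

# On a Poole-Frenkel plot (ln mu vs sqrt(E)) the data are linear;
# the slope of a least-squares line recovers beta0.
slope, intercept = np.polyfit(np.sqrt(E), np.log(mu), 1)
```

Deviations from this straight line at low fields are exactly the kind of failure the abstract attributes to the uncorrelated GDM.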

  19. Active marks structure optimization for optical-electronic systems of spatial position control of industrial objects

    NASA Astrophysics Data System (ADS)

    Sycheva, Elena A.; Vasilev, Aleksandr S.; Lashmanov, Oleg U.; Korotaev, Valery V.

    2017-06-01

    The article is devoted to the optimization of optical-electronic systems for monitoring the spatial position of objects. Probabilistic characteristics of the detection of an active structured mark on a random noisy background are investigated. The developed computer model and the results of the study allow us to estimate the probabilistic characteristics of detecting a complex structured mark on a random gradient background and to estimate the error in the spatial coordinates. The results of the study make it possible to improve the accuracy of measuring the coordinates of the object. Based on this research, recommendations are given on the choice of parameters of the optimal mark structure for use in optical-electronic systems for monitoring the spatial position of large-sized structures.

  20. Field spectroscopy sampling strategies for improved measurement of Earth surface reflectance

    NASA Astrophysics Data System (ADS)

    Mac Arthur, A.; Alonso, L.; Malthus, T. J.; Moreno, J. F.

    2013-12-01

    Over the last two decades extensive networks of research sites have been established to measure the flux of carbon compounds and water vapour between the Earth's surface and the atmosphere using eddy covariance (EC) techniques. However, contributing Earth surface components cannot be determined and, as the 'footprints' are spatially constrained, these measurements cannot be extrapolated to regional cover using this technique. At many of these EC sites researchers have been integrating spectral measurements with EC and ancillary data to better understand light use efficiency and carbon dioxide flux. These spectroscopic measurements could also be used to assess contributing components and provide support for imaging spectroscopy, from airborne or satellite platforms, which can provide unconstrained spatial cover. Furthermore, there is an increasing interest in 'smart' database and information retrieval systems such as those proposed by EcoSIS and OPTIMISE to store, analyse, QA and merge spectral and biophysical measurements and provide information to end users. However, as Earth surfaces are spectrally heterogeneous and imaging and field spectrometers sample different spatial extents, appropriate field sampling strategies need to be adopted. To sample Earth surfaces, spectroscopists adopt single, random, regular-grid, transect, or 'swiping' point sampling strategies, although little comparative work has been carried out to determine the most appropriate approach; the work by Goetz (2012) is a limited exception. Mac Arthur et al (2012) demonstrated that, for two full wavelength (400 nm to 2,500 nm) field spectroradiometers, the measurement area sampled is defined by each spectroradiometer/fore optic system's directional response function (DRF) rather than the field-of-view (FOV) specified by instrument manufacturers. 
Mac Arthur et al (2012) also demonstrated that each reflecting element within the sampled area was not weighted equally in the integrated measurement recorded. There were non-uniformities of spectral response, with the spectral 'weighting' per wavelength interval being positionally dependent and unique to each spectroradiometer/fore optic system investigated. However, Mac Arthur et al (2012) did not provide any advice on how to compensate for these systematic errors or on appropriate sampling strategies. The work reported here provides the first systematic study of the effect of field spectroscopy sampling strategies for a range of different Earth surface types. Synthetic Earth surface hyperspectral data cubes for each surface type were generated and convolved with a range of the spectrometer/fore optic system directional response functions generated by Mac Arthur et al (2013) to simulate spectroscopic measurements of Earth surfaces. This has enabled different field sampling strategies to be directly compared, their suitability for each measurement purpose and surface type to be assessed, and robust field spectroscopy sampling strategy recommendations to be made. This will be of particular interest to the carbon and water vapour flux communities and will assist the development of sampling strategies for field spectroscopy from rotary-wing Unmanned Aerial Vehicles, which will aid the acquisition of measurements in the spatial domain, and generally further the use of field spectroscopy for quantitative Earth observation.

  1. A higher order conditional random field model for simultaneous classification of land cover and land use

    NASA Astrophysics Data System (ADS)

    Albert, Lena; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    We propose a new approach for the simultaneous classification of land cover and land use considering spatial as well as semantic context. We apply a Conditional Random Field (CRF) consisting of a land cover and a land use layer. In the land cover layer of the CRF, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Intra-layer edges of the CRF model spatial dependencies between neighbouring image sites. All spatially overlapping sites in both layers are connected by inter-layer edges, which leads to higher order cliques modelling the semantic relation between all land cover and land use sites in the clique. A generic formulation of the higher order potential is proposed. In order to enable efficient inference in the two-layer higher order CRF, we propose an iterative inference procedure in which the two classification tasks mutually influence each other. We integrate contextual relations between land cover and land use in the classification process by using contextual features describing the complex dependencies of all nodes in a higher order clique. These features are incorporated in a discriminative classifier, which approximates the higher order potentials during the inference procedure. The approach is designed for input data based on aerial images. Experiments are carried out on two test sites to evaluate the performance of the proposed method. The experiments show that the classification results are improved compared to the results of a non-contextual classifier. For land cover classification, the result is much more homogeneous and the delineation of land cover segments is improved. For the land use classification, an improvement is mainly achieved for land use objects showing non-typical characteristics or similarities to other land use classes. 
Furthermore, we have shown that the size of the super-pixels has an influence on the level of detail of the classification result, but also on the degree of smoothing induced by the segmentation method, which is especially beneficial for land cover classes covering large, homogeneous areas.

  2. Within-field spatial distribution of Megacopta cribraria (Hemiptera: Plataspidae) in soybean (Fabales: Fabaceae).

    PubMed

    Seiter, Nicholas J; Reay-Jones, Francis P F; Greene, Jeremy K

    2013-12-01

    The recently introduced plataspid Megacopta cribraria (F.) can infest fields of soybean (Glycine max (L.) Merrill) in the southeastern United States. Grid sampling in four soybean fields was conducted in 2011 and 2012 to study the spatial distribution of M. cribraria adults, nymphs, and egg masses. Peak oviposition typically occurred in early August, while peak levels of adults occurred in mid-late September. The overall sex ratio was slightly biased at 53.1 ± 0.2% (SEM) male. Sweep samples of nymphs were biased toward late instars. All three life stages exhibited a generally aggregated spatial distribution based on Taylor's power law, Iwao's patchiness regression, and spatial analysis by distance indices (SADIE). Interpolation maps of local SADIE aggregation indices showed clusters of adults and nymphs located at field edges, and mean densities of adults were higher in samples taken from field edges than in those taken from field interiors. Adults and nymphs were often spatially associated based on SADIE, indicating spatial stability across life stages.
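    Taylor's power law, one of the aggregation indices used above, reduces to a least-squares fit of log variance against log mean across sample sets, with slope b > 1 indicating aggregation. A minimal sketch; the counts below are hypothetical sweep-net data, not the study's.

    ```python
    import math
    from statistics import mean, variance

    def taylor_slope(sample_sets):
        """Slope b of log(variance) = log(a) + b*log(mean) across sample sets.

        Taylor's power law: b > 1 indicates an aggregated (clumped) spatial
        distribution, b = 1 random, b < 1 uniform.
        """
        xs = [math.log(mean(s)) for s in sample_sets]
        ys = [math.log(variance(s)) for s in sample_sets]
        xbar, ybar = mean(xs), mean(ys)
        return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
                / sum((x - xbar) ** 2 for x in xs))

    # Hypothetical sweep-net counts from four fields: variance grows faster
    # than the mean, as for edge-clustered insect counts.
    fields = [
        [0, 1, 0, 2, 1, 0],
        [1, 3, 0, 5, 2, 1],
        [2, 8, 1, 12, 4, 3],
        [5, 20, 2, 30, 9, 6],
    ]
    b = taylor_slope(fields)   # b > 1: aggregated
    ```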

  3. Attraction of position preference by spatial attention throughout human visual cortex.

    PubMed

    Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O

    2014-10-01

    Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.
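    When both the pRF and the attention field are modeled as Gaussians, their interaction has a simple closed form: the product is a Gaussian whose center is a precision-weighted average of the two centers, so larger pRFs are attracted further toward the attended location. A minimal sketch with illustrative parameters, not fitted values from the study.

    ```python
    def attended_center(prf_center, prf_size, att_center, att_size):
        """Preferred position of a Gaussian pRF multiplied by a Gaussian
        attention field: a precision-weighted average of the two centers.
        Larger pRFs (bigger prf_size) are attracted more strongly."""
        wp = 1.0 / prf_size ** 2   # precision of the pRF
        wa = 1.0 / att_size ** 2   # precision of the attention field
        return (wp * prf_center + wa * att_center) / (wp + wa)

    # Attention at 5 deg; compare a small (early-visual-like) and a large
    # (higher-area-like) pRF that both prefer 0 deg without attention.
    small = attended_center(0.0, 1.0, 5.0, 4.0)   # shifts only slightly
    large = attended_center(0.0, 4.0, 5.0, 4.0)   # shifts halfway, to 2.5 deg
    ```

    This reproduces the qualitative claim above: with a fixed attention field, the position shift grows with pRF size, i.e. up the visual hierarchy.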

  4. Spectral Density of Laser Beam Scintillation in Wind Turbulence. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1997-01-01

    The temporal spectral density of the log-amplitude scintillation of a laser beam wave due to a spatially dependent vector-valued crosswind (deterministic as well as random) is evaluated. The path weighting functions for normalized spectral moments are derived, and offer a potential new technique for estimating the wind velocity profile. The Tatarskii-Klyatskin stochastic propagation equation for the Markov turbulence model is used with the solution approximated by the Rytov method. The Taylor 'frozen-in' hypothesis is assumed for the dependence of the refractive index on the wind velocity, and the Kolmogorov spectral density is used for the refractive index field.

  5. Photorefractive detection of tagged photons in ultrasound modulated optical tomography of thick biological tissues.

    PubMed

    Ramaz, F; Forget, B; Atlan, M; Boccara, A C; Gross, M; Delaye, P; Roosen, G

    2004-11-01

    We present a new and simple method to obtain ultrasound modulated optical tomography images in thick biological tissues with the use of a photorefractive crystal. The technique offers the advantage of spatially adapting the output speckle wavefront by analysing the signal diffracted by the interference pattern between this output field and a reference beam, recorded inside the photorefractive crystal. Averaging out due to random phases of the speckle grains vanishes, and we can use a fast single photodetector to measure the ultrasound modulated optical contrast. This technique offers a promising way to make direct measurements within the decorrelation time scale of living tissues.

  6. Modeling the spatial distribution of African buffalo (Syncerus caffer) in the Kruger National Park, South Africa

    PubMed Central

    Hughes, Kristen; Budke, Christine M.; Ward, Michael P.; Kerry, Ruth; Ingram, Ben

    2017-01-01

    The population density of wildlife reservoirs contributes to disease transmission risk for domestic animals. The objective of this study was to model the African buffalo distribution in the Kruger National Park. A secondary objective was to collect field data to evaluate models and determine environmental predictors of buffalo detection. Spatial distribution models were created using buffalo census information and archived data from previous research. Field data were collected during the dry (August 2012) and wet (January 2013) seasons using a random walk design. The fit of the prediction models was assessed descriptively and formally by calculating the root mean square error (rMSE) of deviations from field observations. Logistic regression was used to estimate the effects of environmental variables on the detection of buffalo herds and linear regression was used to identify predictors of larger herd sizes. A zero-inflated Poisson model produced distributions that were most consistent with expected buffalo behavior. Field data confirmed that environmental factors including season (P = 0.008), vegetation type (P = 0.002), and vegetation density (P = 0.010) were significant predictors of buffalo detection. Bachelor herds were more likely to be detected in dense vegetation (P = 0.005) and during the wet season (P = 0.022) compared to the larger mixed-sex herds. Static distribution models for African buffalo can produce biologically reasonable results, but environmental factors have significant effects and therefore could be used to improve model performance. Accurate distribution models are critical for the evaluation of disease risk and to model disease transmission. PMID:28902858
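    The zero-inflated Poisson model mentioned above mixes a point mass at zero (sites where no herd is present) with a Poisson count distribution. A minimal sketch of its probability mass function; the parameters are hypothetical, not the study's estimates.

    ```python
    import math

    def zip_pmf(k, lam, pi_zero):
        """P(K = k) under a zero-inflated Poisson: with probability pi_zero
        the site yields a structural zero (no herd present); otherwise the
        count is Poisson(lam)."""
        pois = math.exp(-lam) * lam ** k / math.factorial(k)
        if k == 0:
            return pi_zero + (1.0 - pi_zero) * pois
        return (1.0 - pi_zero) * pois

    # Hypothetical parameters: 40% structural zeros, mean of 2 detections
    # where herds are present.  Zero-inflation raises P(0) above the plain
    # Poisson value exp(-2).
    p0 = zip_pmf(0, lam=2.0, pi_zero=0.4)
    total = sum(zip_pmf(k, 2.0, 0.4) for k in range(60))  # sanity: sums to 1
    ```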

  7. Simulation of wave propagation in three-dimensional random media

    NASA Astrophysics Data System (ADS)

    Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1995-04-01

    Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of

  8. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
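    One way to realize local dependence with an observed neighbourhood structure is a block model in which tie probabilities depend only on whether two nodes share a neighbourhood. This is a simplified stand-in for the authors' exponential-family construction; the block sizes and probabilities below are illustrative.

    ```python
    import random

    def sample_local_graph(blocks, p_within=0.5, p_between=0.02, seed=1):
        """Bernoulli random graph whose edge probabilities depend only on
        whether two nodes share a neighbourhood (block): dependence is
        'local' in the sense that ties concentrate within blocks."""
        rng = random.Random(seed)
        nodes = [(b, i) for b, size in enumerate(blocks) for i in range(size)]
        edges = set()
        for a in range(len(nodes)):
            for b in range(a + 1, len(nodes)):
                p = p_within if nodes[a][0] == nodes[b][0] else p_between
                if rng.random() < p:
                    edges.add((a, b))
        return nodes, edges

    # Two neighbourhoods of 10 nodes each: ties are far more likely within
    # a neighbourhood than between neighbourhoods.
    nodes, edges = sample_local_graph([10, 10], p_within=0.6, p_between=0.05)
    within = sum(1 for a, b in edges if nodes[a][0] == nodes[b][0])
    ```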

  9. Spatial Vertical Directionality and Correlation of Low-Frequency Ambient Noise in Deep Ocean Direct-Arrival Zones.

    PubMed

    Yang, Qiulong; Yang, Kunde; Cao, Ran; Duan, Shunli

    2018-01-23

    Wind-driven and distant shipping noise sources contribute to the total noise field in deep ocean direct-arrival zones, and may simultaneously and significantly affect its spatial characteristics. In this work, a ray approach and a parabolic equation solution method were jointly utilized to model the low-frequency ambient noise field in a range-dependent deep ocean environment, chosen for their respective calculation accuracy and efficiency in the near-field wind-driven and far-field distant shipping noise fields. The reanalysis databases of the National Centers for Environmental Prediction (NCEP) and the Voluntary Observing Ship (VOS) scheme were used to model the ambient noise source intensity and distribution. Spatial vertical directionality and correlation were analyzed in three scenarios corresponding to three wind speed conditions. The noise field was dominated by distant shipping noise sources when the wind speed was less than 3 m/s, and the spatial vertical directionality and vertical correlation of the total noise field were then nearly consistent with those of the distant shipping noise field. The total noise field was completely dominated by near-field wind-generated noise sources when the wind speed was greater than 12 m/s at 150 Hz, and the spatial vertical correlation coefficient and directionality pattern of the total noise field were then approximately consistent with those of the wind-driven noise field. The spatial characteristics of the total noise field for wind speeds between 3 m/s and 12 m/s were the weighted results of the wind-driven and distant shipping noise fields. Furthermore, the spatial characteristics of the low-frequency ambient noise field were compared with the classical Cron/Sherman deep-water noise field coherence function. 
Simulation results with the described modeling method showed good agreement with the experimental measurement results based on the vertical line array deployed near the bottom in deep ocean direct-arrival zones.

  10. Spatial Vertical Directionality and Correlation of Low-Frequency Ambient Noise in Deep Ocean Direct-Arrival Zones

    PubMed Central

    Yang, Qiulong; Yang, Kunde; Cao, Ran; Duan, Shunli

    2018-01-01

    Wind-driven and distant shipping noise sources contribute to the total noise field in deep ocean direct-arrival zones, and may simultaneously and significantly affect its spatial characteristics. In this work, a ray approach and a parabolic equation solution method were jointly utilized to model the low-frequency ambient noise field in a range-dependent deep ocean environment, chosen for their respective calculation accuracy and efficiency in the near-field wind-driven and far-field distant shipping noise fields. The reanalysis databases of the National Centers for Environmental Prediction (NCEP) and the Voluntary Observing Ship (VOS) scheme were used to model the ambient noise source intensity and distribution. Spatial vertical directionality and correlation were analyzed in three scenarios corresponding to three wind speed conditions. The noise field was dominated by distant shipping noise sources when the wind speed was less than 3 m/s, and the spatial vertical directionality and vertical correlation of the total noise field were then nearly consistent with those of the distant shipping noise field. The total noise field was completely dominated by near-field wind-generated noise sources when the wind speed was greater than 12 m/s at 150 Hz, and the spatial vertical correlation coefficient and directionality pattern of the total noise field were then approximately consistent with those of the wind-driven noise field. The spatial characteristics of the total noise field for wind speeds between 3 m/s and 12 m/s were the weighted results of the wind-driven and distant shipping noise fields. Furthermore, the spatial characteristics of the low-frequency ambient noise field were compared with the classical Cron/Sherman deep-water noise field coherence function. 
Simulation results with the described modeling method showed good agreement with the experimental measurement results based on the vertical line array deployed near the bottom in deep ocean direct-arrival zones. PMID:29360793

  11. Optimization and universality of Brownian search in a basic model of quenched heterogeneous media

    NASA Astrophysics Data System (ADS)

    Godec, Aljaž; Metzler, Ralf

    2015-05-01

    The kinetics of a variety of transport-controlled processes can be reduced to the problem of determining the mean time needed to arrive at a given location for the first time, the so-called mean first-passage time (MFPT) problem. The occurrence of occasional large jumps or intermittent patterns combining various types of motion are known to outperform the standard random walk with respect to the MFPT, by reducing oversampling of space. Here we show that a regular but spatially heterogeneous random walk can significantly and universally enhance the search in any spatial dimension. In a generic minimal model we consider a spherically symmetric system comprising two concentric regions with piecewise constant diffusivity. The MFPT is analyzed under the constraint of conserved average dynamics, that is, the spatially averaged diffusivity is kept constant. Our analytical calculations and extensive numerical simulations demonstrate the existence of an optimal heterogeneity minimizing the MFPT to the target. We prove that the MFPT for a random walk is completely dominated by what we term direct trajectories towards the target and reveal a remarkable universality of the spatially heterogeneous search with respect to target size and system dimensionality. In contrast to intermittent strategies, which are most profitable in low spatial dimensions, the spatially inhomogeneous search performs best in higher dimensions. Discussing our results alongside recent experiments on single-particle tracking in living cells, we argue that the observed spatial heterogeneity may be beneficial for cellular signaling processes.
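    The MFPT for a heterogeneous walk can be estimated by direct Monte Carlo. The sketch below is a toy stand-in for the paper's spherically symmetric geometry: a 1D lattice with an absorbing target at the origin, a reflecting outer wall, and piecewise-constant diffusivity whose spatial average is approximately conserved. All parameters are illustrative, and since the walker starts deep in the slow outer region, this particular configuration exhibits the heterogeneity penalty rather than the paper's optimum.

    ```python
    import random

    def mfpt(d_inner, d_outer, boundary=15, size=30, trials=600, seed=7):
        """Mean first-passage time to an absorbing target at site 0 for a 1D
        lattice walk with piecewise-constant diffusivity: sites below
        `boundary` use d_inner, the rest d_outer; site `size` reflects.
        A walker at a site with diffusivity d attempts a step with
        probability d/d_max (otherwise it waits), so larger d means faster
        local motion."""
        rng = random.Random(seed)
        d_max = max(d_inner, d_outer)
        total = 0
        for _ in range(trials):
            x, t = size, 0
            while x > 0:
                t += 1
                d = d_inner if x < boundary else d_outer
                if rng.random() < d / d_max:
                    x = min(x + rng.choice((-1, 1)), size)  # reflecting wall
            total += t
        return total / trials

    # Roughly conserved average diffusivity: homogeneous vs heterogeneous
    # (fast near the target, slow far away, where the walker starts).
    t_homog = mfpt(1.0, 1.0)   # theory for this chain: L*(L+1) = 930 steps
    t_heter = mfpt(1.6, 0.4)
    ```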

  12. Multidecadal Rates of Disturbance- and Climate Change-Induced Land Cover Change in Arctic and Boreal Ecosystems over Western Canada and Alaska Inferred from Dense Landsat Time Series

    NASA Astrophysics Data System (ADS)

    Wang, J.; Sulla-menashe, D. J.; Woodcock, C. E.; Sonnentag, O.; Friedl, M. A.

    2017-12-01

    Rapid climate change in arctic and boreal ecosystems is driving changes to land cover composition, including woody expansion in the arctic tundra, successional shifts following boreal fires, and thaw-induced wetland expansion and forest collapse along the southern limit of permafrost. The impacts of these land cover transformations on the physical climate and the carbon cycle are increasingly well-documented from field and model studies, but there have been few attempts to empirically estimate rates of land cover change at decadal time scales and continental spatial scales. Previous studies have used spatial resolutions that are too coarse, or temporal ranges that are too limited, to enable broad multi-decadal assessment of land cover change. As part of NASA's Arctic Boreal Vulnerability Experiment (ABoVE), we are using dense time series of Landsat remote sensing data to map disturbances and classify land cover types across the ABoVE extended domain (spanning western Canada and Alaska) over the last three decades (1982-2014) at 30 m resolution. We utilize regionally complete, repeat-acquisition high-resolution (<2 m) DigitalGlobe imagery to generate training data from across the region, following a nested, hierarchical classification scheme encompassing plant functional type and cover density, understory type, wetland status, and land use. Additionally, we crosswalk plot-level field data into our scheme for additional high-quality training sites. We use the Continuous Change Detection and Classification algorithm to estimate land cover change dates and temporal-spectral features in the Landsat data. These features are used to train random forest classification models, map land cover, and analyze land cover change processes, focusing primarily on tundra "shrubification", post-fire succession, and boreal wetland expansion. 
We will analyze the high resolution data based on stratified random sampling of our change maps to validate and assess the accuracy of our model predictions. In this paper, we present initial results from this effort, including sub-regional analyses focused on several key areas, such as the Taiga Plains and the Southern Arctic ecozones, to calibrate our random forest models and assess results.
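    The temporal-spectral features used by CCDC-style algorithms come from fitting harmonic (seasonal) models to each pixel's time series. A minimal sketch of that core fit by ordinary least squares, on synthetic NDVI-like data; the cadence and coefficients are illustrative, not taken from the study.

    ```python
    import math

    def fit_harmonic(ts, ys):
        """Least-squares fit of y = a0 + a1*cos(2*pi*t) + b1*sin(2*pi*t),
        the one-harmonic seasonal model underlying CCDC-style change
        detection (t in fractional years)."""
        X = [[1.0, math.cos(2 * math.pi * t), math.sin(2 * math.pi * t)]
             for t in ts]
        # Normal equations A c = b with A = X^T X, b = X^T y.
        A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
        # Gaussian elimination with partial pivoting on the 3x3 system.
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, 3):
                f = A[r][col] / A[col][col]
                for c in range(col, 3):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        coeffs = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):   # back substitution
            coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                    for c in range(r + 1, 3))) / A[r][r]
        return coeffs

    # Synthetic NDVI-like series: mean 0.5, seasonal amplitude 0.2,
    # ~16-day cadence over two full years.
    ts = [i / 23.0 for i in range(46)]
    ys = [0.5 + 0.2 * math.cos(2 * math.pi * t) for t in ts]
    a0, a1, b1 = fit_harmonic(ts, ys)   # recovers 0.5, 0.2, 0.0
    ```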

  13. Multiple Point Statistics algorithm based on direct sampling and multi-resolution images

    NASA Astrophysics Data System (ADS)

    Julien, S.; Renard, P.; Chugunova, T.

    2017-12-01

    Multiple Point Statistics (MPS) has been popular in the Earth Sciences for more than a decade, because these methods can generate random fields that reproduce highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistical techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in a random order; the patterns, whose number of nodes is fixed, become spatially narrower as the simulation proceeds and the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Convolving the training image with a Gaussian kernel yields a lower-resolution image; iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and then each level up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures of the training image at every scale and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. 
Indeed, these kinds of images often display typical structures at different scales and are well-suited for MPS simulation techniques.
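    The pyramid construction at the heart of the method can be sketched in a few lines. The 1D signal and the binomial kernel below are illustrative simplifications of the 2D Gaussian convolution described above.

    ```python
    def pyramid(signal, levels=3):
        """Multi-resolution pyramid of a 1D signal: at each level, smooth
        with the binomial kernel [1, 2, 1]/4 (a small Gaussian
        approximation, with edge replication) and keep every second
        sample."""
        out = [signal]
        for _ in range(levels):
            s = out[-1]
            sm = [(s[max(i - 1, 0)] + 2 * s[i] + s[min(i + 1, len(s) - 1)]) / 4.0
                  for i in range(len(s))]
            out.append(sm[::2])
        return out

    # A box-shaped feature: each coarser level keeps its large-scale
    # position while dropping fine detail.
    levels = pyramid([0, 0, 0, 8, 8, 8, 0, 0], levels=2)
    ```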

  14. Simulating maize yield and biomass with spatial variability of soil field capacity

    USDA-ARS?s Scientific Manuscript database

    Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...

  15. Describing spatial pattern in stream networks: A practical approach

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
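    Computing an empirical semivariogram with a non-Euclidean (e.g. along-network) distance only requires swapping the distance function. A minimal sketch with hypothetical sites and abundances on a single stream branch, where the network distance reduces to distance along the channel.

    ```python
    def empirical_variogram(points, values, dist, lags, tol):
        """Empirical semivariogram gamma(h): mean of 0.5*(z_i - z_j)^2 over
        all pairs whose distance (under any supplied metric, e.g. path
        length along a stream network) falls within tol of lag h."""
        gammas = []
        for h in lags:
            sq = [0.5 * (values[i] - values[j]) ** 2
                  for i in range(len(points))
                  for j in range(i + 1, len(points))
                  if abs(dist(points[i], points[j]) - h) <= tol]
            gammas.append(sum(sq) / len(sq) if sq else None)
        return gammas

    # Hypothetical sites on one branch; nearby sites have similar
    # (log-)abundances, so semivariance grows with lag.
    sites = [0, 1, 2, 3, 4, 5]
    abund = [2.0, 2.2, 2.1, 3.0, 3.2, 3.1]
    gam = empirical_variogram(sites, abund, lambda a, b: abs(a - b),
                              lags=[1, 3], tol=0.5)
    ```

    For a real network, `dist` would return the in-network path length between two sites rather than `abs(a - b)`.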

  16. A geostatistical approach for describing spatial pattern in stream networks

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.

  17. Restoring the spatial resolution of refocus images on 4D light field

    NASA Astrophysics Data System (ADS)

    Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok

    2010-01-01

    This paper presents a method for generating a refocused image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after a single image is captured. Such a camera captures the 4D light field (angular and spatial information of light) on a limited 2D sensor, sacrificing 2D spatial resolution to record the 2D angular data; this is why a refocused image has low spatial resolution compared with the sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocused image. We verified experimentally that this sub-pixel information varies with the depth of objects from the camera. Therefore, for selected refocused regions (at the corresponding depth), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while the other regions remain out of focus. Our experimental results show the benefit of the proposed method compared with an existing method.
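    The refocusing step itself (before any resolution restoration) is commonly computed by shifting sub-aperture views against their disparity and averaging. A toy 1D sketch with a synthetic point source rather than real light-field data; the function and layout are illustrative.

    ```python
    def refocus(views, slope):
        """Shift-and-add refocusing of 1D sub-aperture views.  View u is
        shifted by round(slope * (u - centre)) pixels before averaging, so
        objects whose disparity matches `slope` come into focus."""
        n = len(views[0])
        out = []
        for x in range(n):
            vals = []
            for u, view in enumerate(views):
                xs = x + round(slope * (u - (len(views) - 1) / 2))
                if 0 <= xs < n:
                    vals.append(view[xs])
            out.append(sum(vals) / len(vals))
        return out

    # A point source with disparity of 1 pixel per sub-aperture step:
    # each of the three views sees it shifted by one pixel.
    views = [
        [0, 0, 1, 0, 0, 0],   # u = 0 -> offset -1
        [0, 0, 0, 1, 0, 0],   # u = 1 -> centred
        [0, 0, 0, 0, 1, 0],   # u = 2 -> offset +1
    ]
    sharp = refocus(views, slope=1.0)    # correct slope: sharp peak of 1
    blurred = refocus(views, slope=0.0)  # wrong slope: smeared to 1/3
    ```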

  18. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. 
It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
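    StOMP belongs to the matching-pursuit family. The sketch below implements plain orthogonal matching pursuit, the greedy core of that family: StOMP instead selects whole batches of columns per stage via a threshold, and the paper further adds prior information and non-negativity, none of which is reproduced here. The dictionary and signal are synthetic.

    ```python
    def omp(columns, y, n_atoms):
        """Orthogonal matching pursuit: greedily pick the dictionary column
        most correlated with the residual, then refit all picked columns by
        least squares and update the residual."""
        def dot(a, b):
            return sum(x * z for x, z in zip(a, b))

        picked, coef = [], []
        residual = list(y)
        for _ in range(n_atoms):
            k = max((i for i in range(len(columns)) if i not in picked),
                    key=lambda i: abs(dot(columns[i], residual)))
            picked.append(k)
            # Solve the small normal equations G c = b for the picked atoms.
            G = [[dot(columns[p], columns[q]) for q in picked] for p in picked]
            b = [dot(columns[p], y) for p in picked]
            n = len(picked)
            for col in range(n):           # Gaussian elimination
                piv = max(range(col, n), key=lambda r: abs(G[r][col]))
                G[col], G[piv] = G[piv], G[col]
                b[col], b[piv] = b[piv], b[col]
                for r in range(col + 1, n):
                    f = G[r][col] / G[col][col]
                    for c in range(col, n):
                        G[r][c] -= f * G[col][c]
                    b[r] -= f * b[col]
            coef = [0.0] * n
            for r in range(n - 1, -1, -1):  # back substitution
                coef[r] = (b[r] - sum(G[r][c] * coef[c]
                                      for c in range(r + 1, n))) / G[r][r]
            residual = [yv - sum(coef[j] * columns[p][i]
                                 for j, p in enumerate(picked))
                        for i, yv in enumerate(y)]
        return dict(zip(picked, coef))

    # y is an exact 2-sparse combination of 4 dictionary columns:
    # y = 2*col0 + 3*col2.
    cols = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.6, 0.8, 0.0]]
    y = [2.0, 0.0, 3.0]
    sol = omp(cols, y, n_atoms=2)
    ```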

  19. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, J.; Lee, J.; Yadav, V.

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. 
It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  20. Selecting a spatial resolution for estimation of per-field green leaf area index

    NASA Technical Reports Server (NTRS)

    Curran, Paul J.; Williamson, H. Dawn

    1988-01-01

    For any application of multispectral scanner (MSS) data, a user is faced with a number of choices concerning the characteristics of the data; one of these is their spatial resolution. A pilot study was undertaken to determine the spatial resolution that would be optimal for the per-field estimation of green leaf area index (GLAI) in grassland. By reference to empirically-derived data from three areas of grassland, the suitable spatial resolution was hypothesized to lie in the lower portion of a 2-18 m range. To estimate per-field GLAI, airborne MSS data were collected at spatial resolutions of 2 m, 5 m and 10 m. The highest accuracies of per-field GLAI estimation were achieved using MSS data with spatial resolutions of 2 m and 5 m.

  1. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    USGS Publications Warehouse

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  2. Spatial methods for deriving crop rotation history

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Trippe, Kristin M.; Whittaker, Gerald W.; Anderson, Nicole P.; Sullivan, Clare S.

    2017-08-01

    Benefits of converting 11 years of remote sensing classification data into cropping history of agricultural fields included measuring lengths of rotation cycles and identifying specific sequences of intervening crops grown between final years of old grass seed stands and establishment of new ones. Spatial and non-spatial methods were complementary. Individual-year classification errors were often correctable in spreadsheet-based non-spatial analysis, whereas their presence in spatial data generally led to exclusion of fields from further analysis. Markov-model testing of non-spatial data revealed that year-to-year cropping sequences did not match average frequencies for transitions among crops grown in western Oregon, implying that rotations into new grass seed stands were influenced by growers' desires to achieve specific objectives. Moran's I spatial analysis of the length of time between consecutive grass seed stands revealed that clustering of fields was relatively uncommon, with high- and low-value clusters accounting for only 7.1% and 6.2% of fields, respectively.
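Global Moran's I, used above to test for clustering of rotation intervals, can be sketched in a few lines. This is a minimal illustration with hypothetical field values and a hand-built rook adjacency matrix, not the authors' implementation:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I of one attribute per field under a spatial weight matrix.

    values  : 1-D array, one value per field (e.g. years between stands)
    weights : (n, n) spatial weight matrix, w[i, j] > 0 for neighbours, 0 on the diagonal
    """
    z = values - values.mean()
    n = len(values)
    num = n * (weights * np.outer(z, z)).sum()   # spatially weighted cross-products
    den = weights.sum() * (z ** 2).sum()
    return num / den

# Hypothetical toy example: 4 fields on a line with rook adjacency.
vals = np.array([1.0, 2.0, 2.0, 8.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
I = morans_i(vals, W)   # negative: dissimilar neighbours in this toy data
```

Values near +1 indicate clustering of similar fields, values near the null expectation of -1/(n-1) indicate spatial randomness.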

  3. What does visual suffix interference tell us about spatial location in working memory?

    PubMed

    Allen, Richard J; Castellà, Judit; Ueno, Taiji; Hitch, Graham J; Baddeley, Alan D

    2015-01-01

    A visual object can be conceived of as comprising a number of features bound together by their joint spatial location. We investigate the question of whether the spatial location is automatically bound to the features or whether the two are separable, using a previously developed paradigm whereby memory is disrupted by a visual suffix. Participants were shown a sample array of four colored shapes, followed by a postcue indicating the target for recall. On randomly intermixed trials, a to-be-ignored suffix array consisting of two different colored shapes was presented between the sample and the postcue. In a random half of suffix trials, one of the suffix items overlaid the location of the target. If location was automatically encoded, one might expect the colocation of target and suffix to differentially impair performance. We carried out three experiments, cuing for recall by spatial location (Experiment 1), color or shape (Experiment 2), or both randomly intermixed (Experiment 3). All three studies showed clear suffix effects, but the colocation of target and suffix was differentially disruptive only when a spatial cue was used. The results suggest that purely visual shape-color binding can be retained and accessed without requiring information about spatial location, even when task demands encourage the encoding of location, consistent with the idea of an abstract and flexible visual working memory system.

  4. Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.

    PubMed

    Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P

    2018-05-15

    The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important for better understanding the global carbon cycle and for monitoring and reporting change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantification is still lacking especially in heterogeneous ecosystems such as savannas, where highly variable vegetation densities occur and strong seasonality hinders consistent data acquisition. To account for these challenges, we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions of central Brazil, a prominent example of a savanna with significant carbon stocks that has been undergoing extensive land cover conversion. Phenological metrics from the 2014/2015 season were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach accurately approximated the original Landsat ETM+ and OLI EVI values and enabled the subsequent derivation of annual phenological metrics. Random Forest model performance varied between the three study areas, with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote-sensing-based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns. Phenological metrics were derived from 8-day Landsat time series with a spatial resolution sufficient to capture gradual changes in the carbon stocks of heterogeneous savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.
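The accuracy figures quoted above (RMSE in t/ha and mean relative RMSE as a percentage of the observed mean) can be reproduced with a short numpy sketch; the field and predicted values below are hypothetical:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def relative_rmse(obs, pred):
    """RMSE expressed as a percentage of the mean observed value."""
    return 100.0 * rmse(obs, pred) / float(np.mean(obs))

# Hypothetical aboveground carbon samples (t/ha) and model predictions.
obs  = np.array([4.2, 5.1, 6.3, 3.8, 5.6])
pred = np.array([4.6, 4.8, 5.7, 4.4, 5.9])
err = rmse(obs, pred)
rel = relative_rmse(obs, pred)
```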

  5. Considering spatial heterogeneity in the distributed lag non-linear model when analyzing spatiotemporal data.

    PubMed

    Chien, Lung-Chang; Guo, Yuming; Li, Xiao; Yu, Hwa-Lung

    2018-01-01

    The distributed lag non-linear (DLNM) model has been frequently used in time series environmental health research. However, its functionality for assessing spatial heterogeneity is still restricted, especially in analyzing spatiotemporal data. This study proposed a solution to take a spatial function into account in the DLNM, and compared the influence with and without considering spatial heterogeneity in a case study. This research applied the DLNM to investigate non-linear lag effect up to 7 days in a case study about the spatiotemporal impact of fine particulate matter (PM 2.5 ) on preschool children's acute respiratory infection in 41 districts of northern Taiwan during 2005 to 2007. We applied two spatiotemporal methods to impute missing air pollutant data, and included the Markov random fields to analyze district boundary data in the DLNM. When analyzing the original data without a spatial function, the overall PM 2.5 effect accumulated from all lag-specific effects had a slight variation at smaller PM 2.5 measurements, but eventually decreased to relative risk significantly <1 when PM 2.5 increased. While analyzing spatiotemporal imputed data without a spatial function, the overall PM 2.5 effect did not decrease but increased in monotone as PM 2.5 increased over 20 μg/m 3 . After adding a spatial function in the DLNM, spatiotemporal imputed data conducted similar results compared with the overall effect from the original data. Moreover, the spatial function showed a clear and uneven pattern in Taipei, revealing that preschool children living in 31 districts of Taipei were vulnerable to acute respiratory infection. Our findings suggest the necessity of including a spatial function in the DLNM to make a spatiotemporal analysis available and to conduct more reliable and explainable research. This study also revealed the analytical impact if spatial heterogeneity is ignored.

  6. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of V1 neurons for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial quality index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence, which yields the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
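A minimal sketch of a gradient-similarity index of the kind used for the spatial quality term is given below. The mean pooling and the stabilizing constant `c` are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def gradient_magnitude(frame):
    """Central-difference gradient magnitude of a grayscale frame."""
    gy, gx = np.gradient(frame.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_similarity(ref, dist, c=1e-4):
    """Pointwise similarity of gradient magnitudes, pooled by the mean.

    Equals 1 for identical frames and decreases with spatial distortion;
    c is a small constant that stabilizes flat (zero-gradient) regions.
    """
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    sim = (2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return float(sim.mean())

# Hypothetical frames: a smooth ramp and a locally distorted copy.
ref = np.add.outer(np.arange(8.0), np.arange(8.0))
dist = ref.copy()
dist[4, 4] += 5.0
score = gradient_similarity(ref, dist)
```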

  7. The Influence of Aircraft Speed Variations on Sensible Heat-Flux Measurements by Different Airborne Systems

    NASA Astrophysics Data System (ADS)

    Martin, Sabrina; Bange, Jens

    2014-01-01

    Crawford et al. (Boundary-Layer Meteorol 66:237-245, 1993) showed that the time average is inappropriate for airborne eddy-covariance flux calculations. The aircraft's ground speed through a turbulent field is not constant. One reason can be a correlation with vertical air motion, so that some types of structures are sampled more densely than others. To avoid this, the time-sampled data are adjusted for the varying ground speed so that the modified estimates are equivalent to spatially sampled data. A comparison of sensible heat-flux calculations using temporal and spatial averaging methods is presented and discussed. Data from the airborne measurement systems, the Helipod and the Dornier 128-6 among them, are used for the analysis. These systems vary in size, weight and aerodynamic characteristics: one is a small unmanned aerial vehicle (UAV), the Helipod is a helicopter-borne turbulence probe, and the Dornier 128-6 is a manned research aircraft. The systematic bias anticipated in covariance computations due to speed variations was not found when averaging over Dornier, Helipod, or UAV flight legs. However, random differences of up to 30% between spatially and temporally averaged fluxes were found on individual flight legs.
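The correction described above, converting a time average into a spatial one by weighting each sample with the instantaneous ground speed (distance flown per sample), can be sketched as follows; the variable names are hypothetical:

```python
import numpy as np

def heat_flux(w, T, ground_speed=None):
    """Eddy-covariance kinematic sensible heat flux <w'T'>.

    With ground_speed given, samples are weighted by the distance flown
    per sample interval, turning the time average into a spatial average
    (the adjustment discussed by Crawford et al. 1993).
    """
    w, T = np.asarray(w, float), np.asarray(T, float)
    if ground_speed is None:
        wts = np.ones_like(w)                       # plain time average
    else:
        wts = np.asarray(ground_speed, float)       # distance per unit time
    wts = wts / wts.sum()
    wm, Tm = (wts * w).sum(), (wts * T).sum()       # weighted means
    return float((wts * (w - wm) * (T - Tm)).sum()) # weighted covariance

# Hypothetical samples of vertical wind (m/s) and temperature (K).
w = np.array([0.1, -0.2, 0.3, 0.0])
T = np.array([300.1, 299.9, 300.3, 300.0])
flux_time  = heat_flux(w, T)
flux_space = heat_flux(w, T, ground_speed=np.array([52.0, 55.0, 58.0, 54.0]))
```

With a constant ground speed the two averages coincide; differences appear only when speed varies along the leg.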

  8. Patterns of mortality in a montane mixed-conifer forest in San Diego County, California.

    PubMed

    Freeman, Mary Pyott; Stow, Douglas A; An, Li

    2017-10-01

    We examine spatial patterns of conifer tree mortality and their changes over time for the montane mixed-conifer forests of San Diego County. These forest areas have recently experienced extensive tree mortality due to multiple factors. A spatial contextual image processing approach was utilized with high spatial resolution digital airborne imagery to map dead trees for the years 1997, 2000, 2002, and 2005 for three study areas: Palomar, Volcan, and Laguna mountains. Plot-based fieldwork was conducted to further assess mortality patterns. Mean mortality remained static from 1997 to 2002 (4, 2.2, and 4.2 trees/ha for Palomar, Volcan, and Laguna) and then increased by 2005 to 10.3, 9.7, and 5.2 trees/ha, respectively. The increase in mortality between 2002 and 2005 represents the temporal pattern of a discrete disturbance event, attributable to the 2002-2003 drought. Dead trees are significantly clustered for all dates, based on spatial cluster analysis, indicating that they form distinct groups, as opposed to spatially random single dead trees. Other tests indicate no directional shift or spread of mortality over time, but rather an increase in density. While general temporal and spatial mortality processes are uniform across all study areas, the plot-based species and quantity distribution of mortality, and diameter distributions of dead vs. living trees, vary by study area. The results of this study improve our understanding of stand- to landscape-level forest structure and dynamics, particularly by examining them from the multiple perspectives of field and remotely sensed data. © 2017 by the Ecological Society of America.

  9. Atomistic simulation on charge mobility of amorphous tris(8-hydroxyquinoline) aluminum (Alq3): origin of Poole-Frenkel-type behavior.

    PubMed

    Nagata, Yuki; Lennartz, Christian

    2008-07-21

    The atomistic simulation of the charge transfer process in an amorphous Alq3 system is reported. By employing electrostatic potential charges, we calculate site energies and find that the standard deviation of the site energy distribution is about twice as large as predicted in previous research. The charge mobility is calculated via the Miller-Abrahams formalism and the master equation approach. We find that the wide site energy distribution governs the Poole-Frenkel-type behavior of charge mobility with electric field, while spatially correlated site energies are not a dominant mechanism of Poole-Frenkel behavior in the range from 2×10^5 to 1.4×10^6 V/cm. We also reveal that randomly meshed connectivities are, in principle, required to account for the Poole-Frenkel mechanism. Charge carriers follow a zigzag pathway at low electric field, but a straight pathway along the field when a high electric field is applied. In the space-charge-limited current scheme, the charge-carrier density increases with electric field strength, so that the non-linear behavior of charge mobility is enhanced through the strong charge-carrier density dependence of the charge mobility.
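The Miller-Abrahams hopping rate underlying the mobility calculation has a standard closed form: downward hops in energy occur at the tunnelling rate, upward hops carry an additional Boltzmann factor. The sketch below uses hypothetical parameter values (attempt frequency, inverse localization length), not those of the study:

```python
import math

def miller_abrahams_rate(r_nm, dE_eV, T=300.0, nu0=1e12, gamma_inv_nm=5.0):
    """Miller-Abrahams hopping rate between two localized sites.

    r_nm         : site separation (nm)
    dE_eV        : E_final - E_initial (eV)
    nu0          : attempt frequency (1/s), hypothetical value
    gamma_inv_nm : inverse localization length (1/nm), hypothetical value
    """
    kT = 8.617333e-5 * T                           # Boltzmann constant in eV/K
    rate = nu0 * math.exp(-2.0 * gamma_inv_nm * r_nm)   # tunnelling prefactor
    if dE_eV > 0.0:                                # upward hops are thermally activated
        rate *= math.exp(-dE_eV / kT)
    return rate
```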

  10. Spatial Distribution and Coexisting Patterns of Adults and Nymphs of Tibraca limbativentris (Hemiptera: Pentatomidae) in Paddy Rice Fields.

    PubMed

    Alves, Tavvs M; Maia, Aline H N; Barrigossi, José A F

    2016-12-01

    The rice stem stink bug, Tibraca limbativentris Stål (Hemiptera: Pentatomidae), is a primary insect pest of paddy rice in South America. Knowledge of its spatial distribution can support sampling plans needed for timely decisions about pest control. This study aimed to investigate the spatial distribution of adults and nymphs of T. limbativentris and determine the spatial coexistence of these stages of development. Fifteen paddy rice fields were scouted once each season to estimate insect densities. Scouting was performed on regular grids with sampling points separated by ∼50 m. Moran's I and semivariograms were used to determine spatial distribution patterns. Spatial coexistence of nymphs and adults was explored via spatial point processes. Adults and nymphs typically had contrasting spatial distribution patterns within the same field; however, the frequency of aggregation was not different between these developmental stages. Adults and nymphs were aggregated in seven fields and randomly distributed in the other eight fields. Uniform distribution of adults or nymphs was not observed. The study-wide semivariogram ranges were ∼40 m for adults and ∼55 m for nymphs. Nymphs and adults spatially coexisted in 67% of the fields. Coexisting patterns were classified as one of the following processes: stage-independent, bidirectional attractive, unidirectional attractive, bidirectional inhibiting, or unidirectional inhibiting. The information presented herein can be important for developing sampling plans for decision-making, implementing tactics for site-specific management, and monitoring areas free of T. limbativentris. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
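An empirical isotropic semivariogram of the kind used to estimate the ∼40-55 m ranges above can be sketched as follows; the coordinates and counts are hypothetical:

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Isotropic empirical semivariogram gamma(h) from point samples.

    coords    : (n, 2) sample-point coordinates (e.g. metres)
    values    : (n,) counts or densities at those points
    lag_edges : bin edges for the separation distance h
    """
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)        # all unordered pairs
    h = np.hypot(*(coords[i] - coords[j]).T)        # pairwise distances
    sq = 0.5 * (values[i] - values[j]) ** 2         # semivariance contributions
    gamma = np.full(len(lag_edges) - 1, np.nan)
    for b in range(len(lag_edges) - 1):
        m = (h >= lag_edges[b]) & (h < lag_edges[b + 1])
        if m.any():
            gamma[b] = sq[m].mean()
    return gamma

# Hypothetical 2x2 grid of sampling points ~50 m apart.
coords = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
counts = np.array([0.0, 2.0, 2.0, 0.0])
gamma = empirical_semivariogram(coords, counts, [0.0, 60.0, 120.0])
```

The range is then read off as the lag at which gamma(h) levels out at the sill.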

  11. Evaluation of Deep Learning Representations of Spatial Storm Data

    NASA Astrophysics Data System (ADS)

    Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.

    2017-12-01

    The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. 
Further evaluations are being performed to determine how the choice of input variables affects the results.
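The PCA encoding step described above reduces each flattened storm patch to a small number of linearly uncorrelated components; a minimal SVD-based sketch (not the study's code, and with hypothetical patch dimensions) is:

```python
import numpy as np

def pca_encode(patches, n_components):
    """Encode flattened storm patches with PCA via SVD on centred data.

    patches      : (n_samples, n_pixels) array of flattened spatial fields
    n_components : size of the low-dimensional encoding
    Returns (codes, components, mean) so patches can be reconstructed.
    """
    mean = patches.mean(axis=0)
    X = patches - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = Vt[:n_components]        # orthogonal spatial patterns, by variance
    codes = X @ comps.T              # low-dimensional representation
    return codes, comps, mean

def pca_decode(codes, comps, mean):
    """Reconstruct patches from their low-dimensional codes."""
    return codes @ comps + mean

# Hypothetical data: 10 storm patches of 6 pixels each.
rng = np.random.default_rng(0)
patches = rng.standard_normal((10, 6))
codes, comps, mean = pca_encode(patches, n_components=6)
```

The `codes` array is what a downstream logistic regression or random forest would consume in place of the raw pixels.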

  12. BATMAN: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2017-04-01

    This paper describes the Bayesian Technique for Multi-image Analysis (BATMAN), a novel image-segmentation technique based on Bayesian statistics that characterizes any astronomical data set containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e., identical signal within the errors). We illustrate its operation and performance with a set of test cases including both synthetic and real integral-field spectroscopic data. The output segmentations adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. The quality of the recovered signal represents an improvement with respect to the input, especially in regions with low signal-to-noise ratio. However, the algorithm may be sensitive to small-scale random fluctuations, and its performance in the presence of spatial gradients is limited. Due to these effects, errors may be underestimated by as much as a factor of 2. Our analysis reveals that the algorithm prioritizes conservation of all the statistically significant information over noise reduction, and that the precise choice of the input data has a crucial impact on the results. Hence, the philosophy of BaTMAn is not to be used as a 'black box' to improve the signal-to-noise ratio, but as a new approach to characterize spatially resolved data prior to their analysis. The source code is publicly available at http://astro.ft.uam.es/SELGIFS/BaTMAn.

  13. Comparison of Different Machine Learning Algorithms for Lithological Mapping Using Remote Sensing Data and Morphological Features: A Case Study in Kurdistan Region, NE Iraq

    NASA Astrophysics Data System (ADS)

    Othman, Arsalan; Gloaguen, Richard

    2015-04-01

    Topographic effects and complex vegetation cover hinder lithological classification in mountainous regions, not only in the field but also with reflectance remote sensing data. The area of interest, "Bardi-Zard", is located in NE Iraq. It is part of the Zagros orogenic belt, where seven lithological units crop out, and is known for its chromite deposits. The aim of this study is to compare three machine learning algorithms (MLAs): Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF), in a supervised lithology classification task using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite data, its derivatives, spatial information (spatial coordinates), and geomorphic data. We emphasize the improvement in remote sensing lithological mapping accuracy that arises from integrating geomorphic features and spatial information (spatial coordinates) into the classifications. This study finds that RF outperforms the ML and SVM algorithms in almost all of the sixteen dataset combinations tested. The overall accuracy of the best dataset combination with the RF map for all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and ~76.09% respectively, and the kappa coefficient is ~0.76. TPI is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases the lithological classification accuracy.
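The overall accuracy and kappa coefficient used above to compare the classified maps can be computed from a confusion matrix; a minimal sketch with hypothetical reference and predicted labels:

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Kappa coefficient of a classified map against reference labels.

    Compares observed agreement with the agreement expected by chance
    from the row and column marginals of the confusion matrix.
    """
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[int(t), int(p)] += 1
    n = cm.sum()
    po = np.trace(cm) / n                          # observed (overall) agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical labels for a 3-class map.
y_ref  = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]
kappa = cohens_kappa(y_ref, y_pred, n_classes=3)
```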

  14. Effects of strategy-migration direction and noise in the evolutionary spatial prisoner's dilemma

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-Xi; Holme, Petter

    2009-08-01

    Spatial games are crucial for understanding patterns of cooperation in nature (and to some extent society). They are known to be more sensitive to local symmetries than, e.g., spin models. This paper concerns the evolution of the prisoner's dilemma game on regular lattices with three different types of neighborhoods—the von Neumann, Moore, and kagomé types. We investigate two kinds of dynamics by which the players update their strategies (unconditional cooperation or defection). Depending on the payoff difference, an individual can adopt the strategy of a random neighbor [a voter-model-like dynamics (VMLD)] or impose its strategy on a random neighbor, i.e., invasion-process-like dynamics (IPLD). In particular, we focus on the effects of noise, in combination with the strategy dynamics, on the evolution of cooperation. We find that VMLD, compared to IPLD, better supports the spreading and sustaining of cooperation. We see that noise has nontrivial effects on the evolution of cooperation: maximum cooperation density can be realized at a medium noise level, in the limit of zero noise, or in both these regions. The temptation to defect and the local interaction structure determine the outcome. Especially in the low-noise limit, the local interaction structure plays a crucial role in determining the fate of cooperators. We elucidate both effects by numerical simulations and mean-field cluster approximation methods.

  15. Visualizing Time-Varying Distribution Data in EOS Application

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei

    2004-01-01

    In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable, and has multiple samples which are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets; and investigated different ways of interpreting and clustering the data.

  16. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. The Temporal Resolution of Laser Induced Fluorescence Photobleaching Anemometer

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Yang, Fang; Wang, Guiren

    2014-11-01

    In microfluidics, electrokinetic flows are widely used in micromixer design. However, no existing velocimeter can measure random velocity fluctuations at simultaneously high temporal and spatial resolution in such complicated flows. We recently introduced the laser induced fluorescence photobleaching anemometer (LIFPA), which has been successfully used to measure the velocity field in AC electrically driven microflows. Here, we theoretically study the temporal resolution of LIFPA and experimentally verify that it can simultaneously achieve ultrahigh temporal (~4 μs) and spatial (~203 nm) resolution and can measure velocity fluctuations up to at least 2 kHz, corresponding to a wave number of about 6 × 10^6 1/m, in an electrokinetically forced unsteady flow. LIFPA measurements are also compared with the widely used micro particle image velocimetry (μPIV). We found that at the inlet, due to multiple uncertainties, the velocity fluctuations measured by μPIV exhibit appreciably smaller values than those measured by LIFPA. Downstream, however, where velocity fluctuations are much lower than at the inlet and the uncertainty from the complicated electric field acting on particles becomes smaller, LIFPA and μPIV yield similar measurements. The work was supported by NSF under grants CAREER CBET-0954977 and MRI CBET-1040227.

  18. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    PubMed

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.
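The computational efficiency of the Gaussian Markov random field approach comes from the sparsity of the precision matrix. The sketch below builds a first-order lattice precision (a hypothetical stand-in for the SPDE-derived one) and draws a sample via its Cholesky factor; a dense matrix is used here for clarity, whereas real applications use sparse storage:

```python
import numpy as np

def grid_precision(n, kappa=1.0):
    """Precision matrix Q of a first-order GMRF on an n x n lattice:
    the graph Laplacian plus kappa^2 on the diagonal (hypothetical model)."""
    N = n * n
    Q = np.zeros((N, N))
    for r in range(n):
        for c in range(n):
            k = r * n + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    Q[k, k] += 1.0           # Laplacian diagonal
                    Q[k, rr * n + cc] -= 1.0 # neighbour coupling
            Q[k, k] += kappa ** 2            # keeps Q positive definite
    return Q

def sample_gmrf(Q, rng):
    """Draw x ~ N(0, Q^{-1}) using the Cholesky factor of the precision."""
    L = np.linalg.cholesky(Q)                # Q = L L^T
    z = rng.standard_normal(Q.shape[0])
    return np.linalg.solve(L.T, z)           # x = L^{-T} z has covariance Q^{-1}

Q = grid_precision(4, kappa=0.5)
x = sample_gmrf(Q, np.random.default_rng(1))
```

Because `Q` couples only lattice neighbours, its Cholesky factor stays sparse under a good ordering, which is what makes the hierarchical model computationally feasible.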

  19. Soil Parameter Mapping and Ad Hoc Power Analysis to Increase Blocking Efficiency Prior to Establishing a Long-Term Field Experiment.

    PubMed

    Collins, Doug; Benedict, Chris; Bary, Andy; Cogger, Craig

    2015-01-01

    The spatial heterogeneity of soil and weed populations poses a challenge to researchers. Below-ground variability is more difficult to discern than aboveground variability without a strategic soil sampling pattern. While blocking is commonly used to control environmental variation, this strategy is rarely informed by data about current soil conditions. Fifty georeferenced sites were located in a 0.65 ha area prior to establishing a long-term field experiment. Soil organic matter (OM) and weed seed bank populations were analyzed at each site; the spatial structure was modeled with semivariograms and interpolated with kriging to map each surface. These maps were used to formulate three strategic blocking patterns, and the efficiency of each pattern was compared to a completely randomized design and a west-to-east model not informed by soil variability. Compared to OM, weeds were more variable across the landscape and had a shorter range of autocorrelation, so models designed to increase blocking efficiency yielded a smaller gain in power. Weeds and OM were not correlated, so no model examined improved power equally for both parameters. Compared to the west-to-east blocking pattern, the final blocking pattern chosen resulted in a 7-fold increase in power for OM and a 36% increase in power for weeds.

  20. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†

    PubMed Central

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-01-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370

  1. A theoretical framework for modeling dilution enhancement of non-reactive solutes in heterogeneous porous media.

    PubMed

    de Barros, F P J; Fiori, A; Boso, F; Bellin, A

    2015-01-01

    Spatial heterogeneity of the hydraulic properties of geological porous formations leads to erratically shaped solute clouds, thus increasing the edge area of the solute body and augmenting the dilution rate. In this study, we provide a theoretical framework to quantify dilution of a non-reactive solute within a steady state flow as affected by the spatial variability of the hydraulic conductivity. Embracing the Lagrangian concentration framework, we obtain explicit semi-analytical expressions for the dilution index as a function of the structural parameters of the random hydraulic conductivity field, under the assumptions of uniform-in-the-average flow, small injection source and weak-to-mild heterogeneity. Results show how the dilution enhancement of the solute cloud is strongly dependent on both the statistical anisotropy ratio and the heterogeneity level of the porous medium. The explicit semi-analytical solution also captures the temporal evolution of the dilution rate; for the early- and late-time limits, the proposed solution recovers previous results from the literature, while at intermediate times it reflects the increasing interplay between large-scale advection and local-scale dispersion. The performance of the theoretical framework is verified with high resolution numerical results and successfully tested against the Cape Cod field data. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Spatial Distribution of Adult Anthonomus grandis Boheman (Coleoptera: Curculionidae) and Damage to Cotton Flower Buds Due to Feeding and Oviposition.

    PubMed

    Grigolli, J F J; Souza, L A; Fernandes, M G; Busoli, A C

    2017-08-01

    The cotton boll weevil Anthonomus grandis Boheman (Coleoptera: Curculionidae) is the main pest of cotton crops worldwide, directly affecting cotton production. In order to establish a sequential sampling plan, it is crucial to understand the spatial distribution of the pest population and the damage it causes to the crop through the different developmental stages of cotton plants. Therefore, this study investigated the spatial distribution of adults in the cultivation area and their oviposition and feeding behavior throughout the development of the cotton plants. The experiment was conducted in Maracaju, Mato Grosso do Sul, Brazil, in the 2012/2013 and 2013/2014 growing seasons, in an area of 10,000 m² planted with the cotton cultivar FM 993. The experimental area was divided into 100 plots of 100 m² (10 × 10 m) each, and five plants per plot were sampled weekly throughout the crop cycle, recording the number of flower buds with feeding and oviposition punctures and the number of adult A. grandis. After determining the aggregation indices (variance/mean ratio, Morisita's index, exponent k of the negative binomial distribution, and Green's coefficient) and fitting the frequencies observed in the field to theoretical distributions (Poisson, negative binomial, and positive binomial) using the chi-squared test, it was observed that flower buds with punctures derived from feeding, oviposition, and feeding + oviposition showed an aggregated distribution in the cultivation area until 85 days after emergence and a random distribution after this stage. The adults of A. grandis presented a random distribution in the cultivation area.
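
    The first two aggregation indices named above have simple closed forms. The sketch below (with made-up quadrat counts, not the study's data) computes the variance/mean ratio and Morisita's index:

```python
import numpy as np

def variance_mean_ratio(counts):
    """>1 suggests aggregation, ~1 random (Poisson), <1 uniform."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

def morisita_index(counts):
    """Morisita's I_d = n * sum(x*(x-1)) / (N*(N-1)); >1 indicates aggregation."""
    counts = np.asarray(counts, dtype=float)
    n, N = counts.size, counts.sum()
    return n * np.sum(counts * (counts - 1)) / (N * (N - 1))

# hypothetical counts of punctured flower buds in 10 plots
aggregated = np.array([0, 0, 12, 1, 0, 9, 0, 14, 0, 1])
vmr = variance_mean_ratio(aggregated)
imor = morisita_index(aggregated)
```

    Both indices exceed 1 for this clumped example, as they would for the aggregated early-season pattern reported above.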

  3. Predicting active-layer soil thickness using topographic variables at a small watershed scale

    PubMed Central

    Li, Aidi; Tan, Xing; Wu, Wei; Liu, Hongbin; Zhu, Jie

    2017-01-01

    Knowledge about the spatial distribution of active-layer (AL) soil thickness is indispensable for ecological modeling, precision agriculture, and land resource management. However, it is difficult to obtain details on AL soil thickness using conventional soil survey methods. The objective of this research is to investigate the possibility and accuracy of mapping the spatial distribution of AL soil thickness with a random forest (RF) model using terrain variables at a small watershed scale. A total of 1113 soil samples collected from the slope fields were randomly divided into calibration (770 soil samples) and validation (343 soil samples) sets. Seven terrain variables including elevation, aspect, relative slope position, valley depth, flow path length, slope height, and topographic wetness index were derived from a 30 m digital elevation model (DEM). The RF model was compared with multiple linear regression (MLR), geographically weighted regression (GWR) and support vector machine (SVM) approaches based on the validation set. Model performance was evaluated by the precision criteria of mean error (ME), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). Comparative results showed that RF outperformed the MLR, GWR and SVM models, giving better values of ME (0.39 cm), MAE (7.09 cm), and RMSE (10.85 cm) and a higher R² (62%). The sensitivity analysis demonstrated that the DEM had less uncertainty than the AL soil thickness. The outcome of the RF model indicated that elevation, flow path length and valley depth were the most important factors affecting AL soil thickness variability across the watershed. These results demonstrate that the RF model is a promising method for predicting the spatial distribution of AL soil thickness from terrain parameters. PMID:28877196
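
    The four precision criteria used to compare the models are standard and easy to restate in code. A minimal sketch, with hypothetical observed/predicted thickness values rather than the study's data:

```python
import numpy as np

def regression_scores(obs, pred):
    """ME, MAE, RMSE and R^2 (coefficient of determination)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    me = err.mean()                                   # signed bias
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r2 = 1 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    return me, mae, rmse, r2

# toy validation set (hypothetical AL thicknesses in cm)
obs = np.array([30.0, 45.0, 52.0, 61.0, 38.0])
pred = np.array([33.0, 44.0, 50.0, 58.0, 40.0])
me, mae, rmse, r2 = regression_scores(obs, pred)
```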

  4. Effects of extreme climatic events on small-scale spatial patterns: a 20-year study of the distribution of a desert spider.

    PubMed

    Birkhofer, Klaus; Henschel, Joh; Lubin, Yael

    2012-11-01

    Individuals of most animal species are non-randomly distributed in space. Extreme climatic events are often ignored as potential drivers of distribution patterns, and the role of such events is difficult to assess. Seothyra henscheli (Araneae, Eresidae) is a sedentary spider found in the Namib dunes in Namibia. The spider constructs a sticky-edged silk web on the sand surface, connected to a vertical, silk-lined burrow. Above-ground web structures can be damaged by strong winds or heavy rainfall, and during dispersal spiders are susceptible to environmental extremes. Locations of burrows were mapped in three field sites in 16 out of 20 years from 1987 to 2007, and these grid-based data were used to identify the relationship between spatial patterns, climatic extremes and sampling year. According to Morisita's index, individuals had an aggregated distribution in most years and field sites, and Geary's C suggests clustering up to scales of 2 m. Individuals were more aggregated in years with high maximum wind speed and low annual precipitation. Our results suggest that clustering is a temporally stable property of populations that holds even under fluctuating burrow densities. Climatic extremes, however, affect the intensity of clustering behaviour: individuals seem to be better protected in field sites with many conspecific neighbours. We suggest that burrow-site selection is driven at least partly by conspecific cuing, and this behaviour may protect populations from collapse during extreme climatic events.
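
    Geary's C, used above to detect clustering up to scales of 2 m, can be sketched with binary distance-band weights; the grid and values below are illustrative, not the mapped burrow data:

```python
import numpy as np

def gearys_c(coords, x, radius):
    """Geary's C with binary weights w_ij = 1 if 0 < d_ij <= radius.
    C < 1: positive spatial autocorrelation (clustering); C ~ 1: random."""
    x = np.asarray(x, float)
    n = x.size
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= radius)).astype(float)
    num = (n - 1) * (w * (x[:, None] - x[None, :]) ** 2).sum()
    den = 2 * w.sum() * ((x - x.mean()) ** 2).sum()
    return num / den

# toy grid of counts: a smooth gradient, i.e. strongly clustered values
xs, ys = np.meshgrid(np.arange(6), np.arange(6), indexing="ij")
coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
x = (xs + ys).ravel().astype(float)   # spatially smooth field
c = gearys_c(coords, x, radius=1.0)   # well below 1 for this field
```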

  5. A Discrete Fracture Network Model with Stress-Driven Nucleation and Growth

    NASA Astrophysics Data System (ADS)

    Lavoine, E.; Darcel, C.; Munier, R.; Davy, P.

    2017-12-01

    The realism of Discrete Fracture Network (DFN) models, beyond bulk statistical properties, relies on the spatial organization of fractures, which purely stochastic DFN models do not reproduce. Realism can be improved by injecting prior information into DFNs from a better knowledge of the geological fracturing processes. We first develop a model using simple kinematic rules to mimic the growth of fractures from nucleation to arrest, in order to evaluate the consequences of the DFN structure on network connectivity and flow properties. The model generates fracture networks with power-law scaling distributions and a percentage of T-intersections that are consistent with field observations. Nevertheless, a larger complexity arising from the spatial variability of natural fracture positions cannot be explained by the random nucleation process. We propose to introduce stress-driven nucleation into the timewise process of this kinematic model to study the correlations between nucleation, growth and existing fracture patterns. The method uses the stress field generated by existing fractures and the remote stress as input for Monte Carlo sampling of nuclei centers at each time step. Networks so generated are found to have correlations over a large range of scales, with a correlation dimension that varies with time and with the function relating nucleation probability to stress. A sensitivity analysis of the input parameters has been performed in 3D to quantify the influence of fracture and remote stress field orientations.

  6. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  7. Radial diffusion in magnetodiscs. [charged particle motion in planetary or stellar magnetosphere

    NASA Technical Reports Server (NTRS)

    Birmingham, T. J.

    1985-01-01

    The orbits of charged particles in magnetodiscs are considered. The bounce motion is assumed adiabatic except for transits of a small equatorial region of weak magnetic field strength and high field curvature. Previous theory and modeling have shown that particles scatter randomly in pitch angle with each passage through the equator. A peaked distribution thus diffuses in pitch angle on the time scale of many bounces. It is argued in this paper that spatial diffusion is a further consequence when the magnetodisc has a longitudinal asymmetry. A general expression for DLL, the diffusion of equatorial crossing radii, is derived. DLL is evaluated explicitly for ions in Jupiter's 20-35 radii magnetodisc, assumed to be represented by Connerney et al.'s (1982) Voyager model plus a small image dipole asymmetry. Rates are energy, species, and space dependent but can average as much as a few tenths of a planetary radius per bounce period.

  8. A stochastic-dynamic model for global atmospheric mass field statistics

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.

    1981-01-01

    A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. Three solution methods were compared. First, the solution was expanded in spherical harmonics and the correlation function was computed analytically from the expansion coefficients. Second, the finite-difference equivalent was solved using a fast Poisson solver and the correlation function was computed by stratified sampling of the individual realizations of F(omega) and hence of phi(omega). Third, a higher-order equation for the correlation function gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c sub o which gives the best fit to the data is close to the value expected from dynamical considerations.
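
    As a generic illustration of the fast Poisson solver step (a sketch only: it assumes a doubly periodic Cartesian grid and a manufactured forcing, not the paper's spherical geometry), a spectral solve of ∇²φ = f via the FFT looks like this:

```python
import numpy as np

n, L = 64, 2 * np.pi
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # integer angular wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                 # avoid divide-by-zero at the mean mode

x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = -2.0 * np.sin(X) * np.sin(Y)               # forcing with known solution sin(x)sin(y)

phi_hat = -np.fft.fft2(f) / k2                 # -k^2 phi_hat = f_hat
phi_hat[0, 0] = 0.0                            # fix the arbitrary constant (zero mean)
phi = np.real(np.fft.ifft2(phi_hat))           # recovers sin(X)*sin(Y)
```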

  9. A quasi-static model of global atmospheric electricity. I - The lower atmosphere

    NASA Technical Reports Server (NTRS)

    Hays, P. B.; Roble, R. G.

    1979-01-01

    A quasi-steady model of global lower atmospheric electricity is presented. The model considers thunderstorms as dipole electric generators that can be randomly distributed in various regions and that are the only source of atmospheric electricity and includes the effects of orography and electrical coupling along geomagnetic field lines in the ionosphere and magnetosphere. The model is used to calculate the global distribution of electric potential and current for model conductivities and assumed spatial distributions of thunderstorms. Results indicate that large positive electric potentials are generated over thunderstorms and penetrate to ionospheric heights and into the conjugate hemisphere along magnetic field lines. The perturbation of the calculated electric potential and current distributions during solar flares and subsequent Forbush decreases is discussed, and future measurements of atmospheric electrical parameters and modifications of the model which would improve the agreement between calculations and measurements are suggested.

  10. The role of educational trainings in the diffusion of smart metering platforms: An agent-based modeling approach

    NASA Astrophysics Data System (ADS)

    Weron, Tomasz; Kowalska-Pyzalska, Anna; Weron, Rafał

    2018-09-01

    Using an agent-based modeling approach, we examine the impact of educational programs and trainings on the diffusion of smart metering platforms (SMPs). We also investigate how social responses, like conformity or independence, mass-media advertising, and opinion stability impact the transition from the predecisional and preactional behavioral stages (opinion formation) to the actional and postactional stages (decision-making) of individual electricity consumers. We find, first, that mass-media advertising (i.e., a global external field) and educational trainings (i.e., a local external field) lead to similar, though not identical, adoption rates; second, that spatially concentrated 'group' trainings are never worse than randomly scattered ones, and for a certain range of parameters are significantly better; and finally, that by manipulating the time required by an agent to make a decision, e.g., through promotions, we can speed up or slow down the diffusion of SMPs.

  11. Experimental nonlinear dynamical studies in cesium magneto-optical trap using time-series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anwar, M., E-mail: mamalik2000@gmail.com; Islam, R.; Faisal, M.

    2015-03-30

    A magneto-optical trap of neutral atoms is essentially a dissipative quantum system. The fast thermal atoms continuously dissipate their energy to the environment via spontaneous emission during cooling. The atoms are, therefore, strongly coupled with the vacuum reservoir and the laser field. The vacuum fluctuations as well as the field fluctuations are imparted to the atoms as random photon recoils. Consequently, the external and internal dynamics of the atoms become stochastic. In this paper, we have investigated the stochastic dynamics of the atoms in a magneto-optical trap during the loading process. The time series analysis of the fluorescence signal shows that the dynamics of the atoms evolves, like all dissipative systems, from the deterministic to the chaotic regime. The subsequent disappearance and revival of chaos was attributed to chaos synchronization between spatially separated atoms in the magneto-optical trap.

  12. Dynamical behavior of the random field on the pulsating and snaking solitons in cubic-quintic complex Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Bakhtiar, Nurizatul Syarfinas Ahmad; Abdullah, Farah Aini; Hasan, Yahya Abu

    2017-08-01

    In this paper, we consider the dynamical behaviour of a random field on the pulsating and snaking solitons in dissipative systems described by the one-dimensional cubic-quintic complex Ginzburg-Landau equation (cqCGLE). The behaviour was simulated by adding a random field to the initial pulse. We then solve the equation numerically, fixing the initial amplitude profile for the pulsating and snaking solitons without loss of generality. To create the random field, we choose 0 ≤ ɛ ≤ 1.0. As a result, multiple soliton trains are formed when the random field is applied to a pulse-like initial profile for the parameters of the pulsating and snaking solitons. The results also show the effects of varying the random field on the transient energy peaks in pulsating and snaking solitons.

  13. Accounting for time- and space-varying changes in the gravity field to improve the network adjustment of relative-gravity data

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Ferre, Ty P.A.

    2015-01-01

    The relative gravimeter is the primary terrestrial instrument for measuring spatially and temporally varying gravitational fields. The background noise of the instrument—that is, non-linear drift and random tares—typically requires some form of least-squares network adjustment to integrate data collected during a campaign that may take several days to weeks. Here, we present an approach to remove the change in the observed relative-gravity differences caused by hydrologic or other transient processes during a single campaign, so that the adjusted gravity values can be referenced to a single epoch. The conceptual approach is an example of coupled hydrogeophysical inversion, by which a hydrologic model is used to inform and constrain the geophysical forward model. The hydrologic model simulates the spatial variation of the rate of change of gravity as either a linear function of distance from an infiltration source, or using a 3-D numerical groundwater model. The linear function can be included in and solved for as part of the network adjustment. Alternatively, the groundwater model is used to predict the change of gravity at each station through time, from which the accumulated gravity change is calculated and removed from the data prior to the network adjustment. Data from a field experiment conducted at an artificial-recharge facility are used to verify our approach. Maximum gravity change due to hydrology (observed using a superconducting gravimeter) during the relative-gravity field campaigns was up to 2.6 μGal d⁻¹, each campaign was between 4 and 6 d and one month elapsed between campaigns. The maximum absolute difference in the estimated gravity change between two campaigns, two months apart, using the standard network adjustment method and the new approach, was 5.5 μGal.
The maximum gravity change between the same two campaigns was 148 μGal, and spatial variation in gravity change revealed zones of preferential infiltration and areas of relatively high groundwater storage. The accommodation for spatially varying gravity change would be most important for long-duration campaigns, campaigns with very rapid changes in gravity and (or) campaigns where especially precise observed relative-gravity differences are used in the network adjustment.
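
    A least-squares network adjustment with a linear drift term, of the general kind described above, can be sketched as follows; the station names, readings and single drift rate are all hypothetical, not the campaign data:

```python
import numpy as np

# Stations: A (datum, g fixed to 0), B, C. Each observation is a relative-
# gravity difference contaminated by a linear meter drift:
#   obs = g[j] - g[i] + drift * dt
# All readings below are hypothetical, in microGal.
obs = [            # (from station i, to station j, elapsed hours, observed diff)
    (0, 1, 1.0,  52.0),
    (1, 2, 1.0,  32.0),
    (2, 0, 1.5, -77.0),   # loop closure back to the datum station
    (0, 1, 1.0,  52.0),   # repeat occupation
]
n_sta = 3
A = np.zeros((len(obs), n_sta))   # columns: g[B], g[C], drift rate
b = np.zeros(len(obs))
for k, (i, j, dt, dg) in enumerate(obs):
    if i > 0:
        A[k, i - 1] -= 1.0        # datum station A carries no unknown
    if j > 0:
        A[k, j - 1] += 1.0
    A[k, -1] = dt                 # drift accumulates with elapsed time
    b[k] = dg
gB, gC, drift = np.linalg.lstsq(A, b, rcond=None)[0]
```

    The coupled hydrogeophysical step in the paper amounts to subtracting a modeled hydrologic gravity change from `b` (or adding a distance-dependent rate column to `A`) before this solve.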

  14. Rapid mapping of polarization switching through complete information acquisition

    DOE PAGES

    Somnath, Suhas; Belianinov, Alex; Kalinin, Sergei V.; ...

    2016-12-02

    Polarization switching in ferroelectric and multiferroic materials underpins a broad range of current and emergent applications, ranging from random access memories to field-effect transistors and tunnelling devices. Switching in these materials is exquisitely sensitive to local defects and microstructure on the nanometre scale, necessitating spatially resolved high-resolution studies of these phenomena. Classical piezoresponse force microscopy and spectroscopy, although providing the necessary spatial resolution, are fundamentally limited in data acquisition rates and energy resolution. This limitation stems from their two-tiered measurement protocol that combines slow (~1 s) switching and fast (~10 kHz–1 MHz) detection waveforms. Here we develop an approach for rapid probing of ferroelectric switching using direct strain detection of the material response to probe bias. This approach, facilitated by high-sensitivity electronics and adaptive filtering, enables spectroscopic imaging at a rate 3,504 times faster than the current state of the art, achieving high-veracity imaging of polarization dynamics in complex microstructures.

  15. DNA fragmentation induced by Fe ions in human cells: shielding influence on spatially correlated damage

    NASA Technical Reports Server (NTRS)

    Antonelli, F.; Belli, M.; Campa, A.; Chatterjee, A.; Dini, V.; Esposito, G.; Rydberg, B.; Simone, G.; Tabocchini, M. A.

    2004-01-01

    Outside the magnetic field of the Earth, high-energy heavy ions constitute a relevant part of the biologically significant dose to astronauts during very long journeys through space. The typical pattern of energy deposition in matter by heavy ions on the microscopic scale is believed to produce spatially correlated damage in the DNA, which is critical for radiobiological effects. We have investigated the influence of a lucite shielding on the initial production of very small DNA fragments in human fibroblasts irradiated with 1 GeV/u iron (Fe) ions. We also used gamma rays as reference radiation. Our results show: (1) a lower effect per incident ion when the shielding is used; (2) a higher induction of DNA double strand breaks (DSBs) by Fe ions than by gamma rays in the size range 1-23 kbp; (3) a non-random DNA DSB induction by Fe ions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  16. Crowding with conjunctions of simple features.

    PubMed

    Põder, Endel; Wagemans, Johan

    2007-11-20

    Several recent studies have related crowding with the feature integration stage in visual processing. In order to understand the mechanisms involved in this stage, it is important to use stimuli that have several features to integrate, and these features should be clearly defined and measurable. In this study, Gabor patches were used as target and distractor stimuli. The stimuli differed in three dimensions: spatial frequency, orientation, and color. A group of 3, 5, or 7 objects was presented briefly at 4 deg eccentricity of the visual field. The observers' task was to identify the object located in the center of the group. A strong effect of the number of distractors was observed, consistent with various spatial pooling models. The analysis of incorrect responses revealed that these were a mix of feature errors and mislocalizations of the target object. Feature errors were not purely random, but biased by the features of distractors. We propose a simple feature integration model that predicts most of the observed regularities.

  17. Advances in feature selection methods for hyperspectral image processing in food industry applications: a review.

    PubMed

    Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zeng, Xin-An

    2015-01-01

    There is an increased interest in applications of hyperspectral imaging (HSI) for assessing food quality, safety, and authenticity. HSI provides an abundance of spatial and spectral information from foods by combining spectroscopy and imaging, resulting in hundreds of contiguous wavebands for each spatial position of a food sample, a situation also known as the curse of dimensionality. It is desirable to employ feature selection algorithms to decrease the computational burden and increase predictive accuracy, which is especially relevant to the development of online applications. Recently, a variety of feature selection algorithms have been proposed; they can be categorized into three groups based on the search strategy, namely complete search, heuristic search and random search. This review introduces the fundamentals of each algorithm, illustrates its applications in hyperspectral data analysis in the food field, and discusses the advantages and disadvantages of these algorithms. It is hoped that this review will provide a guideline for feature selection and data processing in the future development of hyperspectral imaging techniques for foods.

  18. Rapid mapping of polarization switching through complete information acquisition

    PubMed Central

    Somnath, Suhas; Belianinov, Alex; Kalinin, Sergei V.; Jesse, Stephen

    2016-01-01

    Polarization switching in ferroelectric and multiferroic materials underpins a broad range of current and emergent applications, ranging from random access memories to field-effect transistors, and tunnelling devices. Switching in these materials is exquisitely sensitive to local defects and microstructure on the nanometre scale, necessitating spatially resolved high-resolution studies of these phenomena. Classical piezoresponse force microscopy and spectroscopy, although providing necessary spatial resolution, are fundamentally limited in data acquisition rates and energy resolution. This limitation stems from their two-tiered measurement protocol that combines slow (∼1 s) switching and fast (∼10 kHz–1 MHz) detection waveforms. Here we develop an approach for rapid probing of ferroelectric switching using direct strain detection of material response to probe bias. This approach, facilitated by high-sensitivity electronics and adaptive filtering, enables spectroscopic imaging at a rate 3,504 times faster than the current state of the art, achieving high-veracity imaging of polarization dynamics in complex microstructures. PMID:27910941

  19. Characteristic eddy decomposition of turbulence in a channel

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Moser, Robert D.

    1991-01-01

    The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent to the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.

  20. Valleys' Asymmetric Characteristics of the Loess Plateau in Northwestern Shanxi Based on DEM

    NASA Astrophysics Data System (ADS)

    Duan, J.

    2016-12-01

    The valleys of the Loess Plateau in northwestern Shanxi show great asymmetry. Using multi-scale DEMs, high-resolution satellite images and digital terrain analysis methods, this study puts forward a quantitative index to describe the asymmetric morphology. Several typical areas were selected to test and verify its spatial variability. Results show: (1) considering differences in spatial distribution, the Pianguanhe, Xianchuanhe and Yangjiachuan basins show the most significant asymmetric characteristics; (2) considering differences in scale, large-scale valleys exhibit randomness, equilibrium and relative symmetry, while small-scale valleys show directionality and asymmetry; (3) the asymmetric morphology is oriented, most obviously in east-west valleys. Combined with field survey, the formation mechanism can be interpreted as follows: (1) uneven distribution of loess in the valleys; (2) differences in the distribution of vegetation, water, heat conditions and other factors produce differences in water erosion capability, which lead to the asymmetric characteristics.

  1. Extreme Threshold Failures Within a Heterogeneous Elastic Thin Sheet and the Spatial-Temporal Development of Induced Seismicity Within the Groningen Gas Field

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.

    2017-12-01

    Measurements of the strains and earthquakes induced by fluid extraction from a subsurface reservoir reveal a transient, exponential-like increase in seismicity relative to the volume of fluids extracted. If the frictional strength of these reactivating faults is heterogeneously and randomly distributed, then progressive failures of the weakest fault patches account in a general manner for this initial exponential-like trend. Allowing for the observable elastic and geometric heterogeneity of the reservoir, the spatiotemporal evolution of induced seismicity over 5 years is predictable without significant bias using a statistical physics model of poroelastic reservoir deformations inducing extreme threshold frictional failures of previously inactive faults. This model is used to forecast the temporal and spatial probability density of earthquakes within the Groningen natural gas reservoir, conditional on future gas production plans. Probabilistic seismic hazard and risk assessments based on these forecasts inform the current gas production policy and building strengthening plans.
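
    The exponential-like onset from progressive failure of the weakest fault patches can be illustrated with a toy threshold model (a sketch under stated assumptions, not the authors' statistical-physics model; the strength distribution and load schedule are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Heterogeneous, randomly distributed patch strengths (arbitrary stress units).
strengths = rng.normal(10.0, 2.0, 100_000)

# Monotonically increasing load, standing in for cumulative extracted volume.
load = np.linspace(0.0, 6.0, 200)

# Cumulative count of patches whose threshold has been exceeded at each load:
# sampling the lower tail of the strength distribution gives the
# exponential-like rise in event counts before the bulk of patches fail.
failures = (strengths[None, :] <= load[:, None]).sum(axis=1)
```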

  2. Modeling of blob-hole correlations in GPI edge turbulence data

    NASA Astrophysics Data System (ADS)

    Myra, J. R.; Russell, D. A.; Zweben, S. J.

    2017-10-01

    Gas-puff imaging (GPI) observations made on NSTX have revealed two-point spatial correlation patterns in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this work, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in many respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored. The possibility of using the model to ascertain new information about edge turbulence is discussed. Work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG02-02ER54678.

  3. Characterization of spatial distribution of Tetranychus urticae in peppermint in California and implication for improving sampling plan.

    PubMed

    Rijal, Jhalendra P; Wilson, Rob; Godfrey, Larry D

    2016-02-01

    Twospotted spider mite, Tetranychus urticae Koch, is an important pest of peppermint in California, USA. Spider mite feeding on peppermint leaves causes physiological changes in the plant, which, coupled with favorable environmental conditions, can lead to increased mite infestations. Significant yield loss can occur in the absence of pest monitoring and timely management. Understanding the within-field spatial distribution of T. urticae is critical for the development of a reliable sampling plan. The study reported here aims to characterize the spatial distribution of mite infestation in four commercial peppermint fields in northern California using two spatial techniques: variograms and Spatial Analysis by Distance IndicEs (SADIE). Variogram analysis revealed strong evidence for a spatially dependent (aggregated) mite population on 13 of 17 sampling dates, with the physical distance of aggregation reaching a maximum of 7 m in peppermint fields. Using SADIE, 11 of 17 sampling dates showed an aggregated distribution pattern of mite infestation. Combining results from the variogram and SADIE analyses, spatial aggregation of T. urticae was evident in all four fields for all 17 sampling dates evaluated. Comparing spatial association using SADIE, ca. 62% of the total sampling pairs showed a positive association of mite spatial distribution patterns between two consecutive sampling dates, which indicates strong spatial and temporal stability of mite infestation in peppermint fields. These results are discussed in relation to the behavior of spider mite distribution within fields, and the implications for improving sampling guidelines that are essential for effective pest monitoring and management.
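An empirical semivariogram of the kind used to detect spatially dependent (aggregated) counts can be sketched in a few lines of Python (a simplified illustration, not the authors' analysis; the toy grid of mite counts clustered in one corner is invented). Semivariance that rises with lag distance indicates spatial aggregation.

```python
import math
from collections import defaultdict

def empirical_semivariogram(points, values, bin_width=1.0, max_lag=10.0):
    """Empirical semivariance gamma(h) = mean of 0.5*(z_i - z_j)^2 over
    point pairs whose separation distance falls in each lag bin."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            h = math.hypot(x1 - x2, y1 - y2)
            if h > max_lag:
                continue
            b = int(h // bin_width)
            sums[b] += 0.5 * (values[i] - values[j]) ** 2
            counts[b] += 1
    # Key each bin by its lag midpoint.
    return {(b + 0.5) * bin_width: sums[b] / counts[b]
            for b in sorted(counts)}

# Toy 8x8 grid: mite counts elevated in one 3x3 corner (aggregated).
pts = [(x, y) for x in range(8) for y in range(8)]
vals = [1.0 if (x < 3 and y < 3) else 0.0 for x, y in pts]
gamma = empirical_semivariogram(pts, vals, bin_width=1.0, max_lag=6.0)
# Aggregation shows up as semivariance increasing with lag distance.
```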

  4. Investigation of the Relationship between the Spatial Visualization Success and Visual/Spatial Intelligence Capabilities of Sixth Grade Students

    ERIC Educational Resources Information Center

    Yenilmez, Kursat; Kakmaci, Ozlem

    2015-01-01

    The main aim of this research was to examine the relationship between the spatial visualization success and visual/spatial intelligence capabilities of sixth grade students. The sample of the research consists of 1011 sixth grade students who were randomly selected from the primary schools in Eskisehir. In this correlational study, data were…

  5. Comparing spatial regression to random forests for large ...

    EPA Pesticide Factsheets

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and po
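A procedure for estimating a Box-Cox transformation that linearizes a covariate relationship can be sketched as follows (a hypothetical simplification of the paper's method: here the "optimal" lambda is simply the candidate whose transformed covariate correlates most linearly with the response, and the data are invented; the authors' routine also accounts for spatial autocorrelation).

```python
import math

def box_cox(x, lam):
    """Box-Cox transform for positive x; lam = 0 means log."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def best_lambda_for_linearity(x, y, lambdas):
    """Pick the lambda whose transform of x has the largest absolute
    Pearson correlation with y (i.e., the most linear relationship)."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        sa = math.sqrt(sum((u - ma) ** 2 for u in a))
        sb = math.sqrt(sum((v - mb) ** 2 for v in b))
        return cov / (sa * sb)
    return max(lambdas, key=lambda l: abs(corr(box_cox(x, l), y)))

# Toy data: y is linear in log(x), so lambda near 0 should be chosen.
x = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
y = [math.log(v) for v in x]
lam = best_lambda_for_linearity(x, y, [-1.0, -0.5, 0.0, 0.5, 1.0])
```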

  6. Computerized stratified random site-selection approaches for design of a ground-water-quality sampling network

    USGS Publications Warehouse

    Scott, J.C.

    1990-01-01

    Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
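The selection logic described above, choosing sites at random within each category of areal subsets, can be sketched as follows (a minimal illustration; the category names and site labels are hypothetical, and the original software used digital cartographic techniques within a proprietary GIS rather than Python).

```python
import random

def stratified_site_selection(sites, n_per_category, seed=42):
    """Randomly select sampling sites within each stratum (category).
    `sites` maps a category label (e.g. a land-use class) to its list of
    candidate sites; `n_per_category` maps category -> sample size."""
    rng = random.Random(seed)
    selection = {}
    for category, candidates in sites.items():
        k = min(n_per_category.get(category, 0), len(candidates))
        selection[category] = rng.sample(candidates, k)
    return selection

# Hypothetical candidate wells grouped by land-use category.
candidates = {
    "urban":    [f"U{i}" for i in range(40)],
    "cropland": [f"C{i}" for i in range(120)],
    "forest":   [f"F{i}" for i in range(25)],
}
picked = stratified_site_selection(
    candidates, {"urban": 5, "cropland": 10, "forest": 5})
```

Different sample sizes per category, as the report describes, fall out naturally from the `n_per_category` mapping.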

  7. Tree species exhibit complex patterns of distribution in bottomland hardwood forests

    Treesearch

    Luben D Dimov; Jim L Chambers; Brian R. Lockhart

    2013-01-01

    Context: Understanding tree interactions requires insight into their spatial distribution. Aims: We looked for the presence and extent of intraspecific tree spatial point patterns (random, aggregated, or overdispersed) and interspecific spatial point patterns (independent, aggregated, or segregated). Methods: We established twelve 0.64-ha plots in natural...

  8. Dynamic change of ERPs related to selective attention to signals from left and right visual field during head-down tilt

    NASA Astrophysics Data System (ADS)

    Wei, Jinhe; Zhao, Lun; Van, Gongdong; Chen, Wenjuan; Ren, Wei; Duan, Ran

    To further study the effect of head-down tilt (HDT) on the slow positive potential in event-related potentials (ERPs), the temporal and spatial features of visual ERP changes during 2-hour HDT (-10°) were compared with those during HUT (+20°) in 15 normal subjects. The stimuli consisted of two color LED flashes appearing randomly in the left or right visual field (LVF or RVF) with equal probability. The subjects were asked to make differential switch responses to target signals (T): switching to the left for a T in the LVF and to the right for a T in the RVF, while ignoring non-target signals (N). Five sets of tests were made during HUT and HDT. ERPs were obtained from 9 locations on the scalp. The mean value of the ERPs in the period 0.32-0.55 s was taken as the amplitude of the slow positive potential (P400). The main results were as follows. (1) The mean amplitude of P400 decreased during HDT, most significantly in the 2nd, 3rd and 5th sets of tests. (2) Spatially, the reduction of the mean P400 amplitude during HDT was more significant for signals from the RVF, and more significant at posterior and central brain regions than at frontal locations. As the slow positive potential probably reflects active inhibition in the brain during the attention process, these data provide further evidence that higher brain function is affected by simulated weightlessness and that this effect is not only transient but also has interesting spatial characteristics.

  9. Fractional Diffusion Analysis of the Electromagnetic Field In Fractured Media Part II: 2.5-D Approach

    NASA Astrophysics Data System (ADS)

    Ge, J.; Everett, M. E.; Weiss, C. J.

    2012-12-01

    A 2.5D finite difference (FD) frequency-domain modeling algorithm, based on the theory of fractional diffusion of electromagnetic (EM) fields generated by a loop source lying above a fractured geological medium, is addressed in this paper. The presence of fractures in the subsurface, usually containing highly conductive pore fluids, gives rise to spatially hierarchical flow paths of induced EM eddy currents. The diffusion of EM eddy currents in such formations is anomalous, generalizing the classical Gaussian process described by the conventional Maxwell equations. Based on continuous time random walk (CTRW) theory, the diffusion of EM eddy currents in a rough medium is governed by the fractional Maxwell equations. Here, we model the EM response of a 2D subsurface containing fractured zones to a 3D loop source, which results in the so-called 2.5D model geometry. The governing equation in the frequency domain is converted by Fourier transform into the k domain along the strike direction (along which the model conductivity does not vary). The resulting equation system is solved by the multifrontal massively parallel solver (MUMPS). The data obtained are then converted back to the spatial domain and the time domain. We find excellent agreement between the FD and analytic solutions for a rough halfspace model. FD solutions are then calculated for a 2D fault zone model with variable conductivity and roughness. We compare the results with responses from several classical models and explore the relationship between the roughness and the spatial density of the fracture distribution.

  10. Spatial relationships between above-ground biomass and bird species biodiversity in Palawan, Philippines

    PubMed Central

    Singh, Minerva; Friess, Daniel A.; Vilela, Bruno; Alban, Jose Don T. De; Monzon, Angelica Kristina V.; Veridiano, Rizza Karen A.; Tumaneng, Roven D.

    2017-01-01

    This study maps the distribution of, and spatial congruence between, Above-Ground Biomass (AGB) and the species richness of IUCN-listed conservation-dependent and endemic avian fauna in Palawan, Philippines. Grey Level Co-Occurrence Texture Matrices (GLCMs) extracted from Landsat and ALOS-PALSAR were used in conjunction with local field data to model and map local-scale field AGB using the Random Forest algorithm (r = 0.92 and RMSE = 31.33 Mg·ha-1). A support vector regression (SVR) model was used to identify the factors influencing variation in avian species richness at a 1-km scale. AGB is one of the most important determinants of avian species richness for the study area. Topographic factors and anthropogenic factors such as distance from roads were also found to strongly influence avian species richness. Hotspots of high AGB and high species richness were mapped using hotspot analysis, and the overlap between areas of high AGB and high avian species richness was calculated. Results show that the overlaps of high AGB with high IUCN red-listed avian species richness and with high endemic avian species richness were fairly limited, at 13% and 8% respectively at the 1-km scale. The overlaps between (1) low AGB and low IUCN richness and (2) low AGB and low endemic avian species richness were higher, at 36% and 12% respectively. The enhanced capacity to spatially map the correlation between AGB and avian species richness distribution will further assist the conservation and protection of forest areas and threatened avian species. PMID:29206228
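The overlap calculation between hotspot layers can be sketched simply (a hypothetical illustration, not the paper's hotspot analysis, which used formal spatial statistics: here a "hotspot" is just a grid cell above a chosen quantile, and the tiny AGB/richness grids are invented).

```python
def hotspot_overlap(a, b, q=0.75):
    """Percentage of grid cells that are 'high' (above the q-quantile)
    in both layers a and b, relative to all cells."""
    def threshold(vals):
        s = sorted(vals)
        return s[int(q * (len(s) - 1))]
    ta, tb = threshold(a), threshold(b)
    both = sum(1 for x, y in zip(a, b) if x > ta and y > tb)
    return 100.0 * both / len(a)

# Toy 1-km grid cells: AGB and richness only partially co-located.
agb      = [5, 9, 2, 8, 1, 7, 3, 9, 2, 6, 4, 8]
richness = [1, 8, 2, 9, 1, 2, 3, 7, 2, 1, 4, 9]
overlap = hotspot_overlap(agb, richness)
```

A layer always overlaps itself at least as much as it overlaps a different layer, which is a quick sanity check on the function.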

  11. A Review On Accuracy and Uncertainty of Spatial Data and Analyses with special reference to Urban and Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Devendran, A. A.; Lakshmanan, G.

    2014-11-01

    Data quality for GIS processing and analysis is becoming an increasing concern due to the accelerated application of GIS technology in problem-solving and decision-making roles. Uncertainty arises in the geographic representation of the real world because such representations are incomplete. Identifying the sources of these uncertainties and the ways in which they operate in GIS-based representations is crucial for any spatial data representation and for geospatial analysis in any field of application. This paper reviews the literature on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application: urban simulation and hydrological modelling. Urban growth is a complicated process involving spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one such simulation model; it randomly selects potential cells for urbanisation, and its transition rules evaluate the properties of each cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years, and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model for evaluating the impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties in SWAT are addressed primarily by the Generalized Likelihood Uncertainty Estimation (GLUE) method.
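A Monte Carlo sensitivity analysis of the kind used for CA models can be sketched as one-at-a-time parameter perturbation (an illustrative toy, not the CA, SWAT, or GLUE machinery itself; the `growth_potential` model and its parameter names are invented, and it is deliberately built to be more sensitive to slope).

```python
import random
import statistics

def monte_carlo_sensitivity(model, param_ranges, n_runs=2000, seed=0):
    """Crude one-at-a-time Monte Carlo sensitivity: for each parameter,
    measure the output spread when that parameter varies randomly over
    its range while all others are held at their range midpoints."""
    rng = random.Random(seed)
    mid = {p: (lo + hi) / 2 for p, (lo, hi) in param_ranges.items()}
    spread = {}
    for p, (lo, hi) in param_ranges.items():
        outputs = []
        for _ in range(n_runs):
            params = dict(mid)
            params[p] = rng.uniform(lo, hi)
            outputs.append(model(**params))
        spread[p] = statistics.pstdev(outputs)
    return spread

# Toy "urban growth potential" model (invented for illustration).
def growth_potential(slope, road_dist):
    return 10.0 * (1 - slope) + 1.0 * (1 - road_dist)

s = monte_carlo_sensitivity(
    growth_potential, {"slope": (0.0, 1.0), "road_dist": (0.0, 1.0)})
# The spread for "slope" dominates, flagging it as the sensitive input.
```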

  12. Factors controlling high-frequency radiation from extended ruptures

    NASA Astrophysics Data System (ADS)

    Beresnev, Igor A.

    2017-09-01

    Small-scale slip heterogeneity or variations in rupture velocity on the fault plane are often invoked to explain the high-frequency radiation from earthquakes. This view has no theoretical basis, which follows, for example, from the representation integral of elasticity, an exact solution for the radiated wave field. The Fourier transform, applied to the integral, shows that the seismic spectrum is fully controlled by that of the source time function, while the distribution of final slip and rupture acceleration/deceleration only contribute to directivity. This inference is corroborated by the precise numerical computation of the full radiated field from the representation integral. We compare calculated radiation from four finite-fault models: (1) uniform slip function with low slip velocity, (2) slip function spatially modulated by a sinusoidal function, (3) slip function spatially modulated by a sinusoidal function with random roughness added, and (4) uniform slip function with high slip velocity. The addition of "asperities," both regular and irregular, does not cause any systematic increase in the spectral level of high-frequency radiation, except for the creation of maxima due to constructive interference. On the other hand, an increase in the maximum rate of slip on the fault leads to highly amplified high frequencies, in accordance with the prediction on the basis of a simple point-source treatment of the fault. Hence, computations show that the temporal rate of slip, not the spatial heterogeneity on faults, is the predominant factor forming the high-frequency radiation and thus controlling the velocity and acceleration of the resulting ground motions.

  13. Optimising Habitat-Based Models for Wide-Ranging Marine Predators: Scale Matters

    NASA Astrophysics Data System (ADS)

    Scales, K. L.; Hazen, E. L.; Jacox, M.; Edwards, C. A.; Bograd, S. J.

    2016-12-01

    Predicting the responses of marine top predators to dynamic oceanographic conditions requires habitat-based models that sufficiently capture environmental preferences. Spatial resolution and temporal averaging of environmental data layers is a key aspect of model construction. The utility of surfaces contemporaneous to animal movement (e.g. daily, weekly), versus synoptic products (monthly, seasonal, climatological) is currently under debate, as is the optimal spatial resolution for predictive products. Using movement simulations with built-in environmental preferences (correlated random walks, multi-state hidden Markov-type models) together with modeled (Regional Oceanographic Modeling System, ROMS) and remotely-sensed (MODIS-Aqua) datasets, we explored the effects of degrading environmental surfaces (3km - 1 degree, daily - climatological) on model inference. We simulated the movements of a hypothetical wide-ranging marine predator through the California Current system over a three month period (May-June-July), based on metrics derived from previously published blue whale Balaenoptera musculus tracking studies. Results indicate that models using seasonal or climatological data fields can overfit true environmental preferences, in both presence-absence and behaviour-based model formulations. Moreover, the effects of a degradation in spatial resolution are more pronounced when using temporally averaged fields than when using daily, weekly or monthly datasets. In addition, we observed a notable divergence between the `best' models selected using common methods (e.g. AUC, AICc) and those that most accurately reproduced built-in environmental preferences. These findings have important implications for conservation and management of marine mammals, seabirds, sharks, sea turtles and large teleost fish, particularly in implementing dynamic ocean management initiatives and in forecasting responses to future climate-mediated ecosystem change.
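A correlated random walk of the kind used in these movement simulations can be sketched in pure Python (an illustrative sketch only: the linear mapping from a correlation parameter `kappa` to turning-angle spread is an assumption, and published movement studies typically draw turning angles from wrapped normal or von Mises distributions).

```python
import math
import random

def correlated_random_walk(n_steps=500, step_len=1.0, kappa=0.8, seed=7):
    """Correlated random walk: each heading equals the previous heading
    plus a random turning angle; higher kappa means smaller turns and a
    straighter, more directed track."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    track = [(x, y)]
    turn_sd = (1.0 - kappa) * math.pi  # crude mapping: kappa=1 -> no turns
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        track.append((x, y))
    return track

track = correlated_random_walk()
# Strongly correlated walks travel much farther from the origin than
# weakly correlated ones over the same number of steps.
```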

  15. Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory

    USGS Publications Warehouse

    Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo

    2013-01-01

    Recently airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.
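One way to choose calibration plots that cover both spatial and feature-space variability, as the results recommend, is a greedy maximin selection (a hypothetical sketch, not the paper's compared methods; the space/feature weighting and the toy plot grid are assumptions for illustration).

```python
import math

def select_calibration_plots(plots, k, w_space=0.5):
    """Greedy maximin selection: repeatedly add the candidate farthest
    (in a weighted space + feature metric) from the plots already
    chosen. Each plot is (x, y, feature), all axes pre-scaled to [0,1]."""
    def dist(a, b):
        ds = math.hypot(a[0] - b[0], a[1] - b[1])  # spatial distance
        df = abs(a[2] - b[2])                      # feature distance
        return w_space * ds + (1 - w_space) * df
    chosen = [plots[0]]
    remaining = list(plots[1:])
    while len(chosen) < k and remaining:
        best = max(remaining, key=lambda p: min(dist(p, c) for c in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy 5x5 grid of plots whose biomass feature increases along x.
plots = [(x / 4, y / 4, x / 4) for x in range(5) for y in range(5)]
subset = select_calibration_plots(plots, k=5)
```

Weighting the two distance components also gives a knob for trading spatial representativity against feature-space coverage.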

  16. From fields to objects: A review of geographic boundary analysis

    NASA Astrophysics Data System (ADS)

    Jacquez, G. M.; Maruca, S.; Fortin, M.-J.

    Geographic boundary analysis is a relatively new approach unfamiliar to many spatial analysts. It is best viewed as a technique for defining objects - geographic boundaries - on spatial fields, and for evaluating the statistical significance of characteristics of those boundary objects. This is accomplished using null spatial models representative of the spatial processes expected in the absence of boundary-generating phenomena. Close ties to the object-field dialectic eminently suit boundary analysis to GIS data. The majority of existing spatial methods are field-based in that they describe, estimate, or predict how attributes (variables defining the field) vary through geographic space. Such methods are appropriate for field representations but not object representations. As the object-field paradigm gains currency in geographic information science, appropriate techniques for the statistical analysis of objects are required. The methods reviewed in this paper are a promising foundation. Geographic boundary analysis is clearly a valuable addition to the spatial statistical toolbox. This paper presents the philosophy of, and motivations for geographic boundary analysis. It defines commonly used statistics for quantifying boundaries and their characteristics, as well as simulation procedures for evaluating their significance. We review applications of these techniques, with the objective of making this promising approach accessible to the GIS-spatial analysis community. We also describe the implementation of these methods within geographic boundary analysis software: GEM.

  17. Spatial Irrigation Management Using Remote Sensing Water Balance Modeling and Soil Water Content Monitoring

    NASA Astrophysics Data System (ADS)

    Barker, J. Burdette

    Spatially informed irrigation management may improve the optimal use of water resources. Sub-field scale water balance modeling and measurement were studied in the context of irrigation management. A spatial remote-sensing-based evapotranspiration and soil water balance model was modified and validated for use in real-time irrigation management. The modeled ET compared well with eddy covariance data from eastern Nebraska. Placement and quantity of sub-field scale soil water content measurement locations was also studied. Variance reduction factor and temporal stability were used to analyze soil water content data from an eastern Nebraska field. No consistent predictor of soil water temporal stability patterns was identified. At least three monitoring locations were needed per irrigation management zone to adequately quantify the mean soil water content. The remote-sensing-based water balance model was used to manage irrigation in a field experiment. The research included an eastern Nebraska field in 2015 and 2016 and a western Nebraska field in 2016 for a total of 210 plot-years. The response of maize and soybean to irrigation using variations of the model were compared with responses from treatments using soil water content measurement and a rainfed treatment. The remote-sensing-based treatment prescribed more irrigation than the other treatments in all cases. Excessive modeled soil evaporation and insufficient drainage times were suspected causes of the model drift. Modifying evaporation and drainage reduced modeled soil water depletion error. None of the included response variables were significantly different between treatments in western Nebraska. In eastern Nebraska, treatment differences for maize and soybean included evapotranspiration and a combined variable including evapotranspiration and deep percolation. Both variables were greatest for the remote-sensing model when differences were found to be statistically significant. 
Differences in maize yield in 2015 were attributed to random error. Soybean yield was lowest for the remote-sensing-based treatment and greatest for the rainfed treatment, possibly because of overwatering and lodging. The model performed well considering that it did not incorporate soil water content measurements during the season. Future work should improve the soil evaporation and drainage formulations, particularly for periods of excessive precipitation, and should include aerial remote sensing imagery and soil water content measurements as model inputs.
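The soil water balance bookkeeping underlying such a model can be sketched as a simple daily bucket update (a schematic only, not the modified remote-sensing model itself; real implementations partition evaporation and transpiration separately and drain water above field capacity over several days rather than instantaneously, which is exactly the drainage-time issue the abstract raises).

```python
def update_soil_water(sw, precip, irrigation, et, field_capacity):
    """One daily step of a bucket soil water balance (all values in mm):
    add precipitation and irrigation, subtract evapotranspiration, and
    send any water above field capacity to deep percolation."""
    sw = sw + precip + irrigation - et
    drainage = max(0.0, sw - field_capacity)
    sw = min(sw, field_capacity)
    return max(sw, 0.0), drainage

# A wet day: 30 mm of rain pushes the profile past field capacity.
sw, deep_perc = update_soil_water(
    sw=250.0, precip=30.0, irrigation=0.0, et=6.0, field_capacity=260.0)
```

Irrigation decisions then compare the running `sw` against a management-allowed depletion threshold.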

  18. Spatial and temporal analysis of fatal off-piste and backcountry avalanche accidents in Austria with a comparison of results in Switzerland, France, Italy and the US

    NASA Astrophysics Data System (ADS)

    Pfeifer, Christian; Höller, Peter; Zeileis, Achim

    2018-02-01

    In this article we analyzed spatial and temporal patterns of fatal Austrian avalanche accidents caused by backcountry and off-piste skiers and snowboarders within the winter periods 1967/1968-2015/2016. The data were based on reports of the Austrian Board for Alpine Safety and reports of the information services of the federal states. Using the date and the location of the recorded avalanche accidents, we were able to carry out spatial and temporal analyses applying generalized additive models and Markov random-field models. As a result of the trend analysis we noticed an increasing trend of backcountry and off-piste avalanche fatalities within the winter periods 1967/1968-2015/2016 (although slightly decreasing in recent years), which is in contradiction to the widespread opinion in Austria that the number of fatalities is constant over time. Additionally, we compared Austrian results with results of Switzerland, France, Italy and the US based on data from the International Commission of Alpine Rescue (ICAR). As a result of the spatial analysis, we noticed two hot spots of avalanche fatalities (Arlberg-Silvretta and Sölden). Because of the increasing trend and the rather narrow regional distribution of the fatalities, initiatives aimed at preventing avalanche accidents were highly recommended.

  19. Scales of snow depth variability in high elevation rangeland sagebrush

    NASA Astrophysics Data System (ADS)

    Tedesche, Molly E.; Fassnacht, Steven R.; Meiman, Paul J.

    2017-09-01

    In high elevation semi-arid rangelands, sagebrush and other shrubs can affect transport and deposition of wind-blown snow, enabling the formation of snowdrifts. Datasets from three field experiments were used to investigate the scales of spatial variability of snow depth around big mountain sagebrush (Artemisia tridentata Nutt.) at a high elevation plateau rangeland in North Park, Colorado, during the winters of 2002, 2003, and 2008. Data were collected at multiple resolutions (0.05 to 25 m) and extents (2 to 1000 m). Finer scale data were collected specifically for this study to examine the correlation between snow depth, sagebrush microtopography, the ground surface, and the snow surface, as well as the temporal consistency of snow depth patterns. Variograms were used to identify the spatial structure and the Moran's I statistic was used to determine the spatial correlation. Results show some temporal consistency in snow depth at several scales. Plot scale snow depth variability is partly a function of the nature of individual shrubs, as there is some correlation between the spatial structure of snow depth and sagebrush, as well as between the ground and snow depth. The optimal sampling resolution appears to be 25-cm, but over a large area, this would require a multitude of samples, and thus a random stratified approach is recommended with a fine measurement resolution of 5-cm.
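Moran's I, used here to determine spatial correlation, can be computed directly from a spatial weights matrix (a standard textbook formulation sketched in plain Python, not the authors' code; the 4x4 grid of snow depths and the rook-contiguity weights are an invented example).

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation. `weights[i][j]` is
    the spatial weight between observations i and j (0 on the diagonal);
    values near +1 indicate clustering, near -1 a dispersed pattern."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# 4x4 grid, rook-contiguity weights; depths clustered on one side.
side = 4
depths = [1.0 if (i % side) < 2 else 0.0 for i in range(side * side)]
W = [[0.0] * (side * side) for _ in range(side * side)]
for i in range(side * side):
    r, c = divmod(i, side)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < side and 0 <= cc < side:
            W[i][side * rr + cc] = 1.0
I = morans_i(depths, W)
# Clustered snow depths give a strongly positive I (2/3 for this grid).
```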

  20. Correlations and analytical approaches to co-evolving voter models

    NASA Astrophysics Data System (ADS)

    Ji, M.; Xu, C.; Choi, C. W.; Hui, P. M.

    2013-11-01

    The difficulty of formulating analytical treatments of co-evolving networks is studied in light of the Vazquez-Eguíluz-San Miguel voter model (VM) and a modified VM (MVM) that introduces random mutation of opinions as a noise in the VM. The density of active links, which are links that connect nodes of opposite opinions, is shown to be highly sensitive to both the degree k of a node and the number of active links n among the neighbors of a node. We test the validity of the formalism of analytical approaches and show explicitly that the assumptions behind the commonly used homogeneous pair approximation scheme in formulating a mean-field theory are the source of the theory's failure, due to the strong correlations between k, n and n². An improved approach that incorporates spatial correlation to the nearest neighbors explicitly, and a random approximation for the next-nearest neighbors, is formulated for the VM and the MVM, and it gives better agreement with the simulation results. We introduce an empirical approach that quantifies the correlations more accurately and gives results in good agreement with the simulation results. The work clarifies why simple mean-field theory fails and sheds light on how to analyze the correlations in the dynamic equations that are often generated in co-evolving processes.
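The density of active links that is central to this analysis can be measured in a direct simulation of the VM (a minimal sketch, not the Vazquez-Eguíluz-San Miguel co-evolving model itself: this version uses a fixed random graph with no link rewiring or mutation, and simple node-update dynamics, all of which are simplifying assumptions).

```python
import random

def voter_model_active_links(n=200, k=4, p_up=0.5, steps=5000, seed=3):
    """Voter model on a random graph with ~n*k/2 edges: at each step a
    random node copies the opinion of a random neighbor. Returns the
    density of active links (links joining opposite opinions) over time."""
    rng = random.Random(seed)
    # Random graph: keep adding random distinct edges (not k-regular).
    edges = set()
    while len(edges) < n * k // 2:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            edges.add((min(a, b), max(a, b)))
    neighbors = [[] for _ in range(n)]
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    opinion = [1 if rng.random() < p_up else 0 for _ in range(n)]
    def active_density():
        return sum(opinion[a] != opinion[b] for a, b in edges) / len(edges)
    densities = [active_density()]
    for _ in range(steps):
        i = rng.randrange(n)
        if neighbors[i]:
            opinion[i] = opinion[rng.choice(neighbors[i])]
        densities.append(active_density())
    return densities

rho = voter_model_active_links()
# rho starts near 0.5 for random opinions and typically decays as the
# finite system drifts toward consensus.
```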
