Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field-based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs, and the choice of three binary models, on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique among those to date in that we virtually sample a true, known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9% prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to unsampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo convergence when modeling a sparsely distributed species across a large area. In general, there was little difference among the neighborhood choices, although the adaptive king neighborhood was preferred when transects were randomly placed throughout the spatial domain.
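The intrinsic Gaussian CAR prior mentioned above is specified through a neighborhood adjacency structure. A minimal sketch (assuming rook adjacency on a small lattice; not the authors' code) of building the ICAR precision matrix Q = D − W, where W is the adjacency matrix and D the diagonal matrix of neighbor counts:

```python
import numpy as np

def icar_precision(nrow, ncol):
    """Precision matrix Q = D - W of an intrinsic CAR (ICAR) prior on
    an nrow x ncol lattice with rook (4-neighbor) adjacency W and
    diagonal neighbor counts D."""
    n = nrow * ncol
    W = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol:
                    W[i, rr * ncol + cc] = 1.0
    return np.diag(W.sum(axis=1)) - W

Q = icar_precision(3, 3)
# Rows of Q sum to zero, so Q is singular: the ICAR prior is improper
# and constrains only differences between neighboring effects.
```

Because Q is rank-deficient, implementations typically add a sum-to-zero constraint on the spatial effects to identify the model.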
Adaptive Sampling Proxy Application
Energy Science and Technology Software Center (ESTSC)
2012-10-22
ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.
ERIC Educational Resources Information Center
Flournoy, Nancy
Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
NASA Astrophysics Data System (ADS)
Leethochawalit, Nicha; Jones, Tucker A.; Ellis, Richard S.; Stark, Daniel P.; Richard, Johan; Zitrin, Adi; Auger, Matthew
2016-04-01
We discuss spatially resolved emission line spectroscopy secured for a total sample of 15 gravitationally lensed star-forming galaxies at a mean redshift of z ≃ 2 based on Keck laser-assisted adaptive optics observations undertaken with the recently improved OSIRIS integral field unit (IFU) spectrograph. By exploiting gravitationally lensed sources drawn primarily from the CASSOWARY survey, we sample these sub-L* galaxies with source-plane resolutions of a few hundred parsecs ensuring well-sampled 2D velocity data and resolved variations in the gas-phase metallicity. Such high spatial resolution data offer a critical check on the structural properties of larger samples derived with coarser sampling using multiple-IFU instruments. We demonstrate how kinematic complexities essential to understanding the maturity of an early star-forming galaxy can often only be revealed with better sampled data. Although we include four sources from our earlier work, the present study provides a more representative sample unbiased with respect to emission line strength. Contrary to earlier suggestions, our data indicate a more diverse range of kinematic and metal gradient behavior inconsistent with a simple picture of well-ordered rotation developing concurrently with established steep metal gradients in all but merging systems. Comparing our observations with the predictions of hydrodynamical simulations suggests that gas and metals have been mixed by outflows or other strong feedback processes, flattening the metal gradients in early star-forming galaxies.
Adaptation Driven by Spatial Heterogeneities
NASA Astrophysics Data System (ADS)
Hermsen, Rutger
2011-03-01
Biological evolution and ecology are intimately linked, because the reproductive success or ``fitness'' of an organism depends crucially on its ecosystem. Yet, most models of evolution (or population genetics) consider homogeneous, fixed-size populations subjected to a constant selection pressure. To move one step beyond such ``mean field'' descriptions, we discuss stochastic models of evolution driven by spatial heterogeneity. We imagine a population whose range is limited by a spatially varying environmental parameter, such as a temperature or the concentration of an antibiotic drug. Individuals in the population replicate, die and migrate stochastically. Also, by mutation, they can adapt to the environmental stress and expand their range. This way, adaptation and niche expansion go hand in hand. This mode of evolution is qualitatively different from the usual notion of a population climbing a fitness gradient. We analytically calculate the rate of adaptation by solving a first passage time problem. Interestingly, the joint effects of reproduction, death, mutation and migration result in two distinct parameter regimes depending on the relative time scales of mutation and migration. We argue that the proposed scenario may be relevant for the rapid evolution of antibiotic resistance. This work was supported by the Center for Theoretical Biological Physics sponsored by the National Science Foundation (NSF) (Grant PHY-0822283).
Adaptive sampling for noisy problems
Cantu-Paz, E
2004-03-26
The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
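The decision rule studied in the paper can be sketched as follows (an illustrative confidence-interval variant with hypothetical objective functions; not Cantú-Paz's exact procedure): keep drawing paired samples of two candidates' noisy objective values until the mean difference is statistically distinguishable from zero, or a budget is exhausted.

```python
import random
import statistics

def adaptive_compare(f_a, f_b, z=1.96, min_n=5, max_n=200):
    """Draw paired samples of two noisy objectives until the normal
    confidence interval of the mean difference excludes zero, or the
    sampling budget runs out. Returns the winner and samples used."""
    diffs = []
    for n in range(1, max_n + 1):
        diffs.append(f_a() - f_b())
        if n >= min_n:
            mean = statistics.fmean(diffs)
            se = statistics.stdev(diffs) / n ** 0.5
            if abs(mean) > z * se:  # confident enough to decide
                return ('a' if mean > 0 else 'b'), n
    return 'tie', max_n

random.seed(1)
# Hypothetical candidates: 'a' is truly better (mean 1.0 vs 0.0, sd 1.0).
winner, n_used = adaptive_compare(lambda: random.gauss(1.0, 1.0),
                                  lambda: random.gauss(0.0, 1.0))
```

Clearly separated candidates are resolved with few samples, while close or very noisy pairs consume more, which is the source of the high computational cost the paper reports in some cases.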
Özer, Serap
2016-04-01
Behavioral regulation has recently become an important variable in research looking at kindergarten and first-grade achievement of children in private and public schools. The purpose of this study was to examine a measure of behavioral regulation, the Head Toes Knees Shoulders Task, and to evaluate its relationship with visual spatial maturity at the end of kindergarten. Later, in first grade, teachers were asked to rate the children (N = 82) in terms of academic and behavioral adaptation. Behavioral regulation and visual spatial maturity were significantly different between the two school types, but ratings by the teachers in the first grade were affected by children's visual spatial maturity rather than by behavioral regulation. Socioeducational opportunities provided by the two types of schools may be more important to school adaptation than behavioral regulation. PMID:27154368
Adaptive Sampling in Hierarchical Simulation
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.
Spatial adaptation on video display terminals
Greenhouse, D.S.; Bailey, I.L.; Howarth, P.A.; Berman, S.M.
1989-01-01
Spatial adaptation, in the form of a frequency-specific reduction in contrast sensitivity, can occur when the visual system is exposed to certain stimuli. We employed vertical sinusoidal test gratings to investigate adaptation to the horizontal structure of text presented on a standard video display terminal. The parameters of the contrast sensitivity test were selected on the basis of waveform analysis of spatial luminance scans of the text stimulus. We found that subjects exhibited a small, but significant, frequency-specific adaptation consistent with the spatial frequency spectrum of the stimulus. The theoretical and practical significance of this finding is discussed. 6 refs., 4 figs.
Adaptive Assessment of Spatial Abilities. Final Report.
ERIC Educational Resources Information Center
Bejar, Isaac I.
This report summarizes the results of research designed to study the psychometric and technological feasibility of adaptive testing to assess spatial ability. Data was collected from high school students on two types of spatial items: three-dimensional cubes and hidden figure items. The analysis of the three-dimensional cubes focused on the fit of…
Compartmental Neural Simulations with Spatial Adaptivity
Rempe, Michael J.; Spruston, Nelson; Kath, William L.; Chopp, David L.
2009-01-01
Since their inception, computational models have become increasingly complex and useful counterparts to laboratory experiments within the field of neuroscience. Today several software programs exist to solve the underlying mathematical system of equations, but such programs typically solve these equations in all parts of a cell (or network of cells) simultaneously, regardless of whether or not all of the cell is active. This approach can be inefficient if only part of the cell is active and many simulations must be performed. We have previously developed a numerical method that provides a framework for spatial adaptivity by making the computations local to individual branches rather than entire cells (Rempe and Chopp, 2006). Once the computation is reduced to the level of branches instead of cells, spatial adaptivity is straightforward: the active regions of the cell are detected and computational effort is focused there, while saving computations in other regions of the cell that are at or near rest. Here we apply the adaptive method to four realistic neuronal simulation scenarios and demonstrate its improved efficiency over non-adaptive methods. We find that the computational cost of the method scales with the amount of activity present in the simulation, rather than the physical size of the system being simulated. For certain problems spatial adaptivity reduces the computation time by up to 80%. PMID:18459041
The spatial scale of local adaptation in a stochastic environment.
Hadfield, Jarrod D
2016-07-01
The distribution of phenotypes in space will be a compromise between adaptive plasticity and local adaptation increasing the fit of phenotypes to local conditions and gene flow reducing that fit. Theoretical models on the evolution of quantitative characters on spatially explicit landscapes have only considered scenarios where optimum trait values change as deterministic functions of space. Here, these models are extended to include stochastic spatially autocorrelated aspects to the environment, and consequently the optimal phenotype. Under these conditions, the regression of phenotype on the environmental variable becomes steeper as the spatial scale on which populations are sampled becomes larger. Under certain deterministic models - such as linear clines - the regression is constant. The way in which the regression changes with spatial scale is informative about the degree of phenotypic plasticity, the relative scale of effective gene flow and the environmental dependency of selection. Connections to temporal models are discussed. PMID:27188689
SPATIALLY-BALANCED SAMPLING OF NATURAL RESOURCES
The spatial distribution of a natural resource is an important consideration in designing an efficient survey or monitoring program for the resource. Generally, sample sites that are spatially-balanced, that is, more or less evenly dispersed over the extent of the resource, will ...
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
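The key property above, that a design can be scored by its kriging standard errors before any field data are collected, can be sketched as follows (a simplified ordinary-kriging variant with an assumed exponential variogram and made-up candidate designs; the paper uses universal kriging):

```python
import numpy as np

def variogram(h, sill=1.0, rng=0.3):
    """Exponential variogram model (an assumption for illustration)."""
    return sill * (1.0 - np.exp(-h / rng))

def ok_variance(samples, x0):
    """Ordinary-kriging estimation variance at x0: solve the kriging
    system [Gamma 1; 1' 0][lam; mu] = [gamma0; 1], then return
    sigma^2 = lam . gamma0 + mu. Only sample LOCATIONS are needed."""
    n = len(samples)
    d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(samples - x0, axis=1))
    return float(np.linalg.solve(A, b) @ b)

# Score two candidate 4-point designs on the unit square by their
# maximum kriging standard error over a prediction grid.
grid = np.array([[x, y] for x in np.linspace(0, 1, 11)
                 for y in np.linspace(0, 1, 11)])
clustered = np.array([[0.1, 0.1], [0.15, 0.1], [0.1, 0.15], [0.15, 0.15]])
spread = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
worst_clustered = max(np.sqrt(max(ok_variance(clustered, p), 0.0)) for p in grid)
worst_spread = max(np.sqrt(max(ok_variance(spread, p), 0.0)) for p in grid)
```

The well-dispersed design yields a smaller worst-case standard error than the clustered one, which is exactly the index the optimization in the paper drives down.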
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
Adaptive video compressed sampling in the wavelet domain
NASA Astrophysics Data System (ADS)
Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi
2016-07-01
In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.
Feature Adaptive Sampling for Scanning Electron Microscopy.
Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp
2016-01-01
A new method for image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131
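The two-pass scheme can be sketched on simulated data (the noise model, gradient-based detail measure, and all parameter values are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "specimen": flat background with one bright square,
# so detail (strong gradients) is concentrated at the square's edges.
truth = np.zeros((64, 64))
truth[20:40, 20:40] = 1.0

def scan(dwell):
    """Simulated scan: longer pixel-dwell time gives lower noise
    (shot-noise-like standard deviation ~ 1/sqrt(dwell))."""
    return truth + rng.normal(0.0, 1.0 / np.sqrt(dwell), truth.shape)

# Pass 1: quick low-dose scan of the whole field.
quick = scan(dwell=16.0)

# Flag high-detail pixels via local gradient magnitude (a stand-in
# for the detail measure; the threshold is an assumption).
gy, gx = np.gradient(quick)
detail = np.hypot(gx, gy) > 0.5

# Pass 2: a real instrument would rescan only the flagged pixels at a
# much higher dose; here we simulate a high-dose image and keep only
# the flagged pixels, then combine the two datasets.
slow = scan(dwell=100.0)
combined = np.where(detail, slow, quick)

rmse_quick = float(np.sqrt(np.mean((quick - truth) ** 2)))
rmse_combined = float(np.sqrt(np.mean((combined - truth) ** 2)))
```

Only a small fraction of pixels receives the high dose, yet the combined image has a lower overall error than the uniform quick scan, mirroring the dose savings reported in the abstract.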
Spatial vision of the achromat: spatial frequency and orientation-specific adaptation.
Greenlee, M W; Magnussen, S; Nordby, K
1988-01-01
1. The psychophysical technique of selective adaptation to stationary sine-wave gratings of varying spatial frequency and orientation was used to investigate the central processing of spatial information in the visual system of the complete achromat. 2. For adapting spatial frequencies of 1 and 2 cycles/deg, the spatial frequency and orientation selectivity of contrast threshold elevation is similar for achromatic and trichromatic vision. 3. For adapting frequencies below 1 cycle/deg, the achromat shows threshold elevations of normal magnitude with symmetrical spatial frequency and orientation tuning for adapting frequencies as low as 0.09 cycles/deg with 'bandwidth' estimates similar to those found at high frequencies in the trichromat. Below 0.66 cycles/deg no after-effect could be obtained in the trichromat, and the frequency tuning at 0.66 cycles/deg was skewed towards higher frequencies. 4. The interocular transfer of low-frequency adaptation in the achromat was 50%, which is the same value obtained at higher frequencies. 5. The time course of the decay of low spatial frequency adaptation in the achromat was similar to that found at higher frequencies. 6. Control experiments show no low-frequency adaptation in peripheral vision or in central vision in the dark-adapted trichromat indicating that low spatial frequency adaptation cannot be elicited through the rod system of the trichromat. 7. It is proposed that the observed range shift of adaptable spatial frequency mechanisms in the achromat's visual cortex is the result of an arrest at an early stage of sensory development. The visual cortex of the achromat is comparable, with respect to spatial processing, to that of the young, visually normal human infant. PMID:3261791
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% Cl 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% Cl 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and
Interactive spatial tools for the design of regional adaptation strategies.
Eikelboom, T; Janssen, R
2013-09-01
Regional adaptation strategies are plans that consist of feasible measures to shift a region towards a system that is flexible and robust for future climate changes. They apply to regional impacts of climate change and are imbedded in broader planning. Multiple adaptation frameworks and guidelines exist that describe the development stages of regional adaptation strategies. Spatial information plays a key role in the design of adaptation measures as both the effects of climate change as well as many adaptation measures have spatial impacts. Interactive spatial support tools such as drawing, simulation and evaluation tools can assist the development process. This paper presents how to connect tasks derived from the actual development stages to spatial support tools in an interactive multi-stakeholder context. This link helps to decide what spatial tools are suited to support which stages in the development process of regional adaptation strategies. The practical implication of the link is illustrated for three case study workshops in the Netherlands. The regional planning workshops combine expertise from both scientists and stakeholders with an interactive mapping device. This approach triggered participants to share their expertise and stimulated integration of knowledge. PMID:23137917
Determination and optimization of spatial samples for distributed measurements.
Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong
2010-10-01
There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.
Lidar imaging with on-the-fly adaptable spatial resolution
NASA Astrophysics Data System (ADS)
Riu, J.; Royo, S.
2013-10-01
We present our work on the design and construction of a novel lidar device capable of measuring 3D range images with a spatial resolution that can be reconfigured on the fly, adjustable by software and over the image area, and which can reach 2 Mpixel. A double-patented novel scanning concept enables the image resolution to be changed dynamically depending on external information provided by the image captured in a previous cycle or by other sensors, such as greyscale or hyperspectral 2D imagers. A prototype imaging lidar system that can modify its spatial resolution on demand from one image to the next, according to the nature and state of the target, has been developed, and indoor and outdoor sample images showing its performance are presented. Applications in object detection, tracking and identification through a real-time scanning system adaptable to each situation and target behaviour are currently being pursued in different areas.
Application of adaptive cluster sampling to low-density populations of freshwater mussels
Smith, D.R.; Villella, R.F.; Lemarie, D.P.
2003-01-01
Freshwater mussels appear to be promising candidates for adaptive cluster sampling because they are benthic macroinvertebrates that cluster spatially and are frequently found at low densities. We applied adaptive cluster sampling to estimate density of freshwater mussels at 24 sites along the Cacapon River, WV, where a preliminary timed search indicated that mussels were present at low density. Adaptive cluster sampling increased yield of individual mussels and detection of uncommon species; however, it did not improve precision of density estimates. Because finding uncommon species, collecting individuals of those species, and estimating their densities are important conservation activities, additional research is warranted on application of adaptive cluster sampling to freshwater mussels. However, at this time we do not recommend routine application of adaptive cluster sampling to freshwater mussel populations. The ultimate, and currently unanswered, question is how to tell when adaptive cluster sampling should be used, i.e., when is a population sufficiently rare and clustered for adaptive cluster sampling to be efficient and practical? A cost-effective procedure needs to be developed to identify biological populations for which adaptive cluster sampling is appropriate.
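The basic adaptive cluster sampling mechanics can be sketched on a hypothetical quadrat grid (illustrative counts, grid size, and parameters; not the authors' survey design): an initial random sample of quadrats is drawn, and whenever a sampled quadrat satisfies the condition (here, at least one mussel), its neighbors are added to the sample as well.

```python
import random

random.seed(42)

# Hypothetical mussel counts on a 10 x 10 grid of quadrats: mostly
# empty, with one small dense patch (illustrative data only).
GRID = 10
counts = {(r, c): 0 for r in range(GRID) for c in range(GRID)}
for cell in [(4, 4), (4, 5), (5, 4), (5, 5), (5, 6)]:
    counts[cell] = random.randint(3, 8)

def adaptive_cluster_sample(counts, n_initial=15, threshold=1):
    """Draw an initial simple random sample of quadrats; whenever a
    sampled quadrat meets the threshold, add its four rook neighbors
    to the sample, repeating until no new quadrats qualify."""
    frontier = random.sample(sorted(counts), n_initial)
    sampled = set()
    while frontier:
        cell = frontier.pop()
        if cell in sampled:
            continue
        sampled.add(cell)
        if counts[cell] >= threshold:  # condition met: adapt
            r, c = cell
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in counts:
                    frontier.append(nb)
    return sampled

units = adaptive_cluster_sample(counts)
```

If the initial sample intersects the patch, the whole cluster is swept in, which is why the design raises yield and species detection for patchy populations while the extra, correlated units do little for the precision of density estimates.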
Detecting spatial genetic signatures of local adaptation in heterogeneous landscapes.
Forester, Brenna R; Jones, Matthew R; Joost, Stéphane; Landguth, Erin L; Lasky, Jesse R
2016-01-01
The spatial structure of the environment (e.g. the configuration of habitat patches) may play an important role in determining the strength of local adaptation. However, previous studies of habitat heterogeneity and local adaptation have largely been limited to simple landscapes, which poorly represent the multiscale habitat structure common in nature. Here, we use simulations to pursue two goals: (i) we explore how landscape heterogeneity, dispersal ability and selection affect the strength of local adaptation, and (ii) we evaluate the performance of several genotype-environment association (GEA) methods for detecting loci involved in local adaptation. We found that the strength of local adaptation increased in spatially aggregated selection regimes, but remained strong in patchy landscapes when selection was moderate to strong. Weak selection resulted in weak local adaptation that was relatively unaffected by landscape heterogeneity. In general, the power of detection methods closely reflected levels of local adaptation. False-positive rates (FPRs), however, showed distinct differences across GEA methods based on levels of population structure. The univariate GEA approach had high FPRs (up to 55%) under limited dispersal scenarios, due to strong isolation by distance. By contrast, multivariate, ordination-based methods had uniformly low FPRs (0-2%), suggesting these approaches can effectively control for population structure. Specifically, constrained ordinations had the best balance of high detection and low FPRs and will be a useful addition to the GEA toolkit. Our results provide both theoretical and practical insights into the conditions that shape local adaptation and how these conditions impact our ability to detect selection. PMID:26576498
Prism adaptation for spatial neglect after stroke: translational practice gaps
Barrett, A. M.; Goedert, Kelly M.; Basso, Julia C.
2012-01-01
Spatial neglect increases hospital morbidity and costs in around 50% of the 795,000 people per year in the USA who survive stroke, and an urgent need exists to reduce the care burden of this condition. However, effective acute treatment for neglect has been elusive. In this article, we review 48 studies of a treatment of intense neuroscience interest: prism adaptation training. Due to its effects on spatial motor ‘aiming’, prism adaptation training may act to reduce neglect-related disability. However, research failed, first, to suggest methods to identify the 50–75% of patients who respond to treatment; second, to measure short-term and long-term outcomes in both mechanism-specific and functionally valid ways; third, to confirm treatment utility during the critical first 8 weeks poststroke; and last, to base treatment protocols on systematic dose–response data. Thus, considerable investment in prism adaptation research has not yet touched the fundamentals needed for clinical implementation. We suggest improved standards and better spatial motor models for further research, so as to clarify when, how and for whom prism adaptation should be applied. PMID:22926312
Radiotherapy Adapted to Spatial and Temporal Variability in Tumor Hypoxia
Sovik, Aste; Malinen, Eirik . E-mail: emalinen@fys.uio.no; Skogmo, Hege K.; Bentzen, Soren M.; Bruland, Oyvind S.; Olsen, Dag Rune
2007-08-01
Purpose: To explore the feasibility and clinical potential of adapting radiotherapy to temporal and spatial variations in tumor oxygenation. Methods and Materials: Repeated dynamic contrast enhanced magnetic resonance (DCEMR) images were taken of a canine sarcoma during the course of fractionated radiation therapy. The tumor contrast enhancement was assumed to represent the oxygen distribution. The IMRT plans were retrospectively adapted to the DCEMR images by employing tumor dose redistribution. Optimized nonuniform tumor dose distributions were calculated and compared with a uniform dose distribution delivering the same integral dose to the tumor. Clinical outcome was estimated from tumor control probability (TCP) and normal tissue complication probability (NTCP) modeling. Results: The biologically adapted treatment was found to give a substantial increase in TCP compared with conventional radiotherapy, even when only pretreatment images were used as basis for the treatment planning. The TCP was further increased by repeated replanning during the course of treatment, and replanning twice a week was found to give near optimal TCP. Random errors in patient positioning were found to give a small decrease in TCP, whereas systematic errors were found to reduce TCP substantially. NTCP for the adapted treatment was similar to or lower than for the conventional treatment, both for parallel and serial normal tissue structures. Conclusion: Biologically adapted radiotherapy is estimated to improve treatment outcome of tumors having spatial and temporal variations in radiosensitivity.
Photonic lantern adaptive spatial mode control in LMA fiber amplifiers.
Montoya, Juan; Aleshire, Chris; Hwang, Christopher; Fontaine, Nicolas K; Velázquez-Benítez, Amado; Martz, Dale H; Fan, T Y; Ripin, Dan
2016-02-22
We demonstrate adaptive-spatial mode control (ASMC) in few-moded double-clad large mode area (LMA) fiber amplifiers by using an all-fiber-based photonic lantern. Three single-mode fiber inputs are used to adaptively inject the appropriate superposition of input modes in a multimode gain fiber to achieve the desired mode at the output. By actively adjusting the relative phase of the single-mode inputs, near-unity coherent combination resulting in a single fundamental mode at the output is achieved. PMID:26906999
Spatially-Anisotropic Parallel Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Brown-Dymkoski, Eric
2015-11-01
Despite the latest advancements in development of robust wavelet-based adaptive numerical methodologies to solve partial differential equations, they all suffer from two major "curses": 1) the reliance on rectangular domain and 2) the "curse of anisotropy" (i.e. homogeneous wavelet refinement and inability to have spatially varying aspect ratio of the mesh elements). The new method addresses both of these challenges by utilizing an adaptive anisotropic wavelet transform on curvilinear meshes that can be either algebraically prescribed or calculated on the fly using PDE-based mesh generation. In order to ensure accurate representation of spatial operators in physical space, an additional adaptation on spatial physical coordinates is also performed. It is important to note that when new nodes are added in computational space, the physical coordinates can be approximated by interpolation of the existing solution and additional local iterations to ensure that the solution of coordinate mapping PDEs is converged on the new mesh. In contrast to traditional mesh generation approaches, the cost of adding additional nodes is minimal, mainly due to localized nature of iterative mesh generation PDE solver requiring local iterations in the vicinity of newly introduced points. This work was supported by ONR MURI under grant N00014-11-1-069.
NASA Technical Reports Server (NTRS)
Kayanickupuram, A. J.; Ramos, K. A.; Cordova, M. L.; Wood, S. J.
2009-01-01
The need to resolve new patterns of sensory feedback in altered gravitoinertial environments requires cognitive processes to develop appropriate reference frames for spatial orientation awareness. The purpose of this study was to examine deficits in spatial cognitive performance during adaptation to conflicting tilt-translation stimuli. Fourteen subjects were tilted within a lighted enclosure that simultaneously translated at one of 3 frequencies. Tilt and translation motion was synchronized to maintain the resultant gravitoinertial force aligned with the longitudinal body axis, resulting in a mismatch analogous to spaceflight in which the canals and vision signal tilt while the otoliths do not. Changes in performance on different spatial cognitive tasks were compared 1) without motion, 2) with tilt motion alone (pitch at 0.15, 0.3 and 0.6 Hz or roll at 0.3 Hz), and 3) with conflicting tilt-translation motion. The adaptation paradigm was continued for up to 30 min or until the onset of nausea. The order of the adaptation conditions was counter-balanced across 4 different test sessions. There was a significant effect of stimulus frequency on both motion sickness and spatial cognitive performance. Only 3 of 14 subjects were able to complete the full 30 min protocol at 0.15 Hz, while 7 of 14 completed 0.3 Hz and 13 of 14 completed 0.6 Hz. There were no changes in simple visual-spatial cognitive tests, e.g., mental rotation or match-to-sample. There were significant deficits during 0.15 Hz adaptation in both accuracy and reaction time during a spatial reference task in which subjects were asked to identify a match of a 3D reoriented cube assemblage. Our results are consistent with anecdotal reports of cognitive impairment that are common during sensorimotor adaptation with G-transitions. We conclude that these cognitive deficits stem from the ambiguity of spatial reference frames for central processing of inertial motion cues.
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
Averaging analysis for discrete time and sampled data adaptive systems
NASA Technical Reports Server (NTRS)
Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.
1986-01-01
Earlier continuous time averaging theorems are extended to the nonlinear discrete time case. Theorems for the study of the convergence analysis of discrete time adaptive identification and control systems are used. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous time systems are also studied.
Study on architecture and implementation of adaptive spatial information service
NASA Astrophysics Data System (ADS)
Yu, Zhuoyuan; Wang, Yingjie; Luo, Bin
2007-06-01
More and more geo-spatial information has been disseminated to the Internet based on WebGIS architecture. Some of these online mapping applications, such as Google Maps, MapQuest, go2map and mapbar, have already been widely used in recent years. However, due to the limitations of web map technology and the transmission speed of large geo-spatial data over the Internet, most of these web map systems employ (pyramid-indexed) raster map modeling technology. This method can shorten the server's response time but largely reduces the flexibility and visualization effect of the web map provided. It is difficult for such systems to adaptively change map contents or map styles for varying user demands. This paper proposes a new system architecture for adaptive web map service that integrates recent network and web map technologies such as SVG, Ajax and user modeling. Its main advantages are as follows. First, it is user-customizable: in the proposed map system, users can design the map contents, styles and interfaces online by themselves. Second, it is more intelligent: it can record users' interactions with the system, analyze user profiles and predict user behavior. Users' interests are inferred and tasks suggested based on the different user models generated by the system; for instance, when a new user logs in, the nearest user model is matched and interactive suggestions are provided by the system for that user. This is a more powerful and efficient way of sharing spatial information. The paper first discusses the main system architecture of the adaptive spatial information service, which consists of three parts: a user layer, a map application layer and a database layer. The user layer is distributed on the client side and includes the web map (SVG) browser, map renderer and map visualization component. The application layer includes the map application server, user interface generation, user analysis and user modeling, etc. Based on user models, map content, style and user
Adaptive sampling program support for expedited site characterization
Johnson, R.
1993-10-01
Expedited site characterizations offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the "real-time" data generated by an expedited site characterization. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system for data fusion, management and display; and combined Bayesian/geostatistical methods for contamination extent estimation and sample location selection.
Spatial Compression Impairs Prism Adaptation in Healthy Individuals
Scriven, Rachel J.; Newport, Roger
2013-01-01
Neglect patients typically present with gross inattention to one side of space following damage to the contralateral hemisphere. While prism-adaptation (PA) is effective in ameliorating some neglect behaviors, the mechanisms involved and their relationship to neglect remain unclear. Recent studies have shown that conscious strategic control (SC) processes in PA may be impaired in neglect patients, who are also reported to show extraordinarily long aftereffects compared to healthy participants. Determining the underlying cause of these effects may be the key to understanding therapeutic benefits. Alternative accounts suggest that reduced SC might result from a failure to detect prism-induced reaching errors properly either because (a) the size of the error is underestimated in compressed visual space or (b) pathologically increased error-detection thresholds reduce the requirement for error correction. The purpose of this study was to model these two alternatives in healthy participants and to examine whether SC and subsequent aftereffects were abnormal compared to standard PA. Each participant completed three PA procedures within a MIRAGE mediated reality environment with direction errors recorded before, during and after adaptation. During PA, visual feedback of the reach could be compressed, perturbed by noise, or represented veridically. Compressed visual space significantly reduced SC and aftereffects compared to control and noise conditions. These results support recent observations in neglect patients, suggesting that a distortion of spatial representation may successfully model neglect and explain neglect performance while adapting to prisms. PMID:23675332
[Spatial distribution pattern of Pontania dolichura larvae and sampling technique].
Zhang, Feng; Chen, Zhijie; Zhang, Shulian; Zhao, Huiyan
2006-03-01
In this paper, the spatial distribution pattern of Pontania dolichura larvae was analyzed with Taylor's power law, Iwao's distribution function, and six aggregation indexes. The results showed that the spatial distribution pattern of P. dolichura larvae was aggregated, that the basic component of the distribution was the individual colony, and that aggregation intensity increased with density. On branches, the aggregation was caused by the egg-laying behavior of adults and the spatial position of leaves, while on leaves, it was caused by the spatial position of new leaves in spring when m < 2.37, and by the spatial position of new leaves in spring together with eclosion and egg-laying behavior when m > 2.37. Using the parameters alpha and beta in Iwao's m* -m regression equation, the optimal and sequential sampling numbers were determined. PMID:16724746
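The optimal sample number derived from Iwao's m*-m regression (m* = alpha + beta * m) is conventionally n = (t/D)^2 * ((alpha + 1)/m + beta - 1), where D is the allowable relative error of the mean and t the normal deviate. A sketch with illustrative parameter values, not the ones estimated for P. dolichura:

```python
import math

def iwao_optimal_n(mean, alpha, beta, D=0.2, t=1.96):
    """Optimal sample size under Iwao's m*-m regression (m* = alpha + beta*m).

    D is the allowable relative error of the mean estimate; t is the normal
    deviate for the desired confidence level.
    """
    return math.ceil((t / D) ** 2 * ((alpha + 1.0) / mean + (beta - 1.0)))

# Hypothetical aggregated population: alpha = 0.5, beta = 1.3, mean density 2.0
n = iwao_optimal_n(2.0, 0.5, 1.3)  # 101 samples at 20% relative error
```

Relaxing the allowable error shrinks the plan quadratically: doubling D to 0.4 cuts the requirement to 26 samples.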
Effects of spatial scale of sampling on food web structure
Wood, Spencer A; Russell, Roly; Hanson, Dieta; Williams, Richard J; Dunne, Jennifer A
2015-01-01
This study asks whether the spatial scale of sampling alters structural properties of food webs and whether any differences are attributable to changes in species richness and connectance with scale. Understanding how different aspects of sampling effort affect ecological network structure is important for both fundamental ecological knowledge and the application of network analysis in conservation and management. Using a highly resolved food web for the marine intertidal ecosystem of the Sanak Archipelago in the Eastern Aleutian Islands, Alaska, we assess how commonly studied properties of network structure differ for 281 versions of the food web sampled at five levels of spatial scale representing six orders of magnitude in area spread across the archipelago. Species (S) and link (L) richness both increased by approximately one order of magnitude across the five spatial scales. Links per species (L/S) more than doubled, while connectance (C) decreased by approximately two-thirds. Fourteen commonly studied properties of network structure varied systematically with spatial scale of sampling, some increasing and others decreasing. While ecological network properties varied systematically with sampling extent, analyses using the niche model and a power-law scaling relationship indicate that for many properties, this apparent sensitivity is attributable to the increasing S and decreasing C of webs with increasing spatial scale. As long as effects of S and C are accounted for, areal sampling bias does not have a special impact on our understanding of many aspects of network structure. However, attention does need to be paid to some properties such as the fraction of species in loops, which increases more than expected with greater spatial scales of sampling. PMID:26380704
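The quantities tracked across scales above are simple functions of the species and link lists: links per species is L/S and directed connectance is C = L/S^2. A toy computation (the four-link web below is invented, not the Sanak data):

```python
def web_stats(links):
    """Species richness S, link richness L, links per species, and directed
    connectance C = L / S**2 for a food web given as (consumer, resource) pairs."""
    species = {s for pair in links for s in pair}  # every name seen on either side
    S, L = len(species), len(links)
    return S, L, L / S, L / S ** 2

S, L, links_per_species, C = web_stats({
    ("otter", "urchin"), ("otter", "crab"),
    ("urchin", "kelp"), ("crab", "mussel"),
})
# S = 5, L = 4, L/S = 0.8, C = 0.16
```

Because C divides L by S squared, adding species faster than links (as happens with larger sampling areas) drives connectance down even as L/S rises.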
Contribution of Cerebellar Sensorimotor Adaptation to Hippocampal Spatial Memory
Passot, Jean-Baptiste; Sheynikhovich, Denis; Duvelle, Éléonore; Arleo, Angelo
2012-01-01
Complementing its primary role in motor control, cerebellar learning has also a bottom-up influence on cognitive functions, where high-level representations build up from elementary sensorimotor memories. In this paper we examine the cerebellar contribution to both procedural and declarative components of spatial cognition. To do so, we model a functional interplay between the cerebellum and the hippocampal formation during goal-oriented navigation. We reinterpret and complete existing genetic behavioural observations by means of quantitative accounts that cross-link synaptic plasticity mechanisms, single cell and population coding properties, and behavioural responses. In contrast to earlier hypotheses positing only a purely procedural impact of cerebellar adaptation deficits, our results suggest a cerebellar involvement in high-level aspects of behaviour. In particular, we propose that cerebellar learning mechanisms may influence hippocampal place fields, by contributing to the path integration process. Our simulations predict differences in place-cell discharge properties between normal mice and L7-PKCI mutant mice lacking long-term depression at cerebellar parallel fibre-Purkinje cell synapses. On the behavioural level, these results suggest that, by influencing the accuracy of hippocampal spatial codes, cerebellar deficits may impact the exploration-exploitation balance during spatial navigation. PMID:22485133
Adaptive spatial carrier frequency method for fast monitoring optical properties of fibres
NASA Astrophysics Data System (ADS)
Sokkar, T. Z. N.; El-Farahaty, K. A.; El-Bakary, M. A.; Omar, E. Z.; Agour, M.; Hamza, A. A.
2016-05-01
We present an extension of the adaptive spatial carrier frequency method, proposed for fast measurement of the optical properties of fibrous materials. The method can be considered as two complementary steps. In the first step, the support of the adaptive filter is defined. In the second step, the angle between the sample under test and the interference fringe system generated by the utilized interferometer is determined, and the support of the optical filter associated with the implementation of the adaptive spatial carrier frequency method is rotated accordingly. The method is experimentally verified by measuring the optical properties of a polypropylene (PP) fibre with the help of a Mach-Zehnder interferometer. The results show that errors resulting from rotating the fibre with respect to the interference fringes of the interferometer are reduced compared with the traditional band-pass filter method. This conclusion was drawn by comparing results for the mean refractive index of drawn PP fibre at the parallel polarization direction obtained with the traditional and the new adaptive spatial carrier frequency methods.
Visual sensitivity to spatially sampled modulation in human observers
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Macleod, Donald I. A.
1991-01-01
Thresholds were measured for detecting spatial luminance modulation in regular lattices of visually discrete dots. Thresholds for modulation of a lattice are generally higher than the corresponding threshold for modulation of a continuous field, and the size of the threshold elevation, which depends on the spacing of the lattice elements, can be as large as one log unit. The largest threshold elevations are seen when the sample spacing is 12 min arc or greater. Theories based on response compression cannot explain the further observation that the threshold elevations due to spatial sampling are also dependent on modulation frequency: the greatest elevations occur with higher modulation frequencies. The idea that this is due to masking of the modulation frequency by the spatial frequencies in the sampling lattice is considered.
Adaptive Sampling for High Throughput Data Using Similarity Measures
Bulaevskaya, V.; Sales, A. P.
2015-05-06
The need for adaptive sampling arises in the context of high throughput data because the rates of data arrival are many orders of magnitude larger than the rates at which they can be analyzed. A very fast decision must therefore be made regarding the value of each incoming observation and its inclusion in the analysis. In this report we discuss one approach to adaptive sampling, based on the new data point’s similarity to the other data points being considered for inclusion. We present preliminary results for one real and one synthetic data set.
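One simple reading of similarity-based adaptive sampling is a streaming rule that retains an incoming observation only when it is sufficiently dissimilar from everything already kept. The sketch below uses Euclidean distance with a fixed threshold; both the measure and the threshold are illustrative assumptions, not the report's actual algorithm:

```python
def is_novel(point, kept, min_dist=1.0):
    """Accept the incoming point only if no retained point lies within
    min_dist of it (Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return all(dist(point, k) >= min_dist for k in kept)

kept = []
for p in [(0, 0), (0.1, 0.1), (5, 5), (5.2, 5.0), (9, 1)]:
    if is_novel(p, kept):
        kept.append(p)  # near-duplicates are dropped without further analysis
# kept -> [(0, 0), (5, 5), (9, 1)]
```

The per-point decision is O(|kept|), which is what makes this style of rule cheap enough for high-throughput streams; spatial indexes can reduce it further.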
Adaptive importance sampling of random walks on continuous state spaces
Baggerly, K.; Cox, D.; Picard, R.
1998-11-01
The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.
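As a fixed (non-adaptive) baseline for this setting, the sketch below estimates a hitting probability of a scored random walk by sampling a biased walk and reweighting each path by its likelihood ratio; the adaptive schemes reviewed in the paper go further by iteratively updating the sampling distribution toward the zero-variance solution. The walk, bias and target here are invented for illustration:

```python
import random

def walk_hit_prob_is(n_paths, p_bias=0.7, target=3, seed=1):
    """Importance-sampling estimate of the probability that a symmetric random
    walk started at 0 reaches +target before -target. Steps are drawn from a
    right-biased walk; each path is reweighted by the likelihood ratio
    0.5/p_bias per right step and 0.5/(1 - p_bias) per left step."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        pos, weight = 0, 1.0
        while abs(pos) < target:
            if rng.random() < p_bias:
                pos, weight = pos + 1, weight * 0.5 / p_bias
            else:
                pos, weight = pos - 1, weight * 0.5 / (1.0 - p_bias)
        if pos == target:        # the score: 1 if +target is reached first
            total += weight
    return total / n_paths

est = walk_hit_prob_is(20000)    # true value is 0.5 by symmetry
```

The bias makes scoring paths common while the weights keep the estimator unbiased; choosing the bias well (the adaptive part) is what controls the variance.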
On the Effect of Preferential Sampling in Spatial Prediction
The choice of the sampling locations in a spatial network is often guided by practical demands. In particular, typically, locations are preferentially chosen to capture high values of a response, for example, air pollution levels in environmental monitoring. Then, model estimatio...
VARIANCE ESTIMATION FOR SPATIALLY BALANCED SAMPLES OF ENVIRONMENTAL RESOURCES
The spatial distribution of a natural resource is an important consideration in designing an efficient survey or monitoring program for the resource. We review a unified strategy for designing probability samples of discrete, finite resource populations, such as lakes within som...
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability hold back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimization criteria: for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimization criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD are combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; the scaled values are aggregated using the weighted sum method. A graphical display allows one to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability decreases exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
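A toy version of the annealing loop with the MSSD (mean squared shortest distance) criterion described above: perturb one sample point at a time and accept worse configurations with a probability that decays exponentially over the iterations. This is a sketch of the method, not code from the spsann package:

```python
import math
import random

def ssa_mssd(candidates, n_pts, n_iter=2000, seed=0):
    """Spatial simulated annealing: choose n_pts locations from a candidate
    grid so as to minimize the mean squared distance from every candidate to
    its nearest sample point (the MSSD criterion for spatial interpolation)."""
    rng = random.Random(seed)

    def mssd(sample):
        return sum(min((cx - sx) ** 2 + (cy - sy) ** 2 for sx, sy in sample)
                   for cx, cy in candidates) / len(candidates)

    sample = rng.sample(candidates, n_pts)
    energy = mssd(sample)
    for it in range(n_iter):
        temperature = math.exp(-5.0 * it / n_iter)   # exponential cooling
        trial = sample[:]
        trial[rng.randrange(n_pts)] = rng.choice(candidates)  # perturb one point
        trial_energy = mssd(trial)
        # Always accept improvements; accept worse states with a probability
        # that shrinks as the temperature cools (Metropolis rule).
        if (trial_energy < energy or
                rng.random() < math.exp((energy - trial_energy) /
                                        max(temperature, 1e-12))):
            sample, energy = trial, trial_energy
    return sample, energy

grid = [(x, y) for x in range(10) for y in range(10)]
points, energy = ssa_mssd(grid, 4)   # 4 points spread over a 10 x 10 grid
```

For this grid the ideal configuration (one point per 5 x 5 block) has MSSD of 4, so the annealed energy should land well below that of a random start.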
Adaptive Sampling approach to environmental site characterization: Phase 1 demonstration
Floran, R.J.; Bujewski, G.E.; Johnson, R.L.
1995-07-01
A technology demonstration that optimizes sampling strategies and real-time data collection was carried out at the Kirtland Air Force Base (KAFB) RB-11 Radioactive Burial Site, Albuquerque, New Mexico in August 1994. The project, which was funded by the Strategic Environmental Research and Development Program (SERDP), involved the application of a geostatistical-based Adaptive Sampling methodology and software with on-site field screening of soils for radiation, organic compounds and metals. The software, known as Plume(TM), was developed at Argonne National Laboratory as part of the DOE/OTD-funded Mixed Waste Landfill Integrated Demonstration (MWLID). The objective of the investigation was to compare an innovative Adaptive Sampling approach that stressed real-time decision-making with a conventional RCRA-driven site characterization carried out by the Air Force. The latter investigation used a standard drilling and sampling plan as mandated by the Environmental Protection Agency (EPA). To make the comparison realistic, the same contractors and sampling equipment (Geoprobe(R) soil samplers) were used. In both investigations, soil samples were collected at several depths at numerous locations adjacent to burial trenches that contain low-level radioactive waste and animal carcasses; some trenches may also contain mixed waste. Neither study revealed the presence of contaminants appreciably above risk-based action levels, indicating that minimal to no migration has occurred away from the trenches. The combination of Adaptive Sampling with field screening achieved a similar level of confidence compared to the Resource Conservation and Recovery Act (RCRA) investigation regarding the potential migration of contaminants at the site.
Sampling and surface reconstruction with adaptive-size meshes
NASA Astrophysics Data System (ADS)
Huang, Wen-Chen; Goldgof, Dmitry B.
1992-03-01
This paper presents a new approach to sampling and surface reconstruction that uses physically based models. We introduce adaptive-size meshes, which automatically update the size of the meshes as the distance between the nodes changes. We have applied the adaptive-size algorithm to the following three applications: (1) sampling of intensity data; (2) surface reconstruction of range data; (3) surface reconstruction of 3-D computed tomography left-ventricle data. The LV data were acquired by a 3-D computed tomography (CT) scanner, were provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School, and consist of 16 volumetric (128 x 128 x 118) images taken through the heart cycle.
NASA Astrophysics Data System (ADS)
Yushkov, Konstantin B.; Molchanov, Vladimir Y.; Belousov, Pavel V.; Abrosimov, Aleksander Y.
2016-01-01
We report a method for edge enhancement in the images of transparent samples using analog image processing in coherent light. The experimental technique is based on adaptive spatial filtering with an acousto-optic tunable filter in a telecentric optical system. We demonstrate processing of microscopic images of unstained and stained histological sections of human thyroid tumor with improved contrast.
Spatial sampling of head electrical fields: the geodesic sensor net.
Tucker, D M
1993-09-01
In studying brain electrical activity from scalp sensors (electrodes), the optimal measurement would sample the potential field over the entire surface of the braincase, with a sufficient density to avoid spatial aliasing of the surface electrical fields. The geodesic sensor net organizes an array of sensors, each enclosed in a saline sponge, in a geodesic tension structure comprised of elastic threads. By fixing a sensor pedestal at each geodesic vertex, the geometry of the tension structure ensures that the sensor array is distributed evenly across the accessible head surface. Furthermore, the tension of the network is translated into compression that is divided equally among the sensor pedestals and directed along head-radial vectors. Various geodesic partitioning frequencies may be selected to provide an even surface distribution of the dense sensor arrays (e.g., 64, 128, or 256) that appear to be necessary to provide adequate spatial sampling of brain electrical events. PMID:7691542
Signal Adaptive System for Space/Spatial-Frequency Analysis
NASA Astrophysics Data System (ADS)
Ivanović, Veselin N.; Jovanovski, Srdjan
2010-12-01
This paper outlines the development of a multiple-clock-cycle implementation (MCI) of a signal-adaptive two-dimensional (2D) system for space/spatial-frequency (S/SF) signal analysis. The design is based on a method, also proposed here, for improved S/SF representation of the analyzed 2D signals. The proposed MCI design optimizes critical design performances related to hardware complexity, making it a suitable system for real-time implementation on an integrated chip. Additionally, the design allows the implemented system to take a variable number of clock cycles (CLKs) in different frequency-frequency points during execution, using only those necessary for the desired, 2D-Wigner-distribution-like presentation of autoterms. This ability represents a major advantage of the proposed design, which helps to optimize the execution time and produce an improved, cross-terms-free S/SF signal representation. The design has been verified by a field-programmable gate array (FPGA) circuit design capable of performing S/SF analysis of 2D signals in real time.
Postolache, Dragos; Lascoux, Martin; Drouzas, Andreas D.; Källman, Thomas; Leonarduzzi, Cristina; Liepelt, Sascha; Piotti, Andrea; Popescu, Flaviu; Roschanski, Anna M.; Zhelev, Peter; Fady, Bruno; Vendramin, Giovanni Giuseppe
2016-01-01
Background Local adaptation is a key driver of phenotypic and genetic divergence at loci responsible for adaptive trait variation in forest tree populations. Its experimental assessment requires rigorous sampling strategies such as those involving population pairs replicated across broad spatial scales. Methods A hierarchical Bayesian model of selection (HBM) that explicitly considers both the replication of the environmental contrast and the hierarchical genetic structure among replicated study sites is introduced. Its power was assessed through simulations and compared to classical ‘within-site’ approaches (FDIST, BAYESCAN) and a simplified, within-site version of the model introduced here (SBM). Results HBM demonstrates that hierarchical approaches are very powerful for detecting replicated patterns of adaptive divergence, with low false-discovery (FDR) and false-non-discovery (FNR) rates compared to analyzing the different sites separately through within-site approaches. The hypothesis of local adaptation to altitude was further addressed by analyzing replicated Abies alba population pairs (low and high elevations) across the species’ southern distribution range, where the effects of climatic selection are expected to be the strongest. For comparison, a single population pair from the closely related species A. cephalonica was also analyzed. The hierarchical model did not detect any pattern of adaptive divergence to altitude replicated in the different study sites. Instead, idiosyncratic patterns of local adaptation among sites were detected by within-site approaches. Conclusion Hierarchical approaches may miss idiosyncratic patterns of adaptation among sites, and we strongly recommend the use of both hierarchical (multi-site) and classical (within-site) approaches when addressing the question of adaptation across broad spatial scales. PMID:27392065
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus, exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible from a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
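The surrogate-guided loop described in this abstract can be sketched with a toy limit-surface search. Everything here is hypothetical: the "simulation" is a cheap disk-membership test, and the surrogate is a k-nearest-neighbor vote rather than the prediction models or Morse-Smale segmentations the study actually compares:

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for an expensive simulation run: "failure" (1) outside
# a disk of radius sqrt(0.5), "success" (0) inside it.
def run_simulation(x):
    return 1 if x[0] ** 2 + x[1] ** 2 > 0.5 else 0

def nearest(samples, x, k=3):
    return sorted(samples, key=lambda s: (s[0][0] - x[0]) ** 2 + (s[0][1] - x[1]) ** 2)[:k]

def disagreement(samples, x, k=3):
    # Acquisition score: variance of neighbour labels, highest near the limit surface.
    mean = sum(lbl for _, lbl in nearest(samples, x, k)) / k
    return mean * (1.0 - mean)

# Seed the surrogate with a few random trials...
samples = []
for _ in range(8):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    samples.append((x, run_simulation(x)))

# ...then adaptively spend the remaining budget where the surrogate is uncertain.
for _ in range(30):
    candidates = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
    x = max(candidates, key=lambda c: disagreement(samples, c))
    samples.append((x, run_simulation(x)))

# Adaptively chosen points should cluster near the true limit surface.
adaptive = samples[8:]
mean_dist = sum(abs(math.hypot(*x) - math.sqrt(0.5)) for x, _ in adaptive) / len(adaptive)
print(len(samples), round(mean_dist, 3))
```

The same loop structure carries over to real surrogates; only `run_simulation` and the acquisition score change.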
St-Jean, Samuel; Coupé, Pierrick; Descoteaux, Maxime
2016-08-01
Diffusion magnetic resonance imaging (MRI) datasets suffer from low Signal-to-Noise Ratio (SNR), especially at high b-values. Data acquired at high b-values contain relevant information and are now of great interest for microstructural and connectomics studies. High noise levels bias the measurements due to the non-Gaussian nature of the noise, which in turn can lead to a false and biased estimation of the diffusion parameters. Additionally, the use of in-plane acceleration techniques during the acquisition leads to a spatially varying noise distribution, which depends on the parallel acceleration method implemented on the scanner. This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data, without adding to the scanning time. We first apply a statistical framework to convert both stationary and non-stationary Rician and noncentral chi distributed noise to Gaussian distributed noise, effectively removing the bias. We then introduce a spatially and angularly adaptive denoising technique, the Non Local Spatial and Angular Matching (NLSAM) algorithm. Each volume is first decomposed into small 4D overlapping patches, thus capturing the spatial and angular structure of the diffusion data, and a dictionary of atoms is learned on those patches. A local sparse decomposition is then found by bounding the reconstruction error with the local noise variance. We compare against three other state-of-the-art denoising methods and show quantitative local and connectivity results on a synthetic phantom and on an in-vivo high resolution dataset. Overall, our method restores perceptual information, removes the noise bias in common diffusion metrics, restores the coherence of the extracted peaks, and improves the reproducibility of tractography on the synthetic dataset. On the 1.2 mm high resolution in-vivo dataset, our denoising improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition. Our
Adaptive sample map for Monte Carlo ray tracing
NASA Astrophysics Data System (ADS)
Teng, Jun; Luo, Lixin; Chen, Zhibo
2010-07-01
The Monte Carlo ray tracing algorithm is widely used by production-quality renderers to generate synthesized images for films and TV programs. Noise artifacts exist in synthetic images generated by Monte Carlo ray tracing methods. In this paper, a novel noise artifact detection and noise level representation method is proposed. We first apply the discrete wavelet transform (DWT) to a synthetic image; the high-frequency sub-bands of the DWT result encode the noise information. The sub-band coefficients are then combined to generate a noise level description of the synthetic image, called the noise map in this paper. The noise map is then subdivided into blocks for robust noise level metric calculation. Increasing the samples per pixel in a Monte Carlo ray tracer can reduce the noise of a synthetic image to a visually unnoticeable level. A noise-to-sample-number mapping algorithm is therefore performed on each block of the noise map: higher noise values are mapped to larger sample numbers and lower noise values to smaller sample numbers. The result of this mapping is called the sample map. Each pixel in a sample map can be used by a Monte Carlo ray tracer to reduce the noise level in the corresponding block of pixels in a synthetic image. However, this block-based scheme produces blocky artifacts like those that appear in video and image compression algorithms. We therefore smooth the sample map with a Gaussian filter; the result is the adaptive sample map (ASP). The ASP serves two purposes in the rendering process: its statistics can be used as a noise level metric for the synthetic image, and it can be used by a Monte Carlo ray tracer to refine the synthetic image adaptively, reducing the noise to an unnoticeable level in less rendering time than the brute-force method.
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (i.e., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
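The self-throttling behavior described above can be illustrated with a minimal sketch. The signal, thresholds, and surprise heuristic are all invented for illustration; the paper's approach uses a Gaussian process model and information gain, not this simple change detector:

```python
# Hypothetical signal: a quiet baseline with an anomalous burst from t = 40 to 50.
def signal(t):
    return 5.0 if 40 <= t < 50 else 0.0

interval, t, times = 8, 0, []
last = signal(0)
while t < 100:
    times.append(t)
    value = signal(t)
    surprise = abs(value - last)  # crude stand-in for expected information gain
    # Throttle: sample fast after a surprising reading, relax toward the slow cadence.
    interval = 1 if surprise > 1.0 else min(interval * 2, 8)
    last = value
    t += interval

burst = [u for u in times if 40 <= u < 50]
quiet = [u for u in times if u < 40]
print(len(burst), len(quiet))  # the burst decade is sampled more densely than the quiet span
```

Even this crude rule concentrates samples around the anomaly: the 10-unit burst window collects almost as many samples as the 40-unit quiet span before it.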
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
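A minimal sketch of the lookup-or-simulate decision at the heart of this scheme, assuming a 1D parameter space, a toy stand-in for the MD flux evaluation, and a plain in-memory list instead of Redis. The distance-weighted estimate is a crude proxy for kriging, not the paper's implementation:

```python
import math

# Hypothetical micro-scale model standing in for an expensive MD flux evaluation.
def expensive_flux(strain):
    return math.sin(3 * strain)

db = []        # table of (input, output) pairs, i.e. the "distributed database"
md_calls = 0

def predict(x, k=4):
    # Kriging-like estimate: distance-weighted mean of the k nearest table entries,
    # with the weighted spread of neighbour values as a crude uncertainty proxy.
    if len(db) < k:
        return None, float("inf")
    near = sorted(db, key=lambda p: abs(p[0] - x))[:k]
    w = [1.0 / (abs(px - x) + 1e-9) for px, _ in near]
    mean = sum(wi * py for wi, (_, py) in zip(w, near)) / sum(w)
    var = sum(wi * (py - mean) ** 2 for wi, (_, py) in zip(w, near)) / sum(w)
    return mean, math.sqrt(var)

for i in range(200):
    x = (i % 97) / 97.0          # macro-scale queries revisit similar inputs
    est, err = predict(x)
    if err > 0.05:               # prediction not trusted: fall back to "MD"
        est = expensive_flux(x)
        md_calls += 1
        db.append((x, est))

print(md_calls)  # far fewer expensive evaluations than the 200 queries
```

As queries revisit previously tabulated inputs, the uncertainty proxy drops below the threshold and the lookup path replaces the expensive evaluation, which is the source of the reported 2.5 to 25 times speedup.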
Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data
Liu, Zitao; Hauskrecht, Milos
2016-01-01
Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient's condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and cover only a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both population-based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189
Distributed Database Kriging for Adaptive Sampling (D2 KAS)
NASA Astrophysics Data System (ADS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; Rouet-Leduc, Bertrand; McPherson, Allen L.; Germann, Timothy C.; Junghans, Christoph
2015-07-01
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5-25, while retaining high accuracy for various choices of the algorithm parameters.
Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling
2016-05-01
Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns within a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method that improves local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations performed on four medical datasets shows that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. PMID:27058283
Improving Wang-Landau sampling with adaptive windows
NASA Astrophysics Data System (ADS)
Cunha-Netto, A. G.; Caparica, A. A.; Tsai, Shan-Ho; Dickman, Ronald; Landau, D. P.
2008-11-01
Wang-Landau sampling (WLS) of large systems requires dividing the energy range into “windows” and joining the results of simulations in each window. The resulting density of states (and associated thermodynamic functions) is shown to suffer from boundary effects in simulations of lattice polymers and the five-state Potts model. Here, we implement WLS using adaptive windows. Instead of defining fixed energy windows (or windows in the energy-magnetization plane for the Potts model), the boundary positions depend on the set of energy values on which the histogram is flat at a given stage of the simulation. Shifting the windows each time the modification factor f is reduced, we eliminate border effects that arise in simulations using fixed windows. Adaptive windows extend significantly the range of system sizes that may be studied reliably using WLS.
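To fix ideas, here is a basic single-window Wang-Landau sketch on a toy system with a known density of states (the paper's contribution, adaptively repositioned window boundaries, is omitted here). All parameters are illustrative:

```python
import math
import random

random.seed(1)

# Toy system: 10 independent spins, E = number of "up" spins.
# The exact density of states is binomial(10, E), a convenient correctness check.
N = 10
spins = [0] * N
E = 0
log_g = [0.0] * (N + 1)   # running estimate of log density of states
hist = [0] * (N + 1)      # visit histogram used for the flatness check
log_f = 1.0               # log of the Wang-Landau modification factor f

while log_f > 1e-4:
    for _ in range(10000):
        i = random.randrange(N)
        E_new = E + (1 if spins[i] == 0 else -1)
        # Wang-Landau rule: accept a move with probability g(E) / g(E_new).
        if math.log(random.random()) < log_g[E] - log_g[E_new]:
            spins[i] ^= 1
            E = E_new
        log_g[E] += log_f
        hist[E] += 1
    # Flatness check: every energy visited often enough, then halve log f.
    if min(hist) > 0.8 * sum(hist) / len(hist):
        hist = [0] * (N + 1)
        log_f /= 2

# Estimated log g(5) - log g(0) should approach log C(10,5) = log 252.
est = log_g[5] - log_g[0]
print(round(est, 2))
```

With fixed multi-window WLS one would run this walk separately per energy window and join the `log_g` pieces at the window borders, which is exactly where the boundary artifacts the abstract describes arise.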
Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks
Xu, Yunfei; Choi, Jongeun
2011-01-01
This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. This approach is based on a class of anisotropic covariance functions of Gaussian processes introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori. Hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize the information-theoretic cost function of the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme. PMID:22163785
Binary hologram generation based on shape adaptive sampling
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.; Pan, Y.; Poon, T.-C.
2014-05-01
Past research has revealed that by down-sampling the projected intensity profile of a source object scene with a regular sampling lattice, a binary Fresnel hologram can be generated swiftly while preserving favorable quality in its reconstructed image. However, this method also produces a prominent textural pattern that conflicts with the geometrical profile of the object scene, leading to an unnatural visual perception. In this paper, we overcome this problem with a down-sampling process that is adaptive to the geometry of the object. Experimental results demonstrate that by applying our proposed method to generate a binary hologram, the reconstructed image is rendered with a texture that conforms to the shape of the three-dimensional object(s).
A random spatial sampling method in a rural developing nation
2014-01-01
Background Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. Methods We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. Results This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Conclusions Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available. PMID:24716473
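A minimal sketch of drawing random survey points within stratum bounding boxes, with hypothetical strata names and coordinates; in practice the stratum boundaries would come from GIS software and the points would be loaded into a GPS unit for in-the-field household selection:

```python
import random

random.seed(7)

# Hypothetical strata: named bounding boxes (lon_min, lat_min, lon_max, lat_max)
# standing in for GIS-derived enumeration-area boundaries.
strata = {
    "village_A": (-91.55, 15.10, -91.50, 15.15),
    "village_B": (-91.50, 15.10, -91.45, 15.14),
}

def sample_stratum(bbox, n):
    # Draw n uniform random survey points inside the stratum's bounding box;
    # each point becomes a GPS waypoint, and the nearest household is selected.
    x0, y0, x1, y1 = bbox
    return [(random.uniform(x0, x1), random.uniform(y0, y1)) for _ in range(n)]

points = {name: sample_stratum(bbox, 30) for name, bbox in strata.items()}

inside = all(
    x0 <= x <= x1 and y0 <= y <= y1
    for name, (x0, y0, x1, y1) in strata.items()
    for x, y in points[name]
)
print(inside, sum(len(p) for p in points.values()))
```

Stratifying by administrative unit before drawing points is what guarantees each unit a fixed share of the sample, independent of its household density.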
Smith, David R; Gray, Brian R; Newton, Teresa J; Nichols, Doug
2010-11-01
Adaptive sampling designs are recommended where, as is typical with freshwater mussels, the outcome of interest is rare and clustered. However, the performance of adaptive designs has not been investigated when outcomes are not only rare and clustered but also imperfectly detected. We address this combination of challenges using data simulated to mimic properties of freshwater mussels from a reach of the upper Mississippi River. Simulations were conducted under a range of sample sizes and detection probabilities. Under perfect detection, efficiency of the adaptive sampling design increased relative to the conventional design as sample size increased and as density decreased. Also, the probability of sampling occupied habitat was four times higher for adaptive than conventional sampling of the lowest density population examined. However, imperfect detection resulted in substantial biases in sample means and variances under both adaptive sampling and conventional designs. The efficiency of adaptive sampling declined with decreasing detectability. Also, the probability of encountering an occupied unit during adaptive sampling, relative to conventional sampling declined with decreasing detectability. Thus, the potential gains in the application of adaptive sampling to rare and clustered populations relative to conventional sampling are reduced when detection is imperfect. The results highlight the need to increase or estimate detection to improve performance of conventional and adaptive sampling designs. PMID:19946742
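The interaction between adaptive cluster sampling and imperfect detection can be sketched on a toy grid: a detection triggers surveying the neighboring units, so missed detections also suppress the adaptive expansion. All dimensions and probabilities are illustrative, not the simulation design of the study:

```python
import random

random.seed(3)

# Toy 20x20 habitat: 18 occupied cells in two clustered 3x3 patches.
occupied = {(r, c) for r0, c0 in [(3, 4), (14, 15)]
            for r in range(r0, r0 + 3) for c in range(c0, c0 + 3)}
all_cells = [(r, c) for r in range(20) for c in range(20)]
seeds = random.sample(all_cells, 40)  # same initial units for both scenarios

def adaptive_sample(seeds, p_detect):
    visited, frontier = set(), list(seeds)
    while frontier:
        cell = frontier.pop()
        r, c = cell
        if cell in visited or not (0 <= r < 20 and 0 <= c < 20):
            continue
        visited.add(cell)
        # A *detected* occupancy triggers surveying the four neighbouring units;
        # with imperfect detection, occupied cells are missed with prob 1 - p_detect.
        if cell in occupied and random.random() < p_detect:
            frontier += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return visited

hits_perfect = len(adaptive_sample(seeds, 1.0) & occupied)
hits_poor = len(adaptive_sample(seeds, 0.3) & occupied)
print(hits_perfect, hits_poor)
```

Because expansion only follows detections, lowering `p_detect` can never increase the occupied units encountered from the same seeds, mirroring the paper's finding that imperfect detection erodes the advantage of adaptive designs.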
Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling
NASA Astrophysics Data System (ADS)
Grace, J. M.; Verseux, C.; Gentry, D.; Moffet, A.; Thayabaran, R.; Wong, N.; Rothschild, L.
2013-12-01
The adaptation of micro-organisms to their environments is a complex process of interaction between the pressures of the environment and of competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates [Wielgoss et al., 2013]. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques [Wassmann et al., 2010]. Current methods for the controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistance following similar repeated exposure protocols [Alcántara-Díaz et al., 2004; Goldman and Travisano, 2011]. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used either to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of
Spatial-frequency-contingent color aftereffects: adaptation with one-dimensional stimuli.
Day, R H; Webster, W R; Gillies, O; Crassini, B
1992-01-01
The McCollough effect was shown to be spatial-frequency selective by Lovegrove and Over (1972) after adaptation with vertical colored square-wave gratings separated by 1 octave. Adaptation with slide-presented red and green vertical square-wave gratings separated by 1 octave failed to produce contingent color aftereffects (CAEs). However, when each of these gratings was adapted alone, strong CAEs were produced. Adaptation with vertical colored sine-wave gratings separated by 1 octave also failed to produce CAEs, but strong effects were produced by adaptation with each grating alone. By varying the spatial frequency of the test sine wave, CAEs were found to be tuned for spatial frequency at 2.85 octaves after adaptation at 4 cycles per degree (cpd) and at 2.30 octaves after adaptation at 8 cpd. Adaptation of both vertical and horizontal sine-wave gratings produced strong CAEs, with bandwidths ranging from 1.96 to 2.90 octaves and with lower adapting contrast producing weaker CAEs. These results indicate that the McCollough effect is more broadly tuned for spatial frequency than are simple adaptation effects. PMID:1549425
The Lyman alpha reference sample. VII. Spatially resolved Hα kinematics
NASA Astrophysics Data System (ADS)
Herenz, Edmund Christian; Gruyters, Pieter; Orlitova, Ivana; Hayes, Matthew; Östlin, Göran; Cannon, John M.; Roth, Martin M.; Bik, Arjan; Pardy, Stephen; Otí-Floranes, Héctor; Mas-Hesse, J. Miguel; Adamo, Angela; Atek, Hakim; Duval, Florent; Guaita, Lucia; Kunth, Daniel; Laursen, Peter; Melinder, Jens; Puschnig, Johannes; Rivera-Thorsen, Thøger E.; Schaerer, Daniel; Verhamme, Anne
2016-03-01
We present integral field spectroscopic observations with the Potsdam Multi-Aperture Spectrophotometer of all 14 galaxies in the z ~ 0.1 Lyman Alpha Reference Sample (LARS). We produce 2D line-of-sight velocity maps and velocity dispersion maps from the Balmer α (Hα) emission in our data cubes. These maps trace the spectral and spatial properties of the LARS galaxies' intrinsic Lyα radiation field. We show our kinematic maps spatially registered onto the Hubble Space Telescope Hα and Lyman α (Lyα) images. We can conjecture a causal connection between spatially resolved Hα kinematics and Lyα photometry for individual galaxies; however, no general trend can be established for the whole sample. Furthermore, we compute the intrinsic velocity dispersion σ0, the shearing velocity vshear, and the vshear/σ0 ratio from our kinematic maps. In general, LARS galaxies are characterised by high intrinsic velocity dispersions (54 km s-1 median) and low shearing velocities (65 km s-1 median). The vshear/σ0 values range from 0.5 to 3.2 with an average of 1.5. It is noteworthy that five galaxies of the sample are dispersion-dominated systems with vshear/σ0 < 1, and are thus kinematically similar to turbulent star-forming galaxies seen at high redshift. When linking our kinematical statistics to the global LARS Lyα properties, we find that dispersion-dominated systems show higher Lyα equivalent widths and higher Lyα escape fractions than systems with vshear/σ0 > 1. Our result indicates that turbulence in actively star-forming systems is causally connected to interstellar medium conditions that favour an escape of Lyα radiation. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). The reduced data cubes (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130
Estimation of cosmological parameters using adaptive importance sampling
Wraith, Darren; Kilbinger, Martin; Benabed, Karim; Prunet, Simon; Cappe, Olivier; Fort, Gersende; Cardoso, Jean-Francois; Robert, Christian P.
2009-07-15
We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
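A minimal population Monte Carlo sketch, assuming a toy 1D Gaussian target and a single Gaussian proposal whose moments are re-fit to the importance-weighted sample at each iteration (the cosmological application would use multivariate mixture proposals):

```python
import math
import random

random.seed(42)

# Toy unnormalised "posterior": a 1D Gaussian with mean 2 and unit variance.
def log_target(x):
    return -0.5 * (x - 2.0) ** 2

mu, sigma = 0.0, 3.0            # deliberately poor initial proposal
for _ in range(5):              # PMC iterations: sample, weight, adapt the proposal
    xs = [random.gauss(mu, sigma) for _ in range(2000)]
    # Importance weights: target over proposal density (normalisation cancels).
    log_w = [log_target(x) - (-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma))
             for x in xs]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    s = sum(w)
    # Adapt: the proposal moments become the importance-weighted sample moments.
    mu = sum(wi * xi for wi, xi in zip(w, xs)) / s
    sigma = math.sqrt(sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, xs)) / s)

print(round(mu, 2), round(sigma, 2))
```

Each PMC iteration's 2000 target evaluations are independent and hence trivially parallelizable, which is the source of the wall-clock advantage over a sequential MCMC chain.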
Structured estimation - Sample size reduction for adaptive pattern classification
NASA Technical Reports Server (NTRS)
Morgera, S.; Cooper, D. B.
1977-01-01
The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is obtained by using the general sample covariance matrix estimator; this performance is independent of both the signal and the true covariance matrix. An approximation is also found for the expected SIR obtained by using a Toeplitz-form covariance matrix estimator; this performance is found to depend on both the signal and the true covariance matrix.
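As a sketch of the setting: the adapted weight vector is w = S⁻¹(μ₁ − μ₀) with S the pooled sample covariance, and the output SIR of any weight vector is (wᵀd)²/(wᵀCw), maximised by the true optimum w* = C⁻¹d. The dimensions, sample sizes, and Toeplitz true covariance below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Known category means and an unknown common covariance (given a Toeplitz
# structure here purely for illustration).
dim = 5
mu0, mu1 = np.zeros(dim), np.ones(dim)
true_cov = np.array([[0.7 ** abs(i - j) for j in range(dim)] for i in range(dim)])

# Estimate the common covariance from labelled samples of both categories
# (pooled general sample covariance estimator).
n = 200
x0 = rng.multivariate_normal(mu0, true_cov, size=n)
x1 = rng.multivariate_normal(mu1, true_cov, size=n)
S = 0.5 * (np.cov(x0.T) + np.cov(x1.T))

# Adapted weight vector w = S^{-1}(mu1 - mu0) and its output
# signal-to-interference ratio SIR(w) = (w.d)^2 / (w' C w).
d = mu1 - mu0
w = np.linalg.solve(S, d)
sir = (w @ d) ** 2 / (w @ true_cov @ w)
sir_opt = d @ np.linalg.solve(true_cov, d)   # optimum: w* = C^{-1} d

print(sir, sir_opt)  # adapted SIR is below, but close to, the optimum
```

With enough training samples the adapted SIR approaches the optimum d'C⁻¹d; the abstract's approximations describe how fast that happens for the two covariance estimators.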
Complementary adaptive processes contribute to the developmental plasticity of spatial hearing
Keating, Peter; Dahmen, Johannes C.; King, Andrew J.
2014-01-01
Spatial hearing evolved independently in mammals and birds, and is thought to adapt to altered developmental input in different ways. We found, however, that ferrets possess multiple forms of plasticity that are expressed according to which spatial cues are available, suggesting that the basis for adaptation may be similar across species. Our results also provide insight into the way sound source location is represented by populations of cortical neurons. PMID:25581359
Macpherson, J. Michael; Sella, Guy; Davis, Jerel C.; Petrov, Dmitri A.
2007-01-01
The effect of recurrent selective sweeps is a spatially heterogeneous reduction in neutral polymorphism throughout the genome. The pattern of reduction depends on the selective advantage and recurrence rate of the sweeps. Because many adaptive substitutions responsible for these sweeps also contribute to nonsynonymous divergence, the spatial distribution of nonsynonymous divergence also reflects the distribution of adaptive substitutions. Thus, the spatial correspondence between neutral polymorphism and nonsynonymous divergence may be especially informative about the process of adaptation. Here we study this correspondence using genomewide polymorphism data from Drosophila simulans and the divergence between D. simulans and D. melanogaster. Focusing on highly recombining portions of the autosomes, at a spatial scale appropriate to the study of selective sweeps, we find that neutral polymorphism is both lower and, as measured by a new statistic QS, less homogeneous where nonsynonymous divergence is higher and that the spatial structure of this correlation is best explained by the action of strong recurrent selective sweeps. We introduce a method to infer, from the spatial correspondence between polymorphism and divergence, the rate and selective strength of adaptation. Our results independently confirm a high rate of adaptive substitution (∼1/3000 generations) and newly suggest that many adaptations are of surprisingly great selective effect (∼1%), reducing the effective population size by ∼15% even in highly recombining regions of the genome. PMID:18073425
Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong
2015-03-01
We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive l(p)-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrates that the proposed method outperforms the state-of-the-art under a broad range of settings. PMID:24968365
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly and in real time via available networks and network services.
Spatial considerations during cryopreservation of a large volume sample.
Kilbride, Peter; Lamb, Stephen; Milne, Stuart; Gibbons, Stephanie; Erro, Eloy; Bundy, James; Selden, Clare; Fuller, Barry; Morris, John
2016-08-01
There have been relatively few studies on the implications of the physical conditions experienced by cells during large-volume (litres) cryopreservation; most studies have focused on cryopreservation of smaller volumes, typically up to 2 ml. This study explores the effects of ice growth by progressive solidification, generally seen during larger-scale cryopreservation, on encapsulated liver hepatocyte spheroids, and it develops a method to reliably sample different regions across the frozen cores of samples experiencing progressive solidification. These issues are examined in the context of a Bioartificial Liver Device, which requires cryopreservation of a 2 L volume in a strict cylindrical geometry for optimal clinical delivery. Progressive solidification cannot be avoided in this arrangement. In such a system the optimal cryoprotectant concentrations and cooling rates are known; however, applying these parameters to a large volume is challenging due to the thermal mass and subsequent thermal lag, so their specific impact on the cryopreservation outcome needs to be determined. Under conditions of progressive solidification, the spatial location of encapsulated liver spheroids had a strong impact on post-thaw recovery. Cells in the areas first and last to solidify demonstrated significantly impaired post-thaw function, whereas areas solidifying through the majority of the process exhibited better post-thaw outcomes. Samples in which the ice thawed more rapidly also showed greater viability 24 h post-thaw (75.7 ± 3.9% vs. 62.0 ± 7.2%). These findings have implications for the cryopreservation of large volumes with a rigid shape and for the cryopreservation of a Bioartificial Liver Device. PMID:27256662
Lotterhos, Katie E; Whitlock, Michael C
2015-03-01
Although genome scans have become a popular approach towards understanding the genetic basis of local adaptation, the field still does not have a firm grasp on how sampling design and demographic history affect the performance of genome scans on complex landscapes. To explore these issues, we compared 20 different sampling designs in equilibrium (i.e. island model and isolation by distance) and nonequilibrium (i.e. range expansion from one or two refugia) demographic histories in spatially heterogeneous environments. We simulated spatially complex landscapes, which allowed us to exploit local maxima and minima in the environment in 'pair' and 'transect' sampling strategies. We compared F(ST) outlier and genetic-environment association (GEA) methods for each of two approaches that control for population structure: with a covariance matrix or with latent factors. We show that while the relative power of two methods in the same category (F(ST) or GEA) depended largely on the number of individuals sampled, overall GEA tests had higher power in the island model and F(ST) had higher power under isolation by distance. In the refugia models, however, these methods varied in their power to detect local adaptation at weakly selected loci. At weakly selected loci, paired sampling designs had equal or higher power than transect or random designs to detect local adaptation. Our results can inform sampling designs for studies of local adaptation and have important implications for the interpretation of genome scans based on landscape data. PMID:25648189
Development of Climate Change Adaptation Platform using Spatial Information
NASA Astrophysics Data System (ADS)
Lee, J.; Oh, K. Y.; Lee, M. J.; Han, W. J.
2014-12-01
Climate change adaptation has attracted growing attention with the recent extreme weather conditions that affect people around the world. More and more countries, including the Republic of Korea, have begun to develop adaptation plans to address these concerns. All of them, meanwhile, have noted that the first step should be to integrate climate information across all analysed areas, because climate information is not produced independently by a single source; rather, the different kinds of climate information are interconnected in complicated ways. For this reason, an integrated climate change adaptation platform must be built before a climate change adaptation plan can be established, and a large-scale project to this end has been actively launched. To date, we have reviewed 620 publications and interviewed 51 government organizations. Based on the results of these reviews and interviews, we obtained 2,725 impacts relating to vulnerability assessment information in areas such as Monitoring and Forecasting, Health, Disaster, Agriculture, Forest, Water Management, Ecosystem, Ocean/Fisheries, and Industry/Energy. Of these 2,725 impacts, 995 have so far been entered into a database. The database is organised into the three subcategories presented by the IPCC: Climate Exposure, Sensitivity, and Adaptive Capacity. Based on the constructed database, vulnerability assessments were carried out to evaluate the climate change capacity of local governments across the country, using a web-based vulnerability assessment tool newly developed through this project. The results show that metropolitan areas such as Seoul, Pusan, and Inchon face risks more than twice as high as rural areas. Acknowledgements: The authors appreciate the support that this study has received from "Development of integrated model for climate change impact and vulnerability assessment and strengthening the framework for model implementation", an initiative of the
Spatial analysis of NDVI readings with different sampling density
Technology Transfer Automated Retrieval System (TEKTRAN)
Advanced remote sensing technologies provide research an innovative way of collecting spatial data for use in precision agriculture. Sensor information and spatial analysis together allow for a complete understanding of the spatial complexity of a field and its crop. The objective of the study was...
Ally, Dilara; Wiss, Valorie R.; Deckert, Gail E.; Green, Danielle; Roychoudhury, Pavitra; Wichman, Holly A.; Brown, Celeste J.; Krone, Stephen M.
2014-01-01
Background: Most clinical and natural microbial communities live and evolve in spatially structured environments. When changes in environmental conditions trigger evolutionary responses, spatial structure can impact the types of adaptive response and the extent to which they spread. In particular, localized competition in a spatial landscape can lead to the emergence of a larger number of different adaptive trajectories than would be found in well-mixed populations. Our goal was to determine how two levels of spatial structure affect genomic diversity in a population and how this diversity is manifested spatially. Methodology/Principal Findings: We serially transferred bacteriophage populations growing at high temperatures (40°C) on agar plates for 550 generations at two levels of spatial structure. The level of spatial structure was determined by whether the physical locations of the phage subsamples were preserved or disrupted at each passage to fresh bacterial host populations. When spatial structure of the phage populations was preserved, there was significantly greater diversity on a global scale with restricted and patchy distribution. When spatial structure was disrupted with passaging to fresh hosts, beneficial mutants were spread across the entire plate. This resulted in reduced diversity, possibly due to clonal interference as the most fit mutants entered into competition on a global scale. Almost all substitutions present at the end of the adaptation in the populations with disrupted spatial structure were also present in the populations with structure preserved. Conclusions/Significance: Our results are consistent with the patchy nature of the spread of adaptive mutants in a spatial landscape. Spatial structure enhances diversity and slows fixation of beneficial mutants. This added diversity could be beneficial in fluctuating environments. We also connect observed substitutions and their effects on fitness to aspects of phage biology, and we provide
Savin, Douglas N.; Tseng, Shih-Chiao; Whitall, Jill; Morton, Susanne M.
2015-01-01
Background: Persons with stroke and hemiparesis walk with a characteristic pattern of spatial and temporal asymmetry that is resistant to most traditional interventions. It was recently shown in nondisabled persons that the degree of walking symmetry can be readily altered via locomotor adaptation. However, it is unclear whether stroke-related brain damage affects the ability to adapt spatial or temporal gait symmetry. Objective: Determine whether locomotor adaptation to a novel swing phase perturbation is impaired in persons with chronic stroke and hemiparesis. Methods: Participants with ischemic stroke (14) and nondisabled controls (12) walked on a treadmill before, during, and after adaptation to a unilateral perturbing weight that resisted forward leg movement. Leg kinematics were measured bilaterally, including step length and single-limb support (SLS) time symmetry, limb angle center of oscillation, and interlimb phasing, and magnitudes of “initial” and “late” locomotor adaptation rates were determined. Results: All participants had similar magnitudes of adaptation and similar initial adaptation rates both spatially and temporally. All 14 participants with stroke and baseline asymmetry temporarily walked with improved SLS time symmetry after adaptation. However, late adaptation rates poststroke were decreased (took more strides to achieve adaptation) compared with controls. Conclusions: Mild to moderate hemiparesis does not interfere with the initial acquisition of novel symmetrical gait patterns in both the spatial and temporal domains, though it does disrupt the rate at which “late” adaptive changes are produced. Impairment of the late, slow phase of learning may be an important rehabilitation consideration in this patient population. PMID:22367915
Preflight Adaptation Training for Spatial Orientation and Space Motion Sickness
NASA Technical Reports Server (NTRS)
Harm, Deborah L.; Parker, Donald E.
1994-01-01
Two part-task preflight adaptation trainers (PATs) are being developed at the NASA Johnson Space Center to preadapt astronauts to novel sensory stimulus conditions similar to those present in microgravity, in order to facilitate adaptation to microgravity and readaptation to Earth. This activity is a major component of a general effort to develop countermeasures aimed at minimizing sensory and sensorimotor disturbances and Space Motion Sickness (SMS) associated with adaptation to microgravity and readaptation to Earth. Design principles for the development of the two trainers are discussed, along with a detailed description of both devices. In addition, a summary of four ground-based investigations using one of the trainers to determine the extent to which various novel sensory stimulus conditions produce changes in compensatory eye movement responses, postural equilibrium, motion sickness symptoms, and electrogastric responses is presented. Finally, a brief description of the general concept of dual-adapted states that underlies the development of the PATs is presented, along with ongoing and future operational and basic research activities.
Spatial-frequency-contingent color aftereffects: adaptation with two-dimensional stimulus patterns.
Webster, W R; Day, R H; Gillies, O; Crassini, B
1992-01-01
The spatial-frequency theory of vision has been supported by adaptation studies using checkerboards in which contingent color aftereffects (CAEs) were produced at fundamental frequencies oriented at 45 degrees to the edges. A replication of this study failed to produce CAEs at the orientation of either the edges or the fundamentals. With a computer-generated display, adaptation of a square or an oblique checkerboard produced no CAEs. But when one type of checkerboard (4 cpd) was adapted alone, CAEs were produced on the adapted checkerboard and on sine-wave gratings aligned with the fundamental and third harmonics of the checkerboard spectrum. Adaptation of a coarser checkerboard (0.80 cpd) produced CAEs aligned with both the edges and the harmonic frequencies. With checkerboards of both frequencies, CAEs were also found on the other type of checkerboard that had not been adapted. This observation raises problems for any edge-detector theory of vision, because there was no adaptation to edges. It was concluded that spatial-frequency mechanisms operate at both low and high spatial frequencies and that an edge mechanism is operative at lower frequencies. The implications of these results are assessed for other theories of spatial vision. PMID:1549426
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high-gradient regions of the flow or remove points where they are not needed, respectively, producing solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures is presented, and comparisons with experimental data for an ONERA M6 wing and with an exact solution for a shock-tube problem provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results obtained using the spatial adaptation procedures are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
Innovation and adaptation in a Turkish sample: a preliminary study.
Oner, B
2000-11-01
The aim of this study was to examine the representations of adaptation and innovation among adults in Turkey. Semi-structured interviews were carried out with a sample of 20 Turkish adults (10 men, 10 women) from various occupations. The participants' ages ranged from 21 to 58 years. Results of content analysis showed that the representation of innovation varied with the type of context. Innovation was not preferred within the family and interpersonal relationship contexts, whereas it was relatively more readily welcomed within the contexts of work, science, and technology. This finding may indicate that the concept of innovation that is assimilated in traditional Turkish culture has limits. Contents of the interviews were also analyzed with respect to M. J. Kirton's (1976) subscales of originality, efficiency, and rule-group conformity. The participants favored efficient innovators, whereas they thought that the risk of failure was high in cases of inefficient innovation. The reasons for and indications of the representations of innovativeness among Turkish people are discussed in relation to their social structure and cultural expectations. PMID:11092420
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
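A minimal sketch of the adaptive cluster sampling idea the simulations build on: start from a simple random sample of quadrats, and whenever a sampled quadrat is occupied, add its neighbours to the sample. The grid size, patch layout, and rook neighbourhood rule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 30x30 grid of quadrats with a patchy (clustered) population:
# a few dense patches on an otherwise empty grid, mimicking mussel beds.
grid = np.zeros((30, 30), dtype=int)
for cx, cy in [(5, 7), (20, 22), (12, 25)]:
    for _ in range(40):
        i = min(max(cx + rng.integers(-2, 3), 0), 29)
        j = min(max(cy + rng.integers(-2, 3), 0), 29)
        grid[i, j] += 1

def adaptive_cluster_sample(grid, n_initial, rng):
    """Simple random sample of quadrats, expanded adaptively:
    occupied quadrats recruit their rook neighbours until no new
    occupied quadrats are found."""
    cells = [(i, j) for i in range(grid.shape[0]) for j in range(grid.shape[1])]
    idx = rng.choice(len(cells), size=n_initial, replace=False)
    frontier = [cells[k] for k in idx]
    sampled = set()
    while frontier:
        i, j = frontier.pop()
        if (i, j) in sampled:
            continue
        sampled.add((i, j))
        if grid[i, j] > 0:  # occupied: adaptively add neighbours
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    frontier.append((ni, nj))
    return sampled

sampled = adaptive_cluster_sample(grid, n_initial=50, rng=rng)
occupied_rate = np.mean([grid[c] > 0 for c in sampled])
print(len(sampled), occupied_rate)
```

Because occupied quadrats recruit their neighbours, the encounter rate of occupied units is higher than under simple random sampling of the same grid, which is the property the abstract reports for the adaptive designs.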
Goedert, Kelly M.; Chen, Peii; Boston, Raymond C.; Foundas, Anne L.; Barrett, A. M.
2013-01-01
Spatial neglect is a debilitating disorder for which there is no agreed upon course of rehabilitation. The lack of consensus on treatment may result from systematic differences in the syndromes’ characteristics, with spatial cognitive deficits potentially affecting perceptual-attentional Where or motor-intentional Aiming spatial processing. Heterogeneity of response to treatment might be explained by different treatment impact on these dissociated deficits: prism adaptation, for example, might reduce Aiming deficits without affecting Where spatial deficits. Here, we tested the hypothesis that classifying patients by their profile of Where-vs-Aiming spatial deficit would predict response to prism adaptation, and specifically that patients with Aiming bias would have better recovery than those with isolated Where bias. We classified the spatial errors of 24 sub-acute right-stroke survivors with left spatial neglect as: 1) isolated Where bias, 2) isolated Aiming bias or 3) both. Participants then completed two weeks of prism adaptation treatment. They also completed the Behavioral Inattention Test (BIT) and Catherine Bergego Scale (CBS) tests of neglect recovery weekly for six weeks. As hypothesized, participants with only Aiming deficits improved on the CBS, whereas, those with only Where deficits did not improve. Participants with both deficits demonstrated intermediate improvement. These results support behavioral classification of spatial neglect patients as a potential valuable tool for assigning targeted, effective early rehabilitation. PMID:24376064
High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM
Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne
2015-01-01
Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666
Smart adaptive optic systems using spatial light modulators.
Clark, N; Banish, M; Ranganath, H S
1999-01-01
Many factors contribute to the aberrations induced in an optical system: atmospheric turbulence between the object and the imaging system, physical or thermal perturbations in optical elements, and misaligned optics are the primary sources of aberrations that degrade the system's point spread function and affect image quality. The design of a nonconventional real-time adaptive optic system using a micro-mirror device for wavefront correction is presented. The unconventional compensated imaging system offers advantages in speed, cost, power consumption, and weight. A pulse-coupled neural network is used as a preprocessor to enhance the performance of the wavefront sensor for low-light applications. Modeling results that characterize the system performance are presented. PMID:18252558
Cost-effective sampling for spatially distributed phenomena
Various measures of sampling plan cost and loss are developed and analyzed as they relate to a variety of multidisciplinary sampling techniques. The sampling choices examined include methods from design-based sampling, model-based sampling, and geostatistics. Graphs and tables ar...
Computational Characterization of Visually Induced Auditory Spatial Adaptation
Wozny, David R.; Shams, Ladan
2011-01-01
Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, where the underlying causal structure of the stimuli is inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterization of perceptual inference within a static environment, and therefore little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory–visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), an increase or decrease in the strength of the auditory bias (i.e., the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory–visual discrepancy is attributed by the nervous system to sensory representation error and, as a result, the sensory map of space is recalibrated to correct the error. PMID:22069383
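The candidate explanations can be made concrete with a toy model: reliability-weighted Gaussian cue combination (the forced-fusion limit of the causal inference model) plus a gradual shift of the auditory likelihood mean toward the experienced visual offset. The learning rate and trial count below are arbitrary illustrative values, not fitted parameters from the study:

```python
# Reliability-weighted Gaussian cue combination: the percept is the
# precision-weighted average of the auditory and visual likelihood means.
def perceived_location(mu_a, sigma_a, mu_v, sigma_v):
    wa = sigma_a ** -2 / (sigma_a ** -2 + sigma_v ** -2)
    return wa * mu_a + (1.0 - wa) * mu_v

# Ventriloquist aftereffect modelled as recalibration of the auditory
# likelihood mean: each exposure trial shifts it by a fraction of the
# remaining audio-visual discrepancy.
def adapt(mu_a, mu_v, rate=0.01, trials=50):
    for _ in range(trials):
        mu_a += rate * (mu_v - mu_a)
    return mu_a

percept = perceived_location(0.0, 2.0, 5.0, 1.0)  # vision dominates: 4.0
mu_a_after = adapt(0.0, 5.0)                      # partial shift towards 5
print(percept, mu_a_after)
```

After exposure, auditory-only localization judgments made with the shifted likelihood mean are displaced toward the previously experienced visual offset, which is the aftereffect the study quantifies.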
Bayesian symmetrical EEG/fMRI fusion with spatially adaptive priors
Luessi, Martin; Babacan, S. Derin; Molina, Rafael; Booth, James R.; Katsaggelos, Aggelos K.
2011-01-01
In this paper, we propose a novel symmetrical EEG/fMRI fusion method which combines EEG and fMRI by means of a common generative model. We use a total variation (TV) prior to model the spatial distribution of the cortical current responses and hemodynamic response functions, and utilize spatially adaptive temporal priors to model their temporal shapes. The spatial adaptivity of the prior model allows for adaptation to the local characteristics of the estimated responses and leads to high estimation performance for the cortical current distribution and the hemodynamic response functions. We utilize a Bayesian formulation with a variational Bayesian framework and obtain a fully automatic fusion algorithm. Simulations with synthetic data and experiments with real data from a multimodal study on face perception demonstrate the performance of the proposed method. PMID:21130173
Research on test of product based on spatial sampling criteria and variable step sampling mechanism
NASA Astrophysics Data System (ADS)
Li, Ruihong; Han, Yueping
2014-09-01
This paper presents an effective approach for online testing of the assembly structures inside products using a multiple-views technique and an X-ray digital radiography system, based on spatial sampling criteria and a variable-step sampling mechanism. Although several objects inside one product may need to be tested, there is a maximal rotary step for an object within which the least structural size to be tested is predictable. In the offline learning process, the object is rotated by this step and imaged repeatedly until a complete cycle is finished, yielding an image sequence that contains the full structural information needed for recognition. The maximal rotary step is restricted by the least structural size and the inherent resolution of the imaging system. During the online inspection process, the program first finds the optimum solutions for all the different target parts in the standard sequence, i.e., finds their exact angles in one cycle. Because most of the other targets in the product are larger than the least structure, the paper adopts a variable step-size sampling mechanism that rotates the product through specific angles, with different steps for the different objects inside the product, and then performs matching. Experimental results show that the variable step-size method can greatly save time compared with the traditional fixed-step inspection method while the recognition accuracy is guaranteed.
Adaptive spatial combining for passive time-reversed communications.
Gomes, João; Silva, António; Jesus, Sérgio
2008-08-01
Passive time reversal has aroused considerable interest in underwater communications as a computationally inexpensive means of mitigating the intersymbol interference introduced by the channel using a receiver array. In this paper the basic technique is extended by adaptively weighting sensor contributions to partially compensate for degraded focusing due to mismatch between the assumed and actual medium impulse responses. Two algorithms are proposed: one restores constructive interference between sensors, and the other minimizes the output residual as in widely used equalization schemes. These are compared with plain time reversal and with variants that employ postequalization and channel tracking, and are shown to improve the residual error and temporal stability of basic time reversal with very little added complexity. Results are presented for data collected in a passive time-reversal experiment conducted during the MREA'04 sea trial. In that experiment a single acoustic projector generated a 2/4-PSK (phase-shift keyed) stream at 200/400 baud, modulated at 3.6 kHz, and received at a range of about 2 km on a sparse vertical array with eight hydrophones. The data were found to exhibit significant Doppler scaling, and a resampling-based preprocessing method is also proposed here to compensate for that scaling. PMID:18681595
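In sketch form, a passive time-reversal receiver matched-filters each hydrophone signal with that sensor's estimated channel response (equivalent to convolving with its time reverse) and sums across the array; the adaptive variants replace the uniform sum with per-sensor weights chosen, e.g., to minimize the output residual. A toy illustration (sensor data and channel estimates below are hypothetical):

```python
def xcorr(x, h):
    """Correlate x with h, i.e. matched-filter x with the channel estimate h
    (equivalent to convolving x with the time-reversed h)."""
    n = len(x) - len(h) + 1
    return [sum(x[k + j] * h[j] for j in range(len(h))) for k in range(n)]

def passive_tr_combine(received, channels, weights=None):
    # Each sensor output is matched-filtered with its channel estimate and
    # the sensors are summed.  Uniform weights give plain passive time
    # reversal; the adaptive variants choose the weights (e.g. to minimize
    # the output residual).
    if weights is None:
        weights = [1.0] * len(received)
    outs = [xcorr(r, h) for r, h in zip(received, channels)]
    return [sum(w * o[k] for w, o in zip(weights, outs))
            for k in range(len(outs[0]))]
```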
Inostroza, Luis; Palme, Massimo; de la Barrera, Francisco
2016-01-01
Climate change will worsen the high levels of urban vulnerability in Latin American cities due to specific environmental stressors. Some impacts of climate change, such as high temperatures in urban environments, have not yet been addressed through adaptation strategies, which are based on poorly supported data. These impacts remain outside the scope of urban planning. New spatially explicit approaches that identify highly vulnerable urban areas and include specific adaptation requirements are needed in current urban planning practices to cope with heat hazards. In this paper, a heat vulnerability index is proposed for Santiago, Chile. The index was created using a GIS-based spatial information system and was constructed from spatially explicit indexes for exposure, sensitivity and adaptive capacity levels derived from remote sensing data and socio-economic information assessed via principal component analysis (PCA). The objective of this study is to determine the levels of heat vulnerability at local scales by providing insights into these indexes at the intra city scale. The results reveal a spatial pattern of heat vulnerability with strong variations among individual spatial indexes. While exposure and adaptive capacities depict a clear spatial pattern, sensitivity follows a complex spatial distribution. These conditions change when examining PCA results, showing that sensitivity is more robust than exposure and adaptive capacity. These indexes can be used both for urban planning purposes and for proposing specific policies and measures that can help minimize heat hazards in highly dynamic urban areas. The proposed methodology can be applied to other Latin American cities to support policy making. PMID:27606592
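A minimal sketch of how such a composite index can be assembled per spatial unit: indicators are standardized, then exposure and sensitivity add to vulnerability while adaptive capacity subtracts from it. Equal weights are used here for simplicity (the paper derives weights via PCA), and the indicator values in the test are hypothetical:

```python
from statistics import mean, stdev

def zscores(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def heat_vulnerability_index(exposure, sensitivity, adaptive_capacity):
    # Composite index per spatial unit: higher exposure and sensitivity
    # raise vulnerability, higher adaptive capacity lowers it.  Equal
    # weights stand in for the paper's PCA-derived weighting.
    e, s, a = map(zscores, (exposure, sensitivity, adaptive_capacity))
    return [ei + si - ai for ei, si, ai in zip(e, s, a)]
```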
Sparse sampling: Spatial design for monitoring stream networks
Spatial designs for monitoring stream networks, especially ephemeral systems, are typically non-standard, ‘sparse’ and can be very complex, reflecting the complexity of the ecosystem being monitored, the scale of the population, and the competing multiple monitoring objectives. ...
Technology Transfer Automated Retrieval System (TEKTRAN)
Spatial variability has a profound influence on solute transport in the vadose zone, soil quality assessment, and site-specific crop management. Directed soil sampling based on geospatial measurements of apparent soil electrical conductivity (ECa) is a potential means of characterizing the spatial ...
Adaptive Spatial Filtering with Principal Component Analysis for Biomedical Photoacoustic Imaging
NASA Astrophysics Data System (ADS)
Nagaoka, Ryo; Yamazaki, Rena; Saijo, Yoshifumi
The photoacoustic (PA) signal is very sensitive to noise generated by peripheral equipment such as the power supply, stepping motor or semiconductor laser. A band-pass filter is not effective because the frequency bandwidth of the PA signal also covers the noise frequencies. The objective of the present study is to reduce the noise by using an adaptive spatial filter with principal component analysis (PCA).
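One way to realize such a filter, sketched below under the assumption that the dominant component shared across detector channels is the correlated noise: centre the channel data, estimate the principal cross-channel direction by power iteration on the channel covariance, and subtract its projection from every channel. A minimal stand-in, not the authors' exact algorithm:

```python
import math

def remove_first_pc(channels):
    """Subtract the first principal component across channels.
    channels: list of equal-length signals (one per detector element).
    Assumes the strongest cross-channel component is the shared noise."""
    n, t = len(channels), len(channels[0])
    x = [[v - sum(ch) / t for v in ch] for ch in channels]   # centre
    cov = [[sum(x[i][k] * x[j][k] for k in range(t)) for j in range(n)]
           for i in range(n)]
    v = [1.0] * n
    for _ in range(100):                                     # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    comp = [sum(v[i] * x[i][k] for i in range(n)) for k in range(t)]
    return [[x[i][k] - v[i] * comp[k] for k in range(t)] for i in range(n)]
```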
Signal processing through a generalized module of adaptation and spatial sensing.
Krishnan, J
2009-07-01
Signal transduction in many cellular processes is accompanied by adaptation, which allows certain key signalling components to respond to temporal and/or spatial variation of external signals independent of the absolute value of the signal. We extend and formulate a more general module which accounts for robust temporal adaptation and spatial response. In this setting, we examine various aspects of spatial and temporal signalling, as well as the signalling consequences and restrictions imposed by virtue of adaptation. This module is able to exhibit a variety of behaviour in response to temporal, spatial and spatio-temporal inputs. We carefully examine the roles of various parameters in this module and how they affect signal processing and propagation. Overall, we demonstrate how a simple module can account for a range of downstream responses to a variety of input signals, and how elucidating the downstream response of many cellular components in systems with such adaptive signalling can consequently be very non-trivial. PMID:19254728
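The defining property of such a module, responding to changes in the input while returning to baseline for any constant input, can be captured by a two-variable sketch in which an internal variable slowly tracks the input and the response reports the difference (an illustrative integral-feedback-style model, not the paper's generalized module):

```python
def simulate_adaptation(inputs, k=1.0, dt=0.01):
    """Minimal adaptation module: internal variable x tracks the input u
    with rate k, and the response r = u - x.  For any constant input the
    response decays back to zero, so the module reports changes in u
    rather than its absolute level."""
    x, out = 0.0, []
    for u in inputs:
        out.append(u - x)
        x += k * (u - x) * dt
    return out
```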
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
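The flavour of the adaptive allocation can be sketched as follows: after the first stage, every primary unit receives a base second-stage effort, and units whose first-stage counts exceed a threshold share an extra pool of effort in proportion to those counts. The specific rule and numbers are hypothetical; the paper develops the design formally:

```python
def allocate_second_stage(first_stage_counts, base_effort, extra_effort, threshold):
    # Every primary unit gets the base second-stage effort; units whose
    # first-stage count exceeds the threshold share the extra effort in
    # proportion to their counts (one plausible adaptive allocation rule).
    hot = [(i, c) for i, c in enumerate(first_stage_counts) if c > threshold]
    total = sum(c for _, c in hot) or 1
    effort = [base_effort] * len(first_stage_counts)
    for i, c in hot:
        effort[i] += round(extra_effort * c / total)
    return effort
```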
Werner, Annette
2014-11-01
Illumination in natural scenes changes at multiple temporal and spatial scales: slow changes in global illumination occur in the course of a day, and we encounter fast and localised illumination changes when visually exploring the non-uniform light field of three-dimensional scenes; in addition, very long-term chromatic variations may come from the environment, like for example seasonal changes. In this context, I consider the temporal and spatial properties of chromatic adaptation and discuss their functional significance for colour constancy in three-dimensional scenes. A process of fast spatial tuning in chromatic adaptation is proposed as a possible sensory mechanism for linking colour constancy to the spatial structure of a scene. The observed middlewavelength selectivity of this process is particularly suitable for adaptation to the mean chromaticity and the compensation of interreflections in natural scenes. Two types of sensory colour constancy are distinguished, based on the functional differences of their temporal and spatial scales: a slow type, operating at a global scale for the compensation of the ambient illumination; and a fast colour constancy, which is locally restricted and well suited to compensate region-specific variations in the light field of three dimensional scenes. PMID:25449338
Spatial frequency analysis of anisotropic drug transport in tumor samples
Russell, Stewart; Samkoe, Kimberley S.; Gunn, Jason R.; Hoopes, P. Jack; Nguyen, Thienan A.; Russell, Milo J.; Alfano, Robert R.; Pogue, Brian W.
2014-01-01
Directional Fourier spatial frequency analysis was used on standard histological sections to identify salient directional bias in the spatial frequencies of stromal and epithelial patterns within tumor tissue. This directional bias is shown to be correlated to the pathway of reduced fluorescent tracer transport. Optical images of tumor specimens contain a complex distribution of randomly oriented aperiodic features used for neoplastic grading that varies with tumor type, size, and morphology. The internal organization of these patterns in frequency space is shown to provide a precise fingerprint of the extracellular matrix complexity, which is well known to be related to the movement of drugs and nanoparticles into the parenchyma, thereby identifying the characteristic spatial frequencies of regions that inhibit drug transport. The innovative computational methodology and tissue validation techniques presented here provide a tool for future investigation of drug and particle transport in tumor tissues, and could potentially be used a priori to identify barriers to transport, and to analyze real-time monitoring of transport with respect to therapeutic intervention. PMID:24395585
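The core computation, binning 2-D spectral power by orientation to expose a directional bias, can be sketched directly (a brute-force DFT on a toy image; a real analysis would use an FFT and far finer angular bins):

```python
import cmath, math

def dft2(img):
    """Brute-force 2-D DFT of a small real-valued image (list of rows)."""
    n, m = len(img), len(img[0])
    return [[sum(img[y][x] * cmath.exp(-2j * math.pi * (u * y / n + v * x / m))
                 for y in range(n) for x in range(m))
             for v in range(m)] for u in range(n)]

def directional_power(img, nbins=4):
    """Bin spectral power by frequency orientation in [0, pi); a dominant
    bin reveals directional bias in the spatial-frequency content."""
    F = dft2(img)
    n, m = len(img), len(img[0])
    bins = [0.0] * nbins
    for u in range(n):
        for v in range(m):
            if u == 0 and v == 0:
                continue                      # skip the DC term
            fu = u - n if u > n // 2 else u   # signed frequencies
            fv = v - m if v > m // 2 else v
            ang = math.atan2(fu, fv) % math.pi
            bins[min(int(ang / math.pi * nbins), nbins - 1)] += abs(F[u][v]) ** 2
    return bins
```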
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
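A sketch of a tau-style estimator (one plausible reading, not the authors' exact formula): estimate Var(x̄) from all cross-products of deviations for pairs of sample locations within distance tau, so that tau = 0 recovers the classical independence-based formula:

```python
import math

def tau_std_of_mean(values, locations, tau):
    """Standard deviation of the sample mean under local spatial dependence:
    cross-products (x_i - xbar)(x_j - xbar) are included only for pairs of
    locations within distance tau.  tau = 0 keeps only the i == j terms,
    i.e. the classical formula (up to the n/(n-1) correction)."""
    n = len(values)
    xbar = sum(values) / n
    acc = 0.0
    for i in range(n):
        for j in range(n):
            if math.dist(locations[i], locations[j]) <= tau:
                acc += (values[i] - xbar) * (values[j] - xbar)
    return math.sqrt(max(acc, 0.0)) / n
```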
Analysis of SWOT spatial and temporal samplings over continents
NASA Astrophysics Data System (ADS)
Biancamaria, Sylvain; Lamy, Alain; Mognard, Nelly
2014-05-01
The future Surface Water and Ocean Topography (SWOT) satellite mission, developed collaboratively by NASA, CNES and CSA, is a joint oceanography/continental-hydrology mission planned for launch in 2020. In June 2013, a new SWOT orbit was selected with a 77.6° inclination, a 21-day repeat cycle and an 891 km altitude. The main satellite payload (a Ka-band SAR interferometer) will provide 2D maps of water elevation, mask and slope over two swaths, each 50 km wide, separated by a 20 km nadir gap. Most of the studies concerning SWOT published since 2007 have considered a former orbit with a 78° inclination, a 22-day repeat cycle and a 970 km altitude, with a 60 km extent for each swath. None of them has studied the newly selected orbit, and the impact of the 20 km nadir gap on the spatial coverage has not been much explored. The purpose of the work presented here is to investigate the spatial and temporal coverage given this new orbit and the actual swath extent (2 × 50 km swaths with the 20 km nadir gap in between) and to compare it with the former SWOT configuration. It is shown that the new configuration will have almost no impact on the computation of monthly averages; however, it will affect the spatial coverage. Because of the nadir gap, the orbit repeat period and the swath extent, 3.6% of the continental surfaces between 78°S and 78°N will never be observed by SWOT (versus 2.2% with the former SWOT configuration). The equatorial regions will be the most affected, as the uncovered area can reach ~14% locally, whereas it never exceeded 9% with the previous SWOT configuration.
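A 1-D toy model of the cross-track geometry illustrates where such gaps come from: each pass observes cross-track distances of 10-60 km on both sides of nadir (2 × 50 km swaths with the 20 km nadir gap), so a ground strip farther than 60 km from every track, or inside a nadir gap, is never seen. The model below ignores the crossings of ascending and descending passes that fill many gaps in the real sampling, so it only illustrates the geometry, not the published percentages:

```python
def unobserved_fraction(track_spacing_km, inner_km=10.0, outer_km=60.0, n=20000):
    # Ground tracks every `track_spacing_km`; each pass sees cross-track
    # distances in [inner_km, outer_km] on both sides of nadir.  Fraction of
    # the inter-track interval covered by neither adjacent track.
    missed = 0
    for k in range(n):
        x = (k + 0.5) * track_spacing_km / n
        seen = any(inner_km <= abs(x - t) <= outer_km
                   for t in (0.0, track_spacing_km))
        missed += not seen
    return missed / n
```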
Ensembles of adaptive spatial filters increase BCI performance: an online evaluation
NASA Astrophysics Data System (ADS)
Sannelli, Claudia; Vidaurre, Carmen; Müller, Klaus-Robert; Blankertz, Benjamin
2016-08-01
Objective: In electroencephalographic (EEG) data, signals from distinct sources within the brain are widely spread by volume conduction and superimposed such that sensors receive mixtures of a multitude of signals. This reduction of spatial information strongly hampers single-trial analysis of EEG data as, for example, required for brain–computer interfacing (BCI) when using features from spontaneous brain rhythms. Spatial filtering techniques are therefore greatly needed to extract meaningful information from EEG. Our goal is to show, in online operation, that common spatial pattern patches (CSPP) are valuable to counteract this problem. Approach: Even though the effect of spatial mixing can be countered by spatial filters, there is a trade-off between performance and the requirement of calibration data. Laplacian derivations do not require calibration data at all, but their performance for single-trial classification is limited. Conversely, data-driven spatial filters, such as common spatial patterns (CSP), can lead to highly distinctive features; however they require a considerable amount of training data. Recently, we showed in an offline analysis that CSPP can establish a valuable compromise. In this paper, we confirm these results in an online BCI study. In order to demonstrate the paramount feature that CSPP requires little training data, we used them in an adaptive setting with 20 participants and focused on users who did not have success with previous BCI approaches. Main results: The results of the study show that CSPP adapts faster and thereby allows users to achieve better feedback within a shorter time than previous approaches performed with Laplacian derivations and CSP filters. The success of the experiment highlights that CSPP has the potential to further reduce BCI inefficiency. Significance: CSPP are a valuable compromise between CSP and Laplacian filters. They allow users to attain better feedback within a shorter time and thus reduce BCI inefficiency.
Large spatial, temporal, and algorithmic adaptivity for implicit nonlinear finite element analysis
Engelmann, B.E.; Whirley, R.G.
1992-07-30
The development of effective solution strategies to solve the global nonlinear equations which arise in implicit finite element analysis has been the subject of much research in recent years. Robust algorithms are needed to handle the complex nonlinearities that arise in many implicit finite element applications such as metalforming process simulation. The authors' experience indicates that robustness can best be achieved through adaptive solution strategies. In the course of their research, this adaptivity and flexibility has been refined into a production tool through the development of a solution control language called ISLAND. This paper discusses aspects of adaptive solution strategies including iterative procedures to solve the global equations and remeshing techniques to extend the domain of Lagrangian methods. Examples using the newly developed ISLAND language are presented to illustrate the advantages of embedding temporal, algorithmic, and spatial adaptivity in a modern implicit nonlinear finite element analysis code.
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to the failure or non-failure region, and surround it with a protection-sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered by spheres will shrink, improving the estimation accuracy. After exhausting the function-evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
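The final stage, estimating the POF by exhaustively sampling a cheap surrogate built from the budgeted function evaluations, can be sketched as follows, with a nearest-neighbour surrogate standing in for the paper's construction and failure defined as f(x) exceeding a threshold:

```python
import math, random

def estimate_pof(f, sampler, budget, threshold, n_surrogate=5000, seed=0):
    """Spend the evaluation budget on sample points, build a cheap
    nearest-neighbour surrogate (a stand-in, not POF-Darts' surrogate),
    and estimate P[f(x) > threshold] by exhaustive sampling of it."""
    rng = random.Random(seed)
    pts = [sampler(rng) for _ in range(budget)]
    vals = [f(p) for p in pts]
    fail = 0
    for _ in range(n_surrogate):
        x = sampler(rng)
        i = min(range(budget), key=lambda k: math.dist(x, pts[k]))
        fail += vals[i] > threshold
    return fail / n_surrogate
```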
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
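The per-region predictor selection reduces to minimizing a Lagrangian cost J = D + λR over the candidate wavelet predictors; a sketch (the candidate names, distortions, and rates below are hypothetical):

```python
def select_predictor(candidates, lam):
    """Lagrangian rate-distortion choice: each candidate is a
    (name, distortion, rate) triple; pick the one minimizing
    J = D + lam * R for the given Lagrange multiplier lam."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])
```

Sweeping λ trades rate against distortion: small λ favours low distortion, large λ favours low rate.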
The Effects of Adapted Tango on Spatial Cognition and Disease Severity in Parkinson’s Disease
McKee, Kathleen E.; Hackney, Madeleine E.
2013-01-01
This study determined the effects of community-based adapted tango on spatial cognition and disease severity in Parkinson's disease (PD) while controlling for the effects of social interaction. Thirty-three individuals with mild-moderate PD (stage I–III) were assigned to twenty 90-minute Tango (n=24) or Education (n=9) lessons over 12 weeks. Disease severity, spatial cognition, balance, and fall incidence were evaluated pre-, post-, and 10–12 weeks post-intervention. T-tests and ANOVAs evaluated differences. Twenty-three Tango and 8 Education participants finished. Tango participants improved on disease severity (p=0.008) and spatial cognition (p=0.021) compared to Education participants. Tango participants also improved in balance (p=0.038) and executive function (p=0.012). Gains were maintained 10–12 weeks post-intervention. Multimodal exercise with structured syllabi may improve disease severity and spatial cognition. PMID:24116748
NASA Astrophysics Data System (ADS)
Blaen, Phillip; Khamis, Kieran; Lloyd, Charlotte; Bradley, Chris
2016-04-01
Excessive nutrient concentrations in river waters threaten aquatic ecosystem functioning and can pose substantial risks to human health. Robust monitoring strategies are therefore required to generate reliable estimates of river nutrient loads and to improve understanding of the catchment processes that drive spatiotemporal patterns in nutrient fluxes. Furthermore, these data are vital for prediction of future trends under changing environmental conditions and thus the development of appropriate mitigation measures. In recent years, technological developments have led to an increase in the use of continuous in-situ nutrient analysers, which enable measurements at far higher temporal resolutions than can be achieved with discrete sampling and subsequent laboratory analysis. However, such instruments can be costly to run and difficult to maintain (e.g. due to high power consumption and memory requirements), leading to trade-offs between temporal and spatial monitoring resolutions. Here, we highlight how adaptive monitoring strategies, comprising a mixture of temporal sample frequencies controlled by one or more 'trigger variables' (e.g. river stage, turbidity, or nutrient concentration), can advance our understanding of catchment nutrient dynamics while simultaneously overcoming many of the practical and economic challenges encountered in typical in-situ river nutrient monitoring applications. We present examples of short-term variability in river nutrient dynamics, driven by complex catchment behaviour, which support our case for the development of monitoring systems that can adapt in real-time to rapid environmental changes. In addition, we discuss the advantages and disadvantages of current nutrient monitoring techniques, and suggest new research directions based on emerging technologies and highlight how these might improve: 1) monitoring strategies, and 2) understanding of linkages between catchment processes and river nutrient fluxes.
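The trigger-variable idea amounts to a simple rule that switches the sampling interval whenever any trigger crosses a threshold; a sketch with hypothetical thresholds and intervals:

```python
def sample_interval_minutes(stage_m, turbidity_ntu,
                            stage_trigger=1.5, turbidity_trigger=50.0,
                            base=60, storm=5):
    """Adaptive monitoring rule: sample at a low base frequency, switching
    to high-frequency sampling whenever a trigger variable (river stage or
    turbidity; thresholds here are illustrative) indicates an event."""
    if stage_m >= stage_trigger or turbidity_ntu >= turbidity_trigger:
        return storm
    return base
```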
Deployment of spatial attention without moving the eyes is boosted by oculomotor adaptation
Habchi, Ouazna; Rey, Elodie; Mathieu, Romain; Urquizar, Christian; Farnè, Alessandro; Pélisson, Denis
2015-01-01
Vertebrates developed sophisticated solutions to select environmental visual information, being capable of moving attention without moving the eyes. A large body of behavioral and neuroimaging studies indicate a tight coupling between eye movements and spatial attention. The nature of this link, however, remains highly debated. Here, we demonstrate that deployment of human covert attention, measured in stationary eye conditions, can be boosted across space by changing the size of ocular saccades to a single position via a specific adaptation paradigm. These findings indicate that spatial attention is more widely affected by oculomotor plasticity than previously thought. PMID:26300755
NASA Astrophysics Data System (ADS)
Ding, Quanxin; Guo, Chunjie; Cai, Meng; Liu, Hua
2007-12-01
Adaptive Optics Expand System is a kind of new concept spatial equipment, which concerns system, cybernetics and informatics deeply, and is key way to improve advanced sensors ability. Traditional Zernike Phase Contrast Method is developed, and Accelerated High-level Phase Contrast Theory is established. Integration theory and mathematical simulation is achieved. Such Equipment, which is based on some crucial components, such as, core optical system, multi mode wavefront sensor and so on, is established for AOES advantageous configuration and global design. Studies on Complicated Spatial Multisensor System Integratation and measurement Analysis including error analysis are carried out.
Storer, Nicholas P
2003-10-01
A stochastic spatially explicit computer model is described that simulates the adaptation by western corn rootworm, Diabrotica virgifera virgifera LeConte, to rootworm-resistance traits in maize. The model reflects the ecology of the rootworm in much of the corn belt of the United States. It includes functions for crop development, egg and larval mortality, adult emergence, mating, egg laying, mortality and dispersal, and alternative methods of rootworm control, to simulate the population dynamics of the rootworm. Adaptation to the resistance trait is assumed to be controlled by a monogenic diallelic locus, whereby the allele for adaptation varies from incompletely recessive to incompletely dominant, depending on the efficacy of the resistance trait. The model was used to compare the rate at which the adaptation allele spread through the population under different nonresistant maize refuge deployment scenarios, and under different levels of crop resistance. For a given refuge size, the model indicated that placing the nonresistant refuge in a block within a rootworm-resistant field would be likely to delay rootworm adaptation rather longer than planting the refuge in separate fields in varying locations. If a portion of the refuge were to be planted in the same fields or in-field blocks each year, rootworm adaptation would be delayed substantially. Rootworm adaptation rates are also predicted to be greatly affected by the level of crop resistance, because of the expectation of dependence of functional dominance on dose. If the dose of the insecticidal protein in the maize is sufficiently high to kill >90% of heterozygotes and approximately 100% of susceptible homozygotes, the trait is predicted to be much more durable than if the dose is lower. A partial sensitivity analysis showed that parameters relating to adult dispersal affected the rate of pest adaptation. Partial validation of the model was achieved by comparing output of the model with field data on
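The dose-dependence argument can be illustrated with the standard single-locus selection recursion (textbook population genetics, not the spatially explicit model itself): when a high dose makes the resistance allele effectively recessive by killing most heterozygotes, the allele spreads far more slowly, or even declines, compared with a lower dose under which heterozygotes partly survive. The fitness values in the test are hypothetical.

```python
def next_allele_freq(p, w_rr, w_rs, w_ss):
    """One generation of selection at a monogenic diallelic locus.
    p: frequency of the resistance allele R; w_*: relative fitnesses
    of the RR, RS and SS genotypes."""
    q = 1.0 - p
    wbar = p * p * w_rr + 2.0 * p * q * w_rs + q * q * w_ss
    return (p * p * w_rr + p * q * w_rs) / wbar
```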
NASA Astrophysics Data System (ADS)
Nejadmalayeri, Alireza
The current work develops a wavelet-based adaptive variable-fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or other dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding-factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying, user-prescribed level of SGS dissipation. In addition, a long-term effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work.
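The physics-based feedback can be sketched as a simple control rule: if the locally measured SGS dissipation exceeds the user-prescribed level, lower the wavelet threshold (retaining more coefficients, i.e. refining the local resolution), and vice versa. The multiplicative form and gain below are illustrative assumptions, not the dissertation's actual update equation:

```python
def update_threshold(thr, measured_sgs, target_sgs, gain=0.5):
    # Multiplicative feedback: measured > target  ->  threshold shrinks
    # (more wavelet coefficients retained, finer local resolution);
    # measured < target  ->  threshold grows (coarser, cheaper).
    return thr * (target_sgs / measured_sgs) ** gain
```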
Adaptive optics for deeper imaging of biological samples.
Girkin, John M; Poland, Simon; Wright, Amanda J
2009-02-01
Optical microscopy has been a cornerstone of life science investigations since its first practical application around 400 years ago, with the goal being subcellular resolution, three-dimensional images, at depth, in living samples. Nonlinear microscopy brought this dream a step closer, but as one images more deeply, the material through which one images can greatly distort the view. By using optical devices originally developed for astronomy, whose optical properties can be changed in real time, active compensation for sample-induced aberrations is possible. Submicron-resolution images are now routinely recorded from depths of over 1 mm into tissue. Such active optical elements can also be used to keep conventional microscopes, both confocal and widefield, in optimal alignment. PMID:19272766
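One common way to drive such an adaptive element without a wavefront sensor is hill-climbing on an image-quality metric: perturb one deformable-mirror actuator at a time and keep only changes that improve the metric. This is a generic sensorless-AO sketch (a real system would typically optimise modal, e.g. Zernike, coefficients rather than raw actuator values):

```python
import random

def optimise_mirror(metric, n_actuators=8, iters=500, step=0.1, seed=0):
    """Model-free sensorless AO sketch: random single-actuator perturbations
    are kept only if they improve the image-quality metric.
    metric: maps an actuator-value list to a score (higher is better)."""
    rng = random.Random(seed)
    act = [0.0] * n_actuators
    best = metric(act)
    for _ in range(iters):
        i = rng.randrange(n_actuators)
        delta = rng.choice((-step, step))
        act[i] += delta
        score = metric(act)
        if score > best:
            best = score          # keep the improving perturbation
        else:
            act[i] -= delta       # revert
    return act, best
```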
Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Thompson, David R.; Hsiang, Kian
2013-01-01
This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function, from each potential sensing. These candidate sensings were then cross-checked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.
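The cross-check of candidate sensings against cost and feasibility can be sketched as a greedy selection: score each candidate by information gain per unit cost and take feasible candidates until the budget runs out (candidate names, gains, and costs below are hypothetical; the actual planner is more sophisticated):

```python
def plan_sensings(candidates, budget):
    """Greedy operations-plan sketch: candidates are (name, info_gain, cost)
    triples; repeatedly take the affordable candidate with the best
    gain-per-cost ratio until the budget is exhausted."""
    plan, remaining = [], budget
    pool = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, gain, cost in pool:
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan
```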
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of
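At the intensive second phase, abundance on a unit can be estimated with standard CMR machinery; the simplest such estimator (Chapman's correction of the Lincoln-Petersen estimator, shown here as a stand-in for the paper's full Bayesian treatment) is:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen abundance estimate:
    n1 animals marked in the first capture session, n2 captured in the
    second session, of which m2 are marked recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```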
NASA Technical Reports Server (NTRS)
Mcgwire, K.; Friedl, M.; Estes, J. E.
1993-01-01
This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
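The semivariogram analysis described above can be sketched with the standard method-of-moments (Matheron) estimator; the trending toy field, coordinates, and lag bins below are illustrative, not the study's Thematic Mapper data:

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Method-of-moments (Matheron) semivariogram estimator:
    gamma(h) = (1 / (2 N(h))) * sum over pairs in lag bin h of (z_i - z_j)^2."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    n = len(values)
    # all pairwise distances and squared value differences
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(n, k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    centers, gamma = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            centers.append(d[mask].mean())
            gamma.append(sq[mask].mean() / 2.0)
    return np.array(centers), np.array(gamma)

# toy field: value trends smoothly with x, so semivariance grows with lag
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
z = 0.05 * pts[:, 0] + rng.normal(0, 0.2, 200)
lags, gam = empirical_semivariogram(pts, z, np.linspace(0, 50, 6))
```

For a spatially autocorrelated (here, trending) field, semivariance at short lags is small and grows with separation distance, which is exactly the structure used to enforce minimum distances between samples.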
Improving brain-computer interface classification using adaptive common spatial patterns.
Song, Xiaomu; Yoon, Suk-Chung
2015-06-01
Common Spatial Patterns (CSP) is a widely used spatial filtering technique for electroencephalography (EEG)-based brain-computer interfaces (BCI). It is a two-class supervised technique that needs subject-specific training data. Due to EEG nonstationarity, EEG signals may exhibit significant intra- and inter-subject variation. As a result, spatial filters learned from one subject may not perform well for data acquired from the same subject at a different time or from other subjects performing the same task. Studies have been performed to improve CSP's performance by adding regularization terms to the training. Most of them require target subjects' training data with known class labels. In this work, an adaptive CSP (ACSP) method is proposed to analyze single-trial EEG data from single and multiple subjects. The method does not estimate the target data's class labels during adaptive learning and updates the spatial filters for both classes simultaneously. The proposed method was evaluated in a comparison study with classic CSP and several CSP-based adaptive methods using motor imagery EEG data from BCI competitions. Experimental results indicate that the proposed method can improve classification performance compared to the other methods. For circumstances where true class labels of the target data are not instantly available, we examined whether adding classified target data to the training data would improve ACSP learning. Experimental results show that it is better to exclude them from the training data. The proposed ACSP method can be performed in real time and is potentially applicable to various EEG-based BCI applications. PMID:25909828
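The classic CSP step that ACSP builds on can be sketched as a generalized eigendecomposition of the two class-mean covariance matrices; the three-channel toy trials below are illustrative, not BCI-competition recordings:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Classic CSP. trials_* are lists of (channels x samples) arrays.
    Solves the generalized eigenproblem Ca w = lambda (Ca + Cb) w; the
    filters with the most extreme eigenvalues maximize variance for one
    class while minimizing it for the other."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # trace-normalized spatial covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = eigh(ca, ca + cb)       # ascending eigenvalues in [0, 1]
    idx = np.concatenate([np.arange(n_pairs),                          # smallest
                          np.arange(len(evals) - n_pairs, len(evals))])  # largest
    return evecs[:, idx].T                 # (2 * n_pairs, channels)

# toy data: class A has high variance on channel 0, class B on channel 1
rng = np.random.default_rng(1)
a = [np.diag([3.0, 1.0, 1.0]) @ rng.normal(size=(3, 500)) for _ in range(20)]
b = [np.diag([1.0, 3.0, 1.0]) @ rng.normal(size=(3, 500)) for _ in range(20)]
W = csp_filters(a, b, n_pairs=1)
```

Log-variances of the filtered signals are then the usual CSP features; an adaptive variant such as ACSP would re-estimate the covariance matrices as unlabeled target trials arrive.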
Validation of Sensor-Directed Spatial Simulated Annealing Soil Sampling Strategy.
Scudiero, Elia; Lesch, Scott M; Corwin, Dennis L
2016-07-01
Soil spatial variability has a profound influence on most agronomic and environmental processes at field and landscape scales, including site-specific management, vadose zone hydrology and transport, and soil quality. Mobile sensors are a practical means of mapping spatial variability because their measurements serve as a proxy for many soil properties, provided a sensor-soil calibration is conducted. A viable means of calibrating sensor measurements against soil properties is linear regression modeling of sensor and target-property data. In the present study, two sensor-directed, model-based sampling scheme delineation methods were compared to validate recent applications of soil apparent electrical conductivity (EC)-directed spatial simulated annealing against the more established EC-directed response surface sampling design (RSSD) approach. A 6.8-ha study area near San Jacinto, CA, was surveyed for EC, and 30 soil sampling locations per sampling strategy were selected. Spatial simulated annealing and RSSD were compared for sensor calibration to a target soil property (i.e., salinity) and for evenness of spatial coverage of the study area, which is beneficial for mapping nontarget soil properties (i.e., those not correlated with EC). The results indicate that the linear EC-salinity calibrations obtained from the two sampling schemes provided salinity maps characterized by similar errors. The maps of nontarget soil properties show similar errors across sampling strategies. The spatial simulated annealing methodology is, therefore, validated, and its use in agronomic and environmental soil science applications is justified. PMID:27380070
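A minimal sketch of spatial simulated annealing for sampling-scheme design, using the mean squared shortest distance (a common spatial-coverage criterion) as the objective; the grid, cooling schedule, and jitter scale are assumptions, not the authors' exact settings:

```python
import numpy as np

def mssd(samples, grid):
    """Mean of squared shortest distances from every grid node to its
    nearest sampling location -- a standard spatial-coverage criterion."""
    d = ((grid[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

def spatial_simulated_annealing(grid, n_samples, n_iter=2000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    cur_pts = grid[rng.choice(len(grid), n_samples, replace=False)].astype(float)
    cur = mssd(cur_pts, grid)
    best_pts, best = cur_pts.copy(), cur
    lo, hi = grid.min(0), grid.max(0)
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-9            # linear cooling
        cand = cur_pts.copy()
        j = rng.integers(n_samples)
        cand[j] = np.clip(cand[j] + rng.normal(0.0, 0.05 * (hi - lo)), lo, hi)
        new = mssd(cand, grid)
        # accept improvements always, worse moves with Metropolis probability
        if new < cur or rng.random() < np.exp((cur - new) / temp):
            cur_pts, cur = cand, new
            if cur < best:
                best_pts, best = cur_pts.copy(), cur
    return best_pts, best

# toy field discretized to a grid; place 30 sampling locations, as in the study
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pts, score = spatial_simulated_annealing(grid, 30)
```

In a sensor-directed application, the objective would additionally weight candidate locations by the EC survey values so the design spans the range of the covariate, not just geographic space.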
Adapting geostatistics to analyze spatial and temporal trends in weed populations
Technology Transfer Automated Retrieval System (TEKTRAN)
Geostatistics were originally developed in mining to estimate the location, abundance and quality of ore over large areas from soil samples to optimize future mining efforts. Here, some of these methods were adapted to weeds to account for a limited distribution area (i.e., inside a field), variatio...
Van Berkel, Gary J.
2015-10-06
A system and method for analyzing a chemical composition of a specimen are described. The system can include at least one pin; a sampling device configured to contact a liquid with a specimen on the at least one pin to form a testing solution; and a stepper mechanism configured to move the at least one pin and the sampling device relative to one another. The system can also include an analytical instrument for determining a chemical composition of the specimen from the testing solution. In particular, the systems and methods described herein enable chemical analysis of specimens, such as tissue, to be evaluated in a manner that the spatial-resolution is limited by the size of the pins used to obtain tissue samples, not the size of the sampling device used to solubilize the samples coupled to the pins.
Locally-Adaptive, Spatially-Explicit Projection of U.S. Population for 2030 and 2050
McKee, Jacob J.; Rose, Amy N.; Bright, Eddie A.; Huynh, Timmy N.; Bhaduri, Budhendra L.
2015-02-03
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Moreover, knowing the spatial distribution of future population allows for increased preparation in the event of an emergency. Building on the spatial interpolation technique previously developed for high resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically-informed spatial distribution of the projected population of the contiguous U.S. for 2030 and 2050. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modelled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the U.S. Census's projection methodology with the U.S. Census's official projection as the benchmark. Applications of our model include, but are not limited to, suitability modelling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
Human Topological Task Adapted for Rats: Spatial Information Processes of the Parietal Cortex
Goodrich-Hunsaker, Naomi J.; Howard, Brian P.; Hunsaker, Michael R.; Kesner, Raymond P.
2008-01-01
Human research has shown that lesions of the parietal cortex disrupt spatial information processing, specifically topological information. Similar findings have been reported in nonhumans. It has been difficult to determine homologies between human and non-human mnemonic mechanisms for spatial information processing because methodologies and neuropathology differ. The first objective of the present study was to adapt a previously established human task for rats. The second objective was to better characterize the role of the parietal cortex (PC) and dorsal hippocampus (dHPC) in topological spatial information processing. Rats had to distinguish whether a ball inside a ring or a ball outside a ring was the correct, rewarded object. After rats reached criterion on the task (>95%), they were randomly assigned to a lesion group (control, PC, dHPC). Animals were then re-tested. Post-surgery data show that controls were 94% correct on average, dHPC rats were 89% correct on average, and PC rats were 56% correct on average. The results from the present study suggest that the parietal cortex, but not the dHPC, processes topological spatial information. The present data are the first to support comparable topological spatial information processes of the parietal cortex in humans and rats. PMID:18571941
Locally adaptive, spatially explicit projection of US population for 2030 and 2050
McKee, Jacob J.; Rose, Amy N.; Bright, Edward A.; Huynh, Timmy; Bhaduri, Budhendra L.
2015-01-01
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the US Census’s projection methodology, with the US Census’s official projection as the benchmark. Applications of our model include incorporating multiple various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations. PMID:25605882
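The weighted-surface allocation idea at the core of these projection papers can be sketched as follows; the layer names, weights, and county growth figure are hypothetical placeholders, not LandScan coefficients:

```python
import numpy as np

# Hypothetical 1-km grid layers for one county (names and weights are
# illustrative; the actual model calibrates these locally).
rng = np.random.default_rng(2)
shape = (20, 20)
layers = {
    "land_cover_suitability": rng.uniform(0, 1, shape),
    "slope_penalty":          rng.uniform(0, 1, shape),   # higher = flatter
    "city_proximity":         rng.uniform(0, 1, shape),
    "pop_moving_average":     rng.uniform(0, 1, shape),
}
weights = {"land_cover_suitability": 0.2, "slope_penalty": 0.2,
           "city_proximity": 0.3, "pop_moving_average": 0.3}

# Weighted surface: each cell's relative likelihood of absorbing growth.
surface = sum(w * layers[k] for k, w in weights.items())
surface /= surface.sum()

county_growth = 12_500                      # benchmark county-level change
allocation = surface * county_growth        # spatially explicit projection
```

The key property is that the cell-level allocation sums exactly to the county-level (benchmark) projection, so the spatial disaggregation never alters the demographic totals.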
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
Prism adaptation and spatial neglect: the need for dose-finding studies
Goedert, Kelly M.; Zhang, Jeffrey Y.; Barrett, A. M.
2015-01-01
Spatial neglect is a devastating disorder in 50–70% of right-brain stroke survivors, who have problems attending to, or making movements towards, left-sided stimuli, and experience a high risk of chronic dependence. Prism adaptation is a promising treatment for neglect that involves brief, daily visuo-motor training sessions while wearing optical prisms. Its benefits extend to functional behaviors such as dressing, with effects lasting 6 months or longer. Because one to two sessions of prism adaptation induce adaptive changes in both spatial-motor behavior (Fortis et al., 2011) and brain function (Saj et al., 2013), it is possible stroke patients may benefit from treatment periods shorter than the standard, intensive protocol of ten sessions over two weeks—a protocol that is impractical for either US inpatient or outpatient rehabilitation. Demonstrating the effectiveness of a lower dose will maximize the availability of neglect treatment. We present preliminary data suggesting that four to six sessions of prism treatment may induce a large treatment effect, maintained three to four weeks post-treatment. We call for a systematic, randomized clinical trial to establish the minimal effective dose suitable for stroke intervention. PMID:25983688
NASA Astrophysics Data System (ADS)
Fang, Hao; Li, Qian; Huang, Zhenghua
2015-12-01
Denoising algorithms based on gradient-dependent energy functionals, such as Perona-Malik, total variation and adaptive total variation denoising, modify images towards piecewise constant functions. Although edge sharpness and location are well preserved, important information encoded in image features such as textures or fine details is often compromised in the process of denoising. In this paper, we propose a novel spatially adaptive guide-filtering total variation (SAGFTV) regularization algorithm for image restoration and denoising. The guide filter is extended to the variational formulation of the imaging problem, and the spatially adaptive operator can easily distinguish flat areas from texture areas. Our simulation experiments show improvements in peak signal-to-noise ratio (PSNR), root mean square error (RMSE) and structural similarity (SSIM) over prior algorithms. The results of both simulated and practical experiments are more appealing visually. This type of processing can be used for a variety of tasks in PDE-based image processing and computer vision, and is stable and meaningful from a mathematical viewpoint.
Adaptive Spatial Filtering of Interferometric Data Stack Oriented to Distributed Scatterers
NASA Astrophysics Data System (ADS)
Zhang, Y.; Xie, C.; Shao, Y.; Yuan, M.
2013-07-01
Standard interferometry poses a challenge in non-urban areas due to temporal and spatial decorrelation of the radar signal, which leads to high signal noise. Techniques such as the Small Baseline Subset algorithm (SBAS) have been proposed to make use of multiple interferometric combinations to alleviate the problem. However, the interferograms used in SBAS are multilooked with a boxcar (rectangle) filter to reduce phase noise, resulting in a loss of resolution and the superposition of signals from different objects. In this paper, we propose a modified adaptive spatial filtering algorithm for accurate estimation of the interferogram and coherence without resolution loss, even in rural areas, to better support deformation monitoring with the time-series interferometric synthetic aperture radar (InSAR) technique. The implemented method identifies the statistically homogeneous pixels in a neighbourhood based on a goodness-of-fit test, and then applies adaptive spatial filtering of the interferograms. Three statistical tests for the identification of distributed targets are presented and applied to real data. PALSAR data of the Yellow River delta in China are used to demonstrate the effectiveness of this algorithm in rural areas.
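Identifying statistically homogeneous pixels via a goodness-of-fit test can be sketched with a two-sample Kolmogorov-Smirnov test on amplitude time series (one of several tests used for distributed-scatterer selection); the Rayleigh toy stack is illustrative, not PALSAR data:

```python
import numpy as np
from scipy.stats import ks_2samp

def homogeneous_neighbors(amp_stack, row, col, half=2, alpha=0.05):
    """Identify statistically homogeneous pixels (SHPs) around (row, col).

    amp_stack: (n_images, H, W) amplitude time series. A two-sample
    Kolmogorov-Smirnov test compares each neighbour's amplitude history
    with the centre pixel's; neighbours that pass are treated as samples
    of the same distributed scatterer and may be averaged together
    instead of boxcar multilooking."""
    centre = amp_stack[:, row, col]
    h, w = amp_stack.shape[1:]
    shp = []
    for r in range(max(0, row - half), min(h, row + half + 1)):
        for c in range(max(0, col - half), min(w, col + half + 1)):
            if ks_2samp(centre, amp_stack[:, r, c]).pvalue >= alpha:
                shp.append((r, c))
    return shp

# toy stack: left half Rayleigh scale 1, right half scale 3
rng = np.random.default_rng(4)
stack = np.empty((30, 9, 9))
stack[:, :, :5] = rng.rayleigh(1.0, (30, 9, 5))
stack[:, :, 5:] = rng.rayleigh(3.0, (30, 9, 4))
shp = homogeneous_neighbors(stack, 4, 2, half=2)
```

Averaging the complex interferogram only over the returned SHP set is what preserves resolution at boundaries, since pixels from a statistically different surface are excluded from the estimate.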
Prismatic Adaptation Induces Plastic Changes onto Spatial and Temporal Domains in Near and Far Space
Patané, Ivan; Farnè, Alessandro; Frassinetti, Francesca
2016-01-01
A large literature has documented interactions between space and time, suggesting that the two experiential domains may share a common format in a generalized magnitude system (ATOM theory). To further explore this hypothesis, here we measured the extent to which time and space are sensitive to the same sensorimotor plasticity processes, as induced by classical prismatic adaptation procedures (PA). We also examined whether spatial-attention shifts on time and space processing, produced through PA, extend to stimuli presented beyond the immediate near space. Results indicated that PA affected both temporal and spatial representations not only in the near space (i.e., the region within which the adaptation occurred), but also in the far space. In addition, both rightward and leftward PA directions caused opposite and symmetrical modulations on time processing, whereas only leftward PA biased space processing rightward. We discuss these findings within the ATOM framework and models that account for PA effects on space and time processing. We propose that the differential and asymmetrical effects following PA may suggest that temporal and spatial representations are not perfectly aligned. PMID:26981286
Spatially adaptive Bayesian wavelet thresholding for speckle removal in medical ultrasound images
NASA Astrophysics Data System (ADS)
Hou, Jianhua; Xiong, Chengyi; Chen, Shaoping; He, Xiang
2007-12-01
In this paper, a novel spatially adaptive wavelet thresholding method based on the Bayesian maximum a posteriori (MAP) criterion is proposed for speckle removal in medical ultrasound (US) images. The method first applies a logarithmic transform to the original speckled ultrasound image, followed by a redundant wavelet transform. The proposed method uses a Rayleigh distribution to model the wavelet coefficients due to speckle and a Laplacian distribution to model the statistics of wavelet coefficients due to signal. A Bayesian estimator with an analytical formula is derived from MAP estimation, and the resulting formula is shown to be equivalent to soft thresholding, which makes the algorithm very simple. To exploit the correlation among wavelet coefficients, the parameters of the Laplacian model are assumed to be spatially correlated and can be computed from the coefficients in a neighboring window, making the method spatially adaptive in the wavelet domain. Theoretical analysis and simulation results show that the proposed method can effectively suppress speckle noise in medical US images while preserving important signal features and details.
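Since the MAP rule reduces to soft thresholding, the spatially adaptive step can be sketched as a BayesShrink-style threshold recomputed in a sliding window; this is a simplified stand-in for the paper's Rayleigh/Laplacian derivation, applied to synthetic coefficients rather than an actual wavelet subband:

```python
import numpy as np

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def spatially_adaptive_threshold(coeffs, noise_sigma, win=5):
    """BayesShrink-style MAP rule made spatially adaptive: the signal
    scale is re-estimated in a sliding window, so the threshold
    t = sigma_n^2 / sigma_x varies across the subband instead of being
    one global value."""
    pad = win // 2
    p = np.pad(coeffs, pad, mode="reflect")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            w = p[i:i + win, j:j + win]
            var_x = max(w.var() - noise_sigma**2, 1e-12)  # local signal variance
            t = noise_sigma**2 / np.sqrt(var_x)
            out[i, j] = soft_threshold(coeffs[i, j], t)
    return out

# synthetic subband: sparse large coefficients plus Gaussian noise
rng = np.random.default_rng(5)
clean = np.zeros((32, 32)); clean[::8, ::8] = 5.0
noisy = clean + rng.normal(0, 1.0, clean.shape)
den = spatially_adaptive_threshold(noisy, noise_sigma=1.0)
```

Flat windows produce a large threshold (coefficients zeroed), while windows containing signal energy lower it, which is the practical effect of the spatially correlated Laplacian parameters.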
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least
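The four sampling layouts compared above can be sketched as follows; the close-pair jitter fraction and plot parameters are assumptions, and the minimum pairwise distance is reported because short separation distances are what a design must provide to estimate the variogram near the origin:

```python
import numpy as np

def designs(extent, n, seed=0):
    """Generate four common layouts inside a square plot of the given
    edge length (sketch): square grid, grid with close-pair satellites,
    transect, and simple random sampling."""
    rng = np.random.default_rng(seed)
    k = int(np.sqrt(n))
    g = np.linspace(0.0, extent, k)
    grid = np.array([(x, y) for x in g for y in g])          # k*k points
    # close pairs: jitter every 10th grid point by 1% of the extent
    sats = np.clip(grid[::10][:10] + 0.01 * extent, 0.0, extent)
    grid_cp = np.vstack([grid[: n - len(sats)], sats])
    transect = np.column_stack([np.linspace(0.0, extent, n),
                                np.full(n, extent / 2.0)])
    random = rng.uniform(0.0, extent, (n, 2))
    return {"grid": grid[:n], "grid_close_pairs": grid_cp,
            "transect": transect, "random": random}

def min_pair_distance(pts):
    d = np.sqrt(((pts[:, None] - pts[None, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    return float(d.min())

layouts = designs(extent=50.0, n=100)
gaps = {name: min_pair_distance(p) for name, p in layouts.items()}
```

The close-pair variant samples lag distances far below the grid spacing at negligible cost, which is why grid-plus-satellite designs are generally favoured for variogram estimation over a plain grid.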
Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat
2011-01-01
This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals at a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. Test results show that running a CWT-based R-peak detection algorithm on the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% lower overall system power consumption. PMID:22254775
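The activity-dependent sampling idea can be sketched in software: keep full-rate samples where the signal changes quickly and decimate elsewhere. The synthetic spike train and threshold rule below are illustrative, not the chip's actual ADC logic:

```python
import numpy as np

def adaptive_sample(signal, low_div=8, thresh=None):
    """Activity-dependent sampling sketch: keep every sample where the
    local slope is large (QRS-like activity), otherwise keep only every
    `low_div`-th sample. Returns (indices, values): the reduced stream a
    real ADC/radio would transmit instead of the full-rate data."""
    slope = np.abs(np.diff(signal, prepend=signal[0]))
    if thresh is None:
        thresh = 3.0 * np.median(slope)       # robust activity threshold
    keep = (slope > thresh) | (np.arange(len(signal)) % low_div == 0)
    idx = np.flatnonzero(keep)
    return idx, signal[idx]

# synthetic "ECG": slow baseline wander with sharp periodic spikes
fs = 360
t = np.arange(4 * fs) / fs
ecg = 0.05 * np.sin(2 * np.pi * 1.0 * t)
ecg[::fs] = 1.0                               # crude R peaks, one per second
idx, vals = adaptive_sample(ecg)
compression = 1 - len(idx) / len(ecg)
```

All of the sharp R-peak samples survive while the slowly varying baseline is decimated, which is how the data rate (and hence ADC and radio power) drops without losing the diagnostically relevant information.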
Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin
2015-01-01
Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis. PMID:25619991
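A 2D sketch of the median-modified Wiener filter idea, i.e., the classic local Wiener formula with the window mean replaced by the window median, which is what gives robustness to impulsive artifacts; the noise model and test image are illustrative, not NMR spectra:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mmwf(img, win=3, noise_var=None):
    """Median-modified Wiener filter sketch (2D): one parameter to tune,
    the window size. The local mean of the adaptive Wiener formula is
    replaced by the local median for robustness to impulsive spikes."""
    med = median_filter(img, size=win)
    local_mean = uniform_filter(img, size=win)
    local_var = uniform_filter(img**2, size=win) - local_mean**2
    if noise_var is None:
        noise_var = float(np.mean(local_var))   # Wiener-style noise estimate
    gain = np.maximum(local_var - noise_var, 0) / np.maximum(local_var, 1e-12)
    return med + gain * (img - med)

# synthetic "spectrum": smooth peak + Gaussian noise + impulsive spikes
rng = np.random.default_rng(6)
x, y = np.meshgrid(np.linspace(-3, 3, 64), np.linspace(-3, 3, 64))
clean = np.exp(-(x**2 + y**2))
noisy = clean + rng.normal(0, 0.1, clean.shape)
noisy[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] += 2.0  # impulses
out = mmwf(noisy, win=3)
```

In flat, noise-dominated regions the gain collapses to zero and the output is the (spike-suppressing) local median, while in high-variance signal regions the original values pass through largely unchanged.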
Spatial orientation, adaptation, and motion sickness in real and virtual environments
NASA Technical Reports Server (NTRS)
Dizio, Paul; Lackner, James R.
1992-01-01
Reason and Brand (1975) noted that motion sickness occurs in many situations involving either passive body motion or active interaction with the world via indirect sensorimotor interfaces (e.g., prism spectacles). As might be expected, motion sickness is being reported in VEs that involve apparent self-motion through space, the best known examples being flight simulators (Kennedy et al., 1990). The goals of this paper are to introduce the motion-sickness symptomatology; to outline some concepts that are central to theories of motion sickness, spatial orientation, and adaptation; and to discuss the implications of some trends in VE research and development.
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our
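The sample-size reasoning above can be sketched with the standard normal-approximation formula n >= (z * CV / e)^2, where CV is the coefficient of variation and e the relative error target; the Ts and SWC means and standard deviations below are illustrative, not the NEON site values (and ignore spatial correlation, which in practice inflates the requirement):

```python
import math
import statistics

def required_sample_size(mean, sd, rel_error=0.10, confidence=0.90):
    """Independent samples needed so the CI half-width is within
    rel_error of the mean: n >= (z * CV / rel_error)^2."""
    # two-sided z for the given confidence level (inverse normal CDF)
    z = statistics.NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    cv = sd / mean
    return math.ceil((z * cv / rel_error) ** 2)

# Soil temperature: low CV, few samples needed; SWC: higher CV, many more.
n_ts = required_sample_size(mean=20.0, sd=2.0)     # CV = 0.10
n_swc = required_sample_size(mean=0.20, sd=0.08)   # CV = 0.40
```

Because n scales with the square of the CV, the more variable SWC demands an order of magnitude more samples than Ts for the same 10%-of-the-mean, 90%-confidence target, matching the pattern the study reports.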
Building blocks for developing spatial skills: evidence from a large, representative U.S. sample.
Jirout, Jamie J; Newcombe, Nora S
2015-03-01
There is evidence suggesting that children's play with spatial toys (e.g., puzzles and blocks) correlates with spatial development. Females play less with spatial toys than do males, which arguably accounts for males' spatial advantages; children with high socioeconomic status (SES) also show an advantage, though SES-related differences in spatial play have been less studied than gender-related differences. Using a large, nationally representative sample from the standardization study of the Wechsler Preschool and Primary Scale of Intelligence-Fourth Edition, and controlling for other cognitive abilities, we observed a specific relation between parent-reported frequency of spatial play and Block Design scores that was invariant across gender and SES. Reported spatial play was higher for boys than for girls, but controlling for spatial play did not eliminate boys' relative advantage on this subtest. SES groups did not differ in reported frequency of spatial play. Future research should consider quality as well as quantity of play, and should explore underlying mechanisms to evaluate causality. PMID:25626442
NASA Astrophysics Data System (ADS)
Bo, Yizhou; Shifa, Naima
2013-09-01
An estimator of the abundance of a rare, clustered, and mobile population is introduced. The model is based on adaptive cluster sampling (ACS) to identify the locations of the population and a negative binomial distribution to estimate the total at each site. To identify the locations of the population, we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.
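The ACS mechanics described above can be illustrated with the modified Hansen-Hurwitz estimator commonly used for adaptive cluster sampling; the abstract's negative binomial component is omitted here. A hedged sketch on a square grid with a 4-neighbour adaptive condition y > 0 (all names illustrative):

```python
def network_of(grid, start):
    """Flood-fill the 4-neighbour network of occupied cells (y > 0)
    containing `start`; a cell with y == 0 is its own singleton network."""
    if grid[start] == 0:
        return {start}
    seen, stack = {start}, [start]
    while stack:
        r, c = stack.pop()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in grid and grid[nb] > 0 and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen

def acs_total_estimate(grid, initial_units):
    """Modified Hansen-Hurwitz estimate of the population total under
    adaptive cluster sampling: average the network means of the
    initially selected units, then scale by the number of grid cells."""
    w = []
    for u in initial_units:
        net = network_of(grid, u)
        w.append(sum(grid[v] for v in net) / len(net))
    return len(grid) * sum(w) / len(initial_units)
```

The network mean acts as the per-unit weight: a unit that triggers adaptive expansion contributes the mean count of its whole network, which keeps the estimator unbiased despite the unequal inclusion of clustered cells.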
Malinina, E S
2014-01-01
The spatial specificity of auditory approaching and withdrawing aftereffects was investigated in an anechoic chamber. Adapting and test stimuli were presented from loudspeakers located in front of the subject at distances of 1.1 m (near) and 4.5 m (far) from the listener's head. Approach and withdrawal were simulated by increasing or decreasing the amplitude of a wideband-noise impulse sequence. Listeners were required to judge the movement direction of the test stimulus following each 5-s adaptation period. The listeners' "withdrawal" responses were used to plot psychometric functions and to quantify the auditory aftereffect. Data summarized across all 8 participants indicated that the asymmetry of the approaching and withdrawing aftereffects depended on the spatial locations of adaptor and test. The asymmetry was largest when adaptor and test were presented from the same loudspeaker (either near or far): adaptation to approach induced a directionally dependent displacement of the psychometric functions relative to the no-adaptation control, whereas adaptation to withdrawal did not. The approaching aftereffect was greater when adaptor and test were located in the near spatial domain than when they came from the far domain. When adaptor and test were presented from different loudspeakers, the approaching aftereffect decreased relative to the same-loudspeaker condition, whereas the withdrawing aftereffect increased; as a result, directionally dependent displacements of the psychometric functions relative to the control condition were observed after adaptation both to approach and to withdrawal. The discrepancy between psychometric functions obtained after adaptation to approach and to withdrawal, in both the near and far spatial domains, was greater when adaptor and test shared the same location than when their locations differed. We assume that the peculiarities of
Reducing Spatial Heterogeneity of MALDI Samples with Marangoni Flows During Sample Preparation.
Lai, Yin-Hung; Cai, Yi-Hong; Lee, Hsun; Ou, Yu-Meng; Hsiao, Chih-Hao; Tsao, Chien-Wei; Chang, Huan-Tsung; Wang, Yi-Sheng
2016-08-01
This work demonstrates a method to prepare homogeneous distributions of analytes to improve data reproducibility in matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS). Natural-air drying processes normally result in unwanted heterogeneous spatial distributions of analytes in MALDI crystals and make quantitative analysis difficult. This study demonstrates that inducing Marangoni flows within drying droplets can significantly reduce the heterogeneity problem. The Marangoni flows are accelerated by changing substrate temperatures to create temperature gradients across droplets. Such hydrodynamic flows are analyzed semi-empirically. Using imaging mass spectrometry, changes of heterogeneity of molecules with the change of substrate temperature during drying processes are demonstrated. The observed heterogeneities of the biomolecules reduce as predicted Marangoni velocities increase. In comparison to conventional methods, drying droplets on a 5 °C substrate while keeping the surroundings at ambient conditions typically reduces the heterogeneity of biomolecular ions by 65%-80%. The observation suggests that decreasing substrate temperature during droplet drying processes is a simple and effective means to reduce analyte heterogeneity for quantitative applications. PMID:27126469
NASA Astrophysics Data System (ADS)
Capitán, José A.; Manrubia, Susanna
2015-12-01
The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.
Adaptive grid artifact reduction in the frequency domain with spatial properties for x-ray images
NASA Astrophysics Data System (ADS)
Kim, Dong Sik; Lee, Sanggyun
2012-03-01
By applying band-rejection filters (BRFs) in the frequency domain, we can efficiently reduce the grid artifacts caused by using an antiscatter grid when acquiring x-ray digital images. However, if the frequency component of the grid artifact is relatively close to that of the object, then simply applying a BRF may seriously distort the object and cause ringing artifacts. Since the ringing artifacts depend strongly on the shape of the object to be recovered in the spatial domain, the spatial properties of the x-ray image should be considered when applying BRFs. In this paper, we propose an adaptive filtering scheme that can incorporate such spatial properties. In the spatial domain, we compare several approaches, such as magnitude-, edge-, and frequency-modulation (FM) model-based algorithms, to detect the ringing artifact or the grid artifact component. To robustly detect whether the ringing artifact is strong, we employ the FM model for the extracted signal corresponding to a specific grid artifact. The position of the ringing artifact is then detected using the slope detection algorithm, which is commonly used as an FM discriminator in the communications field. However, the detected position of the ringing artifact is not accurate; hence, to obtain an accurate detection result, we combine the edge-based approach with the FM model approach. Numerical results for real x-ray images show that applying BRFs in the frequency domain in conjunction with the spatial properties of the ringing artifact can successfully remove the grid artifact while distorting the object less.
Quantitative analysis of spatial sampling error in the infant and adult electroencephalogram.
Grieve, Philip G; Emerson, Ronald G; Isler, Joseph R; Stark, Raymond I
2004-04-01
The purpose of this report was to determine the required number of electrodes to record the infant and adult electroencephalogram (EEG) with a specified amount of spatial sampling error. We first developed mathematical theory that governs the spatial sampling of EEG data distributed on a spherical approximation to the scalp. We then used a concentric sphere model of current flow in the head to simulate realistic EEG data. Quantitative spatial sampling error was calculated for the simulated EEG, with additive measurement noise, for 64, 128, and 256 electrodes equally spaced over the surface of the sphere corresponding to the coverage of the human scalp by commercially available "geodesic" electrode arrays. We found the sampling error for the infant to be larger than that for the adult. For example, a sampling error of less than 10% for the adult was obtained with a 64-electrode array but a 256-electrode array was needed for the infant to achieve the same level of error. With the addition of measurement noise, with power 10 times less than that of the EEG, the sampling error increased to 25% for both the infant and adult, for these numbers of electrodes. These results show that accurate measurement of the spatial properties of the infant EEG requires more electrodes than for the adult. PMID:15050554
Spectral Doppler estimation utilizing 2-D spatial information and adaptive signal processing.
Ekroll, Ingvild K; Torp, Hans; Løvstakken, Lasse
2012-06-01
The trade-off between temporal and spectral resolution in conventional pulsed wave (PW) Doppler may limit duplex/triplex quality and the depiction of rapid flow events. It is therefore desirable to reduce the required observation window (OW) of the Doppler signal while preserving the frequency resolution. This work investigates how the required observation time can be reduced by adaptive spectral estimation utilizing 2-D spatial information obtained by parallel receive beamforming. Four adaptive estimation techniques were investigated: the power spectral Capon (PSC) method, the amplitude and phase estimation (APES) technique, multiple signal classification (MUSIC), and a projection-based version of the Capon technique. By averaging radially and laterally, the required covariance matrix could be estimated successfully without temporal averaging. Useful PW spectra of high resolution and contrast could be generated from ensembles corresponding to those used in color flow imaging (CFI; OW = 10). For a given OW, the frequency resolution could be increased compared with the Welch approach when the transit time was longer than or comparable to the observation time. In such cases, using short or long pulses with unfocused or focused transmit, an increase in temporal resolution of up to 4 to 6 times could be obtained in in vivo examples. It was further shown that by using adaptive signal processing, velocity spectra may be generated without high-pass filtering the Doppler signal. With the proposed approach, spectra retrospectively calculated from CFI may become useful for unfocused as well as focused imaging. This application may provide new clinical information through inspection of velocity spectra from several spatial locations simultaneously. PMID:22711413
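Of the estimators named above, the power spectral Capon method has a compact form, P(f) = 1 / (aᴴ(f) R⁻¹ a(f)). In this sketch the covariance matrix R is averaged over spatial samples rather than time, mirroring the radial/lateral averaging the authors describe; the signal conventions (complex baseband ensemble per spatial sample) and the diagonal loading term are our assumptions, not details from the paper:

```python
import numpy as np

def capon_spectrum(X, freqs, loading=1e-3):
    """Power spectral Capon estimate, P(f) = 1 / (a(f)^H R^-1 a(f)).
    X: (n_space, L) complex array, one length-L slow-time ensemble per
    spatial sample; R is averaged over the spatial dimension, standing
    in for radial/lateral averaging across parallel receive beams."""
    n_space, L = X.shape
    R = sum(np.outer(x, x.conj()) for x in X) / n_space
    R += loading * np.trace(R).real / L * np.eye(L)  # diagonal loading
    Rinv = np.linalg.inv(R)
    n = np.arange(L)
    P = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n)  # steering vector at frequency f
        P.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(P)
```

Because R is formed from spatial samples only, the whole spectrum comes from a short ensemble (e.g., OW = 10) without the temporal smoothing that a Welch estimate would require.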
Prototype adaptive bow-tie filter based on spatial exposure time modulation
NASA Astrophysics Data System (ADS)
Badal, Andreu
2016-03-01
In recent years, there has been increased interest in the development of dynamic bow-tie filters that are able to provide patient-specific x-ray beam shaping. We introduce the first physical prototype of a new adaptive bow-tie filter design based on the concept of "spatial exposure time modulation." While most existing bow-tie filters operate by attenuating the radiation beam differently at different locations using partially attenuating objects, the presented filter shapes the radiation field using two movable, completely radio-opaque collimators. The aperture and speed of the collimators are modulated in synchrony with the x-ray exposure to selectively block the radiation emitted to different parts of the object. This mode of operation cannot reproduce every possible attenuation profile, but it can reproduce the profile of any object whose attenuation decreases monotonically from the center to the periphery, such as an object with an elliptical cross section. The new adaptive filter therefore provides the same advantages as existing static bow-tie filters, which are typically designed for a pre-determined cylindrical object at a fixed distance from the source, with the additional capability of adapting its performance at image acquisition time to better compensate for the actual diameter and location of the imaged object. A detailed description of the prototype filter, the implemented control methods, and a preliminary experimental validation of its performance are presented.
Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong
2015-04-01
Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of general highly dissipative SDPs and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs, together with a singular perturbation technique, are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem and converted to the solution of a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that is in general intractable to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online, using a neural network (NN) to approximate the value function; an online NN weight tuning law is proposed that does not require an initial stabilizing control policy. Moreover, by accounting for the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the results show its effectiveness. PMID:25794375
Yurkowski, David J; Ferguson, Steven H; Semeniuk, Christina A D; Brown, Tanya M; Muir, Derek C G; Fisk, Aaron T
2016-03-01
Spatial and temporal variation can confound interpretations of relationships within and between species in terms of diet composition, niche size, and trophic position (TP). The cause of dietary variation within species is commonly an ontogenetic niche shift, which is a key dynamic influencing community structure. We quantified spatial and temporal variations in ringed seal (Pusa hispida) diet, niche size, and TP during ontogeny across the Arctic-a rapidly changing ecosystem. Stable carbon and nitrogen isotope analysis was performed on 558 liver and 630 muscle samples from ringed seals and on likely prey species from five locations ranging from the High to the Low Arctic. A modest ontogenetic diet shift occurred, with adult ringed seals consuming more forage fish (approximately 80 versus 60 %) and having a higher TP than subadults, which generally decreased with latitude. However, the degree of shift varied spatially, with adults in the High Arctic presenting a more restricted niche size and consuming more Arctic cod (Boreogadus saida) than subadults (87 versus 44 %) and adults at the lowest latitude (29 %). The TPs of adult and subadult ringed seals generally decreased with latitude (4.7-3.3), which was mainly driven by greater complexity in trophic structure within the zooplankton communities. Adult isotopic niche size increased over time, likely due to the recent circumpolar increases in subarctic forage fish distribution and abundance. Given the spatial and temporal variability in ringed seal foraging ecology, ringed seals exhibit dietary plasticity as a species, suggesting adaptability in terms of their diet to climate change. PMID:26210748
Spatial frequency sampling look-up table method for computer-generated hologram
NASA Astrophysics Data System (ADS)
Zhao, Kai; Huang, Yingqing; Jiang, Xiaoyu; Yan, Xingpeng
2016-04-01
A spatial frequency sampling look-up table method is proposed to generate a hologram. The three-dimensional (3-D) scene is sampled as several intensity images by computer rendering. Each object point on the rendered images has a defined spatial frequency. The basis terms for calculating fringe patterns are precomputed and stored in a table to improve the calculation speed. Both numerical simulations and optical experiments are performed. The results show that the proposed approach can easily realize color reconstructions of a 3-D scene with a low computation cost. The occlusion effects and depth information are all provided accurately.
Goedert, Kelly M.; Shah, Priyanka; Foundas, Anne L.; Barrett, A. M.
2013-01-01
Prism adaptation treatment (PAT) is a promising rehabilitative method for functional recovery in persons with spatial neglect. Previous research suggests that PAT improves motor-intentional “aiming” deficits that frequently occur with frontal lesions. To test whether presence of frontal lesions predicted better improvement of spatial neglect after PAT, the current study evaluated neglect-specific improvement in functional activities (assessment with the Catherine Bergego Scale) over time in 21 right-brain-damaged stroke survivors with left-sided spatial neglect. The results demonstrated that neglect patients' functional activities improved after two weeks of PAT and continued improving for four weeks. Such functional improvement did not occur equally in all of the participants: Neglect patients with lesions involving the frontal cortex (n=13) experienced significantly better functional improvement than did those without frontal lesions (n=8). More importantly, voxel-based lesion-behavior mapping (VLBM) revealed that in comparison to the group of patients without frontal lesions, the frontal-lesioned neglect patients had intact regions in the medial temporal areas, the superior temporal areas, and the inferior longitudinal fasciculus. The medial cortical and subcortical areas in the temporal lobe were especially distinguished in the “frontal lesion” group. The findings suggest that the integrity of medial temporal structures may play an important role in supporting functional improvement after PAT. PMID:22941243
Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics
Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park
2007-04-02
Astrophysical relativistic flow problems require high-resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We use the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third-order convex essentially non-oscillatory (CENO), and third- and fifth-order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers, including HLL, modified Marquina flux, the local Lax-Friedrichs flux formula, and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
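Two of the solver ingredients named above, TVD Runge-Kutta time integration and the local Lax-Friedrichs flux, can be illustrated on a much simpler problem than SRHD: 1-D scalar advection with piecewise-constant reconstruction and periodic boundaries. This is a hedged stand-in for the method-of-lines structure, not the paper's code:

```python
import numpy as np

def llf_flux(uL, uR, a=1.0):
    """Local Lax-Friedrichs numerical flux for linear advection f(u) = a*u."""
    return 0.5 * (a * uL + a * uR) - 0.5 * abs(a) * (uR - uL)

def rhs(u, dx, a=1.0):
    """Method-of-lines right-hand side on a periodic grid with
    piecewise-constant (first-order) reconstruction."""
    F = llf_flux(u, np.roll(u, -1), a)  # F[i] = flux at interface i+1/2
    return -(F - np.roll(F, 1)) / dx    # du_i/dt = -(F_{i+1/2} - F_{i-1/2})/dx

def tvd_rk2_step(u, dt, dx, a=1.0):
    """One step of the second-order TVD (strong stability preserving)
    Runge-Kutta scheme of Shu and Osher."""
    u1 = u + dt * rhs(u, dx, a)
    return 0.5 * (u + u1 + dt * rhs(u1, dx, a))
```

The SSP step is a convex combination of forward-Euler steps, so whatever total-variation bound the spatial scheme satisfies under forward Euler carries over to the full update; that is the "TVD" property the abstract refers to.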
Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S
2015-01-01
Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-01-01
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is bounded by an upper frequency. To reduce the upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within the upper bound. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the upload-frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm provides uniform sampling of values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion identifies the underlying reasons for the high performance of the proposed approach. PMID:27589758
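The abstract does not specify the ASIC algorithms in detail; the following sketch illustrates one plausible reading of "information-aware adaptive sampling": accept a reading with probability inversely proportional to the empirical frequency of its value interval, so rare (high self-information) values are kept preferentially while the expected retained fraction stays at or below the upload budget. Function and parameter names are hypothetical:

```python
from collections import Counter

def sampling_probabilities(values, n_bins, lo, hi, keep_fraction):
    """Per-bin acceptance probabilities that flatten the histogram of
    sensed values, so rare (high self-information) readings are kept
    preferentially, while the expected retained fraction stays at or
    below `keep_fraction` (our reading of the upload-frequency budget)."""
    width = (hi - lo) / n_bins
    bins = [min(int((v - lo) / width), n_bins - 1) for v in values]
    counts = Counter(bins)
    # equal expected kept count per occupied bin
    target = keep_fraction * len(values) / len(counts)
    return {b: min(1.0, target / c) for b, c in counts.items()}
```

With a skewed input stream, common values are heavily thinned while rare values pass through untouched, which is how a uniform (maximum-entropy) retained distribution can be approached under a fixed upload rate.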
Spatial effects, sampling errors, and task specialization in the honey bee.
Johnson, B R
2010-05-01
Task allocation patterns should depend on the spatial distribution of work within the nest, variation in task demand, and the movement patterns of workers; however, relatively little research has focused on these topics. This study uses a spatially explicit agent-based model to determine whether such factors alone can generate biases in task performance at the individual level in the honey bee, Apis mellifera. Specialization (bias in task performance) is shown to result from strong sampling error due to localized task demand, slow worker movement relative to nest size, and strong spatial variation in task demand. To date, specialization has been interpreted primarily through the response threshold concept, which focuses on intrinsic (typically genotypic) differences between workers. Response threshold variation and sampling error due to spatial effects are not mutually exclusive, however, and this study suggests that both contribute to patterns of task bias at the individual level. While spatial effects are strong enough to explain some documented cases of specialization, they are relatively short-term and do not explain long-term cases of specialization. In general, this study suggests that the spatial layout of tasks and fluctuations in their demand must be explicitly controlled for in studies aimed at identifying genotypic specialists. PMID:20351761
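The mechanism argued for above, specialization arising from spatial sampling error alone with no intrinsic thresholds, can be demonstrated with a toy agent-based model: identical workers random-walking slowly on a ring where the two task demands are spatially localized. This is our illustrative reduction, not the study's model; all parameters are arbitrary:

```python
import random

def simulate_specialization(n_workers=50, ring=200, steps=2000, seed=1):
    """Toy model: identical workers random-walk one cell per step on a
    ring; task A demand occupies one half, task B the other. Per-worker
    bias toward task A then arises purely from spatial sampling error
    (slow movement plus localized demand), with no intrinsic thresholds."""
    rng = random.Random(seed)
    pos = [rng.randrange(ring) for _ in range(n_workers)]
    counts = [[0, 0] for _ in range(n_workers)]
    for _ in range(steps):
        for i in range(n_workers):
            pos[i] = (pos[i] + rng.choice((-1, 1))) % ring  # slow random walk
            task = 0 if pos[i] < ring // 2 else 1           # locally demanded task
            if rng.random() < 0.5:                          # demand present?
                counts[i][task] += 1
    return [a / (a + b) for a, b in counts]  # fraction of task-A performances
```

Because a random walk of this length diffuses much less than the ring circumference, workers remain near their starting half and accumulate strongly biased task counts despite being behaviorally identical.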
ERIC Educational Resources Information Center
Rossier, Jerome; Zecca, Gregory; Stauffer, Sarah D.; Maggiori, Christian; Dauwalder, Jean-Pierre
2012-01-01
The aim of this study was to analyze the psychometric properties of the Career Adapt-Abilities Scale (CAAS) in a French-speaking Swiss sample and its relationship with personality dimensions and work engagement. The heterogeneous sample of 391 participants (M_age = 39.59, SD = 12.30) completed the CAAS-International and a short version…
Adaptation of the Athlete Burnout Questionnaire in a Spanish sample of athletes.
Arce, Constantino; De Francisco, Cristina; Andrade, Elena; Seoane, Gloria; Raedeke, Thomas
2012-11-01
In this paper, we offer a general version of the Spanish adaptation of the Athlete Burnout Questionnaire (ABQ), designed to measure the burnout syndrome in athletes of different sports. In previous work, the Spanish version of the ABQ was administered to different samples of soccer players; its psychometric properties were appropriate and similar to the findings for the original ABQ. The purpose of this study was to examine the generalization of the Spanish adaptation to other sports. We started from this adaptation, but included three alternative statements (one for each dimension of the questionnaire) and replaced the word "soccer" with the word "sport." An 18-item version was administered to a sample of 487 athletes aged 13 to 29 years. Confirmatory factor analyses replicated the factor structure, but modifications to two items were necessary to obtain a good overall fit of the model. The internal consistency and test-retest reliability of the questionnaire were satisfactory. PMID:23156955
Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration
Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.
2009-01-01
This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change as the sampling interval increases from 30 m to 600 m in 30 m increments for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills show no consistent trends with increasing sampling interval, whereas the variogram ranges remain constant. A sampling interval of 90 m or less is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot area show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and begin to level off at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Man, Tianlong; Wan, Yuhong; Wu, Fan; Wang, Dayong
2015-11-01
We present a new method for the four-dimensional tracking of a spatially incoherently illuminated object. Self-interference digital holography is used to record the hologram of the object. Three-dimensional spatial coordinates encoded in the hologram are extracted by a holographic reconstruction procedure and tracking algorithms, while the time information is preserved by the single-shot configuration. This expands holographic tracking methods into incoherent imaging, avoiding the speckle noise and potential sample damage associated with coherently illuminated tracking methods. Results on the quantitative tracking of three-dimensional spatial position over time are reported. In practice, a living zebrafish larva is used to demonstrate one application of the method.
Ritter, André
2014-10-20
The shifted angular spectrum method allows a reduction of the number of samples required for numerical off-axis propagation of scalar wave fields. In this work, a modification of the shifted angular spectrum method is presented. It allows a further reduction of the spatial sampling rate for certain wave fields. We calculate the benefit of this method for spherical waves. Additionally, a working implementation is presented showing the example of a spherical wave propagating through a circular aperture. PMID:25401659
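The standard (unshifted) angular spectrum method that this work modifies can be sketched in a few lines. This is a minimal illustration, not the paper's shifted variant (the carrier tilt that recentres the spectrum is omitted); the grid size, wavelength, and aperture radius below are illustrative values only.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled scalar field u0 over distance z by filtering its
    2-D spectrum with the free-space transfer function exp(i*kz*z)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Plane wave through a circular aperture (the paper's working example).
n, dx, lam = 128, 1e-6, 0.5e-6
x = (np.arange(n) - n / 2) * dx
xx, yy = np.meshgrid(x, x)
u0 = (xx ** 2 + yy ** 2 < (20e-6) ** 2).astype(complex)
u1 = angular_spectrum_propagate(u0, lam, dx, z=50e-6)
# With only propagating components retained, the transfer is unitary and
# the total field energy is conserved.
print(abs(np.sum(np.abs(u1) ** 2) / np.sum(np.abs(u0) ** 2) - 1) < 0.05)  # True
```

The shifted variant reduces the required sampling rate for off-axis geometries by translating the spectrum before applying the transfer function; the core FFT-filter-IFFT structure stays the same.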
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
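The stratified design with optimal allocation mentioned in conclusion (3) is classically done with Neyman allocation, which assigns more samples to strata that are larger or more variable. A minimal sketch follows; the strata sizes and standard deviations are hypothetical, not values from the study.

```python
def neyman_allocation(total_n, strata):
    """Allocate a fixed total sample size across strata in proportion to
    N_h * S_h (stratum size times stratum standard deviation).

    strata: list of (N_h, S_h) tuples. Returns per-stratum sample sizes."""
    weights = [n_h * s_h for n_h, s_h in strata]
    total_w = sum(weights)
    alloc = [round(total_n * w / total_w) for w in weights]
    # Repair any rounding drift on the most heavily weighted stratum.
    alloc[weights.index(max(weights))] += total_n - sum(alloc)
    return alloc

# Hypothetical soil-moisture strata: (area in acres, observed SD of moisture %)
strata = [(10, 4.0), (20, 2.0), (10, 1.0)]
print(neyman_allocation(90, strata))  # → [40, 40, 10]
```

The small, highly variable stratum receives as many samples as the stratum twice its size, which is the efficiency gain over simple random sampling that the paper's conclusion (3) exploits.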
Broekhuis, Femke; Gopalaswamy, Arjun M.
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100 km² across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
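The method-of-moments (Matheron) estimator that the study finds demands larger samples can be sketched directly: it averages half the squared differences of all point pairs within each lag bin. The transect data below are a toy illustration, not throughfall measurements.

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Matheron's method-of-moments estimator:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose separation
    distance falls in the lag bin [lo, hi)."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)        # all distinct pairs
    dists = np.linalg.norm(coords[i] - coords[j], axis=1)
    半 = 0.5 * (values[i] - values[j]) ** 2
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(半[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Toy 1-D transect with a smooth trend: semivariance grows with lag,
# as expected for spatially autocorrelated data.
coords = [[x, 0.0] for x in range(10)]
values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(empirical_variogram(coords, values, [0, 1.5, 3.5]))
```

The estimator's sensitivity to outliers is exactly why the study also examines robust method-of-moments variants and residual maximum likelihood: a single large squared difference can dominate a lag bin's mean.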
NASA Astrophysics Data System (ADS)
Tsai, Chi-Yi; Song, Kai-Tai
2006-02-01
A novel heterogeneity-projection hard-decision adaptive interpolation (HPHD-AI) algorithm is proposed in this paper for color reproduction from Bayer mosaic images. The proposed algorithm aims to estimate the optimal interpolation direction and perform hard-decision interpolation, in which the decision is made before interpolation. To do so, a new heterogeneity-projection scheme based on spectral-spatial correlation is proposed to decide the best interpolation direction from the original mosaic image directly. Exploiting the proposed heterogeneity-projection scheme, a hard-decision rule can be designed easily to perform the interpolation. We have compared this technique with three recently proposed demosaicing techniques: Lu's, Gunturk's, and Li's methods, by utilizing twenty-five natural images from Kodak PhotoCD. The experimental results show that HPHD-AI outperforms all of them in both PSNR values and S-CIELAB ΔE*ab measures.
Shin, Hyun-Ho; Yoon, Woong-Sup
2008-07-01
An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids so the liquid boiling problem was avoided with no separate use of temperature controlling methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit with strong evaporation on the surface of the liquids, while rendering a slender jet. The elevation of the injection speed and the wall temperature causes an activation of the surface evaporation concurrent with reduction in the jet breakup length and the drop size. PMID:19051924
Adaptive electron beam shaping using a photoemission gun and spatial light modulator
NASA Astrophysics Data System (ADS)
Maxson, Jared; Lee, Hyeri; Bartnik, Adam C.; Kiefer, Jacob; Bazarov, Ivan
2015-02-01
The need for precisely defined beam shapes in photoelectron sources has been well established. In this paper, we use a spatial light modulator and simple shaping algorithm to create arbitrary, detailed transverse laser shapes with high fidelity. We transmit this shaped laser to the photocathode of a high voltage dc gun. Using beam currents where space charge is negligible, and using an imaging solenoid and fluorescent viewscreen, we show that the resultant beam shape preserves these detailed features with similar fidelity. Next, instead of transmitting a shaped laser profile, we use an active feedback on the unshaped electron beam image to create equally accurate and detailed shapes. We demonstrate that this electron beam feedback has the added advantage of correcting for electron optical aberrations, yielding shapes without skew. The method may serve to provide precisely defined electron beams for low current target experiments, space-charge dominated beam commissioning, as well as for online adaptive correction of photocathode quantum efficiency degradation.
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-08-01
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
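The core of the algorithm, importance sampling with a Gaussian-mixture proposal matched to a multimodal posterior, can be sketched in one dimension. This is a simplified illustration: the paper's adaptive construction of the mixture and the polynomial chaos surrogate are both omitted, and the bimodal target below is a stand-in, not the paper's test problems.

```python
import math, random

random.seed(0)

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def target(x):
    """Unnormalized bimodal 'posterior' with modes near -2 and +2."""
    return norm_pdf(x, -2.0, 0.5) + norm_pdf(x, 2.0, 0.5)

# Two-component GM proposal roughly matched to the modes: (mean, sd, weight).
comps = [(-2.0, 0.8, 0.5), (2.0, 0.8, 0.5)]

def proposal_pdf(x):
    return sum(w * norm_pdf(x, mu, sd) for mu, sd, w in comps)

def proposal_sample():
    mu, sd, w = comps[0] if random.random() < comps[0][2] else comps[1]
    return random.gauss(mu, sd)

xs = [proposal_sample() for _ in range(20000)]
ws = [target(x) / proposal_pdf(x) for x in xs]
# Self-normalized importance-sampling estimates: E[x] is 0 by symmetry and
# E[x^2] is 2^2 + 0.5^2 = 4.25 for this target.
mean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
second = sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)
print(round(mean, 2), round(second, 2))
```

A single-Gaussian proposal centred at the origin would waste most of its samples between the modes; matching the proposal's mode count to the posterior's is precisely what the paper's adaptive GM construction automates.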
Image slicing with a twist: spatial and spectral Nyquist sampling without anamorphic optics
NASA Astrophysics Data System (ADS)
Tecza, Matthias
2014-07-01
Integral field spectrographs have become mainstream instruments at modern telescopes because of their efficient way of collecting data-cubes. Image slicer based integral field spectrographs achieve the highest fill-factor on the detector, but due to the need to Nyquist-sample the spectra, their spatial sampling on the sky is rectangular. Using anamorphic pre-optics before the image slicer overcomes this effect, further maximising the fill-factor, but introduces optical aberrations, throughput losses, and additional alignment and calibration requirements, compromising overall instrument performance. In this paper I present a concept for an image-slicer that achieves both spatial and spectral Nyquist-sampling without anamorphic pre-optics. Rotating each slitlet by 45° with respect to the dispersion direction, and arranging the slitlets into a saw-tooth pseudo-slit, leads to a lozenge-shaped sampling element on the sky; however, the centres of the lozenges lie on a regular, square grid, satisfying the Nyquist sampling criterion in both spatial directions.
Region and edge-adaptive sampling and boundary completion for segmentation
Dillard, Scott E; Prasad, Lakshman; Grazzini, Jacopo A
2010-01-01
Edge detection produces a set of points that are likely to lie on discontinuities between objects within an image. We consider faces of the Gabriel graph of these points, a sub-graph of the Delaunay triangulation. Features are extracted by merging these faces using size, shape and color cues. We measure regional properties of faces using a novel shape-dependant sampling method that overcomes undesirable sampling bias of the Delaunay triangles. Instead, sampling is biased so as to smooth regional statistics within the detected object boundaries, and this smoothing adapts to local geometric features of the shape such as curvature, thickness and straightness.
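The Gabriel graph underlying the face-merging step has a compact membership test: two points are joined iff no third point lies strictly inside the circle having their segment as diameter (equivalently, d(p,r)² + d(q,r)² ≥ d(p,q)² for every other r). A brute-force sketch with an illustrative point set:

```python
from itertools import combinations

def gabriel_edges(points):
    """Edges (i, j) of the Gabriel graph of a 2-D point set.
    (p, q) is an edge iff no other point lies strictly inside the circle
    with segment pq as its diameter."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    edges = []
    for i, j in combinations(range(len(points)), 2):
        pq = d2(points[i], points[j])
        if all(d2(points[i], points[r]) + d2(points[j], points[r]) >= pq
               for r in range(len(points)) if r not in (i, j)):
            edges.append((i, j))
    return edges

# Point 2 sits inside the diameter circle of segment (0, 1), so that
# long edge is rejected; only the two short edges survive.
pts = [(0, 0), (4, 0), (2, 1)]
print(gabriel_edges(pts))  # → [(0, 2), (1, 2)]
```

Because every Gabriel edge is also a Delaunay edge, the faces considered in the paper form a subgraph of the Delaunay triangulation, as the abstract states; a production implementation would filter Delaunay edges rather than test all O(n³) triples as this sketch does.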
NASA Astrophysics Data System (ADS)
Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.
2016-02-01
Spatial filtering is an important technique for reducing sky background noise in a satellite quantum key distribution downlink receiver. Atmospheric turbulence limits the extent to which spatial filtering can reduce sky noise without introducing signal losses. Using atmospheric propagation and compensation simulations, the potential benefit of adaptive optics (AO) to secure key generation (SKG) is quantified. Simulations are performed assuming optical propagation from a low-Earth-orbit satellite to a terrestrial receiver that includes AO. Higher-order AO correction is modeled assuming a Shack-Hartmann wavefront sensor and a continuous-face-sheet deformable mirror. The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain wave-optics hardware emulator. SKG rates are calculated for a decoy-state protocol as a function of the receiver field of view for various strengths of turbulence, sky radiances, and pointing angles. The results show that at fields of view smaller than those discussed by others, AO technologies can enhance SKG rates in daylight and enable SKG where it would otherwise be prohibited as a consequence of background optical noise and signal loss due to propagation and turbulence effects.
Mehta, Cyrus; Liu, Lingyun
2016-02-10
Over the past 25 years, adaptive designs have gradually gained acceptance and are being used with increasing frequency in confirmatory clinical trials. Recent surveys of submissions to the regulatory agencies reveal that the most popular type of adaptation is unblinded sample size re-estimation. Concerns have nevertheless been raised that this type of adaptation is inefficient. We intend to show in our discussion that such concerns are greatly exaggerated in any practical setting and that the advantages of adaptive sample size re-estimation usually outweigh any minor loss of efficiency. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26757953
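The mechanics of unblinded sample size re-estimation can be illustrated with the textbook per-arm formula for a two-arm comparison of means, re-applied at an interim look with the observed effect in place of the planning effect. This is a simplified sketch, not the authors' method: a real adaptive design must also preserve type I error, e.g. via a combination test or the promising-zone approach, which is omitted here.

```python
import math
from statistics import NormalDist

def reestimated_n_per_arm(delta_hat, sigma, alpha=0.025, power=0.9, n_max=500):
    """Per-arm sample size for a two-arm z-test of means, recomputed with
    the interim effect estimate delta_hat and capped at n_max."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha) + z(power)) * sigma / delta_hat) ** 2
    return min(math.ceil(n), n_max)

# Trial planned for an effect of 0.5 SD; a weaker interim estimate of
# 0.35 SD roughly doubles the required size (subject to the cap).
print(reestimated_n_per_arm(0.5, 1.0), reestimated_n_per_arm(0.35, 1.0))  # → 85 172
```

The efficiency debate the abstract addresses turns on how much larger such re-estimated sizes are than a fixed design with the same operating characteristics; the cap (`n_max`) is the practical lever that bounds the cost.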
Rohlfs, Marko; Hoffmeister, Thomas S
2004-08-01
Although an increase in competition is a common cost associated with intraspecific crowding, spatial aggregation across food-limited resource patches is a widespread phenomenon in many insect communities. Because intraspecific aggregation of competing insect larvae across, e.g., fruits, dung, mushrooms etc., is an important means by which many species can coexist (aggregation model of species coexistence), there is a strong need to explore the mechanisms that contribute to the maintenance of this kind of spatial resource exploitation. In the present study, using Drosophila-parasitoid interactions as a model system, we tested whether intraspecific aggregation reflects an adaptive response to natural enemies. Most of the studies that have hitherto been carried out on Drosophila-parasitoid interactions used an almost two-dimensional artificial host environment, where host larvae could not escape from parasitoid attacks, and have demonstrated positive density-dependent parasitism risk. To test whether these studies captured the essence of such interactions, we used natural breeding substrates (decaying fruits). In a first step, we analysed the parasitism risk of Drosophila larvae on a three-dimensional substrate in natural fly communities in the field, and found that the risk of parasitism decreased with increasing host larval density (inverse density dependence). In a second step, we analysed the parasitism risk of Drosophila subobscura larvae on three breeding substrate types exposed to the larval parasitoids Asobara tabida and Leptopilina heterotoma. We found direct density-dependent parasitism on decaying sloes, inverse density dependence on plums, and a hump-shaped relationship between fly larval density and parasitism risk on crab apples. On crab apples and plums, fly larvae benefited from a density-dependent refuge against the parasitoids. While the proportion of larvae feeding within the fruit tissues increased with larval density
Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann
2016-04-01
The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and data were subjected to an ANOVA. There were no significant differences in density among the strata within the vineyard, nor between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. In additional vineyards, data were collected and used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling. PMID:26719593
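The mean crowding index behind the paper's distribution analysis is Lloyd's m* = m + (s²/m − 1); a ratio m*/m above 1 signals the aggregated distributions the authors report. A minimal sketch with hypothetical per-leaf counts (the study's actual regression of m* on m across fields is not reproduced here):

```python
def mean_crowding(counts):
    """Lloyd's mean crowding m* = m + (s^2/m - 1), with m the sample mean
    and s^2 the sample variance of counts per sampling unit.
    m*/m > 1 indicates an aggregated (clumped) distribution."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return m + var / m - 1.0

# Hypothetical nymph counts per leaf: a patchy sample versus an even one.
patchy = [0, 0, 0, 12, 0, 0, 9, 0, 0, 0]
even = [2, 2, 2, 3, 2, 2, 2, 3, 2, 2]
ratio_patchy = mean_crowding(patchy) / (sum(patchy) / len(patchy))
ratio_even = mean_crowding(even) / (sum(even) / len(even))
print(round(ratio_patchy, 2), round(ratio_even, 2))  # → 5.08 0.58
```

In Iwao's mean crowding to mean regression, as used in the paper, m* is regressed on m across many samples; the intercept and slope then feed directly into the enumerative and sequential sampling plan formulas.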
Using continuous in-situ measurements to adaptively trigger urban storm water samples
NASA Astrophysics Data System (ADS)
Wong, B. P.; Kerkez, B.
2015-12-01
Until cost-effective in-situ sensors are available for biological parameters, nutrients, and metals, automated samplers will continue to be the primary source of reliable water quality measurements. Given limited sample bottles, however, autosamplers often obscure insights on nutrient sources and biogeochemical processes that would otherwise be captured using a continuous sampling approach. To that end, we evaluate the efficacy of a novel method to measure first-flush nutrient dynamics in flashy, urban watersheds. Our approach reduces the number of samples required to capture water quality dynamics by leveraging an internet-connected sensor node, which is equipped with a suite of continuous in-situ sensors and an automated sampler. To capture both the initial baseflow and storm concentrations, a cloud-hosted adaptive algorithm analyzes the high-resolution sensor data along with local weather forecasts to optimize a sampling schedule. The method was tested in a highly developed urban catchment in Ann Arbor, Michigan, and collected samples of nitrate, phosphorus, and suspended solids throughout several storm events. Results indicate that the watershed does not exhibit first-flush dynamics, a behavior that would have been obscured by a non-adaptive sampling approach.
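The trigger logic of such an adaptive scheduler can be sketched as a simple rule combining real-time sensor readings, the weather forecast, and the remaining bottle budget. The thresholds and decision rule below are illustrative assumptions, not the algorithm deployed in the study.

```python
def should_sample(turbidity, flow, forecast_rain_mm, bottles_left,
                  flow_baseline=0.5, turbidity_threshold=50.0):
    """Fire the autosampler when sensors indicate a storm pulse, but hold
    bottles back when a larger forecast event could exhaust the budget.
    All thresholds are hypothetical."""
    storm_onset = flow > 2 * flow_baseline or turbidity > turbidity_threshold
    if not storm_onset or bottles_left == 0:
        return False
    # Conserve bottles: if heavier rain is forecast and few bottles remain,
    # skip this trigger and wait for the larger event.
    if forecast_rain_mm > 10.0 and bottles_left <= 2:
        return False
    return True

print(should_sample(turbidity=80.0, flow=0.6, forecast_rain_mm=2.0, bottles_left=8))   # True
print(should_sample(turbidity=80.0, flow=0.6, forecast_rain_mm=25.0, bottles_left=2))  # False
```

The value of cloud hosting in this setting is exactly the second branch: the forecast lets the scheduler ration a fixed bottle budget across the whole storm sequence rather than spending it all on the first pulse.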
Spatial and sampling analysis for a sensor viewing a pixelized projector
NASA Astrophysics Data System (ADS)
Sieglinger, Breck A.; Flynn, David S.; Coker, Charles F.
1997-07-01
This paper presents an analysis of spatial blurring and sampling effects for a sensor viewing a pixelized scene projector. It addresses the ability of a projector to simulate an arbitrary continuous radiance scene using a field of discrete elements. The spatial fidelity of the projector as seen by an imaging sensor is shown to depend critically on the width of the sensor MTF or spatial response function, and the angular spacing between projector pixels. Quantitative results are presented based on a simulation that compares the output of a sensor viewing a reference scene to the output of the sensor viewing a projector display of the reference scene. Dependence on the blur of the sensor and projector, the scene content, and the alignment both of features in the scene and of sensor samples with the projector pixel locations are addressed. We attempt to determine the projector characteristics required to perform hardware-in-the-loop testing with adequate spatial realism to evaluate seeker functions like autonomous detection, measurement of radiant intensities and angular positions of unresolved objects, or autonomous recognition and aimpoint selection for resolved objects.
NASA Astrophysics Data System (ADS)
Herfort, L.; Seaton, C. M.; Wilkin, M.; Baptista, A. M.; Roman, B.; Preston, C. M.; Scholin, C. A.; Melançon, C.; Simon, H. M.
2013-12-01
An autonomous microbial sampling device was integrated with a long-term (endurance) environmental sensor system to investigate variation in microbial composition and activities related to complex estuarine dynamics. This integration was a part of ongoing efforts in the Center for Coastal Margin Observation and Prediction (CMOP) to study estuarine carbon and nitrogen cycling using an observation and prediction system (SATURN, http://www.stccmop.org/saturn) as foundational infrastructure. The two endurance stations fitted with physical and biogeochemical sensors that were used in this study are located in the SATURN observation network. The microbial sampler is the Environmental Sample Processor (ESP), a commercially available electromechanical/fluidic system designed for automated collection, preservation and in situ analyses of marine water samples. The primary goal of the integration was to demonstrate that the ESP, developed for sampling of pelagic oceanic environments, could be successfully deployed for autonomous sample acquisition in the highly dynamic and turbid Columbia River estuary. The ability of the ESP to collect material at both pre-determined times and automatically in response to local conditions was tested. Pre-designated samples were acquired at specific times to capture variability in the tidal cycle. Autonomous, adaptive sampling was triggered when conditions associated with specific water masses were detected in real-time by the SATURN station's sensors and then communicated to the ESP via the station computer to initiate sample collection. Triggering criteria were based on our understanding of estuary dynamics, as provided by the analysis of extensive archives of high-resolution, long-term SATURN observations and simulations. In this manner, we used the ESP to selectively sample various microbial consortia in the estuary to facilitate the study of ephemeral microbial-driven processes. For example, during the summer of 2013 the adaptive sampling
Van der Heyden, H; Dutilleul, P; Brodeur, L; Carisse, O
2014-06-01
Spatial distribution of single-nucleotide polymorphisms (SNPs) related to fungicide resistance was studied for Botrytis cinerea populations in vineyards and for B. squamosa populations in onion fields. Heterogeneity in this distribution was characterized by performing geostatistical analyses based on semivariograms and through the fitting of discrete probability distributions. Two SNPs known to be responsible for boscalid resistance (H272R and H272Y), both located on the B subunit of the succinate dehydrogenase gene, and one SNP known to be responsible for dicarboximide resistance (I365S) were chosen for B. cinerea in grape. For B. squamosa in onion, one SNP responsible for dicarboximide resistance (I365S homologous) was chosen. One onion field was sampled in 2009 and another one was sampled in 2010 for B. squamosa, and two vineyards were sampled in 2011 for B. cinerea, for a total of four sampled sites. Cluster sampling was carried out on a 10-by-10 grid, each of the 100 nodes being the center of a 10-by-10-m quadrat. In each quadrat, 10 samples were collected and analyzed by restriction fragment length polymorphism polymerase chain reaction (PCR) or allele-specific PCR. Mean SNP incidence varied from 16 to 68%, with an overall mean incidence of 43%. In the geostatistical analyses, omnidirectional variograms showed spatial autocorrelation characterized by ranges of 21 to 1 m. Various levels of anisotropy were detected, however, with variograms computed in four directions (at 0°, 45°, 90°, and 135° from the within-row direction used as reference), indicating that spatial autocorrelation was prevalent or characterized by a longer range in one direction. For all eight data sets, the β-binomial distribution was found to fit the data better than the binomial distribution. This indicates local aggregation of fungicide resistance among sampling units, as supported by estimates of the parameter θ of the β-binomial distribution of 0.09 to 0.23 (overall median value = 0
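The beta-binomial comparison can be sketched directly: with mean incidence p and aggregation parameter θ (θ → 0 recovers the binomial), aggregated quadrat counts are far more likely under the beta-binomial. The counts, p, and θ below are hypothetical, chosen only to show the likelihood gap, not the study's data.

```python
import math

def betabinom_pmf(k, n, p, theta):
    """Beta-binomial pmf parameterized by mean incidence p and aggregation
    parameter theta, i.e. alpha = p/theta, beta = (1-p)/theta."""
    a, b = p / theta, (1 - p) / theta
    return math.comb(n, k) * math.exp(
        math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
        - math.lgamma(a) - math.lgamma(b) + math.lgamma(a + b))

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Hypothetical aggregated quadrat data (10 isolates tested per quadrat):
# quadrats are mostly all-susceptible or all-resistant.
counts = [0, 0, 10, 0, 10, 0, 0, 10, 10, 0]
n, p = 10, sum(counts) / (10 * len(counts))  # overall incidence 0.4
ll_bin = sum(math.log(binom_pmf(k, n, p)) for k in counts)
ll_bb = sum(math.log(betabinom_pmf(k, n, p, theta=0.2)) for k in counts)
print(ll_bb > ll_bin)  # True: beta-binomial fits aggregated data far better
```

Under this parameterization θ measures within-quadrat aggregation, so the study's estimates of θ between 0.09 and 0.23 quantify how strongly resistant isolates clump within sampling units.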
NASA Astrophysics Data System (ADS)
Corwin, D. L.
2006-05-01
Characterizing spatial variability is an important consideration of any landscape-scale soil-related problem. Geospatial measurements of apparent soil electrical conductivity (ECa) are useful for characterizing spatial variability by directing soil sampling. The objective of this presentation is to discuss equipment, protocols, sampling designs, and a case study of an ECa survey to characterize spatial variability. Specifically, a preliminary spatio-temporal study of management-induced changes to soil quality will be demonstrated for a drainage water reuse study site. The spatio-temporal study used electromagnetic induction ECa data and a response surface sampling design to select 40 sites that reflected the spatial variability of soil properties (i.e., salinity, Na levels, Mo, and B) impacting the intended agricultural use of a saline-sodic field in California's San Joaquin Valley. Soil samples were collected in August 1999 and April 2002. Data from 1999 indicate the presence of high salinity, which increased with depth, high sodium adsorption ratio (SAR), which also increased with depth, and moderate to high B and Mo, which showed no specific trends with depth. The application of drainage water for 32 months resulted in leaching of B from the top 0.3 m of soil, leaching of salinity from the top 0.6 m of soil, and leaching of Na and Mo from the top 1.2 m of soil. The leaching fraction over the time period from 1999-2002 was estimated to be 0.10. The level of salinity in the reused drainage water (i.e., 3-5 dS/m) allowed infiltration and leaching to occur even though high sodium and high expanding-lattice clay levels posed potential water flow problems. The leaching of salinity, Na, Mo, and B has resulted in increased forage yield and improved quality of those yields. Preliminary spatio-temporal analyses indicate at least short-term feasibility of drainage water reuse from the perspective of soil quality when the goal is forage production for grazing livestock. The
Hopkins, Carl
2011-05-01
In architectural acoustics, noise control and environmental noise, there are often steady-state signals for which it is necessary to measure the spatial-average sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that circular, helical, or cylindrical-type paths satisfy the benchmark requirement, with the latter two being highly efficient at generating large numbers of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations. PMID:21568406
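The notion of an equivalent number of uncorrelated samples can be sketched numerically. In a diffuse sound field the spatial correlation of pressure between two points a distance r apart is sin(kr)/(kr); one common variance-based estimate of the effective sample count is N² divided by the sum of squared pairwise correlations. The circle geometry, frequency, and this particular estimator are assumptions for illustration, not necessarily the paper's benchmark computation.

```python
from math import pi, sin, cos

def sinc(x):
    return 1.0 if x == 0.0 else sin(x) / x

def n_effective(points, k):
    # N_eff = N^2 / sum_ij rho_ij^2, with diffuse-field correlation sin(kr)/(kr)
    total = 0.0
    for x1, y1, z1 in points:
        for x2, y2, z2 in points:
            r = ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
            total += sinc(k * r) ** 2
    n = len(points)
    return n * n / total

# 24 sampling positions on a 0.5 m radius circle, f = 250 Hz, c = 343 m/s
k = 2 * pi * 250 / 343
circle = [(0.5 * cos(2 * pi * i / 24), 0.5 * sin(2 * pi * i / 24), 0.0)
          for i in range(24)]
n_eff = n_effective(circle, k)
```

Densely packed positions are highly correlated, so n_eff is far below the 24 nominal positions, which is exactly why path geometry matters.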
NASA Technical Reports Server (NTRS)
Strahler, A. H.; Woodcock, C. E.; Logan, T. L.
1983-01-01
A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.
Adaptation and Validation of the Sexual Assertiveness Scale (SAS) in a Sample of Male Drug Users.
Vallejo-Medina, Pablo; Sierra, Juan Carlos
2015-01-01
The aim of the present study was to adapt and validate the Sexual Assertiveness Scale (SAS) in a sample of male drug users. A sample of 326 male drug users and 322 non-clinical males was selected by cluster sampling and convenience sampling, respectively. Results showed that the scale had good psychometric properties and adequate internal consistency reliability (Initiation = .66, Refusal = .74 and STD-P = .79). An evaluation of the invariance showed strong factor equivalence between both samples. High and moderate Differential Item Functioning effects were found only in items 1 and 14 (ΔR² Nagelkerke = .076 and .037, respectively). We strongly recommend not using item 1 if the goal is to compare the scores of both groups; otherwise the comparison will be biased. Correlations obtained between the CSFQ-14 and the safe sex ratio and the SAS subscales were significant (CI = 95%) and indicated good concurrent validity. Scores of male drug users were similar to those of non-clinical males. Therefore, the adaptation of the SAS to drug users provides enough guarantees for reliable and valid use in both clinical practice and research, although care should be taken with item 1. PMID:25896498
Conductivity image enhancement in MREIT using adaptively weighted spatial averaging filter
2014-01-01
Background In magnetic resonance electrical impedance tomography (MREIT), we reconstruct conductivity images using magnetic flux density data induced by externally injected currents. Since we extract magnetic flux density data from acquired MR phase images, the amount of measurement noise increases in regions of weak MR signals. In local regions of MR signal void especially, the noise may be excessive enough to deteriorate the quality of reconstructed conductivity images. In this paper, we propose a new conductivity image enhancement method as a postprocessing technique to improve the image quality. Methods Within a magnetic flux density image, the amount of noise varies depending on the position-dependent MR signal intensity. Using the MR magnitude image, which is always available in MREIT, we estimate noise levels of measured magnetic flux density data in local regions. Based on the noise estimates, we adjust the window size and weights of a spatial averaging filter, which is applied to reconstructed conductivity images. Without relying on a partial differential equation, the new method is fast and can be easily implemented. Results Applying the novel conductivity image enhancement method to experimental data, we could improve the image quality to better distinguish local regions with different conductivity contrasts. From phantom experiments, the estimated conductivity values showed 80% less variation inside homogeneous regions of the objects. Reconstructed conductivity images from upper and lower abdominal regions of animals showed far fewer artifacts in local regions of weak MR signals. Conclusion We developed a fast and simple method to enhance the conductivity image quality by adaptively adjusting the weights and window size of the spatial averaging filter using MR magnitude images. Since the new method is implemented as a postprocessing step, we suggest adopting it with or without other preprocessing methods for application studies where conductivity
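A minimal sketch of the idea: the averaging window grows and neighbour weights shrink where a noise proxy derived from the MR magnitude image is high. The inverse-magnitude noise proxy, window sizes, and toy images are assumptions for illustration, not the authors' algorithm.

```python
def adaptive_average(img, mag, max_half=2):
    rows, cols = len(img), len(img[0])
    peak = max(max(row) for row in mag)
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            noise = 1.0 - mag[i][j] / peak         # crude noise proxy in [0, 1]
            h = 1 + round(noise * (max_half - 1))  # wider window where noisier
            wsum = vsum = 0.0
            for di in range(-h, h + 1):
                for dj in range(-h, h + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        w = mag[ii][jj]            # weight neighbours by magnitude
                        wsum += w
                        vsum += w * img[ii][jj]
            out[i][j] = vsum / wsum
    return out

# toy conductivity image and MR magnitude image (low magnitude = noisy middle row)
img = [[1.0, 5.0, 1.0],
       [1.0, 9.0, 1.0],
       [1.0, 5.0, 1.0]]
mag = [[0.9, 0.8, 0.9],
       [0.2, 0.1, 0.2],
       [0.9, 0.8, 0.9]]
out = adaptive_average(img, mag)
```

Because every output pixel is a convex combination of input pixels, the filter can only smooth, never overshoot, which is the property that makes it a safe postprocessing step.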
MODIS-VIIRS Continuity: The Impact of Spatial Sampling on Global Land (Level-2) Products
NASA Astrophysics Data System (ADS)
Pahlevan, N.; Devadiga, S.; Lin, G.; Wolfe, R. E.; Roman, M. O.; Xiong, X.
2014-12-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard Suomi-NPP (S-NPP) has been providing daily global observations of the Earth surface since early 2012. With the decade-long observations made by MODIS onboard Terra and Aqua, one of the goals of the S-NPP mission is to provide continuity in producing land products that have been generated using heritage MODIS observations. The Land Data Operational Products Evaluation (LDOPE) team uses MODIS-derived products to evaluate land products obtained from VIIRS top-of-atmosphere (TOA) measurements generated through the Land Product Evaluation and Analysis Tool Element (LPEATE). However, due to inherent differences in their observation methods and the corresponding algorithms and post-processing techniques, the products generated from MODIS and VIIRS retain some discrepancies. Amongst all the differences between the two wide-swath radiometers, this study aims at analyzing the impact of differences in the corresponding spatial sampling. In particular, the VIIRS unique sampling scheme can introduce relative biases when comparing products (or observations) obtained from the two sensors. We use Landsat-8's Operational Land Imager (Level-1T data) scenes acquired within a set of 10 x 10 degree, i.e., "Golden tiles" (used for evaluation purposes by LDOPE) to examine how the discrepancies in the spatial responses manifest in measured radiances on a daily basis (for 16 days). The (band-detector averaged) prelaunch Line Spread Functions (LSFs) were used to represent spatial responses for each sensor. Although the impact of differences in sensors' spatial responses depends heavily on the spatial heterogeneity of a region-of-interest, the initial results, on average, indicate up to 0.8% and 5% difference (at the swath level) in the TOA radiances and TOA-based NDVI, respectively. The disparity (calculated for three sample scenes collected over the Golden sites) differs for different days (orbital configurations) and for
Benner, Philipp; Elze, Tobias
2012-01-01
We present a predictive account on adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when there is only weak structural knowledge available such that parametric modeling is no option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269
Chu, Hone-Jay; Lin, Yu-Pin; Huang, Yu-Long; Wang, Yung-Chieh
2009-01-01
The objectives of the study are to integrate the conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis in remotely sensed images, to monitor the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial heterogeneity and variability. The multiple NDVI images demonstrate that spatial patterns of disturbed landscapes were successfully delineated by spatial analysis such as variograms, Moran's I and landscape metrics in the study area. The hybrid method delineates the spatial patterns and spatial variability of landscapes caused by these large disturbances. The cLHS approach is applied to select samples from Normalized Difference Vegetation Index (NDVI) images from SPOT HRV images in the Chenyulan watershed of Taiwan, and then SGS with sufficient samples is used to generate maps of NDVI images. Finally, the simulated NDVI maps are verified using indices such as the correlation coefficient and mean absolute error (MAE). Therefore, the statistics and spatial structures of multiple NDVI images present a very robust behavior, which advocates the use of the index for the quantification of the landscape spatial patterns and land cover change. In addition, the results transferred by Open Geospatial techniques can be accessed from web-based and end-user applications of the watershed management. PMID:22399972
Rijal, Jhalendra P; Wilson, Rob; Godfrey, Larry D
2016-02-01
Twospotted spider mite, Tetranychus urticae Koch, is an important pest of peppermint in California, USA. Spider mite feeding on peppermint leaves causes physiological changes in the plant, which, coupled with favorable environmental conditions, can lead to increased mite infestations. Significant yield loss can occur in the absence of pest monitoring and timely management. Understanding the within-field spatial distribution of T. urticae is critical for the development of a reliable sampling plan. The study reported here aims to characterize the spatial distribution of mite infestation in four commercial peppermint fields in northern California using spatial techniques, variograms and Spatial Analysis by Distance IndicEs (SADIE). Variogram analysis revealed strong evidence of a spatially dependent (aggregated) mite population in 13 of 17 sampling dates, and the physical distance of the aggregation reached a maximum of 7 m in peppermint fields. Using SADIE, 11 of 17 sampling dates showed an aggregated distribution pattern of mite infestation. Combining results from variogram and SADIE analysis, the spatial aggregation of T. urticae was evident in all four fields for all 17 sampling dates evaluated. Comparing spatial association using SADIE, ca. 62% of the total sampling pairs showed a positive association of mite spatial distribution patterns between two consecutive sampling dates, which indicates strong spatial and temporal stability of mite infestation in peppermint fields. These results are discussed in relation to the behavior of spider mite distribution within fields, and the implications for improving sampling guidelines that are essential for effective pest monitoring and management. PMID:26692381
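An empirical semivariogram of the kind used to detect the aggregation described above can be computed directly from quadrat coordinates and counts; semivariance that rises over the first few lags indicates spatial dependence. The transect layout and mite counts below are hypothetical.

```python
from math import hypot

def semivariogram(points, values, bin_width, nbins):
    # gamma(h) = average of 0.5*(z_i - z_j)^2 over pairs in each distance bin
    gamma, npairs = [0.0] * nbins, [0] * nbins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = hypot(points[i][0] - points[j][0], points[i][1] - points[j][1])
            b = int(d // bin_width)
            if b < nbins:
                gamma[b] += 0.5 * (values[i] - values[j]) ** 2
                npairs[b] += 1
    return [g / n if n else None for g, n in zip(gamma, npairs)]

# hypothetical mite counts along a transect: neighbouring quadrats are similar
pts = [(float(x), 0.0) for x in range(10)]
vals = [0, 1, 1, 2, 8, 9, 9, 10, 2, 1]
gam = semivariogram(pts, vals, bin_width=1.0, nbins=5)
```

Low semivariance at lag 1 relative to lag 4 is the signature of an aggregated, spatially dependent population.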
Johnson, R.L.
1993-11-01
Adaptive sampling programs offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the real-time data generated by an adaptive sampling program. This paper presents a two-pronged approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system (SitePlanner{trademark}) for data fusion, management, and display and combined Bayesian/geostatistical methods (PLUME) for contamination-extent estimation and sample location selection. This approach is applied in a retrospective study of a subsurface chromium plume at Sandia National Laboratories' chemical waste landfill. Retrospective analyses suggest the potential for characterization cost savings on the order of 60% through a reduction in the number of sampling programs, total number of soil boreholes, and number of samples analyzed from each borehole.
Longmire, M S; Milton, A F; Takken, E H
1982-11-01
Several 1-D signal processing techniques have been evaluated by simulation with a digital computer using high-spatial-resolution (0.15 mrad) noise data gathered from back-lit clouds and uniform sky with a scanning data collection system operating in the 4.0-4.8-μm spectral band. Two ordinary bandpass filters and a least-mean-square (LMS) spatial filter were evaluated in combination with a fixed or adaptive threshold algorithm. The combination of a 1-D LMS filter and a 1-D adaptive threshold sensor was shown to reject extreme cloud clutter effectively and to provide nearly equal signal detection in a clear and cluttered sky, at least in systems whose NEI (noise equivalent irradiance) exceeds 1.5 x 10^(-13) W/cm^2 and whose spatial resolution is better than 0.15 x 0.36 mrad. A summary gives highlights of the work, key numerical results, and conclusions. PMID:20396326
Adaptive Bessel-autocorrelation of ultrashort pulses with phase-only spatial light modulators
NASA Astrophysics Data System (ADS)
Huferath-von Luepke, Silke; Bock, Martin; Grunwald, Ruediger
2009-06-01
Recently, we proposed a new approach of a noncollinear correlation technique for ultrashort-pulsed coherent optical signals, referred to as the Bessel-autocorrelator (BAC). The BAC principle combines the advantages of Bessel-like nondiffracting beams, such as stable propagation, angular robustness and self-reconstruction, with the principle of temporal autocorrelation. In comparison to other phase-sensitive measuring techniques, autocorrelation is most straightforward and time-effective because of non-iterative data processing. The analysis of nonlinearly converted fringe patterns of pulsed Bessel-like beams reveals their temporal signature from details of fringe envelopes. By splitting the beams with axicon arrays into multiple sub-beams, transversal resolution is approximated. Here we report on adaptive implementations of BACs with improved phase resolution realized by phase-only liquid-crystal-on-silicon spatial light modulators (LCoS-SLMs). Programming microaxicon phase functions in gray-value maps enables flexible variation of phase and geometry. Experiments on the diagnostics of few-cycle pulses emitted by a mode-locked Ti:sapphire laser oscillator at wavelengths around 800 nm with 2D-BAC and angular-tuned BAC were performed. All-optical phase-shift BAC and fringe-free BAC approaches are discussed.
Evolution of cooperation in the spatial public goods game with adaptive reputation assortment
NASA Astrophysics Data System (ADS)
Chen, Mei-huan; Wang, Li; Sun, Shi-wen; Wang, Juan; Xia, Cheng-yi
2016-01-01
We present a new spatial public goods game model, which takes individual reputation and behavior diversity into account at the same time, to investigate the evolution of cooperation. Initially, each player x is endowed with an integer Rx between 1 and Rmax to characterize his reputation value, which is adaptively varied according to the strategy action at each time step. Then, the agents play the game and the system proceeds in accordance with a Fermi-like rule, in which a multiplicative factor (wy) denoting the individual difference in performing the strategy transfer is placed before the traditional Fermi probability. For influential participants, wy is set to 1.0, but it takes a smaller value w (0 < w < 1) for non-influential ones. Extensive simulations demonstrate that cooperation is strongly influenced by the reputation threshold (RC): the greater the threshold, the higher the fraction of cooperators. The promotion of cooperation is attributed to the fact that a larger reputation threshold yields higher heterogeneity in the fractions of the two types of players and in strategy-spreading capability. Our work is conducive to a better understanding of the emergence of cooperation within many real-world systems.
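The update rule described above, the traditional Fermi probability scaled by the multiplicative factor, can be written down directly; the noise parameter K and the payoffs below are arbitrary illustrative choices.

```python
from math import exp

def adopt_probability(payoff_x, payoff_y, w_y, K=0.5):
    # Fermi-like rule: x imitates neighbour y with the usual Fermi probability
    # scaled by the multiplicative factor w_y (w_y = 1 for influential players,
    # 0 < w < 1 for non-influential ones)
    return w_y / (1.0 + exp((payoff_x - payoff_y) / K))

p_influential = adopt_probability(1.0, 3.0, 1.0)
p_ordinary = adopt_probability(1.0, 3.0, 0.5)
```

A richer neighbour is imitated with high probability, and halving w_y exactly halves that probability, which is how the factor suppresses the spreading capability of non-influential players.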
Fine-granularity and spatially-adaptive regularization for projection-based image deblurring.
Li, Xin
2011-04-01
This paper studies two classes of regularization strategies to achieve an improved tradeoff between image recovery and noise suppression in projection-based image deblurring. The first is based on a simple fact that r-times Landweber iteration leads to a fixed level of regularization, which allows us to achieve fine-granularity control of projection-based iterative deblurring by varying the value r. The regularization behavior is explained by using the theory of Lagrangian multiplier for variational schemes. The second class of regularization strategy is based on the observation that various regularized filters can be viewed as nonexpansive mappings in the metric space. A deeper understanding about different regularization filters can be gained by probing into their asymptotic behavior--the fixed point of nonexpansive mappings. By making an analogy to the states of matter in statistical physics, we can observe that different image structures (smooth regions, regular edges and textures) correspond to different fixed points of nonexpansive mappings when the temperature (regularization) parameter varies. Such an analogy motivates us to propose a deterministic annealing based approach toward spatial adaptation in projection-based image deblurring. Significant performance improvements over the current state-of-the-art schemes have been observed in our experiments, which substantiates the effectiveness of the proposed regularization strategies. PMID:20876018
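The fixed-regularization property of r-times Landweber iteration can be illustrated on a toy 1-D blur: each iterate takes a gradient step on the least-squares objective, and stopping after exactly r steps fixes the level of regularization. The 3-tap blur matrix, step size, and test signal are illustrative assumptions, not the paper's experimental setup.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def landweber(A, b, r, tau=0.4):
    # x_{k+1} = x_k + tau * A^T (b - A x_k), stopped after exactly r steps
    At = [list(col) for col in zip(*A)]
    x = [0.0] * len(A[0])
    for _ in range(r):
        resid = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        x = [xi + tau * gi for xi, gi in zip(x, matvec(At, resid))]
    return x

def sq_err(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

# hypothetical 3-tap moving-average blur of a length-5 signal
A = [[1.0 / 3.0 if abs(i - j) <= 1 else 0.0 for j in range(5)] for i in range(5)]
x_true = [0.0, 0.0, 1.0, 0.0, 0.0]
b = matvec(A, x_true)

x10, x200 = landweber(A, b, 10), landweber(A, b, 200)
```

On this noise-free example, more iterations mean less regularization and a sharper reconstruction; with noisy data the same knob trades recovery against noise amplification.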
Adaptive spatial compounding for improving ultrasound images of the epidural space on human subjects
NASA Astrophysics Data System (ADS)
Tran, Denis; Hor, King-Wei; Kamani, Allaudin; Lessoway, Vickie; Rohling, Robert N.
2008-03-01
Administering epidural anesthesia can be a difficult procedure, especially for inexperienced physicians. The use of ultrasound imaging can help by showing the location of the key surrounding structures: the ligamentum flavum and the lamina of the vertebrae. The anatomical depiction of the interface between ligamentum flavum and epidural space is currently limited by speckle and anisotropic reflection. Previous work on phantoms showed that adaptive spatial compounding with non-rigid registration can improve the depiction of these features. This paper describes the development of an updated compounding algorithm and results from a clinical study. Average-based compounding may obscure anisotropic reflectors that only appear at certain beam angles, so a new median-based compounding technique is developed. In order to reduce the computational cost of the registration process, a linear prediction algorithm is used to reduce the search space for registration. The algorithms are tested on 20 human subjects. Comparisons are made among the reference image plus combinations of different compounding methods, warping and linear prediction. The gradient of the bone surfaces, the Laplacian of the ligamentum flavum, and the SNR and CNR are used to quantitatively assess the visibility of the features in the processed images. The results show a significant improvement in quality when median-based compounding with warping is used to align the set of beam-steered images and combine them. The improvement of the features makes detection of the epidural space easier.
Advantages and limitations of the spatially adaptive program SAPRO in clinical perimetry.
Fankhauser, F; Funkhouser, A; Kwasniewska, S
1986-05-01
The SAPRO program, devised for the OCTOPUS 201 automated perimeter, consists of a number of program components and runs on the perimeter's computer. In its measurement mode, it employs an algorithm which achieves high speed and efficiency. This is made possible by a threshold bracketing strategy which is simpler than the normal OCTOPUS bracketing. Moreover, three grids with test location distributions of increasing resolution are superimposed in succession on the whole or on part of the visual field to be analyzed. Out of the distribution of test locations, only those which fulfill a number of criteria are actually utilized. These criteria must be given and are adaptable to any given clinical problem. As a result, despite the high spatial resolution achieved, only a fraction of the test locations are utilized using SAPRO as compared with a program using a fixed pattern of test locations. The algorithm is thus able to imitate human intelligence, which tends to concentrate stimuli at places which appear to be relevant for the solution of a problem. The results of program SAPRO are disturbed by short- and long-term fluctuations. Their validity is limited, in a manner similar to that encountered in any other threshold determination procedure. A number of printout modes are available which are oriented towards an optimal understanding of the information contained in various examinations. These principles are illustrated by one case of inactive disseminated chorioretinitis. PMID:3755124
Bujewski, G.E.; Johnson, R.L.
1996-04-01
Adaptive sampling programs provide real opportunities to save considerable time and money when characterizing hazardous waste sites. This Strategic Environmental Research and Development Program (SERDP) project demonstrated two decision-support technologies, SitePlanner{trademark} and Plume{trademark}, that can facilitate the design and deployment of an adaptive sampling program. A demonstration took place at Joliet Army Ammunition Plant (JAAP), and was unique in that it was tightly coupled with ongoing Army characterization work at the facility, with close scrutiny by both state and federal regulators. The demonstration was conducted in partnership with the Army Environmental Center's (AEC) Installation Restoration Program and AEC's Technology Development Program. AEC supported researchers from Tufts University who demonstrated innovative field analytical techniques for the analysis of TNT and DNT. SitePlanner{trademark} is an object-oriented database specifically designed for site characterization that provides an effective way to compile, integrate, manage and display site characterization data as it is being generated. Plume{trademark} uses a combination of Bayesian analysis and geostatistics to provide technical staff with the ability to quantitatively merge soft and hard information for an estimate of the extent of contamination. Plume{trademark} provides an estimate of contamination extent, measures the uncertainty associated with the estimate, determines the value of additional sampling, and locates additional samples so that their value is maximized.
Determination of the optimum sampling frequency of noisy images by spatial statistics
Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio
2005-06-01
In optical metrology the final experimental result is normally an image acquired with a CCD camera. Owing to the sampling of the image, interpolation is usually required. To determine the error in the parameters measured from that image, knowledge of the uncertainty of the interpolation is essential. We analyze how kriging, an estimator used in spatial statistics, can generate convolution kernels for filtering noise in regularly sampled images. The convolution kernel obtained with kriging explicitly depends on the spatial correlation and also on metrological conditions, such as the random fluctuations of the measured quantity and the resolution of the measuring devices. Kriging, in addition, allows us to determine the uncertainty of the interpolation, and we have analyzed it in terms of the sampling frequency and the random fluctuations of the image, comparing it with the Nyquist criterion. By use of kriging, it is possible to determine the optimum sampling frequency required for a noisy image so that the uncertainty of the interpolation is below a threshold value.
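The kriging weights that act as such a convolution kernel come from the ordinary kriging system; a sketch for a regularly sampled 1-D signal follows, where the exponential covariance model and its parameters are assumptions for illustration, not the paper's.

```python
from math import exp

def solve(A_in, b_in):
    # Gaussian elimination with partial pivoting on a small dense system
    n = len(A_in)
    A = [row[:] + [bv] for row, bv in zip(A_in, b_in)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def kriging_weights(xs, x0, cov):
    # ordinary kriging system: [C 1; 1^T 0] [w; mu] = [c0; 1]
    n = len(xs)
    A = [[cov(abs(xs[i] - xs[j])) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(abs(x - x0)) for x in xs] + [1.0]
    return solve(A, b)[:n]            # drop the Lagrange multiplier

cov = lambda h: exp(-h / 2.0)         # exponential covariance, range parameter 2
xs = [0.0, 1.0, 2.0, 3.0, 4.0]        # regular sampling positions
w = kriging_weights(xs, 2.5, cov)     # kernel weights for interpolating at 2.5
```

The weights sum to one by construction (the unbiasedness constraint), and for a regular grid the same kernel applies at every interior point, which is what makes kriging usable as a convolution filter.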
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines success of model building. This is especially important for geostatistical mapping where an initial sampling plan should provide a good representation or coverage of both geographical (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how to quantify these 'representation' problems and how to incorporate this knowledge into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) and which simultaneously determines (effective) inclusion probabilities as an average between the kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with the accessibility analysis (cost of field survey are usually function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation
Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.
Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J
2014-06-01
We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency modulated continuous wave W-band RADAR. Using a single detector, a frequency stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determined successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as 1/4 the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made. PMID:24921545
Spatial pattern and sequential sampling of squash bug (Heteroptera: Coreidae) adults in watermelon.
Dogramaci, Mahmut; Shrefler, James W; Giles, Kristopher; Edelson, J V
2006-04-01
Spatial distribution patterns of adult squash bugs were determined in watermelon, Citrullus lanatus (Thunberg) Matsumura and Nakai, during 2001 and 2002. Results of analysis using Taylor's power law regression model indicated that squash bugs were aggregated in watermelon. Taylor's power law provided a good fit with r2 = 0.94. A fixed precision sequential sampling plan was developed for estimating adult squash bug density at fixed precision levels in watermelon. The plan was tested using a resampling simulation method on nine and 13 independent data sets ranging in density from 0.15 to 2.52 adult squash bugs per plant. Average estimated means obtained in 100 repeated simulation runs were within the 95% CI of the true means for all the data. Average estimated levels of precision were similar to the desired level of precision, particularly when the sampling plan was tested on data having an average mean density of 1.19 adult squash bugs per plant. Also, a sequential sampling for classifying adult squash bug density as below or above economic threshold was developed to assist in the decision-making process. The classification sampling plan is advantageous in that it requires smaller sample sizes to estimate the population status when the population density differs greatly from the action threshold. However, the plan may require excessively large sample sizes when the density is close to the threshold. Therefore, an integrated sequential sampling plan was developed using a combination of a fixed precision and classification sequential sampling plans. The integration of sampling plans can help reduce sampling requirements. PMID:16686160
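The two ingredients of the fixed-precision plan, a Taylor's power law fit and the resulting sample-size requirement, can be sketched as follows. Green's formula n = a·m^(b-2)/D² for fixed precision D (standard error over mean) is a standard choice in this literature, though the abstract does not name the exact stop rule; the mean/variance pairs below are hypothetical, not the study's data.

```python
from math import exp, log

def fit_taylor(means, variances):
    # least-squares fit of log s^2 = log a + b log m
    xs = [log(m) for m in means]
    ys = [log(v) for v in variances]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return exp(my - b * mx), b        # (a, b)

def green_sample_size(a, b, mean, D=0.25):
    # Green's fixed-precision sample size, D = SE / mean
    return a * mean ** (b - 2) / D ** 2

# hypothetical mean/variance pairs from successive sampling dates
means = [0.2, 0.5, 1.0, 2.0]
variances = [0.3, 0.9, 2.1, 5.0]
a, b = fit_taylor(means, variances)
```

A slope b > 1 indicates aggregation, and because b < 2 here, the required sample size shrinks as density rises, matching the abstract's observation that sparse populations demand the most sampling effort.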
Dorazio, R.M.; Jelks, H.L.; Jordan, F.
2005-01-01
A statistical modeling framework is described for estimating the abundances of spatially distinct subpopulations of animals surveyed using removal sampling. To illustrate this framework, hierarchical models are developed using the Poisson and negative-binomial distributions to model variation in abundance among subpopulations and using the beta distribution to model variation in capture probabilities. These models are fitted to the removal counts observed in a survey of a federally endangered fish species. The resulting estimates of abundance have similar or better precision than those computed using the conventional approach of analyzing the removal counts of each subpopulation separately. Extension of the hierarchical models to include spatial covariates of abundance is straightforward and may be used to identify important features of an animal's habitat or to predict the abundance of animals at unsampled locations.
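For a single subpopulation, the classic closed-form two-pass removal estimator gives a feel for what the hierarchical models generalize: with equal capture probability p on both passes, the expected catches satisfy E[c1] = Np and E[c2] = Np(1-p), which can be solved for N and p. The counts below are hypothetical; the paper's Bayesian hierarchical models share information across subpopulations rather than estimating each separately.

```python
def two_pass_removal(c1, c2):
    # Seber's two-pass removal estimator: equal capture probability p on both
    # passes gives E[c1] = N*p and E[c2] = N*p*(1-p), hence the closed form below
    if c1 <= c2:
        raise ValueError("second-pass catch must be smaller than the first")
    n_hat = c1 * c1 / (c1 - c2)
    p_hat = 1.0 - c2 / c1
    return n_hat, p_hat

n_hat, p_hat = two_pass_removal(60, 20)   # hypothetical removal counts
```

Catching 60 then 20 fish implies roughly two thirds of the animals present are captured on each pass, giving an abundance estimate of 90.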
Insights into a spatially embedded social network from a large-scale snowball sample
NASA Astrophysics Data System (ADS)
Illenberger, J.; Kowald, M.; Axhausen, K. W.; Nagel, K.
2011-12-01
Much research has been conducted to obtain insights into the basic laws governing human travel behaviour. While the traditional travel survey has long been the main source of travel data, recent approaches that use GPS data, mobile phone data, or the circulation of bank notes as a proxy for human travel behaviour are promising. The present study proposes a further source of such proxy data: the social network. We collect data using an innovative snowball sampling technique to obtain details on the structure of a leisure-contacts network. We analyse the network with respect to its topology, the individuals' characteristics, and its spatial structure. We further show that multiplying the functions describing the spatial distribution of leisure contacts and the frequency of physical contacts results in a trip distribution that is consistent with data from the Swiss travel survey.
Will a perfect model agree with perfect observations? The impact of spatial sampling
NASA Astrophysics Data System (ADS)
Schutgens, Nick A. J.; Gryspeerdt, Edward; Weigum, Natalie; Tsyro, Svetlana; Goto, Daisuke; Schulz, Michael; Stier, Philip
2016-05-01
The spatial resolution of global climate models with interactive aerosol and the observations used to evaluate them is very different. Current models use grid spacings of ~200 km, while satellite observations of aerosol use so-called pixels of ~10 km. Ground site or airborne observations relate to even smaller spatial scales. We study the errors incurred due to different resolutions by aggregating high-resolution simulations (10 km grid spacing) over either the large areas of global model grid boxes ("perfect" model data) or small areas corresponding to the pixels of satellite measurements or the field of view of ground sites ("perfect" observations). Our analysis suggests that instantaneous root-mean-square (RMS) differences of perfect observations from perfect global models can easily amount to 30-160 %, for a range of observables like AOT (aerosol optical thickness), extinction, black carbon mass concentrations, PM2.5, number densities and CCN (cloud condensation nuclei). These differences, due entirely to different spatial sampling of models and observations, are often larger than measurement errors in real observations. Temporal averaging over a month of data reduces these differences more strongly for some observables (e.g. a threefold reduction for AOT) than for others (e.g. a twofold reduction for surface black carbon concentrations), but significant RMS differences remain (10-75 %). Note that this study ignores the issue of temporal sampling of real observations, which is likely to affect our present monthly error estimates. We examine several other strategies (e.g. spatial aggregation of observations, interpolation of model data) for reducing these differences and show their effectiveness. Finally, we examine consequences for the use of flight campaign data in global model evaluation and show that significant biases may be introduced depending on the flight strategy used.
HydroCrowd: Citizen-empowered snapshot sampling to assess the spatial distribution of stream solutes
NASA Astrophysics Data System (ADS)
Kraft, Philipp; Breuer, Lutz; Bach, Martin; Aubert, Alice H.; Frede, Hans-Georg
2016-04-01
Large parts of the groundwater bodies in Central Europe show elevated nitrate concentrations. While groundwater samples characterize water quality over a longer period, surface water resources, in particular streams, may be subject to fast concentration fluctuations, so measurements distributed in time cannot be compared. Sampling should therefore be done within a short time frame (snapshot sampling). To describe the nitrogen status of streams in Germany, we organized a crowdsourcing experiment in the form of a snapshot sampling on a single day. We selected a national holiday in fall 2013 (Oct 3rd) to ensure that a) volunteers had time to take a sample, b) stream water was unlikely to be influenced by recent agricultural fertilizer application, and c) low-flow conditions were likely. We distributed 570 cleaned sample flasks to volunteers and received 280 filled flasks back, with coordinates and other metadata about the sampled stream. The volunteers were asked to visit any stream outside of settlements and fill the flask with water from that stream. The samples were analyzed in our lab for concentrations of nitrate, ammonium, and dissolved organic nitrogen (DON); results are presented as a map on the website http://www.uni-giessen.de/hydrocrowd. The measured results are related to catchment features such as population density, soil properties, and land use derived from national geodata sources. The statistical analyses revealed a significant correlation between nitrate and the fraction of arable land (0.46), as well as soil humus content (0.37), but a weak correlation with population density (0.12). DON correlations were weak but significant with humus content (0.14) and arable land (0.13). The mean contribution of DON to total dissolved nitrogen was 22%. Crowdsourcing turned out to be a useful method to assess the spatial distribution of stream solutes, as a considerable number of samples were collected with comparatively little effort on a single day.
An Adaptive Sampling System for Sensor Nodes in Body Area Networks.
Rieger, R; Taylor, J
2014-04-23
The importance of body sensor networks to monitor patients over a prolonged period of time has increased with advances in home healthcare applications. Sensor nodes need to operate with very low power consumption and under the constraint of limited memory capacity. It is therefore wasteful to digitize the sensor signal at a constant sample rate, given that the frequency content of the signal varies with time. Adaptive sampling is established as a practical method to reduce the sample data volume. In this paper, a low-power analog system is proposed which adjusts the converter clock rate to perform a peak-picking algorithm on the second derivative of the input signal. The presented implementation does not require an analog-to-digital converter or a digital processor in the sample selection process. The criteria for selecting a suitable detection threshold are discussed, so that the maximum sampling error can be limited. A circuit-level implementation is presented. Measured results exhibit a significant reduction in the average sample frequency and data rate of over 50% and 38%, respectively. PMID:24760918
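The selection rule itself is easy to illustrate digitally. The toy below keeps only the samples whose discrete second derivative exceeds a threshold; note that the paper's system realizes this in analog hardware without an ADC, so this is a conceptual model only, and the signal and threshold are arbitrary:

```python
import numpy as np

# Digital toy of the peak-picking selection rule: retain samples where
# the magnitude of the discrete second derivative exceeds a threshold.
# Signal and threshold below are arbitrary illustrations.
def adaptive_select(signal, threshold):
    d2 = np.diff(signal, n=2)                  # discrete second derivative
    return np.flatnonzero(np.abs(d2) > threshold) + 1  # center-aligned indices

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2.0 * np.pi * 3.0 * t)              # slowly varying test signal
idx = adaptive_select(x, threshold=0.002)      # indices of retained samples
```

Samples near the signal's flat zero crossings fall below the threshold and are dropped, so the retained set is smaller than the uniform-rate original.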
Nie Xiaobo; Liang Jian; Yan Di
2012-12-15
Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days, depending on whether the organ variation behaves as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head and neck (H&N) cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during H&N cancer radiotherapy can be represented using the first 3-4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
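The PCA half of the scheme can be sketched compactly. The snippet below draws random organ samples by perturbing a mean shape along leading eigenvectors; the dimensions, the synthetic "daily images", and the Gaussian coefficient model are all illustrative assumptions, not the paper's method in detail:

```python
import numpy as np

# Minimal sketch in the spirit of an organ sample generator: PCA on
# daily organ-shape vectors, then random samples drawn by perturbing
# the mean along the leading eigenvectors. Data, dimensions, and the
# Gaussian coefficient model are assumptions for illustration.
rng = np.random.default_rng(0)
daily = rng.normal(size=(30, 50))              # 30 days x 50 surface points
mean_shape = daily.mean(axis=0)
_, s, vt = np.linalg.svd(daily - mean_shape, full_matrices=False)
k = 3                                          # leading eigenvectors retained
coeff_std = s[:k] / np.sqrt(len(daily) - 1)    # per-mode coefficient spread

def sample_organ():
    c = rng.normal(scale=coeff_std)            # random mode coefficients
    return mean_shape + c @ vt[:k]

organ = sample_organ()                         # one random organ sample
```

The time-varying LSR step of the paper, which lets the coefficients drift with elapsed treatment days, is omitted here for brevity.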
NASA Astrophysics Data System (ADS)
Zhang, Xiaofeng; Badea, Cristian T.; Hood, Greg; Wetzel, Arthur W.; Stiles, Joel R.; Johnson, G. Allan
2010-02-01
Image reconstruction is one of the main challenges for fluorescence tomography. For in vivo experiments on small animals, in particular, the inhomogeneous optical properties and irregular surface of the animal make free-space image reconstruction challenging because of the difficulties in accurately modeling the forward problem and the finite dynamic range of the photodetector. These two factors are fundamentally limited by the currently available forward models and photonic technologies. Nonetheless, both limitations can be significantly eased using a signal processing approach. We have recently constructed a free-space panoramic fluorescence diffuse optical tomography system to take advantage of co-registered microCT data acquired from the same animal. In this article, we present a data processing strategy that adaptively selects the optical sampling points in the raw 2-D fluorescent CCD images. Specifically, the general sampling area and sampling density are initially specified to create a set of potential sampling points sufficient to cover the region of interest. Based on 3-D anatomical information from the microCT and the fluorescent CCD images, data points are excluded from the set when they are located in an area where either the forward model is known to be problematic (e.g., large wrinkles on the skin) or where the signal is unreliable (e.g., saturated or low signal-to-noise ratio). Parallel Monte Carlo software was implemented to compute the sensitivity function for image reconstruction. Animal experiments were conducted on a mouse cadaver with an artificial fluorescent inclusion. Compared to our previous results using a finite element method, the newly developed parallel Monte Carlo software and the adaptive sampling strategy produced favorable reconstruction results.
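The exclusion step described above can be sketched as a simple masking operation. In this toy version (synthetic image, arbitrary thresholds, no microCT geometry), candidate points on a uniform grid are dropped where the signal is saturated or below a noise floor:

```python
import numpy as np

# Sketch of the adaptive exclusion step: start from a uniform grid of
# candidate sampling points on a CCD image and drop points that are
# saturated or below a signal floor. Image and thresholds are synthetic
# stand-ins; the real system also uses co-registered microCT anatomy.
rng = np.random.default_rng(1)
image = rng.uniform(0.0, 4095.0, size=(64, 64))   # 12-bit counts
noise_floor, saturation = 200.0, 4000.0

ys, xs = np.mgrid[0:64:4, 0:64:4]                 # initial uniform grid
candidates = np.column_stack([ys.ravel(), xs.ravel()])
vals = image[candidates[:, 0], candidates[:, 1]]
good = candidates[(vals > noise_floor) & (vals < saturation)]
```

Only the surviving points would then feed the Monte Carlo sensitivity computation for reconstruction.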
Li, Hongdong; Liang, Yizeng; Xu, Qingsong; Cao, Dongsheng
2009-08-19
By employing the simple but effective principle 'survival of the fittest' on which Darwin's Evolution Theory is based, a novel strategy for selecting an optimal combination of key wavelengths of multi-component spectral data, named competitive adaptive reweighted sampling (CARS), is developed. Key wavelengths are defined as the wavelengths with large absolute coefficients in a multivariate linear regression model, such as partial least squares (PLS). In the present work, the absolute values of regression coefficients of PLS model are used as an index for evaluating the importance of each wavelength. Then, based on the importance level of each wavelength, CARS sequentially selects N subsets of wavelengths from N Monte Carlo (MC) sampling runs in an iterative and competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is first randomly selected to establish a calibration model. Next, based on the regression coefficients, a two-step procedure including exponentially decreasing function (EDF) based enforced wavelength selection and adaptive reweighted sampling (ARS) based competitive wavelength selection is adopted to select the key wavelengths. Finally, cross validation (CV) is applied to choose the subset with the lowest root mean square error of CV (RMSECV). The performance of the proposed procedure is evaluated using one simulated dataset together with one near infrared dataset of two properties. The results reveal an outstanding characteristic of CARS that it can usually locate an optimal combination of some key wavelengths which are interpretable to the chemical property of interest. Additionally, our study shows that better prediction is obtained by CARS when compared to full spectrum PLS modeling, Monte Carlo uninformative variable elimination (MC-UVE) and moving window partial least squares regression (MWPLSR). PMID:19616692
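The competitive loop can be illustrated schematically. In the sketch below, ordinary least squares stands in for PLS, and the dataset, decay constant, and run count are illustrative; it shows only the EDF shrinkage plus largest-|coefficient| competition, not the full CARS procedure with cross-validated subset selection:

```python
import numpy as np

# Schematic CARS-style loop: OLS stands in for PLS; wavelengths with
# the largest absolute coefficients survive each Monte Carlo run, and
# an exponentially decreasing function (EDF) shrinks the survivor count.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 40))                  # 60 samples x 40 wavelengths
beta_true = np.zeros(40)
beta_true[[3, 17, 29]] = [2.0, -1.5, 1.0]      # three informative wavelengths
y = X @ beta_true + 0.1 * rng.normal(size=60)

n_runs, p = 25, X.shape[1]
keep = np.arange(p)
ratio = (p / 2.0) ** (1.0 / (n_runs - 1))      # assumed EDF decay constant
for i in range(1, n_runs + 1):
    # each run calibrates on a random 80% of the samples
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
    coef, *_ = np.linalg.lstsq(X[idx][:, keep], y[idx], rcond=None)
    # EDF shrinkage + competition: largest |coefficient| wavelengths survive
    n_keep = max(2, int(round(p * ratio ** (-i))))
    keep = keep[np.argsort(np.abs(coef))[::-1][:n_keep]]

selected = sorted(int(j) for j in keep)        # surviving wavelength indices
```

Real CARS additionally applies adaptive reweighted sampling and picks the subset with the lowest cross-validation error across runs.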
NASA Astrophysics Data System (ADS)
Shirai, Tomohiro; Takeno, Kohei; Arimoto, Hidenobu; Furukawa, Hiromitsu
2009-07-01
An adaptive optics system built around a new liquid-crystal-on-silicon (LCOS) spatial light modulator (SLM), and its behavior in in vivo imaging of the human retina, are described. We confirmed experimentally that closed-loop correction of the ocular aberrations of the subject's eye was achieved at a rate of 16.7 Hz in our system, yielding a clear retinal image in real time. The result suggests that the LCOS SLM is a promising candidate for the wavefront corrector in a prospective commercial ophthalmic instrument with adaptive optics.
Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation
Cheng, Jun; Yang, Wei; Huang, Meiyan; Huang, Wei; Jiang, Jun; Zhou, Yujia; Yang, Ru; Zhao, Jie; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan
2016-01-01
Content-based image retrieval (CBIR) techniques have gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a respective single vector representation and concatenate these per-subregion vector representations to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against some related state-of-the-art methods on the same dataset. PMID:27273091
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Sherrill, C. David
2014-07-01
We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multimodal search problems.
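The core idea, building each chain's proposal from the difference of two other chains' states, can be shown in a toy sampler. The target below (a standard 2-D Gaussian), the chain count, and all tuning values are illustrative; this is a plain differential-evolution Metropolis step, not the full DREAM algorithm with subspace sampling and outlier handling:

```python
import numpy as np

# Toy differential-evolution Metropolis sampler in the spirit of DREAM:
# each chain proposes a jump along the difference of two other chains'
# states, so the proposal scale/orientation adapts to the target.
rng = np.random.default_rng(3)

def log_post(x):
    return -0.5 * float(np.sum(x ** 2))        # standard 2-D normal target

n_chains, d, n_iter = 8, 2, 3000
chains = rng.normal(size=(n_chains, d))
gamma = 2.38 / np.sqrt(2 * d)                  # classic DE-MC jump scale
history = []
for _ in range(n_iter):
    for i in range(n_chains):
        a, b = rng.choice([j for j in range(n_chains) if j != i],
                          size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
               + 1e-6 * rng.normal(size=d)     # small noise for ergodicity
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    history.append(chains.copy())
draws = np.concatenate(history[n_iter // 2:])  # discard burn-in half
```

Because the difference vectors inherit the posterior's spread, no hand-tuned covariance is needed, which is the key efficiency gain the abstract describes.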
Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.
Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P
2014-01-01
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. PMID:23614628
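The type II error trade-off has a simple closed form that a short simulation can confirm. The hot-spot fraction and sample count below are illustrative, not the garden's measured values:

```python
import numpy as np

# Probability of missing a hot spot entirely (a type II error) with n
# random grab samples, when a fraction f of the site exceeds the 400 ppm
# screening level. f and n are illustrative, not the study's values.
def miss_probability(hot_fraction, n_samples):
    return (1.0 - hot_fraction) ** n_samples

rng = np.random.default_rng(4)
f, n, trials = 0.1, 10, 20000
hits = rng.random((trials, n)) < f             # True = sample hit the hot spot
empirical_miss = float(np.mean(~hits.any(axis=1)))
analytic_miss = miss_probability(f, n)         # (1-f)**n
```

The simulated miss rate converges to the closed form, showing concretely why adding individual samples shrinks the type II error while compositing does not.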
Muramoto, Shin; Forbes, Thomas P; van Asten, Arian C; Gillen, Greg
2015-01-01
A novel test sample for the spatially resolved quantification of illicit drugs on the surface of a fingerprint using time-of-flight secondary ion mass spectrometry (ToF-SIMS) and desorption electrospray ionization mass spectrometry (DESI-MS) was demonstrated. Calibration curves relating the signal intensity to the amount of drug deposited on the surface were generated from inkjet-printed arrays of cocaine, methamphetamine, and heroin with a deposited-mass ranging nominally from 10 pg to 50 ng per spot. These curves were used to construct concentration maps that visualized the spatial distribution of the drugs on top of a fingerprint, as well as being able to quantify the amount of drugs in a given area within the map. For the drugs on the fingerprint on silicon, ToF-SIMS showed great success, as it was able to generate concentration maps of all three drugs. On the fingerprint on paper, only the concentration map of cocaine could be constructed using ToF-SIMS and DESI-MS, as the signals of methamphetamine and heroin were completely suppressed by matrix and substrate effects. Spatially resolved quantification of illicit drugs using imaging mass spectrometry is possible, but the choice of substrates could significantly affect the results. PMID:25915085
Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.
2016-01-01
The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
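The replicate-and-prune bookkeeping at the heart of WE can be reduced to a one-bin sketch. This greatly simplified version merges walkers to a target count with equal weights; real WE preserves individual trajectory histories and uses a more careful split/merge rule, so treat this only as an illustration of weight conservation:

```python
import numpy as np

# Greatly simplified weighted-ensemble bookkeeping for a single bin:
# walkers are resampled to a target count while the bin's total
# probability weight is conserved. Equal re-weighting is an assumed
# simplification of the real WE split/merge scheme.
def resample_bin(weights, target):
    total = float(np.sum(weights))
    return np.full(target, total / target)

w = np.array([0.5, 0.3, 0.1, 0.05, 0.05])      # five walkers in one bin
new_w = resample_bin(w, target=3)              # merge down to three walkers
```

Conserving total weight per bin is what keeps WE's estimates of non-equilibrium observables unbiased while computational effort is steered toward rare regions.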
Kendall, William L.; White, Gary C.
2009-01-01
1. Assessing the probability that a given site is occupied by a species of interest is important to resource managers, as well as metapopulation or landscape ecologists. Managers require accurate estimates of the state of the system, in order to make informed decisions. Models that yield estimates of occupancy, while accounting for imperfect detection, have proven useful by removing a potentially important source of bias. To account for detection probability, multiple independent searches per site for the species are required, under the assumption that the species is available for detection during each search of an occupied site. 2. We demonstrate that when multiple samples per site are defined by searching different locations within a site, absence of the species from a subset of these spatial subunits induces estimation bias when locations are exhaustively assessed or sampled without replacement. 3. We further demonstrate that this bias can be removed by choosing sampling locations with replacement, or if the species is highly mobile over a short period of time. 4. Resampling an existing data set does not mitigate bias due to exhaustive assessment of locations or sampling without replacement. 5. Synthesis and applications. Selecting sampling locations for presence/absence surveys with replacement is practical in most cases. Such an adjustment to field methods will prevent one source of bias, and therefore produce more robust statistical inferences about species occupancy. This will in turn permit managers to make resource decisions based on better knowledge of the state of the system.
A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification
Li, Weixuan; Zhang, Dongxiao; Lin, Guang
2015-02-25
A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: (1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could take a complex form, such as multimodal, which violates the Gaussian assumption required by most commonly used data assimilation approaches; (2) a typical sampling method requires intensive model evaluations and hence may incur unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. Multimodal posterior of the history matching
Severtson, Dustin; Flower, Ken; Nansen, Christian
2016-08-01
The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact. PMID:27371709
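A binomial sequential classification plan of this kind is typically built on Wald's sequential probability ratio test. The sketch below computes the stop boundaries for the cumulative count of infested plants; the bracketing proportions around a 0.2 action threshold and the error rates are illustrative, not the paper's fitted values:

```python
import math

# Wald SPRT boundaries for classifying the proportion of infested plants
# against an action threshold. p0/p1 bracket an assumed 0.2 threshold;
# alpha/beta are illustrative error rates.
def sprt_boundaries(n, p0=0.15, p1=0.25, alpha=0.1, beta=0.1):
    """(lower, upper) cumulative infested-plant counts after n plants."""
    denom = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    slope = math.log((1 - p0) / (1 - p1)) / denom
    h_up = math.log((1 - beta) / alpha) / denom
    h_lo = math.log(beta / (1 - alpha)) / denom
    return slope * n + h_lo, slope * n + h_up

lo, hi = sprt_boundaries(20)
# sampling continues while the cumulative infested count stays in (lo, hi);
# crossing hi triggers "treat", crossing lo triggers "do not treat"
```

Starting the walk at the field edge, as the paper proposes, simply changes where these counts are accumulated, not the decision boundaries themselves.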
Adaptive sampling in two-phase designs: a biomarker study for progression in arthritis
McIsaac, Michael A; Cook, Richard J
2015-01-01
Response-dependent two-phase designs are used increasingly often in epidemiological studies to ensure sampling strategies offer good statistical efficiency while working within resource constraints. Optimal response-dependent two-phase designs are difficult to implement, however, as they require specification of unknown parameters. We propose adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations. The frequency properties of estimators arising from this design are assessed through simulation, and they are shown to be similar to those from optimal designs. The design procedure is then illustrated through application to a motivating biomarker study in an ongoing rheumatology research program. Copyright © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25951124
Othmer, Hans G.; Xin, Xiangrong; Xue, Chuan
2013-01-01
The machinery for transduction of chemotactic stimuli in the bacterium E. coli is one of the most completely characterized signal transduction systems, and because of its relative simplicity, quantitative analysis of this system is possible. Here we discuss models which reproduce many of the important behaviors of the system. The important characteristics of the signal transduction system are excitation and adaptation, and the latter implies that the transduction system can function as a “derivative sensor” with respect to the ligand concentration in that the DC component of a signal is ultimately ignored if it is not too large. This temporal sensing mechanism provides the bacterium with a memory of its passage through spatially- or temporally-varying signal fields, and adaptation is essential for successful chemotaxis. We also discuss some of the spatial patterns observed in populations and indicate how cell-level behavior can be embedded in population-level descriptions. PMID:23624608
Adaptive sampling dual terahertz comb spectroscopy using dual free-running femtosecond lasers
Yasui, Takeshi; Ichikawa, Ryuji; Hsieh, Yi-Da; Hayashi, Kenta; Cahyadi, Harsono; Hindle, Francis; Sakaguchi, Yoshiyuki; Iwata, Tetsuo; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Minoshima, Kaoru; Inaba, Hajime
2015-01-01
Terahertz (THz) dual comb spectroscopy (DCS) is a promising method for high-accuracy, high-resolution, broadband THz spectroscopy because the mode-resolved THz comb spectrum includes both broadband THz radiation and narrow-line CW-THz radiation characteristics. In addition, all frequency modes of a THz comb can be phase-locked to a microwave frequency standard, providing excellent traceability. However, the need for stabilization of dual femtosecond lasers has often hindered its wide use. To overcome this limitation, here we have demonstrated adaptive-sampling THz-DCS, allowing the use of free-running femtosecond lasers. To correct the fluctuation of the time and frequency scales caused by the laser timing jitter, an adaptive sampling clock is generated by dual THz-comb-referenced spectrum analysers and is used as the timing clock signal in a data acquisition board. The results not only indicated the successful implementation of THz-DCS with free-running lasers but also showed that this configuration outperforms standard THz-DCS with stabilized lasers, due to the slight jitter remaining in the stabilized lasers. PMID:26035687
NASA Astrophysics Data System (ADS)
Woolliams, Peter D.; Tomlins, Peter H.
2011-06-01
Optical coherence tomography (OCT) is becoming increasingly widespread as an experimental tool for clinical investigation, facilitated by the development of commercial instruments. In situ performance evaluation of such 'black box' systems presents a challenge, as the instrument hardware and software can limit access to important configuration parameters and raw data. Two key performance metrics for imaging systems are the point-spread function (PSF) and the associated modulation transfer function (MTF). However, previously described experimental measurement techniques assume user-variable spatial sampling and may not be appropriate for the characterization of deployed commercial instruments. Characterization methods developed for other modalities do not address this issue and rely upon experimental accuracy. Therefore, in this paper we propose a method to characterize the PSF of a commercial OCT microscope that uses OCT images of three-dimensional PSF phantoms to produce an oversampled estimate of the system PSF by combining spatially coincident measurements. This method does not rely upon any strong a priori assumption of the PSF morphology and requires no modification to the system sampling configuration or additional experimental procedure. We use our results to determine the PSF and MTF across the B-scan image plane of a commercial OCT system.
Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology
NASA Technical Reports Server (NTRS)
Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan
2012-01-01
A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for asteroid sample return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in "Multibody Simulation Software Testbed for Small-Body Exploration and Sampling" (NPO-47196), NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p. 54. The current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. To demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations demonstrating that the designed spacecraft meets key requirements. These requirements state that the contact force and duration must be sufficient to ensure that enough surface material is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact, recover, ascend to a safe position, and maintain velocity and orientation after the contact.
Optimal sampling strategy for estimation of spatial genetic structure in tree populations.
Cavers, S; Degen, B; Caron, H; Lemes, M R; Margis, R; Salgueiro, F; Lowe, A J
2005-10-01
Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms. PMID:16030529
NASA Astrophysics Data System (ADS)
Arieira, J.; Karssenberg, D.; de Jong, S. M.; Addink, E. A.; Couto, E. G.; Nunes da Cunha, C.; Skøien, J. O.
2010-09-01
To improve the protection of wetlands, it is imperative to have a thorough understanding of their structuring elements and to identify efficient methods to describe and monitor them. This article uses sophisticated statistical classification, interpolation and error propagation techniques to describe vegetation spatial patterns, map plant community distribution and evaluate the capability of statistical approaches to produce high-quality vegetation maps. The approach results in seven vegetation communities with a known floral composition that can be mapped over large areas using remotely sensed data. The relations between remote sensing data and vegetation patterns, captured in four factorial axes, were formalized mathematically in multiple linear regression models and used in a universal kriging procedure to reduce the uncertainty in mapped communities. Universal kriging proved to be a valuable interpolation technique because parts of the vegetation variability not explained by the images could be modeled as spatially correlated residuals, increasing prediction accuracy. Differences in the spatial dependence of the vegetation gradients evidenced the multi-scale nature of vegetation communities. Cross-validation procedures and Monte Carlo simulations were used to quantify the uncertainty in the resulting map. Cross-validation showed that classification accuracy varies according to community type, as a result of sampling density and configuration. A map of uncertainty resulting from the Monte Carlo simulations displayed the spatial variation in classification accuracy, showing that the quality of classification varies spatially, even though the proportion and arrangement of communities observed in the original map is preserved to a great extent. These results suggested that mapping improvement could be achieved by increasing the number of field observations of those communities with a scattered and small patch size distribution; or by
Adapting Existing Spatial Data Sets to New Uses: An Example from Energy Modeling
Johanesson, G; Stewart, J S; Barr, C; Sabeff, L B; George, R; Heimiller, D; Milbrandt, A
2006-06-23
Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, and economic projections. These data are available at various spatial and temporal scales, which may be different from those needed by the energy modeling community. If the translation from the original format to the format required by the energy researcher is incorrect, then resulting models can produce misleading conclusions. This is of increasing importance, because of the fine resolution data required by models for new alternative energy sources such as wind and distributed generation. This paper addresses the matter by applying spatial statistical techniques which improve the usefulness of spatial data sets (maps) that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) imputing missing data and (3) merging spatial data sets.
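The first of the three operations named in this abstract, aggregation and disaggregation of spatial data, can be sketched with a toy gridded map: block-averaging to a coarser resolution, then spreading coarse values back uniformly. Real energy-model work would use covariates and mass-preserving constraints; the uniform assumption and the grid below are purely illustrative.

```python
import numpy as np

def aggregate(grid, factor):
    """Block-mean a 2-D array by `factor` along both axes (upscaling)."""
    r, c = grid.shape
    return grid.reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))

def disaggregate(grid, factor):
    """Spread each coarse value uniformly over factor x factor fine cells."""
    return np.kron(grid, np.ones((factor, factor)))

fine = np.arange(16, dtype=float).reshape(4, 4)   # fine-resolution map
coarse = aggregate(fine, 2)                        # 2x2 map of block means
back = disaggregate(coarse, 2)                     # 4x4 again, smoothed
```

Because both steps use means spread uniformly, the total (sum over cells, scaled by cell area) is preserved, which is the bookkeeping property energy models typically require when translating census- or county-level data to model grids.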
NASA Astrophysics Data System (ADS)
Pereira, P.; Cepanko, V.; Vaitkute, D.; Pundyte, N.; Pranskevicius, M.; Ubeda, X.; Mataix-Solera, J.; Cerda, A.
2012-04-01
After a fire, ash is distributed heterogeneously on the soil surface, providing different levels of soil protection and nutrient inputs. In the immediate post-fire period, ash is the most valuable soil protection against erosion, and understanding ash distribution patterns is of major importance because it allows us to identify the areas most vulnerable to soil erosion. Modelling accuracy depends on the data density and on the best method for data interpolation. In this communication we aim to study the effects of ash thickness sampling distances of 20, 40, 60, 80 and 100 cm on modelling performance, on a west-facing slope with 15% inclination over an area of 80 m2. We computed the experimental variogram for each data density and tested several well-known interpolation methods: Inverse Distance Weighting (IDW) (with powers of 1, 2, 3, 4 and 5), Local Polynomial with first and second polynomial order, Polynomial Regression (PR), Radial Basis Functions (RBF), namely Multilog (MTG), Natural Cubic Spline (NCS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ) and Thin Plate Spline (TPS), and Ordinary Kriging. Overall we tested 16 interpolation methods. Interpolation accuracy was assessed with the cross-validation method, which takes each observation in turn out of the sample and estimates it from the remaining ones. The errors produced by each interpolation allowed us to calculate the Root Mean Square Error (RMSE); the method with the smallest RMSE is the most accurate for interpolating ash thickness at each considered distance. The results showed that ash sampling distance has important implications for variogram properties. A spherical model fits best at the 20 cm sampling distance, a Gaussian model at 40 and 100 cm, a linear model at 60 cm and a wave (hole effect) model at 80 cm. This means that sample design had implications for the spatial structure and evolution of ash thickness properties across the studied
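The evaluation procedure used in this abstract, leave-one-out cross-validation scored by RMSE, can be sketched for the simplest of the tested interpolators, IDW with several powers. The synthetic coordinates and thickness values below are assumptions for illustration; the kriging, RBF and polynomial methods are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def idw(xy_known, z_known, xy_query, power):
    """Inverse-distance-weighted estimate at one query location."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_rmse(xy, z, power):
    """Leave each point out in turn, re-estimate it, and score by RMSE."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i        # hold out point i
        errs.append(idw(xy[mask], z[mask], xy[i], power) - z[i])
    return np.sqrt(np.mean(np.square(errs)))

# Synthetic stand-in for ash-thickness samples: a smooth field plus noise.
xy = rng.uniform(0, 10, size=(30, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=30)

rmses = {p: loo_rmse(xy, z, p) for p in (1, 2, 3, 4, 5)}
best_power = min(rmses, key=rmses.get)       # smallest RMSE wins
```

The same loop, applied per sampling distance and per interpolator, yields the RMSE comparison table the abstract describes.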
Parker, Donald E
2003-01-01
Preparation for extended travel by astronauts within the Solar System, including a possible manned mission to Mars, requires a more complete understanding of adaptation to altered inertial environments. Improved understanding is needed to support the development and evaluation of interventions to facilitate adaptation during transitions between those environments. Travel to another planet escalates the adaptive challenge because astronauts will experience prolonged exposure to microgravity before encountering a novel gravitational environment. This challenge would have to be met without ground support at the landing site. Evaluation of current adaptive status as well as intervention efficacy can be performed using perceptual, eye movement and postural measures. Due to discrepancies in adaptation magnitude and time course among these measures, complete understanding of adaptation processes, as well as intervention evaluation, requires examination of all three. Previous research and theory that provide models for comprehending adaptation to altered inertial environments are briefly examined. Reports from astronauts of selected pre-, in- and postflight self-motion illusions are described. The currently controversial tilt-translation reinterpretation hypothesis is reviewed and possible resolutions to the controversy are proposed. Finally, based on apparent gaps in our current knowledge, further research is proposed to achieve a more complete understanding of adaptation as well as to develop effective countermeasures. PMID:15096676
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used: individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
NASA Astrophysics Data System (ADS)
Sawvel, Eric J.; Willis, Robert; West, Roger R.; Casuccio, Gary S.; Norris, Gary; Kumar, Naresh; Hammond, Davyda; Peters, Thomas M.
2015-03-01
Passive samplers deployed at 25 sites for three, week-long intervals were used to characterize spatial variability in the mass and composition of coarse particulate matter (PM10-2.5) in Cleveland, OH in summer 2008. The size and composition of individual particles determined using computer-controlled scanning electron microscopy with energy-dispersive X-ray spectroscopy (CCSEM-EDS) was then used to estimate PM10-2.5 concentrations (μg m-3) and its components in 13 particle classes. The highest PM10-2.5 mean mass concentrations were observed at three central industrial sites (35 μg m-3, 43 μg m-3, and 48 μg m-3), whereas substantially lower mean concentrations were observed to the west and east of this area at suburban background sites (13 μg m-3 and 15 μg m-3). PM10-2.5 mass and components associated with steel and cement production (Fe-oxide and Ca-rich) exhibited substantial heterogeneity with elevated concentrations observed in the river valley, stretching from Lake Erie south through the central industrial area and in the case of Fe-oxide to a suburban valley site. Other components (e.g., Si/Al-rich typical of crustal material) were considerably less heterogeneous. This work shows that some species of coarse particles are considerably more spatially heterogeneous than others in an urban area with a strong industrial core. It also demonstrates that passive sampling coupled with analysis by CCSEM-EDS is a useful tool to assess the spatial variability of particulate pollutants by composition.
Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach
Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei
2016-01-01
Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on "one-time" release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods to each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and the correlations or overlapping users between snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves on DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
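The fixed-threshold idea behind DSFT can be sketched in a few lines: publish a fresh Laplace-noised histogram only when the current snapshot is far (here in L1 distance) from the last snapshot that triggered a release, and otherwise re-release the previous noisy histogram to save privacy budget. Note that the published algorithm must also privatize the distance test itself; that step, the threshold and the epsilon value below are simplified assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsft_release(snapshots, epsilon=1.0, threshold=5.0):
    """Release a histogram per snapshot, re-using the last noisy release
    when the underlying data has not changed enough (simplified DSFT)."""
    released, last_out, last_in = [], None, None
    for hist in snapshots:
        # Compare the new snapshot to the one that triggered the last release
        if last_in is None or np.abs(hist - last_in).sum() > threshold:
            last_out = hist + rng.laplace(scale=1.0 / epsilon, size=hist.shape)
            last_in = hist
        released.append(last_out)
    return released

snaps = [np.array([10., 20., 30.]),
         np.array([10., 21., 30.]),   # small change: previous release reused
         np.array([40., 5., 30.])]    # large change: fresh noisy release
out = dsft_release(snaps)
```

Re-using a release consumes no additional privacy budget for that snapshot, which is how distance-based sampling avoids the accumulated error of noising every snapshot independently.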
Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith
2011-01-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors
Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel
2016-01-01
Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
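The energy-aware idea behind EASA, adapting the sampling frequency to the node's available energy, can be sketched with a toy schedule: stretch the sensing interval as the energy store empties and tighten it again as harvesting (solar or wind) refills it. The linear mapping and the interval bounds below are illustrative assumptions, not the algorithm from the paper.

```python
def next_interval(battery_frac, t_min=10.0, t_max=600.0):
    """Map battery state of charge in [0, 1] to a sampling interval in
    seconds: full battery samples fastest, empty battery slowest."""
    soc = min(max(battery_frac, 0.0), 1.0)   # clamp to a valid fraction
    # Linear interpolation between the slowest and fastest schedules.
    return t_max - (t_max - t_min) * soc

# e.g. a node at 50% charge samples every 305 s under these bounds
```

A real implementation would feed the harvester's measured input power into this decision as well, so that a node harvesting strongly can afford to sample faster even at moderate charge; that refinement is omitted here.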
NASA Astrophysics Data System (ADS)
Krogh, E.; Gill, C.; Bell, R.; Davey, N.; Martinsen, M.; Thompson, A.; Simpson, I. J.; Blake, D. R.
2012-12-01
The release of hydrocarbons into the environment can have significant environmental and economic consequences. The evolution of smaller, more portable mass spectrometers for field use can provide spatially and temporally resolved information for rapid detection, adaptive sampling and decision support. We have deployed a mobile-platform membrane introduction mass spectrometer (MIMS) for the in-field simultaneous measurement of volatile and semi-volatile organic compounds. In this work, we report instrument and data handling advances that produce geographically referenced data in real time, and preliminary data where these improvements have been combined with high-precision ultra-trace VOC analysis to adaptively sample air plumes near oil and gas operations in Alberta, Canada. We have modified a commercially available ion-trap mass spectrometer (Griffin ICX 400) with an in-house temperature-controlled capillary hollow-fibre polydimethylsiloxane (PDMS) polymer membrane interface and an in-line permeation tube flow cell for a continuously infused internal standard. The system is powered by 24 VDC for remote operation in a moving vehicle. Software modifications include the ability to run continuous, interlaced tandem mass spectrometry (MS/MS) experiments for multiple contaminants/internal standards. All data are time- and location-stamped with on-board GPS and meteorological data to facilitate spatial and temporal data mapping. Tandem MS/MS scans were employed to simultaneously monitor ten volatile and semi-volatile analytes, including benzene, toluene, ethylbenzene and xylene (BTEX), reduced sulfur compounds, halogenated organics and naphthalene. Quantification was achieved by calibrating against a continuously infused deuterated internal standard (toluene-d8). Time-referenced MS/MS data were correlated with positional data and processed using Labview and Matlab to produce calibrated, geographical Google Earth data-visualizations that enable adaptive sampling protocols
NASA Astrophysics Data System (ADS)
Ayris, P. M.; Delmelle, P.; Pereira, B.; Maters, E. C.; Damby, D. E.; Durant, A. J.; Dingwell, D. B.
2015-07-01
Tephra particles in physically and chemically evolving volcanic plumes and clouds carry soluble sulphate and halide salts to the Earth's surface, ultimately depositing volcanogenic compounds into terrestrial or aquatic environments. Upon leaching of tephra in water, these salts dissolve rapidly. Previous studies have investigated the spatial and temporal variability of tephra leachate compositions during an eruption in order to gain insight into the mechanisms of gas-tephra interaction which emplace those salts. However, the leachate datasets analysed are typically small and may poorly represent the natural variability and complexity of tephra deposits. Here, we have conducted a retrospective analysis of published leachate analyses from the 18 May 1980 eruption of Mount St. Helens, Washington, analysing the spatial structure of the concentrations and relative abundances of soluble Ca, Cl, Na and S across the deposits. We have identified two spatial features: (1) concentrated tephra leachate compositions in blast deposits to the north of the volcano and (2) low S/Cl and Na/Cl ratios around the Washington-Idaho border. By reference to the bulk chemistry and granulometry of the deposit and to current knowledge of gas-tephra interactions, we suggest that the proximal enrichments are the product of pre-eruptive gas uptake during cryptodome emplacement. We speculate that the low S/Cl and Na/Cl ratios reflect a combination of compositional dependences on high-temperature SO2 uptake and preferential HCl uptake by hydrometeor-tephra aggregates, manifested in terrestrial deposits by tephra sedimentation and fallout patterns. However, despite our interrogation of the most exhaustive tephra leachate dataset available, it has become clear in this effort that more detailed insights into gas-tephra interaction mechanisms are prevented by the prevalent poor temporal and spatial representativeness of the collated data and the limited characterisation of the tephra deposits. Future